First descriptions of the seasonal habitat use and residency of scalloped hammerhead (Sphyrna lewini) and Galapagos sharks (Carcharhinus galapagensis) at a coastal seamount off Japan
Background: The Northwestern Pacific is a data-poor region for studies into the movements and habitat use of open ocean and pelagic sharks. However, this region experiences considerable pressure from commercial fishing. Therefore, shark movement data from this region carry significant implications for conservation and management, particularly for threatened species. Here, we provide the first data on seasonal residency and movements of scalloped hammerhead (Sphyrna lewini) and Galapagos sharks (Carcharhinus galapagensis) off Japan, using acoustic and satellite telemetry and dive logbooks. Results: Eight female sharks, four of each species, were tagged around a coastal seamount off southeastern Japan (Mikomoto Island) in August 2015, and monitored for up to 363 days using an array of six receivers around the island. Analyses of the more abundant scalloped hammerhead acoustic data suggest high seasonal residency, predominantly from August to November and associated with lower chlorophyll-a concentrations, before sharks leave the island and return the following summer. Residency for scalloped hammerhead sharks was highest at those receivers closest to the Kuroshio Current, which produces strong coastal upwelling; however, SST was not found to be predictive of occurrence at Mikomoto. Shark presence was corroborated by analysis of dive-log data from a local ecotourism operator. We also produced two unique satellite tracks, whereby a scalloped hammerhead exhibited a 200-km dispersal into a coastal embayment west of the tagging location and a Galapagos shark migrated over 800 km offshore into the high seas. Conclusion: This study provided some of the first behavioral and movement data for scalloped hammerhead and Galapagos sharks in Japan. Our findings suggest varying spatial and temporal visitation of two shark species to a coastal seamount, underscored by some degree of seasonal residency and site fidelity and linked, for scalloped hammerhead sharks at least, to varying productivity. Furthermore, we provide preliminary evidence for long-distance dispersal of these species, and some site fidelity to seamounts in the region. This study highlights the importance of describing shark movements to aid in filling critical data gaps for threatened species.
Background
The spatial ecology of threatened, highly migratory marine species is often complex and difficult to capture [36], which can sometimes preclude the establishment of biologically relevant management strategies such as marine reserves [15]. However, identifying and characterizing the areas and times where these species occur regularly and/or may form aggregations, in addition to exploring some of the environmental drivers of these processes, can offer valuable insights to advance conservation efforts. The value in describing and protecting areas of high residency and abundance for threatened marine predators is reinforced by the notion that aggregating species are inherently susceptible to overexploitation [23,32,44]. Sharks are among the most threatened marine fishes due to their inherent vulnerability to overfishing, in combination with K-selected life-history traits that limit their ability to respond quickly to exploitation [43]. Employing behavioral approaches to identify such biological hotspots remains both a challenge and an opportunity [38], particularly for species that retain high value for their meat or fins, or in regions where there is limited political will for conservation. Even for species that spend months at a time in pelagic, high-seas habitat, identifying the areas, and associated environmental correlates, where their movements bring them back to the same place on a semi-regular basis represents an important first step towards conserving these animals.
Seamounts can serve as hotspots for pelagic diversity, particularly for large pelagic fish and sharks (e.g., [40]). Hammerhead sharks (Sphyrna spp.), particularly those from the large species complex, are ecologically specialized and behaviorally complex [17,19], and are known to form large aggregations in discrete tropical and temperate locations worldwide (e.g., [6,21,25]). This group of sharks is also prized in the shark fin industry and typically suffers high mortality rates when caught as bycatch in longline fisheries [18]. As a result, hammerhead sharks have experienced dramatic population reductions worldwide [9,16,19]. In some areas where they were historically abundant, research has demonstrated that hammerhead sharks have been overharvested and even extirpated (e.g., [24]). Despite a circumglobal temperate and tropical distribution, the scalloped hammerhead shark (Sphyrna lewini) is currently assessed as Critically Endangered on the IUCN Red List [39] and may be critically endangered throughout most of its often-extensive range [19]. Consequently, significant research effort has been dedicated to evaluating the residency and habitat use of scalloped hammerhead sharks in some of their better-known core locations throughout the Atlantic [45] and, in particular, the Eastern Tropical Pacific [6,7,21,27]. The Galapagos shark is similarly distributed circumglobally in tropical and warm temperate waters, occurs in open ocean ecosystems, and exhibits a preference for isolated oceanic islands [46]. The species is vulnerable to overfishing throughout its range [13,30] and is currently assessed as Near Threatened on the IUCN Red List [5]. Until recently, knowledge on Galapagos shark habitat use was limited to a few discrete locations [31,35]. Both scalloped hammerhead and Galapagos sharks commonly aggregate at isolated oceanic seamounts, and investigating the habitat use of these species in these locations has assisted with regional conservation efforts such as lobbying for, evaluating, and in some cases establishing marine protected areas that encompass the core habitat of semi-oceanic shark species [20,31]. Novel, integrative descriptions of pelagic shark movements, such as those by hammerhead and Galapagos sharks, among and between remote oceanic regions may serve to strengthen or expand existing conservation policies that seek to protect critical habitat [6-8, 21, 34].
Here, we offer new descriptions of a semi-pelagic shark community, dominated by schools of scalloped hammerhead and Galapagos sharks, from a little-known area in the Western Pacific, off the coast of Japan. In this typically data-poor region, we gathered telemetry data from a small number of acoustic and satellite tags deployed on sharks which co-occur seasonally at a seamount off coastal Japan (Mikomoto Island). This location also supports seasonal shark diving tourism activities, and we thus combine our telemetry data with diver ecotourism logbook data, and remotely sensed environmental data, to present novel insights into shark movement and space use from Southeastern Japan. While the small sample size precludes a quantitative analysis of subpopulation trends at this location, we hope these results underscore the potential for expanded work on these species at seamounts in the region, describing shark movements to aid in filling critical data gaps for poorly understood, endemic populations of threatened species.
Study area
Mikomoto Island is an isolated seamount situated north of the Philippine Sea in the North Pacific Ocean (34.57°N, 138.94°E), approximately 10 km southeast of the Izu Peninsula, Mikomoto, Shizuoka, Japan, and approximately 250 km west of the Japan Trench (Fig. 1). Mikomoto Island rises vertically from a depth of 2500 m to 10 m above sea level, and the ~0.7 km² rock island is situated in the middle of an elevated depth contour of 30-40 m in all directions (Fig. 1). The subsurface topography is composed of hard rock and boulders; subaquatic vegetation and algal cover are limited due to the strong Kuroshio Current which passes over the seamount. Locally, the area is subject to a few small-scale net fisheries targeting small pelagic fishes as well as sea cucumbers. There is currently no commercial fishery targeting sharks in this region, although sharks are infrequently caught locally as bycatch in regional set-nets. A small fleet of local dive operators (four companies) specializes in shark diving, as scalloped hammerhead sharks are reliably observed aggregating in large numbers throughout the summer and fall year after year; other species of semi-oceanic sharks such as Galapagos sharks are commonly observed. All shark diving is achieved through natural, non-baited viewings around the seamount. We used passive acoustic telemetry in addition to two satellite tags (detailed below) to describe preliminary patterns of habitat use and movement by these two species at the seamount.
Animal tagging
All field work was conducted from 15 to 22 August 2015 around Mikomoto Island, using a 20-m Japanese fishing vessel from which in-water operations were launched. All animal tagging was non-invasive and performed in water by a trained free-diver, as research fishing for sharks is prohibited by the local managers and because of concerns related to hammerhead capture stress and post-release mortality [19]. A free-diver (M. Healey) tagged both scalloped hammerhead and Galapagos sharks with external acoustic-coded V16 4H transmitters potted in a PVC casing with attachment holes at either end (74 mm length × 16 mm diameter, weight in water: 8.1 g, transmission off times: random between 40 and 80 s; battery life estimated at 5 years [although tag retention was certainly less than this]; Vemco, Innovasea, Halifax, Canada). A 10-cm tether of parachute cord was threaded through the external case hole and crimped, terminating at a 2.5-cm stainless steel anchor to be embedded into the shark's body. Acoustic tags were loaded into a band-powered speargun at the surface and the tagger would locate and free-dive into a group of schooling sharks (Fig. 2a). All tags were shot into the dorsal flank musculature. Each acoustic tag then broadcast unique identification 'pings' at semi-random intervals, all at 69 kHz. Since all tagging was done in the water, shark total length was visually estimated by the sole free-diver using the tagging gun (120 cm in length) as a reference. A total of four scalloped hammerhead sharks and four Galapagos sharks were tagged with external acoustic transmitters. A further single individual of each species was equipped with a satellite tag (Desert Star-GEO, 132 mm length, 13 mm diameter, weight in air 29 g), attached externally and in situ using the same methods as the acoustic tags, bringing the total tags deployed to 10. Briefly, the satellite tags' internal memory allowed for recordings of temperature (−40 to +85 °C, 0.2 °C accuracy) and geomagnetic field values (3 axes) three to four times per day across a 90-day deployment. Onboard light sensors recorded 12-h position estimates using light-based geolocation, and each tag was programmed to transmit raw data and daily average data for the 3-month deployment. The satellite tags have a solar battery and the potential to transmit their data continuously when the tagged animal neared the surface, such that the PSAT float antenna could be picked up by an ARGOS low-Earth-orbiting satellite (http://www.argos-system.org/) [4]. Once detached or shed, all archived data were transmitted from the tag via ARGOS satellite. Similar to other studies utilizing spearguns to externally affix tags on scalloped hammerhead sharks (e.g., [6,7]), tag retention was assumed to be relatively low (e.g., < 1 year). Initial tag retention was validated empirically by in situ re-observation of tagged sharks by tourists a week following tagging (Fig. 2b), and in the longer term by the longevity of the tracking data (maximum known retention = 363 days).
Telemetry array
To record the occurrence and residency of acoustically tagged sharks around Mikomoto Island, an array of six hydrophones (Vemco, Innovasea VR2W) was deployed in the study area (Fig. 3). The receivers were placed around the island, spaced at 500-800 m intervals at depths between 15 and 25 m to enable maintenance by divers. Due to the lack of soft substrate, all receivers were attached to rocky benthos and tied around large boulders using 2 cm polypropylene line, floating 1 m off the seafloor. Range testing on all receivers was performed using a test tag and multiple passes, with results suggesting an average detection radius of 0.215 km, implying the array maintained coverage of the perimeter of the island. All receivers were retrieved with batteries still operational on 21 October 2016, and data from the 14-month deployment were subsequently downloaded.
Diver logbook data
A collaborative local dive operation maintains a robust, publicly available dive-log containing digital photos of hammerhead shark observations throughout the year, from 2003 to present (Additional file 1: Mikomoto Hammers, http://www.mikomoto.com/english/). To gather comparative information on shark presence at the study site, these dive-logs were accessed and analyzed by a native Japanese speaker (co-author YYW) for a period of 16 months inclusive of our study period, from 4 July 2015 to 29 November 2016. Available digital photos for every dive-log record during this period (resolution ~640 × 480 pixels) were scored in one of three categories, according to the presence of scalloped hammerhead sharks: "0" where no sharks were seen on that day; "1" where one shark was seen that day; and "2" where multiple scalloped hammerhead sharks were observed on that day. The dive operator visits Mikomoto Island regularly during the diving season (July-November), but did not dive every day during the study period due to weather conditions; furthermore, we were not able to control for the number of photos posted for a given day, nor the variation in photographer or ability to find sharks on a dive. Thus, these records served as a coarse proxy for shark presence over a representative time-frame for which to compare with our acoustic data.
Movement, space use and environmental analyses
The acoustic time-series data were plotted both linearly and in an aggregated form to explore differential patterns in both seasonal and diel occupancy at Mikomoto Island, between individuals and species. Residency at each of our receiver locations was calculated as the proportion of days each individual was recorded at each location relative to their total time at liberty [15]. Sea surface temperature (SST) and chlorophyll-a (CHL) concentration, a proxy for ocean productivity, are the two most widely used remotely sensed environmental variables for explaining patterns of movement in elasmobranchs [47]. There is also evidence suggesting associations between scalloped hammerhead shark movement and occurrence and both seasonal changes in SST [14] and reductions in CHL concentrations [45]. For these reasons, we included these environmental variables in our models. Satellite data were obtained from the AVHRR sensor aboard Polar Operational Environmental Satellites (POES) for daily optimum interpolation SST, and from the VIIRS sensor aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite for daily chlorophyll-a concentrations, via the National Oceanic and Atmospheric Administration (NOAA) environmental data portal ERDDAP. Data were extracted at resolutions of 0.25° and 0.04°, respectively, for the duration of the tracking period using the R package rerddap [10].
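To make the residency metric and the ERDDAP extraction concrete, the following is a minimal R sketch, not the authors' code: the data frame names, column names, and the ERDDAP dataset ID are illustrative assumptions.

```r
# Hypothetical inputs: `detections` (tag_id, receiver_id, datetime) and
# `liberty` (tag_id, days_at_liberty); both are assumed, not from the paper.
library(dplyr)
library(rerddap)

# Residency index: days detected at a receiver / total days at liberty [15]
residency <- detections %>%
  mutate(day = as.Date(datetime)) %>%
  group_by(tag_id, receiver_id) %>%
  summarise(days_detected = n_distinct(day), .groups = "drop") %>%
  left_join(liberty, by = "tag_id") %>%
  mutate(residency_index = days_detected / days_at_liberty)

# Daily SST in a small box around Mikomoto Island via ERDDAP; the dataset ID
# below is a plausible optimum-interpolation SST product, not verified here.
sst_grid <- griddap(info("ncdcOisst21Agg_LonPM180"),
                    time      = c("2015-08-15", "2016-10-21"),
                    latitude  = c(34.45, 34.70),
                    longitude = c(138.80, 139.10),
                    fields    = "sst")
```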
Generalized Additive Modeling (GAM) was used to investigate the temporal occurrence of scalloped hammerhead sharks at Mikomoto Island from the acoustic data. Data on Galapagos sharks were limited temporally, so the decision was made to model only scalloped hammerhead sharks, given the high numbers of this species regularly observed at this location. The occurrence of tagged scalloped hammerhead sharks (n = 4) on a given day of the year was quantified as "1" if > 1 detection per day was recorded and "0" if ≤ 1 detection was recorded, as per Andrzejaczek et al. [3]. The GAM was constructed using a binomial error structure with a log link function, fitted by maximum likelihood estimation in the R package mgcv [48]. Binary presence was used as the response, with day of the year, daily SST and CHL concentrations as the continuous, smoothed predictor variables. Combinations of these predictor variables were ranked using Akaike's information criterion (AIC) to determine the best model.
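A minimal mgcv sketch of the model structure described above, assuming a daily data frame `daily` with columns `present` (0/1), `yday`, `sst` and `chl`; note the logit link used here is mgcv's default and is an assumption, since the text reports a log link.

```r
library(mgcv)

# Full model: binary daily presence as smooth functions of day of year,
# SST and chlorophyll-a, fitted by maximum likelihood
m_full <- gam(present ~ s(yday) + s(sst) + s(chl),
              family = binomial, method = "ML", data = daily)

# Rank candidate predictor combinations by AIC, as described in the text
m_env <- gam(present ~ s(sst) + s(chl), family = binomial, method = "ML", data = daily)
m_chl <- gam(present ~ s(chl), family = binomial, method = "ML", data = daily)
AIC(m_full, m_env, m_chl)

summary(m_full)  # approximate significance of each smooth term
```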
Geolocations for the two PSAT-tagged sharks were estimated using archival data that were transmitted from each tag. Raw light-level data from the tag were run through a state-space model (unscented Kalman filter with sea surface temperature, UKFSST [28]). A continuous-time correlated random walk (CTCRW) state-space model was then applied to produce a regular (daily) time series of interpolated positions, following Queiroz et al. [38]. Tracks were then constructed between the daily estimated geolocations in chronological order using the known deployment and pop-off locations.
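The UKFSST and CTCRW fits require model-specific setup; as a simplified, hedged stand-in for the final interpolation step only, the sketch below builds a regular daily track by linear interpolation between the estimated geolocations. The `geo` data frame is an assumed input of dated positions in chronological order, beginning at the deployment location and ending at the pop-off location.

```r
# geo: assumed data frame with columns date (Date), lon, lat, ordered in time
daily_dates <- seq(min(geo$date), max(geo$date), by = "day")

# Linear interpolation is a simplification of the CTCRW state-space model
track <- data.frame(
  date = daily_dates,
  lon  = approx(as.numeric(geo$date), geo$lon, xout = as.numeric(daily_dates))$y,
  lat  = approx(as.numeric(geo$date), geo$lat, xout = as.numeric(daily_dates))$y
)
```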
Frequency histograms were constructed from the diver logbook data for each categorical shark presence level, for each sampled month. To examine whether there were differences in qualitative observations of shark presence and relative abundance across months, we ran a Kruskal-Wallis ANOVA with pairwise Wilcoxon rank-sum tests on the raw dive-log data.
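In R, this comparison reduces to two calls. The sketch below assumes a data frame `logs` with a `month` factor and the 0/1/2 `score` defined earlier; the Holm adjustment for the pairwise tests is an assumption, as the paper does not state the correction used.

```r
# Overall test for differences in dive-log scores across months
kruskal.test(score ~ month, data = logs)

# Pairwise Wilcoxon rank-sum tests between months
pairwise.wilcox.test(logs$score, logs$month, p.adjust.method = "holm")
```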
Results
The eight sharks fitted with external acoustic tags were monitored for between 1 and 363 days by the six receivers deployed around Mikomoto Island (tag metadata are included in Table 1). Based on visual estimation by the in situ diver, hammerhead shark size was estimated between 210 and 280 cm (total length, ±40 cm error), while Galapagos shark size was estimated between 170 and 230 cm (total length, ±40 cm error). All tagged sharks of both species were identified as female. During this monitoring period, a total of 25,779 detections were recorded for scalloped hammerhead sharks (mean ± SD = 6445 ± 11,270) and 4611 detections for Galapagos sharks (1153 ± 815) across the array, with most individuals being detected on all receivers (Fig. 3).
Diel and seasonal patterns of residency to Mikomoto Island
Small sample sizes for both shark species, in addition to short times at liberty for some individuals, preclude our ability to measure long-term visitation and residency of these species to Mikomoto Island. However, the one individual scalloped hammerhead shark for which we do have long-term data (i.e., ~12 months) showed high residency to this small area between August and February, followed by a long period of absence before a return to this location at the same time the following year (August 2016) (H3, Fig. 3). Further research on many more individuals would be required to confirm whether this is a population trend. However, dive center sightings data suggest that aggregations are most likely during the months of August and September, suggesting that numerous individuals might indeed return to this location around the same time to aggregate. The best-fitting GAM, identified by the lowest AIC score, retained all three explanatory variables and explained 45.4% of the variation in these data. The model indicated that only CHL concentration had a significant effect on the probability of tagged scalloped hammerhead sharks occurring at Mikomoto Island (df = 3.7, p(χ²) = 0.004), with probability reducing during periods of higher concentrations (Fig. 4a). While still important in the model, SST was marginally non-significant (df = 3.8, p(χ²) = 0.065) but appears to influence the nonlinear effects of CHL through an interaction (Fig. 4b). Day of the year did not influence occurrence.
Exploration of the diel distribution of the detection data suggests that S. lewini are more likely to be present around the island from midday into the evenings (Fig. 5a). Residency to the array around Mikomoto Island was higher for scalloped hammerhead sharks (mean = 0.42, range 0.17-0.67) than for Galapagos sharks (mean = 0.33, range 0.14-0.59). This varied considerably across the six receiver locations (Fig. 5b), with the highest residency, particularly in scalloped hammerhead sharks, at the receivers southwest of the island, those most influenced by the southwest-to-northeasterly Kuroshio Current (Figs. 3b and 5b). Despite the small sample size, the acoustic telemetry data were relatively congruent with the dive-log data (Fig. 5c). Shark presence and abundance, inferred from opportunistic dive-logs from the main operator at the site, appeared to differ significantly according to month (n = 167, Kruskal-Wallis; H = 42.19, p < 0.0001). Peak abundance, which appeared to occur in August, was statistically similar to that in September and November, but significantly different from May, June, July, and October (Fig. 5c).
Preliminary broad-scale movements away from Mikomoto Island
Both PSAT tag deployments, one on each species, yielded interesting tracks, with tag retention differing substantially between animals (Fig. 6). The scalloped hammerhead traveled an estimated total distance of 295.62 km over 12 days, exhibiting a two-phased movement pattern whereby the shark first traveled offshore to what appears to be another set of coastal seamounts, before returning inshore and traversing a nearby coastal embayment (Fig. 6a). The scalloped hammerhead tag then released prematurely. The Galapagos shark carried its tag for a total of 217 days over an estimated 897.47 km, exhibiting a long-distance movement into the open ocean away from the tagging site (Fig. 6b).
Discussion
This study presents preliminary evidence of seasonal residency and site fidelity by scalloped hammerhead sharks, and to a lesser extent Galapagos sharks, at what appears to be an important aggregation site for the former: Mikomoto Island, off the southeast coast of Japan. Using a combination of acoustic and satellite telemetry, remotely sensed environmental variables, and opportunistic citizen science photo log data provided by a collaborating dive operator, we show spatial (e.g., fine-scale differences) and temporal (several months after tagging) variation in visitation to this isolated seamount, associated, for scalloped hammerhead sharks at least, with chlorophyll-a concentrations. Over the course of a year-long study, we recorded many more detections for scalloped hammerhead sharks than Galapagos sharks, with detections occurring from around midday into the evening. Of the small number of individuals that were tagged, residency was highest at receivers positioned to the west of the island (Fig. 3), potentially indicating a preference for areas most influenced by the Kuroshio Current, although further research would be required to confirm this. Peak abundance of scalloped hammerhead sharks, inferred from the dive-log data, indicated this species was most likely to occur in high numbers (> 10 sharks, as seen in photos during this period) between August and November, a period consistent with the four highest monthly detection values logged on our acoustic receivers (Fig. 5). Although largely descriptive, this study provides some of the first long-term tracking data on a Critically Endangered elasmobranch from southeastern Japan, an area of significant conservation relevance.
Large assemblages of scalloped hammerhead sharks elsewhere are often associated with seamounts and offshore islands, suggesting a preference for high-energy locations influenced by major ocean currents [1,11,21]. In the Eastern Pacific, the Galapagos Marine Reserve (GMR), an area well documented for scalloped hammerhead aggregations, is influenced by the Cromwell, Humboldt and Panama Currents [21], generating highly dynamic oceanographic conditions in which scalloped hammerhead sharks typically favor up-current habitats [26]. While we had to consider all four tagged scalloped hammerheads together due to the low detection rates in all but one individual, satellite-derived CHL concentrations were predictive of occurrence around Mikomoto Island. Concentrations overall were low, but probabilities of occurrence were highest between 0.2 and 0.4 mg m⁻³. The Kuroshio Current, the western limb of the North Pacific Subtropical Gyre, strengthens significantly when it rejoins the Pacific Ocean, reaching approximately 65 million cubic meters per second to the southeast of Japan [2]. A recent study showed that a highly aperiodic event known as the Kuroshio Large Meander is associated with positive anomalies of chlorophyll-a concentrations that may have influenced presence/absence, with the meander impacting a large area that includes Mikomoto Island [29]. Productivity aside, however, it was at the western and southwestern receivers (i.e., those with the greatest exposure to the Kuroshio Current) that we obtained the most detections for this species. In a recent study on grey reef sharks occupying a channel influenced by strong currents and updraft zones in French Polynesia, Papastamatiou et al. [37] found that sharks aggregate and coordinate their behavior (accounting for tidal change) in order to best maintain their position within predictable updraft zones, thus reducing their energy expenditure during periods of refuging. We suspect that similar mechanisms may also underpin scalloped hammerhead schooling behavior, and further meta-analyses of oceanographic conditions around isolated islands and seamounts would be valuable to determine the potential for updrafts to explain hammerhead hotspot locations.
Residency for scalloped hammerhead sharks was higher than that of Galapagos sharks, suggestive of a greater reliance of this species on the conditions around Mikomoto Island. An average residency of 0.42 for scalloped hammerhead sharks was comparable to residency at locations in the Eastern Tropical Pacific, where tagged sharks of this species spent on average around half of their time (RI = 0.52) within an acoustic array situated around Cocos Island (Costa Rica), in particular at a shallow seamount to the southeast of the island [34]. For the short period of time that the four Galapagos shark individuals were present, they were also most resident at the same three acoustic receivers (R2, R4 and R5). Our two PSAT deployments provided limited data, but allowed us to confirm that both species appeared to exhibit seasonal dispersal away from the seamount, although the types of behavior differed between species. The scalloped hammerhead female exhibited behavior consistent with that seen in other regions (e.g., [22]), with a contrast between habitat use at offshore islands or seamounts and the use of insular bays, which function as inshore nursery habitats (e.g., [12,42]). While it remains unknown why the tagged hammerhead in the present study moved into an insular bay, use of these areas may be biologically important, as local fishers commonly report bycatch of both adult and juvenile scalloped hammerhead sharks in beach set-nets targeting finfish throughout bays along the southeastern Japanese coastline (Pers. Comm., YYW). The offshore movements of the Galapagos shark highlighted expansive dispersal into the high seas of the Western Pacific, movements greater than seen for the species in other localities [31,35], yet the function of these movements remains unknown. Nevertheless, both species demonstrated a combination of seasonal residency at the seamount and broad-scale dispersal away from it. For a preliminary study such as this, it is important to acknowledge the limitations of our data. Logistical challenges precluded further deployment of tags in the area, meaning that the generalizability of our tagging data was limited to just the eight acoustic and two satellite tags. That said, the importance of publishing data on endangered species from under-represented areas cannot be overstated [41]. The citizen science data, although opportunistic and lacking information about effort, provided clear and complementary evidence of when sharks were likely to be present, and when in large numbers (Fig. 5c), with tens to hundreds of individuals seen regularly by staff and tourists throughout the season and year-on-year (Mikomoto Hammers, Pers. Comm.).
Conclusions
Whereas the majority of research into the movements of pelagic sharks exhibiting residency at coastal and oceanic seamounts, particularly the scalloped hammerhead, has been conducted in the Eastern Tropical Pacific, our data suggest the occurrence of regional populations of these species which exhibit seasonal residency at coastal seamounts in the Northwest Pacific, a data-poor region for shark conservation. Given that the northwestern region of the Pacific Ocean was recently highlighted within the global distribution map of longline fishing effort as one of a few large-scale areas of heavy longline use [38], we argue that it should receive rapid research and conservation attention for its potential in safeguarding the biodiversity of seasonally resident, Critically Endangered shark species.
"Environmental Science",
"Biology"
] |
Polarization of Government and NGO Orientation towards Eco-Rural Tourism Development in Kerinci Region, Jambi Province, Indonesia
A rural area is an area that should be conserved: conservation in the sense that its development must be in accordance with the concepts and potentials that have been passed down from generation to generation. The success of rural area development is strongly influenced by the orientation of stakeholders, including the government and NGOs. The purpose of this study is to analyze the polarization of government and NGO orientation towards the development of eco-rural tourism in Kerinci Regency, Jambi Province. This study used a closed questionnaire with a One Score One Indicator Scoring System assessment pattern, with the aspects assessed including socio-cultural, conservation and environmental, ethnic politics, economic, regional development, tourism, and landscape ecology aspects in six villages. The polarization of government and NGO orientations is measured by perception, then analyzed using the Kruskal-Wallis statistical test with quantitative and comparative descriptive methods. The results showed that the government and NGOs stated that the concept of eco-rural tourism was relevant to be built and developed in rural areas. The polarization of government and NGO orientations shows the same direction, namely a positive direction with a strong polarization scale. This means that the government and NGOs agree to develop their rural areas into eco-rural tourism by meeting the indicators that have been formulated. The perception of the government and NGOs will strengthen the application of this concept. This study concludes that the government and NGOs have an excellent opportunity to create collaboration in developing rural areas in accordance with the concept of eco-rural tourism.
INTRODUCTION
A rural area is an area that should be conserved: conservation in the sense that its development must be in accordance with the concepts and potentials that have been passed down from generation to generation (Jaafar et al., 2015; Nicely & Sydnor, 2015; Sirisrisak, 2009). The development of rural areas remains a live issue, full of dynamics and widely taken up by various stakeholders. Much of the development recently carried out by various stakeholders has been for unilateral gain, and there is also considerable friction between stakeholders that results in conflict (Belsoy et al., 2012; Mcareavey & Mcdonagh, 2011b). In this situation, it is the local community who are actually harmed. The roles of each stakeholder, such as the government, NGOs, businesspeople, academics, and local communities, should be clearly distinguished so that there is no overlap and they can support each other to make rural areas develop sustainably, especially for rural areas in Indonesia.
Most of Indonesia's population lives in rural areas (BPS, 2021). Rural areas in Indonesia play a significant role associated with traditional stereotypes in society: livelihoods dependent on the primary sector, namely agriculture; kinship relations that are still strong among community members; limited infrastructure; and strong adherence to various cultural values. Therefore, the issue of rural area development remains a live one (Amir et al., 2015; Rasoolimanesh et al., 2017). Becoming a sustainable rural area is the hope to be achieved in rural area development. Numerous concepts are offered to realize it, such as agriculture-based, local community-based, nature-based, local culture-based, and sustainable rural areas (Randelli & Martellozzo, 2019). Each concept has a different focus although they share the same goal, namely sustainable rural areas. However, in many cases of developing rural areas, especially areas that have tourism potential, the desired sustainability has not been fully realized. This is one of the backgrounds for the emergence of a new concept, namely eco-rural tourism, which tries to answer things that have not been achieved by other development concepts.
Eco-rural and eco-rural tourism are concepts offered to develop sustainable rural areas with an approach based on the demands of rural communities (Nørgaard & Thuesen, 2021; Roger, 2015; Yeo, 2013). The subject of development in the eco-rural concept is the village itself, with all its potential to create various added values. The development of eco-rural tourism is in line with the functions of a rural area. Derenne (2008) mentions five important functions of the rural area, namely (a) production functions; (b) housing functions; (c) recreational and tourism functions; (d) environmental functions; and (e) legacy functions. Tourism development, including eco-rural tourism, requires the proper participation of all stakeholders, especially, in this case, the involvement of government and NGOs in the decision-making of the tourism development process (Theobald, 2005). The main reasons why government and NGO involvement is crucial are: (a) the government is the party that makes policies and regulations related to the direction of rural area development; (b) the government is the party that has the power to carry out the development of an area; and (c) balanced development needs to be controlled and criticized by NGOs (Sharpley and Telfer, 2002). The government and NGOs play an important role in supporting, developing, and introducing values that develop in a rural area (Amir et al., 2015). Government and NGO support and participation in tourism development and the management of natural and cultural resource potentials contribute to improving the quality of life of rural communities and to making the area viable (Dissart & Marcouiller, 2012; Sirisrisak, 2009; Untari et al., 2019). The importance of government and NGO roles in the development of rural areas is one of the backgrounds of this research, which specifically aims to analyze the polarization of government and NGO orientation towards the development of eco-rural tourism in rural areas in Kerinci Regency, Jambi Province. The results of this study are expected to provide input or initial information that can be used to make policies, rules, and programs for sustainable rural area development (Dissart & Marcouiller, 2012; Higgins-Desbiolles et al., 2019; Prince & Ioannides, 2017).
Research Location and Time of Data Collection
The research was conducted from December 2020 to October 2021, with Kerinci Regency, Jambi Province, as the case study. Geographically, Kerinci Regency is located between 1°40′ S to 2°26′ S and 101°08′ E to 101°50′ E, bounded by West Sumatra Province to the north and west, Bengkulu Province to the south, and Merangin and Bungo Regencies, Jambi Province, to the east. According to a Bappeda-Litbang analysis (2020), Kerinci Regency encompasses 3,449.90 km², or about 6.64% of Jambi Province. Approximately 2,047.03 km² (59.34%) of the total area is Kerinci Seblat National Park (Taman Nasional Kerinci Seblat, TNKS) and 1,401.87 km² (40.66%) is cultivation and residential area. Topographically, the villages in this research lie at elevations between 500 and 3,805 m above sea level, with temperatures of 18.6 °C to 28.9 °C. Specifically, the data were collected in 6 villages, consisting of Pulau Sangkar (Bukit Kerman District), Lempur Mudik (Gunung Raya District), Koto Petai (Tanah Cogok District), Sawahan Koto Majidin (Air Hangat District), Mekarjaya (Kayu Aro District), and Danau Tinggi (Gunung Kerinci District), and from 10 NGOs in Kerinci Regency.
Sampling Techniques and Research Samples
The samples in this study were selected using purposive sampling methods. Sample respondents were village governments (from 6 villages) and NGOs engaged in tourism, environment, and socio-culture. The total number of respondents was 80 people, consisting of 10 respondents from each village and 20 respondents from NGOs.
Analysis Method
The analysis was conducted to observe the current condition of government and NGO perceptions of eco-rural tourism developing in rural areas. The perception evaluation was obtained from the assessment of 7 aspects, 49 criteria, and 343 indicators of eco-rural tourism development, developed through the elaboration of various literature studies and field observations (Table 1). Data for each assessed aspect, criterion, and indicator were collected using a closed questionnaire designed with the One Score One Indicator Scoring System (Avenzora, 2008). The One Score One Indicator Scoring System is a model of analysis based on the elaboration of a series of questionnaires for collecting data and evaluating the variables that the researchers have determined. The range of scales used to obtain scores on each aspect of eco-rural tourism is 1-7 (a development of the 1-5 Likert scale). The meanings of the scores are as follows: one for "strongly irrelevant", two for "irrelevant", three for "somewhat irrelevant", four for "neutral", five for "somewhat relevant", six for "relevant", and seven for "strongly relevant". The higher the value obtained, the more relevant the concept of eco-rural tourism is to be applied and developed in rural areas. Conversely, the lower the value obtained, the less relevant the concept of eco-rural tourism is to be applied in rural areas.
Analysis of the orientation of government and NGO perceptions towards the development of eco-rural tourism was carried out with quantitative descriptive methods, while indications of polarization of government and NGO orientation were analyzed with comparative methods using the Kruskal-Wallis statistical test. Polarization of government and NGO orientation is shown by the distinctive differences in perception scores from each village towards the aspects (criteria and indicators) of eco-rural tourism development. The test for differences in average scores is indicated by the p-value or significance value (sig value). Polarization of government and NGO orientation is divided into two categories, namely polarization direction and polarization scale (Haribawa et al., 2020; Untari et al., 2019). The polarization direction is positive if the average score is ≥ 4 and negative if the average score is < 4. Furthermore, the polarization scale can be observed from the calculated Chi-square value and its significance value. If the calculated Chi-square value ≥ the Chi-square table value, or the p-value ≤ α, then the polarization scale is strong. If the calculated Chi-square < the Chi-square table value, or the p-value > α, then the polarization scale is low.
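A hedged R sketch of the decision rules described above, assuming a long-format data frame `scores` with columns `group` (respondent group), `aspect`, and `score` (1-7); the names and the α = 0.05 threshold are illustrative, not taken from the authors' materials.

```r
# For each aspect: mean score -> polarization direction; Kruskal-Wallis
# p-value -> polarization scale (strong if p <= alpha, low otherwise)
alpha <- 0.05
pol <- do.call(rbind, lapply(split(scores, scores$aspect), function(d) {
  kt <- kruskal.test(score ~ group, data = d)
  data.frame(aspect     = d$aspect[1],
             mean_score = mean(d$score),
             direction  = ifelse(mean(d$score) >= 4, "positive", "negative"),
             chi_square = unname(kt$statistic),
             p_value    = kt$p.value,
             scale      = ifelse(kt$p.value <= alpha, "strong", "low"))
}))
```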
Validity and Reliability Test of Research Aspects
The results of the validity and reliability tests on the aspects of the government and NGO orientation assessment were declared valid (r count > r table) and reliable (Cronbach's alpha > 0.65) (Table 2). This analysis shows that the assessment aspects of government and NGO orientation can be analyzed further.

Table 1. Aspects assessed in the evaluation of government and NGO orientation:
1. Socio-cultural: closely related to the various systems that apply and develop in rural areas.
2. Conservation and environment: parameters for assessing the relevance of society to the protection, utilization, and management of natural, cultural, and human resources.
3. Ethnic politics: closely related to the influence of custom on local community life. Criteria assessed include traditional leaders and their elections, customary laws and sanctions, customary government systems and decision-making, and community participation in customary government.
4. Economics: essential to evaluate the relevance of the community's production activities in their village. Assessments include natural resources and technology in processing raw materials for business/industry, production and marketing, financial management knowledge, incentives, and technical skills of the community.
5. Regional development: one of the critical aspects of rural community development. Criteria assessed include regional development factors and programs, regional growth centers, sustainable development, community participation and rural development cooperation, and rural infrastructure facilities.
6. Tourism: a critical support of eco-rural tourism, assessed to obtain public ideas about the availability of natural and cultural potential as tourism attractions, the availability of facilities and management activities at tourism objects/areas, the added value of tourism activities, and community participation in the development of sustainable tourism.
7. Ecological landscape: elements of the natural and artificial landscape and the horizontal structures forming them, as well as structural dynamics/changes (vertical/horizontal) and landscape functions.
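As a worked illustration of the reliability criterion reported above (Cronbach's alpha > 0.65), the following hedged R sketch computes alpha from scratch; `items` is an assumed data frame with one column per indicator, scored 1-7, and is not from the authors' materials.

```r
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)
cronbach_alpha <- function(items) {
  k <- ncol(items)
  item_vars <- apply(items, 2, var)
  total_var <- var(rowSums(items))
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Usage: cronbach_alpha(items) > 0.65 would meet the threshold used here
```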
The Pattern of Government and NGOs Orientation on Eco-rural Tourism Concept
The pattern of government and NGO orientation on the eco-rural tourism concept was observed from the assessment of government and NGO perceptions of the relevance of the indicator criteria in developing eco-rural tourism for rural areas. There are seven aspects assessed, namely (1) socio-cultural aspects; (2) conservation and environmental aspects; (3) ethnic politics aspects; (4) economic aspects; (5) regional development aspects; (6) tourism aspects; and (7) ecological landscape aspects. These aspects were determined based on consideration of the growing needs in rural areas (Altinay & Paraskevas, 2007; Pfueller et al., 2011). For each aspect, the government and NGOs assess seven criteria that have been developed by the researchers based on various literature developments and studies.
Socio-cultural aspects. Socio-cultural aspects are closely related to various systems that apply and develop in rural areas. Figure 1 shows that the values of government and NGO perceptions of the criteria for socio-cultural aspects are positive, in the somewhat relevant to relevant category (scores above 4). The government and NGOs both chose the criterion of the social life value system with the highest average scores, of 6.53 and 6.74, respectively. This criterion indicates that the government and NGOs agree that values such as: 1) maintaining an attitude of mutual consensus in decision-making, 2) maintaining an attitude of mutual cooperation in social life, 3) maintaining a sense of kinship in social life, 4) maintaining an attitude of tolerance in social life, 5) maintaining manners in social life, 6) maintaining local wisdom values, and 7) maintaining applicable norms are relevant things that rural communities must possess to build eco-rural tourism. Meanwhile, the lowest average score from the government's assessment is 5.93, on the criterion of traditional rituals and belief systems; this differs from the NGOs, which on average chose the language system criterion as the lowest, with a score of 5.94. Even with these lowest ratings, the criteria are still relevant for building eco-rural tourism, as indicated by scores above 5. Overall, on the socio-cultural aspect the government and NGOs have different perceptions, but the differences are not dominant, so the average direction of perception remains positive.

Conservation and environmental aspects. Conservation and environmental aspects cannot be separated from the conservation principles of protection, utilization, and management. Figure 2 shows that government and NGO perceptions of the criteria on conservation and environmental aspects are positive, in the relevant category (scores above 5). The perceptions of government and NGOs appear different: the government chose the utilization of natural resources criterion as the highest assessed score, with an average of 6.54, whereas NGOs chose the protection of natural resources criterion, with the highest score of 6.75. This difference does not change the direction of relevance; both criteria are relevant to developing eco-rural tourism. Another way to read this is that the government focused more on the utilization of natural resources while NGOs focused more on protecting natural resources; this would be a proper combination if there were collaboration and synergy between government and NGOs. On the other criteria, government and NGO perceptions show average scores above 6 (6.08-6.55), so it can be concluded that all criteria in the conservation and environmental aspects are relevant to eco-rural tourism development.

Ethnic politics aspects. Ethnic politics aspects are closely related to the influence of custom on the life order of rural communities. Figure 3 shows that the values of government and NGO perceptions on the criteria of ethnic politics aspects are positive, in the relevant category (scores above 5). The government's highest-rated criterion is customary leaders, with a score of 6.54. This can be interpreted as the government agreeing that rural communities need to understand the role, authority, rights, and obligations of, as well as an attitude of compliance with, traditional leaders so that rural areas under the eco-rural tourism concept can be realized. Meanwhile, NGOs perceive the role of the community towards customary government as the highest-rated criterion, with an average score of 6.59. This can be interpreted as NGOs thinking that rural communities need to understand that they have a role and need to participate morally and materially in customary government activities. The other criteria show average scores above 6 (6.14-6.59), so it can be concluded that the criteria assessed by the government and NGOs on ethnic politics aspects are relevant to eco-rural tourism development.

Economic aspects. The economic aspect assessment shows how the government and NGOs understand production activities to improve the economy of rural areas. Figure 4 shows that the values of government and NGO perceptions on the criteria of the economic aspect are positive, in the somewhat relevant category (scores above 4).
UNNES JOURNALS
The production aspect skills criterion has the highest average score, 6.30, in the government's assessment. This can be interpreted as the government considering that local communities need to know and apply aspects of production such as production scale, production technology, input of raw materials, human resources, quality standards, distribution networks, and monitoring. By comparison, in the assessment from the NGOs, the financial management knowledge criterion has the highest score, with an average of 6.29. This assessment means that, for NGOs, the community needs to know about financial management related to potential financial inputs, potential financial cooperation models, financial reporting models, financial expenditure plans, and financial information systems to be able to develop eco-rural tourism.

Regional development aspects. Regional development aspects are essential for observing the direction of rural development. Figure 5 shows that the values of government and NGO perceptions on the criteria of regional development aspects are positive, in the relevant category (scores above 5). The government and NGO assessments share the same lowest average score, 6.05, on the regional development factor criterion. This criterion indicates that the government and NGOs agree that the community needs to increase the elaboration of the grand design of regional development, the pattern of harmonization of development among clusters in the regional area, the pattern of regional development implementation, the pattern of participatory funding, the pattern of providing facilities and information, the pattern of providing qualified human resources, and the pattern of community innovation development to increase competitiveness in regional development in order to develop eco-rural tourism. The other criteria show average scores in the interval 6.06-6.53.
Figure 5 also shows that the differences in perceptions between villages have no effect on the relevance of the aspect, which remains positive.

Tourism aspects. The tourism aspect is one of the essential supporting aspects in building eco-rural tourism because it can provide added value from the potential of the rural community. Figure 6 shows that the values of government and NGO perceptions of the tourism aspect criteria are positive, in the relevant category (scores above 5). The tourism activity criterion has the lowest average score, 6.25, from both the government and the NGOs. This criterion shows that the government and NGOs agree that they need to know more about and identify the variety and quantity, market demand, management competence, infrastructure, quality standards, and consistency of various tourism activities as relevant indicators for developing eco-rural tourism. Figure 6 also shows that the average scores for the other criteria are not much different, ranging from 6.21 to 6.52, which means that all criteria in the tourism aspect are relevant to developing eco-rural tourism.

Ecological landscape aspects. The ecological landscape aspects cover the elements, structures, dynamics/changes of structure, and changes of function in rural areas. Figure 7 shows that the perception values of the government and NGOs on the criteria of the ecological landscape aspect are positive, in the relevant category (scores above 5). The elements of artificial landscape criterion has the highest average score, 6.19, in the government's assessment, whereas NGOs rated the elements of natural landscape criterion highest, with an average score of 6.35. This difference presents an excellent opportunity for the government and NGOs to develop rural areas together. Based on their assessments, the government and NGOs agree that rural communities need to know and identify the variety and characteristics of natural landscape elements, know and identify the location and distribution of natural landscapes, know and apply utilization patterns of various natural landscape elements, know and apply maintenance patterns of various natural landscape elements, know and understand functional patterns of various natural landscape elements, know and understand aesthetic patterns of various natural landscape elements, and know and understand interaction patterns among various rural natural landscape elements as relevant indicators for developing eco-rural tourism. Figure 7 also shows that the average scores on the other criteria are higher than 6, meaning that all the criteria are relevant for eco-rural tourism development. The different perceptions of the government and NGOs have no effect on the relevance of the aspects, which remains positive.
Polarization of Government and NGOs Towards Eco-rural Tourism Development
The general direction of the polarization of government and NGO orientation towards the concept of eco-rural tourism development is positive (scores > 4). The average score on each aspect is higher than 6, which means all aspects of the government and NGO assessment results are relevant for the development of eco-rural tourism. Each aspect's polarization scale indicates a difference in scores between the groups of respondents based on the significance value (p-value > 0.05, or calculated Chi-square < Chi-square table), as shown in Table 3. The government and NGOs' polarization direction for all aspects of eco-rural tourism is positive. This can be explained as the government and NGOs agreeing on the importance of knowing the relevant, must-have aspects in developing eco-rural tourism. In Table 3, it can be observed that the polarization direction of the conservation and environmental aspects, as well as the ethnic politics aspects, is positive with the same average score of 6.42. It can be interpreted that the government and NGOs agree that the conservation and environmental aspects, as well as the ethnic politics aspects, are the aspects with the highest average concern for the development of eco-rural tourism. However, this is not very significant compared to other aspects because the average values tend to be almost the same. Also in Table 3, it can be observed that the polarization direction of the tourism aspect is positive, with a score of 6.37. This can be interpreted as the government and NGOs agreeing to develop natural and cultural resources in rural areas in order to create economic, ecological, and socio-cultural added value (Leduc et al., 2021).
The government and NGOs agree that the development of eco-rural tourism must also satisfy the socio-cultural, regional development, economic, and ecological landscape aspects. The direction of all these aspects is positive, with an average value range of 6.11-6.32. In the socio-cultural aspect, the government must support rural communities in implementing the community communication systems, language systems, art systems, social life value systems, kinship systems, knowledge systems, and traditional ritual systems that develop in rural communities (Kallert et al., 2021). The positive direction of the regional development aspect can be interpreted as meaning that the government and NGOs must be directly involved in elaborating regional development factors and programs, and in knowing the growth centers of rural areas, infrastructure facilities, and the cooperation needed to make rural areas sustainable (Dorobantu & Nistoreanu, 2012).
The polarization direction of the economic aspect is also positive; this can be understood as meaning that the government and NGOs need to know the natural resources to be used as raw materials for business/industry. Processing raw materials using traditional equipment and modern technology must also be able to synergize, because rural communities do not have to abandon traditional processing patterns, which are local wisdom that should be maintained and understood by both the government and NGOs. In addition, the government and NGOs must also understand aspects of production and marketing networks through to financial management (Haribawa et al., 2020). The ecological landscape aspect also needs to be a benchmark in the development of eco-rural tourism. The positive direction of polarization in this aspect can be interpreted as meaning that the government and NGOs need to know the condition of the landscape, consisting of natural and artificial landscape elements, horizontal and vertical structures, as well as the dynamics/structural changes that form the unique landscape of the rural areas. The more distinctive a rural area, the greater the added value that can be generated (Ahmed, 2018).
Further analysis of each aspect, comparing the government and NGO assessments of the criteria for eco-rural tourism, is summarized in Table 4, which shows the criteria with the lowest and highest average ratings in each aspect. All polarization directions for the lowest and highest ratings are positive (averages greater than 4). Within the assessment patterns of the government and NGOs, several criteria share the same highest and lowest scores, namely the social life value system, regional development factor, and tourism activity criteria; for these criteria the government and NGOs already hold the same view. There are also opposite assessment patterns, namely for the role of the community towards customary government and the elements of natural landscape criteria, where the government gave the lowest rating while NGOs gave the highest. On the role of the community towards customary government criterion, the government's lowest rating can be interpreted as the government considering that rural communities already know their role in the customary government order. Meanwhile, NGOs gave this criterion the highest rating, meaning that NGOs consider the role of the community towards customary government to be very important and a priority. The opposite assessment of the natural landscape elements criterion can mean that the government considers the natural landscape condition to be fixed, so that it is not a top priority in developing ecological landscape criteria. In contrast, in the view of NGOs, natural landscape elements must be a priority, because if the conditions of natural landscape elements change, the ecological patterns of the landscape that have formed will also change.
Polarization in the analysis results shows that the government and NGOs both want eco-rural tourism development to be realized in rural areas. The development is expected to provide added value in economic, ecological, and socio-cultural forms. According to Efendi (2002:2), development must be deliberately planned and must be oriented towards community development. This is also consistent with Shaffer et al. (2004): development is a continuous progressive change that sustains individuals and communities through the development, intensification, and adjustment of resource utilization. This is likewise one of the goals of eco-rural tourism development. The right strategy for keeping the polarization direction positive is to continue building a shared, comprehensive understanding among all stakeholders, including the government and NGOs, of the importance of periodically assessing the application of eco-rural tourism criteria and indicators in rural areas.
The success or failure of development is strongly influenced by the orientation of the stakeholders involved in it, including the government and NGOs (Amir et al., 2015; Dorobantu & Nistoreanu, 2012). The objectives of eco-rural tourism development are basically threefold: 1) providing an understanding that the subject of rural area development is the rural area itself, not just a tourist attraction; 2) realizing rural area development based on the bottom-up concept (requests from local communities based on unique potentials); and 3) realizing sustainable rural development through tourism development based on unique potentials, to obtain economic, ecological, and socio-cultural added value (Andreopoulou et al., 2014; Marzo-Navarro et al., 2015; Mcareavey & Mcdonagh, 2011a).
Development will not run well if the stakeholders do not share the same vision and goals. For tourism development to succeed, at least three main stakeholders must play their roles: government institutions, the private sector (in this case NGOs), and local communities. The role played must be significant, covering the planning, implementation, and benefit-sharing processes. If the process runs fairly and the benefits are evenly distributed, sustainable tourism can be said to be realized (Ahmed, 2018; Keovilay, 2012; Skuras, n.d.). Cooperation between the government and NGOs is one of the benchmarks for the success of regional development: each party must be able to collaborate while still highlighting its own role, without overlapping. Kubickova & Campbell (2018) described the ultimate goal of government as creating employment opportunities and contributing to the overall economic and social development of the nation. Government involvement initially takes the form of providing infrastructure and facilities, concentrating on roads and utilities. A suitable balance between the public sector, the private sector, and NGOs is vital to ensuring optimal outcomes for rural areas. Government can take on a more leading role and adopt the role of entrepreneur, formulating policies, developing and initiating plans, and operating and providing tourism and hospitality services (Ahmed, 2018; Das & Chatterjee, 2015; Kubickova & Campbell, 2020). Tourism policy and planning can also proceed by incorporating state and nonstate organizations, and eco-rural tourism development increases the partnership between the government and NGOs (Drumm & Bank, 2005; Keyim, 2018). As argued by Keyim (2018), in order to
achieve socioeconomic development in rural areas, fair and effective tourism collaborative activities are needed. In the cases reported by Aziz et al. (2015), Keyim (2018), and Šimková (2007), tourism was one of the main sources of village income and employment, and the villages benefited from local tourism development. Collaboration between the government and NGOs in the decision making and implementation of eco-tourism development has been perceived as a positive contribution to the village. This can be promoted through broad and equitable collaboration among state and nonstate actors within and beyond the village. Eco-rural tourism development in rural communities can contribute to local income and employment creation, local amenities, and the conservation of local cultural resources. Local natural and cultural resources can also be utilized through eco-rural tourism development in the village, while at the same time the sociocultural heritage is protected (Ku, 2016; Liu et al., 2020; Martín et al., 2018). Central and local governments coordinating with each other to promote rural tourism synergistically should improve the development of local communities' areas (Gao & Wu, 2017; Liu et al., 2020; Nørgaard & Thuesen, 2021). A deeper look at the decisions and actions of local and central government reveals different roles at different levels in eco-rural tourism development. As policy makers, however, governments understand that their support is needed by local communities in advancing skill and knowledge (Kiptiah et al., 2018; Situmorang et al., 2019; Zenelaj, 2013). Similarly, NGOs also play a pivotal role in supporting competitiveness and developing community-based planning processes as part of improving eco-rural tourism. Therefore, stakeholder collaboration is key to eco-rural tourism, as it offers active participation of all key stakeholders and forms a management system to support eco-rural tourism development.
CONCLUSION
The government and NGOs are among the stakeholders with an essential role in the development of sustainable rural areas. Information about the orientation of the government and NGOs can be used as a basis for determining the polarization direction of eco-rural tourism development in rural areas. The perception orientation of the local government indicates that the eco-rural tourism concept is relevant to rural area development. The aspects of concern to the government and NGOs for eco-rural tourism development are, respectively, the conservation and environmental, ethnic politics, tourism, socio-cultural, regional development, economic, and ecological landscape aspects. The polarization direction of eco-rural tourism development based on the orientation of the government and NGOs is positive, with a strong polarization scale. This means that the government and NGOs have agreed to work together with other stakeholders to develop rural areas into eco-rural tourism by meeting the indicators that have been formulated. The government and NGOs have an excellent opportunity to collaborate in developing their rural areas in accordance with the eco-rural tourism concept. The right strategy for keeping the polarization direction positive is to continue building a comprehensive shared understanding of the importance of assessing the application of eco-rural tourism criteria and indicators in rural areas periodically and continuously. It is not only the government and NGOs that determine the polarization direction of rural area development; other stakeholders also have an important role, so that a shared understanding and synergy among stakeholders can become a force for realizing eco-rural tourism development according to the desired purposes.
Figure 1. Government and NGOs perception of socio-cultural aspects
Figure 2. Government and NGOs perception of conservation and environmental aspects
Figure 3. Government and NGOs perception of ethnic politics aspects
Figure 4. Government and NGOs perception of economic aspects
Figure 5. Government and NGOs perception of regional development aspects
Figure 6. Government and NGOs perception of tourism aspects
Figure 7. Government and NGOs perception of ecological landscape aspects
Table 1. Aspects for assessing the concept of eco-rural tourism only.
Table 2. Validity and reliability test of eco-rural tourism aspects
Table 3. Validity and reliability test of eco-rural tourism aspects. If the calculated Chi-square is less than the Chi-square table value, or the p-value (Asymp. Sig.) is greater than 0.05, there is no noticeable difference in the average values. The Chi-square table value for α = 0.05, df (1;180) is 3.84.
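As a concrete illustration of the decision rule in the Table 3 note, the following Python sketch applies it directly; the calculated statistic is a hypothetical placeholder, and scipy is assumed to be available:

# Apply the Table 3 decision rule: no noticeable difference when the
# calculated chi-square is below the table value, or Asymp. Sig. > 0.05.
# The calculated statistic here is a hypothetical placeholder.
from scipy.stats import chi2

alpha = 0.05
critical = chi2.ppf(1 - alpha, df=1)        # ~3.84, matching the table value
chi_sq_calculated = 2.10                    # hypothetical example statistic
p_value = chi2.sf(chi_sq_calculated, df=1)  # Asymp. Sig. for the statistic

if chi_sq_calculated < critical or p_value > alpha:
    print("No noticeable difference in the average values")
else:
    print("The average values differ significantly")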
Table 4. Government and NGOs assessment pattern towards eco-rural tourism criteria
| 7,456.4 | 2023-03-30T00:00:00.000 | [
"Economics"
] |
TCAD Modelling of Magnetic Hall Effect Sensors
In this paper, a gallium nitride (GaN) magnetic Hall effect current sensor is simulated in 2D and 3D using the TCAD Sentaurus simulation toolbox. The model takes into account the piezoelectric polarization effect and the Shockley-Read-Hall (SRH) and Fermi-Dirac statistics for all simulations. The galvanic transport model of TCAD Sentaurus is used to model the Lorentz force and magnetic behaviour of the sensor. The current difference, total current, and sensitivity simulations are systematically calibrated against experimental data. The sensor is optimised by varying geometrical and biasing parameters for various ambient temperatures. This unintentionally doped, ungated current sensor has its sensitivity enhanced to 16.5 %T^-1 when the spacing between the drains is reduced to 1 µm and the source-to-drain spacing is increased to 76 µm. It is demonstrated that the sensitivity degrades at 448 K (S = 12 %T^-1) and 373 K (S = 14.1 %T^-1) compared to 300 K (S = 16.5 %T^-1). The simulation results demonstrate a high sensitivity of GaN sensors at elevated temperatures, outperforming silicon counterparts.
Introduction
Over the course of more than a century of development, Hall effect devices have been used to measure magnetic fields, uncover details of carrier transport phenomena in solids, identify the presence of a magnet, and illustrate basic physics principles [1]. Hall effect devices did not become widely used in sensing applications until the development of semiconductor technology. The first commercially accessible Hall effect magnetic sensors were introduced in the mid-1950s, a few years after high-mobility compound semiconductors were discovered. Since then, the development of Hall effect devices has profited from the utilisation of high-quality materials and sophisticated, very efficient production methods made available in the microelectronics industry. However, advancements in microelectronics have also increased the demand for high-quality and reasonably priced sensors. These sensors provide the basis of highly developed and important industrial activity in the modern world [2]. Hall sensors are unique varieties of magnetic sensors that operate according to the Hall effect theory [3]. They are linear devices that are readily integrable [4].
In contrast to typical silicon-based sensors, the gallium nitride sensors can function at high temperatures and have strong magnetic field sensitivity because they possess high electron mobility [5].
Silicon Hall sensors are still in widespread use today because of their additional benefits of being inexpensive, simple to manufacture, and compatible with complementary metal oxide semiconductor (CMOS) technology. Due to its small bandgap, however, silicon technology cannot function above 200 °C [6]. Gallium nitride (GaN) technology brings several advantages, such as a wider operating temperature range, and is energy efficient. It simplifies cooling at the system level, brings a higher current density to applications, and enables a higher breakdown voltage and faster switching frequencies [7] compared to silicon counterparts. It is used in numerous industries, including automotive, automation, and nondestructive testing. Nondestructive testing is a branch of technology that uses magnetic sensors, acoustic sensors, RF radars, and X-ray scanners to perform tests including radiography, eddy current, ultrasonic, and others.
Nearly all power electronic conversion systems rely heavily on current measurement for device monitoring and for efficiency and reliability improvement [8]. To assist the development and deployment of next-generation power electronic systems offering reduced carbon emissions, novel measurement solutions must deliver advanced features including lower loss, higher precision, and a broader working temperature range. This work is focused on a new generation of Hall effect sensors with advanced features based on wide-bandgap GaN high electron mobility transistor technology [9]. Since each magnetic sensor technology delivers a certain sensitivity over a finite temperature and frequency range, GaN-based power electronic devices have given rise to a new generation of sensors providing a larger bandwidth (1000×), shorter response times (1/1000), and increased sensitivity (3×) compared to their traditional silicon counterparts. Additionally, they offer a compact design, saving up to 30 cubic centimetres of space compared with traditional materials, and a wide operating temperature range of −80 °C to 225 °C, suitable for current-measuring solutions while remaining cost effective at the system level [10].
This work presents a comprehensive study for optimising device performance by combining several elements, such as the temperature stability and sensitivity of the GaN sensor, by varying its geometry and biasing parameters. This all-encompassing approach builds on the fundamental qualities of GaN-based Hall sensors. What makes this work significant for the electronics research community is the possibility of creating enhanced Hall sensors that can be extensively used in crucial applications involving harsh environments, such as extreme industrial automation and aerospace and automotive systems. This study also benefits the larger public, providing a viable path for designing cutting-edge sensing technologies that can enhance sustainability, efficiency, and safety across a range of industries by optimising these devices holistically.
This work may deliver a significant contribution to the future integration of GaN power transistor chip technology [11], for example as an effective progressive approach in energy systems promising the integration of sensors for in situ monitoring [12], and into the rechargeable batteries used for e-mobility, in order to minimise charging times and boost battery longevity [13].
In this paper, we present simulation, calibration, analysis, and optimisation of gallium nitride magnetic Hall sensors consisting of three contacts: a source and a drain divided into two halves, Drain 1 and Drain 2. The current differences between the two drain terminals are computed under different biasing and temperature conditions. Additionally, it is shown that different geometrical variations of the sensor's sensing area can produce significant sensitivity improvements.
Materials and Methods
The gallium nitride Hall sensors detect a magnetic flux by using a split drain contact (Figure 1) [14]. These sensors experience a Lorentz force in the presence of a magnetic field of strength B, which originates from a wire carrying electric current within the circuit [15].
In other words, when a charged particle q travels with velocity v in an electric field E_y and a magnetic field B_z, it experiences a force called the Lorentz force, given by [16]

F = q(E + v × B)

This force is the result of the electric and magnetic fields deflecting the moving charge carriers within the sensor body, thereby creating a current imbalance measurable between the two drains; this imbalance can then be used to calculate the sensitivity of the sensor through Equation (4) [14].
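As a quick numeric illustration of this force (not taken from the paper; the field and velocity values below are hypothetical), a short Python sketch:

# Numeric check of the Lorentz force F = q(E + v x B) on one electron.
# The field and velocity values are illustrative, not from the paper.
import numpy as np

q = -1.602e-19                    # electron charge (C)
E = np.array([0.0, 1e4, 0.0])     # electric field along y (V/m)
v = np.array([1e5, 0.0, 0.0])     # carrier drift velocity along x (m/s)
B = np.array([0.0, 0.0, 30e-3])   # magnetic field along z, 30 mT

F = q * (E + np.cross(v, B))      # Lorentz force (N)
print(F)  # the v x B term alters the y-component, deflecting the carrier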
Sensitivity is mainly split into two types: current-scaled sensitivity and voltage-scaled sensitivity. When the Hall sensor's output voltage varies in direct proportion to the applied magnetic field, the behaviour is characterised by the voltage-scaled sensitivity, measured in millivolts per gauss (mV/G) and given by [6]

S_v = V_H / (V_S · B)

where S_v is the sensitivity with respect to the supply voltage, V_H is the Hall voltage, V_S is the supply voltage, and B is the magnetic field.
When the Hall sensor's output current varies in direct proportion to the applied magnetic field, the behaviour is characterised by the current-scaled sensitivity, expressed in microamperes per gauss (µA/G) and given by [6]

S_I = V_H / (I · B)

where S_I is the current-scaled sensitivity, V_H is the Hall voltage, I is the output current, and B is the magnetic field. When a high degree of accuracy is needed, voltage-scaled sensitivity is desirable, whereas a high degree of precision calls for current-scaled sensitivity [17].
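A minimal Python sketch of these two definitions, assuming the common textbook forms S_v = V_H/(V_S·B) and S_I = V_H/(I·B); all numbers are hypothetical placeholders rather than measurements from this work:

# Voltage- and current-scaled sensitivities, assuming the textbook forms
# S_v = V_H/(V_S*B) and S_I = V_H/(I*B). All values are hypothetical.
V_H = 2.0e-3    # Hall voltage (V)
V_S = 0.5       # supply voltage (V)
I_out = 1.0e-3  # output current (A)
B = 30e-3       # magnetic field (T)

S_v = V_H / (V_S * B)    # voltage-scaled sensitivity (1/T)
S_I = V_H / (I_out * B)  # current-scaled sensitivity (V A^-1 T^-1)
print(f"S_v = {S_v:.3f} T^-1, S_I = {S_I:.2f} V/(A*T)")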
Gallium Nitride Hall Effect Device Structure
Our GaN sensors are developed on a silicon substrate with step-graded AlGaN intermediary layers, resulting in unintentionally doped GaN/Al0.25Ga0.75N/GaN heterostructures. The thicknesses of the GaN buffer, AlGaN barrier, and GaN cap are 0.002 µm, 0.025 µm, and 1.8 µm, respectively [15]. A four-inch-diameter GaN wafer on a silicon substrate was divided into smaller wafer pieces measuring three centimetres by three centimetres. A specially designed three-mask method was then employed to build various devices onto the small wafer pieces. The first mask made it possible to dry etch the wafers and produce mesas, or isolated active zones, using inductively coupled plasma (ICP) [14].
Using physical vapour deposition, the second mask was utilised to create Ohmic contacts by sputter depositing a Ti (20 nm)/Al (100 nm)/Ti (30 nm)/Au (100 nm) metal stack. This was followed by a lift-off procedure and a brief, fast annealing operation at 800 °C in an N2 environment. Using plasma-enhanced chemical vapour deposition, a conventional SiO2 passivation layer measuring 100 nm was produced. Lastly, a fluorine-based ICP etch was used to remove passivation from the Ohmic contact locations, enabled by the third mask [14].
The simulated sensor has a source length of L_S = 4.5 µm and a drain length of L_D = 4.5 µm. The source width is 20 µm and both drain widths are 7.5 µm. The drain-to-source distance is 26 µm. The overall length and width of the sensor are L = 35 µm and W = 20 µm. The separation between the two drain contacts is W_DD = 5 µm. The SiO2 passivation thickness is t_SiO2 = 0.02 µm [14]. The sensor is doped with a phosphorus active concentration of 1 × 10^17 cm^-3 and a boron active concentration of 1 × 10^15 cm^-3. The three contacts are defined and the computational grid is constructed in the simulation program to enable the calculation of the device parameters. This process delivered the three-dimensional (3D) structural model of the emulated device.
The piezoelectric polarizations are active in all simulations and the electron sheet density is set to N_S = 1 × 10^13 cm^-2 via polarization adjustments. The surface states are placed at E_C − E_T = 2.5 eV with a density of D_surface = 4.5 × 10^19 cm^-3 to define the surface potential. The Shockley-Read-Hall (SRH) and Fermi-Dirac statistics are enabled for all simulations. The mobility model is Caughey-Thomas. Self-heating effects are neglected due to the low biasing conditions.
The Lorentz force and magnetic effects are modelled using the galvanic transport model of TCAD Sentaurus. The magnetic field (B) is applied perpendicular to the surface, in units of tesla (T).
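Since the deck uses a Caughey-Thomas mobility model and later runs go up to 500 K, a rough sketch of how such a model degrades low-field mobility with doping and temperature may be useful; the parameters below are illustrative GaN-like values (only the 1700 cm^2/Vs figure is quoted later in this paper), not the calibrated Sentaurus parameter set:

# Caughey-Thomas-style low-field mobility with power-law temperature
# scaling of the lattice-limited term. Parameter values are illustrative
# GaN-like numbers, not the calibrated Sentaurus deck.
def caughey_thomas(N, T, mu_min=55.0, mu_max=1700.0,
                   N_ref=2e17, alpha=1.0, gamma=2.0):
    """Electron mobility (cm^2/Vs) for doping N (cm^-3) at temperature T (K)."""
    mu_max_T = mu_max * (T / 300.0) ** (-gamma)  # phonon-limited term drops with T
    return mu_min + (mu_max_T - mu_min) / (1.0 + (N / N_ref) ** alpha)

for T in (300, 400, 500):
    print(f"T = {T} K: mu = {caughey_thomas(1e17, T):.0f} cm^2/Vs")

The declining mobility with temperature in this kind of model is the qualitative mechanism behind the falling total current reported later in Figure 7a.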
GaN Hall Sensor Model Setup
Figure 2 shows the simulated structure of a 3D GaN Hall effect magnetic sensor in the Sentaurus structure editor toolbox. When a positive voltage is applied at D1 and D2, the electric field propagates through the two-dimensional electron gas (2DEG) until it reaches the source, allowing the electric currents I_DS1 and I_DS2 to flow between the source and the drains D1 and D2, respectively. If the drain contacts have the same effective area, then with no magnetic field the currents are balanced (I_DS1 − I_DS2 = 0), apart from any offset caused by alignment error. Electrons in the currents are deflected by an applied magnetic field, resulting in a current discrepancy I_DS1 − I_DS2 = ∆I. One can calculate the magnetic field value by measuring ∆I [17]. From this measured ∆I value, the relative sensitivity of the sensor can be determined by the following equation [14]:

S_r = ∆I / (I · B)

where S_r is the relative sensitivity (per tesla), I is the total drain current (I_DS1 + I_DS2), and B is the applied magnetic field.
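A minimal sketch of this relative-sensitivity calculation; the two drain currents are hypothetical values chosen only to give an imbalance of a plausible order:

# Relative sensitivity S_r = dI / (I_total * B), as defined above.
# The drain currents are hypothetical values.
I_DS1 = 1.0025e-3                       # current into Drain 1 (A)
I_DS2 = 0.9975e-3                       # current into Drain 2 (A)
B = 30e-3                               # applied magnetic field (T)

delta_I = I_DS1 - I_DS2                 # current imbalance (A)
I_total = I_DS1 + I_DS2                 # total drain current (A)
S_r = delta_I / (I_total * B) * 100.0   # relative sensitivity (%/T)
print(f"S_r = {S_r:.1f} %/T")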
In Figure 3a,b, the sensor is optimised for higher sensitivity, for which the optimal L/W ratio is required. The sensitivity depends on the geometric correction factor G, which varies with the L/W ratio and the Hall angle θ_H according to Equation (5) [19], valid for 0 ≤ θ_H ≤ 0.45 radians and 0.85 ≤ L/W ≤ infinity. From Equation (5), G depends on the Hall angle and the L/W ratio. The Hall angle is the angle of inclination of the current density with respect to the total electric field, given by

tan θ_H = |E_H| / |E_e|

where |E_H| is the absolute value of the Hall electric field and |E_e| is the external electric field [1].
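Equation (5) itself is not reproduced in the extracted text, so the sketch below instead uses a widely quoted small-Hall-angle approximation for rectangular Hall plates, G ≈ 1 − (16/π²)·exp(−πL/(2W)); this substitute formula is an assumption, used only to show how G saturates as L/W approaches 4:

# How G saturates with L/W, using the ASSUMED approximation
# G ~ 1 - (16/pi^2) * exp(-pi*L/(2W)) for small Hall angles
# (a stand-in for the paper's Equation (5), which is not quoted here).
import math

def G_approx(L_over_W):
    return 1.0 - (16.0 / math.pi**2) * math.exp(-math.pi * L_over_W / 2.0)

for r in (1, 2, 3, 4, 5, 6):
    print(f"L/W = {r}: G ~ {G_approx(r):.4f}")
# By L/W = 4 the correction factor is within ~0.3% of its maximum of 1,
# consistent with choosing L/W = 4 as the optimal ratio.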
The Hall field depends on the current density and the magnetic field through the relationship [1]

E_H = R_H (J × B)

where R_H is a parameter called the Hall coefficient, J is the current density at the sensor surface, and B is the magnetic field applied perpendicular to the sensor in the z direction.
The strength and sign of the Hall effect in a given material are described by the Hall coefficient, a material property. The unit of the Hall coefficient is V·m·A^-1·T^-1 (volt metre per ampere tesla).
The Hall coefficient of strongly extrinsic semiconductors (i.e., semiconductor material doped with a high concentration of impurities to enhance its electrical conductivity and sensitivity) is given by [1]

R_H = 1 / (q · n)

where q is the charge of a single carrier and n is the carrier concentration of the n-type material.
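A worked example of this coefficient and the resulting Hall field magnitude, with q carrying its sign and with an illustrative carrier density and current density (not values from this device):

# Hall coefficient of a strongly extrinsic n-type sample, R_H = 1/(q*n),
# with q carrying its sign; n, J, and B below are illustrative values.
q = -1.602e-19          # electron charge (C)
n = 1e17 * 1e6          # carrier concentration: 1e17 cm^-3 converted to m^-3

R_H = 1.0 / (q * n)     # Hall coefficient (m^3/C), negative for electrons
print(f"R_H = {R_H:.3e} m^3/C")

J = 1e6                 # current density (A/m^2)
B = 30e-3               # magnetic field (T)
E_H = abs(R_H) * J * B  # Hall field magnitude (V/m)
print(f"|E_H| = {E_H:.3e} V/m")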
Simulation Results for a 2D GaN Transmission Line Model (TLM)
A transmission line model of GaN is simulated in 2D with the same dimensions as the 3D GaN Hall sensor, because the 3D GaN structure is simply the 2D GaN TLM with its drain split into two identically sized halves. The structure of the GaN TLM comprises a SiO2 passivation layer along with a GaN cap, a GaN buffer, and an AlGaN barrier. Nonetheless, the 2D model has only one drain contact along with a source contact.
A channel is formed at the interface of the GaN buffer and the AlGaN barrier. Figure 4 shows the energy band diagram and electron density. E_C is the conduction band energy, which refers to the energy at the bottom of the conduction band of the semiconductor, while E_T is the trap energy level, which refers to the energy of a trap state within the bandgap of the semiconductor. The surface states are placed at E_C − E_T = 0.67 eV with a density of D_surface = 4.5 × 10^19 cm^-3 to define the surface potential. The capture cross sections of the electrons and holes are set to 1 × 10^-14 cm^2 [20].
As shown in Figure 5, as the applied drain voltage gradually increases from 0 V to 1 V, the drain current also rises. This is recorded at a room temperature of 300 K. With higher drain voltage, more electrons flow in the 2DEG channel, hence the rise in drain current.
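As a rough cross-check on the magnitude of this current, a back-of-envelope ohmic estimate assuming pure 2DEG sheet conduction and no contact resistance, using the sheet density from the model setup and the 1700 cm^2/Vs mobility quoted in the optimisation section (the TLM's actual calibrated mobility may differ):

# Back-of-envelope ohmic estimate of the TLM current at 1 V, assuming pure
# 2DEG sheet conduction with no contact resistance. Sheet density is from
# the model setup; the mobility is the value quoted in the optimisation
# section and may differ from the TLM's calibrated value.
q = 1.602e-19             # elementary charge (C)
n_s = 1e13                # 2DEG sheet density (cm^-2)
mu = 1700.0               # electron mobility (cm^2/Vs)
L_SD = 26e-4              # source-drain spacing: 26 um, in cm
W = 20e-4                 # device width: 20 um, in cm

R_sheet = 1.0 / (q * n_s * mu)  # sheet resistance (ohm/sq)
R = R_sheet * (L_SD / W)        # channel resistance (ohm)
I = 1.0 / R                     # current at V_DS = 1 V (A)
print(f"R_sheet ~ {R_sheet:.0f} ohm/sq, I(1 V) ~ {I*1e3:.2f} mA")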
Simulation Results for a Gallium Nitride Hall Sensor
The GaN Hall sensor contains a GaN buffer layer, an AlGaN barrier, and a GaN cap along with a passivation layer. From Figure 6b, a coarser mesh is applied to the GaN buffer because it is a large uniform area, while a finer mesh is used at the interface between the GaN and AlGaN and on the GaN cap, to reduce the computational time and improve accuracy.
In Figure 7a, when a sweeping voltage of 0 to 1 V is applied across the drains, the total current from both drains is obtained at different temperatures. It is observed that the total current falls with each temperature rise, mainly due to the degradation in mobility caused by the increase in scattering mechanisms within the sensor [14,21-23]. The current imbalance measured between the D1 and D2 contacts rises with increasing magnetic field strength, as shown in Figure 7b, although both the total current and the estimated imbalance decrease as temperature increases.
Validation
The 3D GaN simulations are validated in Figures 8 and 9, where the sensor is simulated for current imbalance against a magnetic field sweeping from 0 to 30 mT, and for output current against a drain-source voltage sweeping from 0 to 0.5 V. Both the simulation and experimental results are verified at elevated temperatures of 300 K, 373 K, and 448 K.
Figure 9. Simulated total current (mA) from Drain 1 and Drain 2 against a magnetic field sweeping from 0 to 30 mT at 300 K, 373 K, and 448 K, at an applied voltage sweeping from 0 to 0.5 V.
Figure 10a shows the sensitivity of the GaN sensor as the temperature rises from 300 K to 448 K, while Figure 10b illustrates the sensitivity of the sensor against a magnetic field increasing from 0 to 30 mT, for both the experiment and the simulation, where the dotted lines are the experimental curves and the solid lines are from the simulation. The sensitivity falls as expected with increasing temperature.
The sensitivity remains constant with increasing magnetic field, with the TCAD simulation results showing good agreement with the experimental data of previous work [24]. The Hall sensor's sensitivity should not be mistaken for the sensor's differential current or voltage output, which varies linearly with the magnetic flux density until reaching the saturation zones for the north pole and, symmetrically, for the south pole, as presented in [11].
Optimisation
Figures 11-13 present the current difference, total current, and sensitivity at V_DS = 0.5 V, with an electron mobility of 1700 cm^2 V^-1 s^-1 and sensor dimensions of L = 80 µm, W = 20 µm, W_DD = 5 µm, 2 µm, and 1 µm, and L_SD = 71 µm. The current difference, total current, and sensitivity degrade with increasing distance between the split drains. As the drains move farther apart, the drain resistance decreases, which increases the total current flowing in the device and reduces the current deflection, resulting in a lower Lorentz force and hence degraded sensitivity [25].
Figures 14-16 present the current difference, total current, and sensitivity at V_DS = 0.5 V, with an electron mobility of 1700 cm^2 V^-1 s^-1 and sensor dimensions of L = 80 µm, W = 20 µm, W_DD = 5 µm, 2 µm, and 1 µm, and L_SD = 76 µm. The current difference, total current, and sensitivity improve as the distance between the source and drain is increased. When the source contact width is scaled down, the effective contact area reduces, causing the total current flowing in the device to decrease. The current deflection increases, resulting in a larger Lorentz force and improved sensitivity. Also, from Equation (6), when the current density reduces, the Hall electric field diminishes, which causes the Hall angle to drop off, because the Hall angle is proportional to the Hall electric field. When the Hall angle reduces, the geometric correction factor increases and the sensor's sensitivity also improves [25].
Discussion
The GaN sensor is simulated and presented in Figure 6a. The drains are of equal length and width so as to avoid unwanted offsets when there is no magnetic field. Highly doped regions are created under the drain contacts, which are Ohmic in nature owing to field and thermionic emission [25]. Figure 6b illustrates the meshing profile of the modelled sensor's surface plane. Meshing the model in three dimensions is complex because the coarser the mesh, the larger the convergence issues, leading to spurious current deflection in the absence of the magnetic field. A fine mesh involves more elements on the same surface than a coarse one; therefore, although the accuracy increases, the simulation speed decreases and more computational resources are required (i.e., CPU power, RAM, parallel computing capabilities, etc.). That is why a coarse mesh is used where there is a large homogeneous area (without details) and a finer mesh is used only for the details. Here, in the 3D GaN model, a coarser mesh is applied to the GaN buffer because it is a large uniform area, while a finer mesh is considered for the interface of GaN and AlGaN, and on the GaN cap, to improve the speed and accuracy of the simulation results.
In Figure 3a,b, to obtain the ideal value of the L/W ratio, Matlab is used to visualise Equation (5). Since the sensitivity is dependent on the L/W ratio, finding the ideal value is necessary for sensor optimisation. Regarding the 3D plot of G, L/W, and θ_H, where G is the geometrical correction factor varying from 0 to 1 and θ_H ranges from 0 to 0.45 radians, plotting G against L/W shows that G increases and then flattens to a constant line as L/W approaches 4. Thus, the plot illustrates that 4 is the ideal L/W ratio.
A transmission line model (TLM) of GaN in 2D is simulated before the 3D model, keeping the dimensions the same as those of the 3D GaN Hall sensor. A channel is formed at the interface of the GaN buffer and the AlGaN barrier. Figure 4 shows the energy band diagram and electron density of this 2D GaN TLM model. E_C is the conduction band energy, which refers to the energy at the bottom of the conduction band of the semiconductor, while E_T is the trap energy level, which refers to the energy of a trap state within the bandgap of the semiconductor. The surface states are placed at E_C − E_T = 0.67 eV with a density of D_surface = 4.5 × 10^19 cm^-3 to define the surface potential. The capture cross sections of electrons and holes are set to 1 × 10^-14 cm^2. The GaN sensor is composed of a SiO2 passivation layer, a GaN cap, an AlGaN barrier, and a GaN buffer. In the GaN cap region, the conduction band E_c and valence band E_v show band bending due to the presence of the electric field. In the AlGaN layer, significant bending is observed due to the polarization effects and the formation of a two-dimensional electron gas layer at the interface of the AlGaN barrier and the GaN buffer. The bands in the GaN buffer region then become flat, indicating equilibrium conditions with no significant band bending. The band bending of E_c and E_v is important for the confinement of electrons in the 2DEG. In the AlGaN/GaN region, a high electron concentration is indicated by the Fermi level located above the conduction band edge. The electron density is illustrated on the right axis with orange circular markers and the energy is defined on the left with black circular markers.
Figure 5 simulates the drain current against the drain voltage of the GaN TLM at a voltage sweeping from 0 to 1 V. The drain current grows in tandem with the drain voltage as the voltage progressively climbs from 0 to 1 V. The higher the drain voltage, the stronger the Lorentz force and the electric field, increasing the flow of electrons in the 2DEG channel.
Figure 7a presents the simulated total current from Drain 1 and Drain 2 against drain-source voltage at different temperatures of 300 K, 400 K, and 500 K, at an applied voltage sweeping from 0 to 1 V. It shows that when the temperature rises, the combined current from both drains decreases; the primary cause is the reduction in mobility brought on by the sensor's increased number of scattering mechanisms. Figure 7b shows the current imbalance versus temperature at a drain voltage of 1 V, under a rising magnetic field intensity (B = 0 to 30 mT), at 300 K, 400 K, and 500 K. As the intensity of the magnetic field increases, so does the current imbalance measured between the two drain contacts. However, when the temperature rises, both the total current and the current imbalance drop.
The 3D GaN simulations are validated in Figures 8 and 9, where the sensor is simulated for current imbalance against a magnetic field sweeping from 0 to 30 mT, and for sensor output current against a drain-source voltage sweeping from 0 to 0.5 V. Both simulation and experiment are verified at temperatures of 300 K, 373 K, and 448 K. As previously mentioned, an increase in the magnetic field and drain-source voltage causes the current imbalance and total current to rise; however, an increase in temperature causes both to drop because of a decrease in mobility and saturation velocity. GaN sensors are subject to a variety of scattering phenomena, including phonon scattering [26], dislocation scattering [27], ionized impurity scattering [28], and interface roughness between the AlGaN top layer and the 2DEG channel. Mobility has been demonstrated to be impacted by ionized impurity scattering at low temperatures, but when temperatures rise above 300 K, phonon scattering takes over as the primary source of scattering [23,26,29]. Furthermore, both high and low temperatures may be influenced by surface roughness [26]. Since all of the temperatures examined in this work were higher than 300 K, surface roughness and phonon scattering are thought to have played a role in the decreasing current that was observed [26,30].
When the temperature rises from 300 K to 448 K, the sensitivity decreases from 12.27% to 9.6%, as illustrated in Figure 10a,b. The drop in relative sensitivity found at increasing temperatures is attributed to the mobility deterioration of electrons in the 2DEG channel due to increased phonon scattering [26].
The sensor is tuned to increase its sensitivity. The geometric correction factor has a direct correlation with sensitivity; the formula for G, given by Equation (5), indicates that it relies on the Hall angle and the L/W ratio. The L/W ratio is optimal at 4, as seen in Figure 3b, so the length and width are scaled up at a ratio of 4 for the next set of simulations, giving L = 80 µm and W = 20 µm. To achieve optimisation, two parameters are changed. In the first scenario, illustrated by Figures 11-13, which show the simulations for current imbalance, total current, and sensitivity, the distance between Drains 1 and 2 is shortened while maintaining the drain-to-source distance at 71 µm.
In the second scenario, the drain-to-source distance is scaled up from 71 µm to 76 µm, as displayed in Figures 14-16 for current imbalance, total current, and sensitivity. At ambient temperature, the sensitivity decreases from 13.8% to 12% when W_DD increases from 1 µm to 5 µm at L_SD = 71 µm; at higher temperatures, the sensitivity reduces considerably further. When the drains are spaced apart, the drain resistance decreases, increasing the total current flowing through the device and reducing the current deflection, which lowers the Lorentz force and degrades the sensitivity as a result.
In the second case, the sensitivity improves from 13.8% to 16.5% at a constant W_DD of 1 µm when L_SD increases from 71 µm to 76 µm at a room temperature of 300 K. The total current, current difference, and sensitivity improve as the distance between the source and the drain is increased. The effective contact area shrinks with a drop in source contact width, which also results in a decrease in the device's overall current flow. As the current deflection rises, a significant Lorentz force is produced, increasing the sensitivity. Additionally, given that the Hall angle is proportional to the Hall electric field, Equation (6) shows that a decrease in current density also results in a decrease in the Hall electric field. Both the sensitivity and the geometric correction factor rise with a decrease in the Hall angle. Table 1 summarises the sensitivity (%T^-1) of the gallium nitride Hall sensor as optimised for both cases.
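For convenience, the sensitivity values quoted in this section can be tabulated and the relative change of each geometry move computed; the numbers below are exactly those reported in the text at 300 K:

# Relative sensitivity changes for the two geometry moves, using the
# 300 K values quoted in the text.
cases = [
    ("W_DD 1 um -> 5 um (L_SD = 71 um)", 13.8, 12.0),
    ("L_SD 71 um -> 76 um (W_DD = 1 um)", 13.8, 16.5),
]
for label, before, after in cases:
    change = (after - before) / before * 100.0
    print(f"{label}: {before} -> {after} %T^-1 ({change:+.1f}%)")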
Conclusions
The findings of this paper demonstrate the capabilities of a new-generation split-drain GaN Hall sensor simulated in TCAD software. Current difference, total current, and sensitivity simulations of the GaN Hall sensor are illustrated, and the results are calibrated against measurements for validation purposes. The sensor shows a decrease in sensitivity at elevated temperatures, as demonstrated. The optimisation shows that scaling up the source-to-drain spacing and reducing the spacing between the split drains increases the relative sensitivity of GaN Hall sensors: reducing the spacing between the drains to 1 µm and increasing the source-to-drain spacing to 76 µm optimises the sensor's sensitivity to 16.5 %T^-1. The sensor was also simulated at an elevated temperature of 448 K, demonstrating its ability to function even in challenging environments.
Figure 2. Simulated structure of a 3D GaN Hall effect magnetic sensor visualised using the Sentaurus structure editor toolbox.
Figure 3. (a) G, L/W, and θ_H plotted in 3D; (b) G vs. L/W, where the maximum value of G is 1 and the optimal L/W ratio is 4.
Figure 4. Simulated energy band diagram (black and red line plot) and electron density (brown line plot) for the 2D GaN TLM at equilibrium at 300 K. The electron density is shown on the right axis, and energy is defined on the left. The black line shows the conduction energy band, the red line shows the valence energy band, while the black dotted line depicts the Fermi level. The two-dimensional electron gas layer formed at the interface of the GaN buffer and AlGaN barrier is shown by an orange peak in the middle.
Figure 5. Simulated drain current (A) against drain-to-source voltage (V) sweeping from 0 V to 1 V of the TLM at 300 K.
Figure 6. (a) Simulated GaN Hall sensor in 3D with contacts as source, Drain 1, and Drain 2 at equilibrium (i.e., 0 V biasing of the source at 300 K); (b) GaN Hall sensor surface mesh.
Figure 7. (a) Simulated total current (A) from Drain 1 and Drain 2 vs. drain-source voltage (V) at different temperatures of 300 K, 400 K, and 500 K, at a voltage sweeping from 0 to 1 V. (b) Simulated current imbalance (mA) vs. temperature (K) when the sensor is biased at a drain voltage of 1 V against an increasing magnetic field strength (B = 0 to 30 mT) at 300 K, 400 K, and 500 K.
Figure 8. Simulated current imbalance (µA) from Drain 1 and Drain 2 against drain-source voltage varying from 0 to 0.5 V at elevated temperatures of 300 K, 373 K, and 448 K.
| 11,433.8 | 2024-07-10T00:00:00.000 | [
"Physics",
"Engineering"
] |
Notoginseng Triterpenes Activated Autophagy in Random Flaps via the Beclin-1/VPS34/LC3 Signaling Pathway to Improve Tissue Survival
Random flaps are widely used in tissue reconstruction because they are not constrained by an axial blood supply. Nevertheless, the distal end of the flap is prone to necrosis due to the lack of blood supply. Notoginseng triterpenes (NTs) are active components extracted from Panax notoginseng that reduce oxygen consumption and improve the body’s tolerance to hypoxia. However, their role in random flap survival has not been elucidated. In this study, we used a mouse random skin flap model to verify that NT can promote cell proliferation and migration and that the resulting increase in blood perfusion can effectively enlarge the surviving area of a skin flap. Our study also showed that autophagy in random flaps was activated after NT treatment through the Beclin-1/VPS34/LC3 signaling pathway, and that the therapeutic effect of NT decreased significantly after VPS34 IN inhibited autophagy. In conclusion, we have demonstrated that NT can significantly improve the survival rate of random flaps through the Beclin-1/VPS34/LC3 signaling pathway, suggesting that it might be a promising clinical treatment option.
INTRODUCTION
Skin flap transplantation is a commonly used method in reconstructive surgery to repair skin loss caused by trauma, surgical resection, and other factors (Schürmann et al., 2009; Lee et al., 2017). Random flaps can be transplanted and arranged flexibly because they are not restricted by axial blood vessels, and they are widely employed in skin flap transplantation (Fang et al., 2020). In the flap, the blood supply is maintained by the vascular network of the pedicle bed (Lorenzetti et al., 2001). When the flap's aspect ratio exceeds a critical limit, the distal region of the flap necroses due to insufficient blood supply from the microvascular network, which significantly limits the use of random flaps (Luo et al., 2021). Therefore, improving the vitality of random flaps and inhibiting necrosis are crucial for improving their clinical application.
Notoginseng triterpene (NT) is an active ingredient extracted from Panax notoginseng that reduces the organism's oxygen consumption, improves its tolerance to hypoxia, dilates blood vessels, increases blood flow, and has anti-thrombotic and anticoagulant effects (Shi et al., 2013; Wang et al., 2021). Studies have shown that NT can be used to treat cerebral infarction and ischemic cerebrovascular disease (Xie et al., 2020). NT has an excellent clinical therapeutic effect in the acute stage of cerebral infarction and can significantly ameliorate cerebral edema during global ischemia or focal ischemia/reperfusion (I/R). In cytology, morphology, and lipid peroxidation studies, NT has been shown to protect cells from damage caused by disordered energy metabolism (Huang et al., 2015; Huang et al., 2017). Other studies have shown that NT can significantly reduce platelet surface activity; inhibit platelet aggregation and adhesion; exert anti-thrombotic effects; and improve microcirculation (Wang et al., 2016). However, whether it can enhance the survival of random skin flaps has not yet been examined. Autophagy is the process by which double-membrane vesicles, derived from ribosome-free regions of the rough endoplasmic reticulum, enclose portions of the cytoplasm and organelles; the resulting autophagosomes fuse with lysosomes to form autolysosomes that degrade the enclosed contents, allowing the cell to recycle its own components and renew organelles (Hale et al., 2013; Hurley and Young, 2017; Doherty and Baehrecke, 2018; Yu et al., 2018). In this process, Beclin-1 is an essential molecule that plays a key role in the formation of autophagosomes: it mediates the localization of other autophagic proteins to the phagophore and thus regulates autophagosome formation and maturation (Menon and Dhamija, 2018; Xu and Qin, 2019; Kaur and Changotra, 2020). In acute injury, autophagy can clear damaged organelles and convert them into energy (Tamargo-Gómez and Mariño, 2018; Yang et al., 2019; Lin et al., 2021). Therefore, moderately upregulated autophagy after acute injury is beneficial to tissue survival. At the distal end of random flaps, apoptosis is activated, autophagy is inhibited as injury time is prolonged, and the tissue's capacity for self-protection is weakened. Therefore, activation of autophagy in random flaps is essential (Tu et al., 2021).
In this study, we comprehensively investigated the application of NT in promoting angiogenesis and resisting oxidative stress in random flaps and found that NT can improve the survival rate of random flaps by promoting autophagy. The VPS34 IN inhibition experiment showed that NT promotes the survival of random flaps through the Beclin-1/VPS34/LC3 pathway.
Cell Scratch Assay and CCK-8 Test
The UVECs were seeded in a 12-well plate and cultured overnight until a confluent monolayer formed. The monolayers were scratched with a 200-μL pipette tip and washed three times with phosphate-buffered saline (PBS). Images of the injured monolayers were captured by light microscopy at 0, 6, 12, and 24 h post-injury. UVECs were seeded onto 96-well plates at 10,000 cells per well. After 12 h, the cells were divided into three groups: the control group, the 50 μg/ml NT group, and the 100 μg/ml NT group. The CCK-8 kit was used to detect cell proliferation.
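Quantification of the scratch assay is not detailed above; one common readout is the percent closure of the initial scratch area measured from the micrographs. A minimal Python sketch, with entirely hypothetical areas and group labels:

```python
# Hypothetical wound-closure calculation for a scratch assay.
# "Area" is the open (cell-free) scratch area measured from each
# micrograph (e.g., in ImageJ) at a given time point.

def closure_percent(area_t0: float, area_t: float) -> float:
    """Percent of the initial scratch area that has closed by time t."""
    return (area_t0 - area_t) / area_t0 * 100.0

# Illustrative 24-h areas (arbitrary units), one value per group:
areas_24h = {"control": 0.42, "NT 50 ug/ml": 0.21, "NT 100 ug/ml": 0.12}
for group, area in areas_24h.items():
    print(f"{group}: {closure_percent(1.0, area):.1f}% closed at 24 h")
```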
Animal Model of the Random Flap
Sixty healthy male C57BL/6 mice (20-22 g) were provided by the Experimental Animal Center of Wenzhou Medical University. The animal experiments were approved by the Animal Protection and Use Committee of Wenzhou Medical University (Wydw. 2021-0242). The mice were randomly divided into 4 groups: the control group, the NT-treated group (NT group), the NT+3MA-treated group (NT+3MA group), and the NT+VPS34 IN-treated group (NT+VPS34 IN group). Prior to surgery, all mice were anesthetized with 1% sodium pentobarbital (50 mg/kg, i.p.). The random dorsal flap, with an aspect ratio of 8:3, was constructed on the central back of the mouse by severing all blood-supplying arteries. Finally, the detached flap was fixed back in place. The mice were killed on the seventh day after surgery. After euthanasia, the skin flap tissue and major organs were immediately collected for follow-up experiments.
Treatment Protocols
The mice in the NT and NT+3MA groups were treated with 40 mg/kg/d NT by intraperitoneal injection for 7 days, and the control group was treated with an equal volume of saline. The mice in the NT+3MA group received 15 mg/kg/d 3MA by intraperitoneal injection 30 min before NT. The mice in the NT+VPS34 IN group received 5 mg/kg/d VPS34 IN by intraperitoneal injection 30 min before NT.
General Evaluation of Flap Survival
Three and seven days after surgery, flap survival was documented by high-quality photography. The macroscopic development, appearance, and color characteristics of the flaps were observed 7 days after surgery. All images were measured using ImageJ (NIH, MD, United States). The survival area percentage was calculated as: surviving area / total area × 100%.
Tissue Edema Assessment
Seven days after surgery, the skin flaps were collected from each group and weighed. They were then dehydrated until their weight remained stable. The degree of edema was determined as: ([wet weight − dry weight] / wet weight) × 100%.
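Both the flap survival percentage defined above and this edema index are one-line ratios; a minimal Python sketch with made-up measurements (values and units are illustrative only):

```python
# Hypothetical application of the two formulas from the text.

def survival_percent(surviving_area: float, total_area: float) -> float:
    """Surviving area / total flap area * 100."""
    return surviving_area / total_area * 100.0

def edema_percent(wet_weight: float, dry_weight: float) -> float:
    """(wet weight - dry weight) / wet weight * 100."""
    return (wet_weight - dry_weight) / wet_weight * 100.0

print(f"survival: {survival_percent(3.1, 4.8):.1f}%")  # areas in cm^2 (ImageJ)
print(f"edema:    {edema_percent(0.85, 0.31):.1f}%")   # weights in g
```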
Laser Doppler Blood Flow Imaging
LDBF imaging was used to determine the blood supply and vascular flow of the flap. Under anesthesia, each mouse lay prone in the scanning area, and the laser Doppler imager scanned the entire flap area. Blood flow at 0, 3, and 7 days postoperatively was assessed from the color-coded live blood-flow images. Blood flow was quantified in perfusion units using moorLDI Review software (version 6.1).
Hematoxylin and Eosin and Masson's Trichrome Staining
Tissue samples were taken from each group for pathologic analysis. The tissue was fixed with 4% paraformaldehyde, embedded in paraffin, and cut into 5-μm sections for H and E and Masson's trichrome staining. H and E-stained sections were examined under a light microscope (×10) to assess histological changes, including granulation tissue, swelling, and microvascular remodeling. Masson-stained sections were examined under a light microscope (×10) to assess fibrotic accumulation.
…and finally sealed with a mounting solution containing DAPI. A Nikon ECLIPSE Ti microscope (Nikon, Tokyo, Japan) was used to take images. Single staining is displayed in grayscale, and double staining in color.
Blood Biochemical Test
Mouse blood was collected into anticoagulant tubes containing lithium heparin, and the supernatant plasma was collected after centrifugation at 3,000 rpm at 4°C for 5 min. 100 μL of plasma was loaded into a blood biochemistry tray (Micro-nano Chip, Tianjin, China) and diluted with 430 μL of pure water. An M3 biochemical analyzer (Micro-nano Chip, Tianjin, China) was used to test the relevant indexes on the test tray.
Statistical Analysis
Statistical analysis was performed using a two-sided Student's t-test in SPSS 13. Data are expressed as mean ± standard deviation (SD). p < 0.05 was considered statistically significant.
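The analysis was run in SPSS; for illustration, the same two-sided Student's t-test can be reproduced with SciPy. The group values below are invented, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical flap-survival percentages, n = 3 per group.
nt_group = np.array([78.2, 81.5, 75.9])
control = np.array([52.4, 49.8, 55.1])

t_stat, p_value = stats.ttest_ind(nt_group, control)  # two-sided by default
print(f"NT: {nt_group.mean():.1f} ± {nt_group.std(ddof=1):.1f} (mean ± SD)")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} (significant if p < 0.05)")
```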
NT Could Promote Cell Proliferation and Migration
NT is the active extract of Panax notoginseng (Figure 1A), which can promote blood circulation, remove blood stasis, and increase blood flow. The proliferation, migration, and remodeling of vascular endothelial cells are crucial in random flap transplantation. We first tested the migration ability of UVECs by the cell scratch test; the results showed that the migration ability of UVECs was significantly decreased after H2O2 stimulation compared with the control group and was well recovered after NT treatment (Figures 1B,C). We then used the CCK-8 assay to assess the effect of NT on UVEC proliferation, and the results showed that NT effectively promoted UVEC proliferation (Figure 1D). Furthermore, we examined the toxicity of NT in vivo: no obvious pathologic changes were found in the brain, kidneys, spleen, or other tissues by H and E staining (Figure 2A), and the fluctuations of AST, ALT, and CRE in blood biochemistry were within the normal range (Figure 2B). In conclusion, NT can promote cell proliferation and migration with minimal toxicity.
NT Could Improve the Survival Rate of Random Flaps
The distal end of a random flap is prone to ischemic necrosis because the vascular pedicle is cut off during modeling (Figure 3A). We first observed flap necrosis and quantified the necrotic area. Our results showed that small areas of necrosis began to appear in the flaps of each group 3 days after surgery, and flap survival in the control group was lower than in the NT and NT+3MA groups (Figures 3B,C). The flap survival rate in the NT group was higher than in the NT+3MA group. On the seventh postoperative day, a large area of flap necrosis had occurred in both the control and NT+3MA groups, while the survival rate in the NT group was better than in the other two groups (Figures 3D,E). These results suggest that NT can effectively promote the survival of random flaps, and that the pro-survival effect may be related to autophagy activation.
NT Could Enhance the Blood Flow Signal in Random Flaps
Adequate blood perfusion is the key to flap survival, and LDBF was used to trace microvascular network reconstruction. Our results showed that the random flaps in each group lost their blood supply after modeling; the blood flow signal in the NT and NT+3MA groups rebounded on the third day after surgery, and the signal in the NT group was higher than in the NT+3MA group (Figures 4A-D). On postoperative day 7, the blood flow signal in the NT group showed the most pronounced recovery among the three groups, while the control and NT+3MA groups showed no significant recovery compared with the third postoperative day (Figures 4E,F). In conclusion, NT can effectively restore random flap perfusion and promote flap survival.
NT Could Improve the Blood Perfusion and Tissue Morphology of Random Flaps
Neovascularization is key to increasing blood perfusion. We used histologic staining to observe the regeneration of the blood supply in random flaps. H and E staining showed no obvious neovascularization in the flaps of the control group, whereas on postoperative day 7 the NT group showed a large number of microvessels; the number of microvessels in the NT+3MA group was lower than in the NT group (Figures 5A,B). Masson's trichrome staining showed that the NT group had angiogenesis with dense, orderly collagen fibers, while collagen fibers were irregular and loose in the control and NT+3MA groups (Figure 5C). We then assessed the degree of flap edema; the results showed that edema in the NT group was lower than in the control and NT+3MA groups (Figure 5D). In conclusion, NT can effectively improve the blood perfusion and tissue morphology of random flaps and thereby promote flap survival.
NT Could Promote Angiogenesis and Inhibit Oxidative Stress
Western blotting was used to further analyze the expression of markers of neovascularization. Our results showed that CD34, an index of neovascularization, significantly increased after NT treatment, and neovascularization in the NT+3MA group was significantly lower than in the NT group (Figures 6A,B). The expression levels of VEGF, which promotes angiogenesis, and VE-cadherin, which promotes vascular maturation, were also significantly increased in the NT group compared with the control group, and the levels of both in the NT+3MA group were significantly lower than in the NT group (Figures 6A,C,D). The oxidative stress levels of random flaps were then analyzed by Western blotting. We detected eNOS, HO-1, SOD1, and other reductive indexes and found that their expression in the NT group was significantly higher than in the control group, while their expression in the NT+3MA group was significantly lower than in the NT group (Figures 6E-H). In summary, NT can effectively promote angiogenesis, maintain the stability of regenerated vessels, and promote vascular recanalization to form an arteriovenous loop. Moreover, it promotes the generation of reducing substances to lessen the damage caused by oxidative stress.
NT Promoted the Survival of Random Flaps by Activating Autophagy
In random flaps, autophagy is needed to generate energy for cell survival because the vascular pedicle and tissue blood supply are lacking. Therefore, activation of autophagy can effectively promote flap survival. We first detected the autophagy-related proteins Beclin-1, VPS34, LC3, and P62 by Western blotting. The autophagy activation indexes Beclin-1, VPS34, and LC3 in the NT group were significantly higher than in the control group. After treatment with the autophagy inhibitor 3MA, Beclin-1 in the NT+3MA group remained significantly higher than in the control group, whereas the expression levels of VPS34 and LC3 were significantly downregulated (Figures 7A-D). P62, a substrate of autophagy, decreased significantly in the NT group compared with the control group, and the level of P62 in the NT+3MA group was significantly higher than in the NT group (Figures 7A,E). We then further verified the expression of LC3B and P62 in each group by immunofluorescence. LC3B in the NT group was significantly higher than in the control group, and the LC3B level in the NT+3MA group was significantly lower than in the NT group; the results for P62 were exactly the opposite of those for LC3B (Figures 7F-H). In summary, NT promotes the survival of random flaps through the Beclin-1/VPS34/LC3 pathway.
[Figure legend fragment: (E) LDBF images showing subcutaneous blood flow immediately after surgery and on the indicated days; (F) statistics of blood flow signal intensity on day 7. Scale bar: 0.5 cm. Data are mean ± SD, n = 3; "**" and "*" indicate p < 0.01 and p < 0.05 versus the NT group; n.s., not significant.]
NT Activates Autophagy Through the Beclin-1/VPS34/LC3 Pathway
To verify the autophagy pathway through which NT acts, the VPS34 inhibitor VPS34 IN was first used to suppress the expression of VPS34 at the cellular level. Western blotting showed that the NT-activated expression of VPS34 was downregulated after VPS34 IN treatment, LC3 expression was also downregulated, and P62 expression increased. The expression levels of CD34, an indicator of angiogenesis, and of eNOS and SOD1, indicators of protection against oxidative stress, were downregulated after VPS34 IN treatment (Figures 8A-F). CCK-8 results showed that VPS34 IN significantly inhibited the protective effect of NT on UVECs (Figure 8G). Next, we used immunofluorescence to detect the expression of autophagy, angiogenesis, and oxidative stress-protective markers in tissues. Our results showed that VPS34 IN inhibited autophagy, reduced angiogenesis, and downregulated the expression of oxidative stress-protective substances (Figures 9A-I). The cell scratch assay showed that VPS34 IN significantly inhibited the migration activity of UVECs (Figures 9J,K). In conclusion, NT activates autophagy in random flaps through the Beclin-1/VPS34/LC3 pathway, promoting flap survival.
DISCUSSION
During flap repair, avascular necrosis of the distal flap is the most common cause of surgical failure (Bai et al., 2021; Zhang et al., 2021). Therefore, the most important factor in protecting flap survival is to promote angiogenesis (Zhu X. et al., 2021; Lou et al., 2021). However, the adverse effects of I/R injury and nutrient deficiency during angiogenesis should not be ignored (Basu et al., 2014; He et al., 2021). Our findings showed that NT markedly promoted the survival of random flaps: it did so by promoting angiogenesis and inhibiting oxidative stress, and it also improved the tolerance of random flaps by activating autophagy, increasing the probability of flap survival.
As a potent blood-activating agent, NT has been widely used in cerebral infarction, central retinal vein occlusion, and other diseases (Xie et al., 2019; Li H.-L. et al., 2021). Previous studies have shown that NT plays a major role in promoting angiogenesis and dilating blood vessels to increase blood flow, but no studies have reported the role of NT in random flaps (Hong et al., 2009; Zheng et al., 2013; Yang et al., 2016; Zhong et al., 2020; Zhu P. et al., 2021). Our results showed that NT effectively promoted the regeneration of skin flap vessels and increased neovascularization by upregulating the secretion of VEGF and the expression of VE-cadherin, which promotes the maturation of new vessels into conduits connecting arteries and veins and capable of supplying oxygen.
However, I/R injury is a key problem that cannot be ignored when promoting angiogenesis. Reactive oxygen species (ROS) are the main cause of damage to microvessels and parenchymal organs during reperfusion of ischemic tissue. The capacity to synthesize antioxidant enzymes, which scavenge free radicals, is impaired in ischemic tissue, aggravating free-radical damage during reperfusion (Neeff et al., 2012; Sun et al., 2014; Szabó et al., 2020). SOD can protect tissue from ischemia and reperfusion by scavenging free radicals (Ambrosio et al., 1987; Galiñanes et al., 1992; Chen et al., 1996; Zhao et al., 2018). Our experimental results showed that NT upregulated antioxidant indexes such as SOD1, eNOS, and HO-1 in tissues and effectively resisted I/R injury. These results suggest that NT promotes angiogenesis and enhances the tolerance of random skin flap cells to a harsh environment.
Autophagy is the process by which double-membrane vesicles, derived from ribosome-free regions of the rough endoplasmic reticulum, enclose portions of the cytoplasm and organelles; the autophagosomes then fuse with lysosomes to form autolysosomes that degrade the enclosed contents, recycling cellular material and renewing organelles (Ariosa et al., 2021; Ganley, 2021; Zhao et al., 2021). In normal cells, a low level of autophagy is maintained to preserve cell homeostasis (Talukdar et al., 2021). Because random flaps lack nutrients, autophagy is required to decompose damaged organelles and provide energy (Sciarretta et al., 2014; Wu et al., 2020). Therefore, activating autophagy may improve the survival of random flaps. Our study showed that the expression of Beclin-1 increased in NT-treated flaps; Beclin-1 is an autophagy agonist that can activate VPS34 and bind it to form the VPS34 complex, promoting autophagosome formation. Our subsequent detection of VPS34 showed a significant increase in its expression level. LC3 is the gold standard of autophagy expression, and our results showed that the LC3-II/LC3-I ratio in the NT group was significantly increased compared with the control group. P62, a substrate of autophagy, was significantly reduced in the NT group.
To further verify that NT promotes the survival of random flaps by activating autophagy, we applied the autophagy inhibitor 3MA to NT-treated flaps. The experimental results showed that the therapeutic effect of NT was significantly inhibited: the number of new vessels, the blood flow signal intensity, and the flap survival area were significantly reduced, and the tissue architecture became disorganized. Finally, to clarify the pathway by which NT acts on autophagy, the VPS34 inhibitor VPS34 IN was used to block the NT-upregulated expression of VPS34 and verify whether NT regulates autophagy through the Beclin-1/VPS34/LC3 pathway. Our results showed that the therapeutic effect of NT decreased significantly after VPS34 IN treatment.
In conclusion, we demonstrate for the first time that NT promotes random flap angiogenesis and improves tissue survival. In addition, NT is already widely used in clinical practice and therefore has high translational significance. Our study confirmed that NT plays a crucial role in promoting angiogenesis and defending against oxidative stress, and this protective effect of NT may be closely related to Beclin-1/VPS34/LC3-mediated autophagy activation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and ethically approved by the Animal Protection and Use Committee of Wenzhou Medical University (wydw 2021-0242).
AUTHOR CONTRIBUTIONS
ZH, XLu, YZ and YY coordinated and carried out most of the experiments and data analysis and participated in drafting the manuscript. XC, WLu and JZ provided technical assistance. YW, WLin and YT assisted with data analysis. ZX and QW assisted with manuscript preparation. XLi and SZ supervised the project, experimental design, and data analysis. XLi, SZ and SY supervised the project and revised the manuscript. All authors approved the final manuscript. | 5,346.6 | 2021-11-19T00:00:00.000 | [
"Medicine",
"Biology"
] |
Estimation of the Timing and Intensity of Reemergence of Respiratory Syncytial Virus Following the COVID-19 Pandemic in the US
Key Points
Question: What are the factors associated with the timing and intensity of reemergent respiratory syncytial virus (RSV) epidemics following the COVID-19 pandemic?
Findings: In this modeling study of a simulated population of 19.45 million people, virus introduction from external sources was associated with the spring and summer epidemics in 2021. Reemergent RSV epidemics in 2021 and 2022 were projected to be more intense and to affect patients in a broader age range than typical RSV seasons.
Meaning: These findings suggest that the timing and intensity of reemergent RSV epidemics may differ from the usual RSV season, depending on the duration of mitigation measures and the extent of virus introduction from other regions.
eAppendix. Transmission dynamic models
Mathematical models were used to reproduce the annual RSV epidemics before the COVID-19 pandemic based on the inpatient data of New York (2005-2014) and California (2003-2011). Parameters to produce biennial RSV epidemics and year-round RSV activity were taken from models fit to similar datasets from Colorado and Florida (1989-2009), respectively. This model assumes infants are born with transplacentally-acquired antibodies against RSV infections from their mothers (M). As transplacentally-acquired protective antibodies wane, infants become susceptible to infection (S_0). Following each infection (I_i), individuals gain partial immunity that lowers both their susceptibility to subsequent infections and the duration and infectiousness of subsequent infections (see eFigure 1). The force of infection for a specific age group a, $\lambda_a(t)$, at time t is defined as:

$$\lambda_a(t) = \left(1 + b_1 \cos\left(\frac{2\pi t}{12} - \phi\right)\right) \sum_k \beta_{a,k}\,\frac{I_{1,k}(t) + \rho_1 I_{2,k}(t) + \rho_2 I_{3,k}(t) + \rho_2 I_{4,k}(t)}{N_k(t)}$$

Seasonality in the force of infection is represented by $(1 + b_1 \cos(2\pi t/12 - \phi))$, where $b_1$ is the amplitude of seasonality and $\phi$ is the seasonal offset. The chance of susceptible individuals in age group a being infected is influenced by their contacts with infectious individuals in the entire population; $\beta_{a,k}$ is the transmission rate from age group k to age group a. The proportion of infected individuals, weighted by their relative infectiousness, at time t is $(I_{1,k}(t) + \rho_1 I_{2,k}(t) + \rho_2 I_{3,k}(t) + \rho_2 I_{4,k}(t))/N_k(t)$, where $I_{1,k}$ is the number of infectious individuals of age k during their first infection; $I_{2,k}$, $I_{3,k}$ and $I_{4,k}$ are the numbers of infectious individuals who have been infected two, three, and four or more times, respectively; $\rho_1$ and $\rho_2$ denote the relative infectiousness of the second and subsequent infections; and $N_k$ is the total population of age k.
The transmission parameter $\beta_{a,k}$ can be further decomposed into the age-specific contact probability between age groups a and k per unit time ($c_{a,k}$) and the probability of transmission given contact between an infectious and a susceptible individual (q). Age-specific mixing patterns were obtained from several previous studies, including detailed contact patterns for infants under 1 year of age and location-specific contact patterns. [1-3] Age was stratified into thirteen groups: infants younger than 3 months, 3-5 months, 6-8 months, 9-11 months, 1 year, 2 years, 3 years, 4 years, 5-9 years, 10-19 years, 20-39 years, 40-59 years, and ≥60 years.
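To make the transmission structure concrete, here is a minimal Python sketch of the force of infection as reconstructed above. The array shapes, parameter names, and toy values are placeholders, not the fitted values from the study:

```python
import numpy as np

def force_of_infection(t, beta, I1, I2, I3, I4, N, rho1, rho2, b1, phi):
    """lambda_a(t) for every age group a, vectorized over age.

    beta: (A, A) matrix, beta[a, k] = transmission rate from age k to age a
          (decomposable as contact rate c[a, k] times per-contact probability q);
    I1..I4: (A,) infectious counts by number of prior infections;
    N: (A,) population sizes; rho1, rho2: relative infectiousness of second
    and later infections; b1, phi: seasonal amplitude and offset (t in months).
    """
    seasonal = 1.0 + b1 * np.cos(2.0 * np.pi * t / 12.0 - phi)
    weighted_prev = (I1 + rho1 * I2 + rho2 * I3 + rho2 * I4) / N
    return seasonal * (beta @ weighted_prev)

# Toy example with three age groups:
A = 3
lam = force_of_infection(
    t=7.0, beta=np.full((A, A), 0.02),
    I1=np.array([5.0, 3.0, 1.0]), I2=np.array([2.0, 4.0, 2.0]),
    I3=np.array([1.0, 2.0, 3.0]), I4=np.array([0.0, 1.0, 4.0]),
    N=np.array([1e4, 5e4, 1e5]), rho1=0.75, rho2=0.5, b1=0.2, phi=0.5)
print(lam)
```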
The disease transmission process is linked to observation-level information. The probabilities of developing lower respiratory tract disease and of being hospitalized upon RSV infection are informed by cohort studies conducted in the US and Kenya. 1,[4-14] The number of lower respiratory tract infections (LRI) due to RSV is given by:

$$H_a(t) = \lambda_a(t)\left[h_{1,a} S_{0,a}(t) + h_{2,a}\,\sigma_1 S_{1,a}(t) + h_{3,a}\,\sigma_2 S_{2,a}(t) + h_{3,a}\,\sigma_3 S_{3,a}(t)\right]$$

where $\lambda_a(t)$ is the force of infection for a specific age group at time t (as defined above); $S_{0,a}$ is the number of fully susceptible individuals of age a; $S_{1,a}$, $S_{2,a}$ and $S_{3,a}$ are the numbers of susceptible individuals who have been infected once, twice, and more times, respectively; $\sigma_1$, $\sigma_2$ and $\sigma_3$ denote the relative risk of infection following the first, second, and further infections; and $h_{1,a}$, $h_{2,a}$ and $h_{3,a}$ are the proportions of first, second, and subsequent infections that are hospitalized.
The average age of hospitalization among children under 5 in month t is given by 15 :

$$\bar{A}(t) = \frac{\sum_a w_a H_a(t)}{\sum_a H_a(t)}$$

where the weight $w_a$ is the midpoint of age group a.
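Continuing the sketch, the hospitalized-LRI and average-age calculations follow directly from the equations reconstructed above; again, all names and values are illustrative:

```python
import numpy as np

def lri_hospitalizations(lam, S0, S1, S2, S3,
                         sigma1, sigma2, sigma3, h1, h2, h3):
    """H_a(t): hospitalized RSV LRIs by age group, per the equation above."""
    return lam * (h1 * S0 + h2 * sigma1 * S1
                  + h3 * sigma2 * S2 + h3 * sigma3 * S3)

def average_age_of_hospitalization(H_under5, midpoints):
    """Hospitalization-weighted mean of the age-group midpoints w_a."""
    return np.sum(midpoints * H_under5) / np.sum(H_under5)

# Toy values for the under-5 age groups:
H = np.array([30.0, 22.0, 15.0, 10.0, 8.0])   # hospitalizations per group
w = np.array([1.5, 4.5, 7.5, 10.5, 18.0])     # midpoints in months
print(average_age_of_hospitalization(H, w))
```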
Several model parameters were fixed based on data from previous cohort and modeling studies. 1,[4-14] We used Bayesian inference to estimate the average duration of transplacentally-acquired immunity, the age-specific probability of hospitalization in the 40-59 year and ≥60 year age groups, the transmissibility coefficient, and the seasonal parameters by fitting the model to the hospitalization data from New York and California. 16,17 We identified the best-fit parameter sets by maximum a posteriori estimation. 18 The likelihood was calculated by assuming the observed number of hospitalizations in the entire population was Poisson-distributed with a mean equal to the model-predicted number of hospitalizations, and that the observed age distribution was multinomial-distributed with probabilities equal to the model-predicted distribution of RSV hospitalizations across age groups.
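The joint likelihood described here (Poisson for the total count, multinomial for the age distribution) can be written compactly; a sketch with invented counts, assuming SciPy's standard distributions:

```python
import numpy as np
from scipy import stats

def log_likelihood(obs_by_age, pred_by_age):
    """Poisson log-likelihood for total hospitalizations plus a multinomial
    log-likelihood for their distribution across age groups."""
    mu = pred_by_age.sum()                      # model-predicted total
    ll = stats.poisson.logpmf(obs_by_age.sum(), mu)
    ll += stats.multinomial.logpmf(obs_by_age, n=obs_by_age.sum(),
                                   p=pred_by_age / mu)
    return ll

obs = np.array([40, 35, 25])           # hypothetical observed counts by age
pred = np.array([38.0, 33.0, 24.0])    # hypothetical model predictions
print(log_likelihood(obs, pred))
```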
To validate our model predictions, we fitted the transmission model to the inpatient data for California from 2003 to 2011; we then compared the model predictions with data on the percent of clinical specimens positive for RSV from a separate sentinel surveillance database from 2012 to 2018. We rescaled the percent positive data by calculating a scaling factor based on overlaying the surveillance data and inpatient data from 2009 to 2011 (see eFigure 3).
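The text does not spell out the exact form of the scaling factor; one plausible reading is a ratio of means over the 2009-2011 overlap window, sketched here with hypothetical series:

```python
import numpy as np

def rescale_percent_positive(pct_pos, hosp_overlap, pct_overlap):
    """Scale sentinel percent-positive data onto the inpatient scale using
    the overlap period (a ratio-of-means factor is assumed here)."""
    k = np.mean(hosp_overlap) / np.mean(pct_overlap)
    return k * np.asarray(pct_pos)

hosp_2009_2011 = [120, 95, 140]       # hypothetical monthly hospitalizations
pct_2009_2011 = [12.0, 9.5, 14.2]     # hypothetical percent positive
print(rescale_percent_positive([8.0, 16.5], hosp_2009_2011, pct_2009_2011))
```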
We initialized the transmission models with 1 infectious individual in each age group (except for infants under 6 months) in July 1981 and used a burn-in period of 24 years and 22 years in New York and California, respectively. We also performed a sensitivity analysis around what re-emergence might look like in a state with a biennial pattern of epidemics, using parameters fitted to earlier data from Colorado as an example and assuming a linearly declining birth rate (from 17 to 10 births per 1,000 people per year). We used the same number of infectious individuals to initialize the transmission model, and a burn-in period of 40 or 41 years starting from 1971 or 1970 to allow for greater incidence in even or odd years.
[Figure caption fragment: the numbers on top show the percentage difference between the expected incidence and the counterfactual incidence in each age group.] | 1,450.4 | 2021-12-01T00:00:00.000 | [
"Biology"
] |
Modulation of glucose metabolism by a natural compound from Chloranthus japonicus via activation of AMP-activated protein kinase
AMP-activated protein kinase (AMPK) is a key sensor and regulator of glucose metabolism. Here, we demonstrated that shizukaol F, a natural compound isolated from Chloranthus japonicus, can activate AMPK and modulate glucose metabolism both in vitro and in vivo. Shizukaol F increased glucose uptake in differentiated C2C12 myotubes by stimulating membrane translocation of glucose transporter-4 (GLUT-4). Treatment of primary mouse hepatocytes with shizukaol F decreased the expression of phosphoenolpyruvate carboxykinase (PEPCK) and glucose-6-phosphatase (G6Pase) and suppressed hepatic gluconeogenesis. Meanwhile, a single oral dose of shizukaol F reduced gluconeogenesis in C57BL/6 J mice. Further studies indicated that shizukaol F modulates glucose metabolism mainly through AMPKα phosphorylation. In addition, we found that shizukaol F depolarizes the mitochondrial membrane and inhibits respiratory complex I, which may result in AMPK activation. Our results highlight the potential value of shizukaol F as a possible treatment for metabolic syndrome.
…which in turn inhibits gluconeogenesis 7. In addition, AMPK increases glucose uptake by stimulating the transfer of GLUT-4 from the cytoplasm to the membrane 8. The role of AMPK in regulating glucose metabolism has been demonstrated with several AMPK activators, such as metformin and AICAR, which increase AMPKα phosphorylation and modulate glucose metabolism both in vivo and in vitro 9,10. Thiazolidinediones (TZDs) up-regulate the cellular AMP/ATP ratio, which in turn activates AMPK and suppresses gluconeogenesis 11.
Several compounds have been reported to stimulate AMPK activity. For example, metformin increases AMPKα phosphorylation and modulates glucose metabolism 12. Rosiglitazone, a clinical drug used to control hyperglycemia, also activates AMPK and regulates PPARγ expression 13-15. Arctigenin activates AMPK via inhibition of mitochondrial complex I and ameliorates metabolic disorders in ob/ob mice 16. Shizukaol D, extracted from the traditional Chinese medicine Chloranthus japonicus, has been shown to lower hepatic fatty acid content by activating AMPK 17. Shizukaol F and shizukaol D are lindenane-type disesquiterpenoids, both isolated from Chloranthus japonicus 18,19. Although they share the same skeleton, shizukaol F contains a unique 18-membered macrocyclic triester ring and a hydroxyl group at C-4′, whereas shizukaol D has an acetoxyl group attached at C-15′ 19,20. In addition, shizukaol F exhibits anti-HIV RNase activity and inhibits PMA-induced homotypic aggregation of HL-60 cells 20,21. However, there have been no reports that shizukaol F modulates metabolism. Given that shizukaol D regulates lipid metabolism in hepatic cells, we proposed that shizukaol F may modulate metabolic activity. In this study, our results showed that shizukaol F activated AMPK, increased glucose uptake in skeletal muscle cells, and reduced gluconeogenesis both in primary hepatic cells and in vivo via an AMPKα-phosphorylation-dependent mechanism. Further results showed that AMPK activation by shizukaol F is caused by inhibition of mitochondrial complex I activity.
Results
Identification of shizukaol F as an AMP-activated protein kinase (AMPK) activator. Shizukaol F (Fig. 1) was extracted from Chloranthus japonicus as previously described 20,22. To assess the potential effect of shizukaol F on energy metabolism, we first analyzed its cytotoxicity in differentiated C2C12 cells. Treatment with shizukaol F did not change cell viability at various doses for up to 48 hours (Fig. S1A). We then treated C2C12 cells with shizukaol F at the indicated concentrations for 1 h, with 2 mM metformin as a positive control. AMPK activity was analyzed by immunoblotting with an antibody specific for phosphorylated AMPKα (Thr 172). Incubation with shizukaol F activated AMPKα phosphorylation in a dose-dependent manner (Fig. 2A,B). In addition, we confirmed AMPK activation by 1 μM shizukaol F at different time points (Fig. 2C,D).
Shizukaol F modulates intracellular glucose metabolism.
Several studies have shown that phosphorylation of AMPKα at Thr 172 leads to up-regulated glucose uptake in skeletal muscle cells and decreased gluconeogenesis in hepatic cells 1,23. To determine the effect of shizukaol F on glucose metabolism, we measured glucose uptake in a differentiated mouse muscle cell line, C2C12 (Fig. S2), after treatment with the indicated concentrations of shizukaol F for 24 h. As shown in Figs 3A and S3, under these conditions shizukaol F increased phosphorylation of AMPKα (Thr 172) and stimulated the translocation of GLUT-4 from the cytoplasm to the membrane, which in turn led to up-regulated glucose uptake (Fig. 3B).
In addition, we analyzed gluconeogenesis in isolated mouse hepatic cells after overnight treatment with shizukaol F. Interestingly, exposure to shizukaol F increased the phosphorylation of AMPKα (Figs 3C and S3D) and suppressed the expression of PEPCK and G6Pase, which are important for gluconeogenesis in hepatocytes 24
Shizukaol F inhibits gluconeogenesis and increases AMPKa phosphorylation in vivo.
To assess the effect of shizukaol F on gluconeogenesis in vivo, we performed a pyruvate tolerance test (PTT), as administration of the gluconeogenic substrate pyruvate increases blood glucose levels by promoting gluconeogenesis in the liver. Mice were pre-treated with 75 mg/kg shizukaol F by gavage and then administered pyruvate. As shown in Fig. 4A,B, shizukaol F significantly attenuated the pyruvate-induced rise in blood glucose compared with control mice, indicating that shizukaol F reduced gluconeogenesis in vivo. In addition, the phosphorylation of AMPKα in livers isolated from mice treated with shizukaol F was increased by about 40%, suggesting that shizukaol F activates AMPK in vivo (Fig. 4C,D).
The effect of shizukaol F on glucose metabolism depends on AMPKα phosphorylation. To further confirm the relationship between AMPKα phosphorylation and glucose metabolism in response to shizukaol F, we inhibited AMPKα phosphorylation with the chemical inhibitor compound C 25. C2C12 cells were pre-treated with 20 μM compound C and then treated with 1 μM shizukaol F. Treatment with compound C significantly inhibited the AMPKα phosphorylation stimulated by shizukaol F (Figs 5A and S4A). In addition, the translocation of GLUT-4 to the cell membrane, together with the up-regulation of glucose uptake, was blocked by compound C (Figs 5A,B and S4B). Next, primary hepatic cells were pre-incubated with 10 μM compound C and then treated with 1 μM shizukaol F. As predicted, compound C inhibited the phosphorylation of AMPKα induced by shizukaol F (Figs 5C and S4C). Compound C also restored the expression of PEPCK and G6Pase that had been suppressed by shizukaol F (Fig. S4D,E). Importantly, the down-regulation of gluconeogenesis in primary hepatic cells induced by shizukaol F was also blocked by the AMPK inhibitor (Fig. 5D).
In addition, we inhibited AMPK activity using an shRNA approach. C2C12 cells were infected with lentivirus to knock down AMPKα1, a key component of AMPK (Fig. 5E), and then treated with shizukaol F (see Materials and Methods). As expected, the down-regulation of AMPKα1 expression mediated by shRNA-AMPKα1 resulted in a significant reduction in the levels of phosphorylated AMPKα (Thr 172) and of GLUT-4 translocation induced by drug treatment (Figs 5E and S5A,B). Furthermore, knockdown of AMPKα1 significantly reversed the shizukaol F-induced glucose uptake (Fig. 5F). Meanwhile, primary hepatocytes were infected with shRNA-AMPKα1 lentivirus and then treated with 1 μM shizukaol F. As shown in Figs 5G and S5A, knockdown of AMPKα1 inhibited the phosphorylation of AMPKα (Thr 172) induced by shizukaol F. As a result, knockdown of AMPKα1 restored the expression of PEPCK and G6Pase that had been inhibited by shizukaol F (Fig. S5D). Importantly, the down-regulation of gluconeogenesis in hepatocytes induced by shizukaol F was blocked by shRNA-AMPKα1 (Fig. 5H). Taken together, these results strongly support the conclusion that shizukaol F modulates glucose metabolism in an AMPKα-phosphorylation-dependent manner.
Shizukaol F activates AMPK by inhibiting respiratory complex I.
Several studies have shown that AMPK-activating compounds such as metformin and AICAR influence mitochondrial function 10,26. We next examined the effect of shizukaol F on mitochondrial membrane potential (Δψm) and energy status. Using a fluorescence detection assay, we first confirmed that shizukaol F depolarized the mitochondrial membrane potential of C2C12 cells in a dose-dependent manner (Fig. 6A), although the mitochondrial dysfunction it induced was not as strong as that of the mitochondrial uncoupler CCCP (Fig. 6A). In addition, the reduction of Δψm by shizukaol F led to an increase in the AMP/ATP ratio: as shown in Fig. 6B,C, the increased AMP/ATP ratio in shizukaol F-treated C2C12 cells was detected by HPLC, with CCCP and metformin as positive controls (Figs 6B and S6A). Taken together, these results suggested that shizukaol F may activate AMPK through the induction of mitochondrial dysfunction, especially energy depletion.
To determine whether the change in nucleotide ratio was due to an impact on cellular respiration, as is the case for other AMPK activators such as metformin and AICAR 10,27, we examined oxygen consumption in C2C12 cells in the presence of shizukaol F. Treatment with shizukaol F resulted in a dose-dependent inhibition of aerobic respiration in C2C12 myotubes (Fig. 6D). Furthermore, the effects of shizukaol F on ADP-stimulated respiration in the presence of complex I (glutamate + malate) or complex II (succinate) substrates were measured in isolated mouse liver mitochondria; rosiglitazone served as a positive control (Fig. S6B,C). Shizukaol F produced a dose-dependent inhibition of oxygen consumption with the complex I-linked substrate, but not with complex II-linked respiration (Fig. 6E). Furthermore, shizukaol F increased lactate production in C2C12 myotubes, a marker of anaerobic respiration, as reduced aerobic respiration may lead to a compensatory elevation of anaerobic respiration (Fig. 6F). All these findings suggested that shizukaol F modulates AMPKα phosphorylation by inhibiting mitochondrial respiratory complex I, which in turn suppresses aerobic respiration and up-regulates anaerobic respiration to meet the energy requirement.
[Figure 3 legend fragment: gluconeogenesis in primary mouse hepatic cells was measured after overnight treatment with shizukaol F; (C) shizukaol F increased AMPKα phosphorylation in primary hepatocytes and suppressed PEPCK/G6Pase gene expression (D); glucose production in primary hepatocytes was measured after overnight treatment (E); *P < 0.05, **P < 0.05 (one- or two-way ANOVA); n = 3 independent biological replicates.]
Discussion
As a natural product from Chloranthus japonicus, shizukaol F has been shown to exert various biological activities, including anti-inflammatory and anti-HIV activity 20,22, but no studies had shown that it has metabolic activity. In a previous study, shizukaol D, another natural compound isolated from Chloranthus japonicus, was reported to activate AMPK and reduce triglyceride and cholesterol levels in HepG2 cells 17. Assessing the effect of shizukaol F on AMPKα phosphorylation (Thr 172), we first established that shizukaol F also activates AMPK in a dose-dependent manner. We also showed that shizukaol F modulates glucose metabolism in skeletal myotubes, primary hepatocytes, and C57BL/6 J mice via AMPK activity. In addition, we found that shizukaol F activates AMPK by inhibiting mitochondrial respiratory complex I.
Glucose transport in skeletal muscle is the major component of whole-body glucose uptake and plays a key role in maintaining glucose homeostasis 28. Here we have shown that shizukaol F significantly stimulated glucose uptake in differentiated skeletal myotubes (Fig. 3B). This stimulation may be a result of GLUT-4 translocation from the cytoplasm to the membrane, a downstream effect of AMPK (Fig. 3A). Rosiglitazone served as a positive control, since it regulates glucose uptake through AMPK activation 14. Meanwhile, cytoplasmic GLUT-4 protein was unchanged, implying that shizukaol F increased total cellular GLUT-4 expression; this is consistent with the previous report that AMPK regulates GLUT-4 transcription by phosphorylating histone deacetylase 5 (HDAC5) 29. In agreement with our studies in C2C12 myotubes, shizukaol F also activated AMPK in primary hepatic cells isolated from C57BL/6 J mice and reduced PEPCK and G6Pase expression (Fig. 3C,D), suggesting that the decreased expression of gluconeogenic genes may also contribute to the glucose-lowering effect of shizukaol F (Fig. 3E). In addition, the acute effects of shizukaol F on gluconeogenesis and hepatic AMPKα phosphorylation were measured in C57BL/6 J mice (Fig. 4). Taken together, our results indicate that shizukaol F improves overall glucose metabolism, an effect likely mediated by the activation of AMPK.
To confirm the significance of AMPKα phosphorylation in the activity of shizukaol F, we inhibited it using the AMPK inhibitor compound C and shRNA lentivirus 25. As shown in Fig. 5, compound C and shRNA-AMPKα1 caused a marked inhibition of the AMPK pathway and decreased shizukaol F-mediated phosphorylation of AMPKα. In addition, inhibition of AMPKα phosphorylation suppressed the shizukaol F-stimulated glucose uptake in myotubes and rescued the gluconeogenesis capacity of primary hepatocytes that shizukaol F had suppressed. These findings suggest that the modulation of glucose metabolism by shizukaol F depends on AMPK activity.
AMPK is a sensor of whole-body energy homeostasis and is activated directly as a result of energy depletion 30. Metformin and rosiglitazone are well known to increase the AMP:ATP ratio, which in turn leads to the activation of AMPK 12. Here, we found that treatment with shizukaol F inhibited the mitochondrial membrane potential and cellular respiration, and as a result the cellular AMP/ATP ratio was increased (Fig. 6). As we observed, 4 mM metformin increased the AMP/ATP ratio about 4-fold (Fig. S6A), consistent with a previous report 31. In this study, 1 μM shizukaol F also increased the AMP/ATP ratio about 4-fold, suggesting that a much lower dose of shizukaol F could be used in future treatment. We further investigated whether shizukaol F inhibited the respiratory complexes. Surprisingly, we found that shizukaol F inhibited mitochondrial respiratory complex I (glutamate and malate) but not complex II (succinate) (Fig. 6E). This finding suggested that shizukaol F modulates AMPK activity by inhibiting mitochondrial respiratory complex I, which thereby leads to an elevated AMP/ATP ratio.
[Figure 4 legend: Acute effect of shizukaol F on gluconeogenesis and hepatic AMPKα phosphorylation in C57BL/6 J mice. (A) Blood glucose during the pyruvate tolerance test; (B) AUC; (C,D) hepatic p-AMPKα/AMPKα immunoblots and quantification. n = 6; **P < 0.01.]
[Figure 5 legend: Shizukaol F regulates glucose metabolism via AMPKα phosphorylation. Compound C pretreatment (A-D) and shRNA-AMPKα1 knockdown (E-H) experiments in C2C12 myotubes and primary hepatocytes, assessing AMPKα phosphorylation, GLUT-4 translocation, glucose uptake, and gluconeogenesis. n = 3-6; *P < 0.05, **P < 0.01 (two-tailed Student t-test).]
In conclusion, our studies demonstrated the beneficial effects of a natural product, shizukaol F, as a potent activator of AMPK that regulates glucose homeostasis both in vitro and in vivo. Shizukaol F inhibited mitochondrial respiratory complex I, leading to the activation of AMPK and subsequent beneficial metabolic outcomes, including enhanced glucose uptake in skeletal muscle cells and suppression of hepatic gluconeogenesis. These results highlight the value of shizukaol F as a potential compound for the treatment of metabolic diseases.
Materials
The air-dried and powdered Chloranthus japonicus plants (10 kg) were extracted and purified (at least 98%) as previously described 21,22.
Cell Culture. C2C12 cells (American Type Culture Collection, VA) were cultured in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% FBS (GIBCO) and 100 units/ml penicillin and streptomycin at 37 °C in 5% CO2. For differentiation, cells were washed with PBS and incubated with 2% horse serum for 6 days. Primary hepatocytes were isolated from 12-week-old male C57BL/6 J mice as previously reported 32. Cells were cultured in DMEM with 10% FBS.
Determination of glucose uptake and isolation of plasma membrane. Glucose uptake activity was measured by 2-deoxy-D-glucose uptake as described previously 33. C2C12 myotubes cultured in 35-mm dishes were serum-starved in DMEM for 4 h, washed three times with warm KRH buffer (25 mM HEPES, pH 7.4, 120 mM NaCl, 5 mM KCl, 1.2 mM MgSO4, 1.3 mM CaCl2, 1.3 mM KH2PO4) and then incubated in KRH buffer at 37 °C. Subsequently, the cells were stimulated with 100 nM insulin for 30 min, and 2-deoxy-D-[1-14C]glucose was added during the last 5 min at a final concentration of 50 μM with 0.2 μCi/ml. Glucose uptake was terminated by washing with ice-cold phosphate-buffered saline (PBS). The cells were lysed with 1% SDS and subjected to liquid scintillation counting.
Plasma membrane isolation was performed as described previously 33. Cells were washed three times with cold HES buffer (20 mM HEPES, pH 7.4, 1 mM EDTA, 0.25 M sucrose) and homogenized in HES buffer plus protease inhibitor cocktails. The cell lysate was centrifuged at 16,000 g for 20 min at 4 °C and the membrane pellet was re-suspended in HES buffer. The plasma membrane was isolated by re-suspension onto a 1.12 M sucrose cushion and centrifugation at 100,000 g for 60 min. The membrane fraction at the density interface was collected, diluted with 20 mM HEPES, pH 7.4, 1 mM EDTA solution, and pelleted by centrifugation.
Mammalian lentiviral shRNAs. Lentiviral short hairpin RNA (shRNA) expression vectors and viruses were purchased from GenePharma (Suzhou, China). To generate the lentiviruses, shRNA plasmids were co-transfected into HEK293T cells along with envelope (VSVG) and packaging (Delta 8.9) plasmids using Lipofectamine 2000 (Invitrogen). The viral supernatants were harvested and filtered two days after transfection. C2C12 cells were infected in serum-containing medium supplemented with 8 μg/ml polybrene. After infection for 48 hours, cells were selected with 2.0 μg/ml puromycin (Sigma). Knockdown efficiencies were examined by Western blot.
Mitochondrial membrane potential assay. The mitochondrial membrane potential assay was performed as described previously 16,17. C2C12 cells were seeded into black 96-well optical-bottom plates (Corning, Costar). The cells were incubated with shizukaol F or CCCP at 37 °C for 10 min, and then 100 μl of fresh medium containing 0.2 μg JC-1 was added to each well. The plates were incubated at 37 °C for another 20 min and then washed three times with 200 μl of Krebs-Ringer phosphate HEPES buffer. Fluorescence was measured at 530 nm/580 nm excitation/emission (red) and then at 485 nm/530 nm excitation/emission (green). The ratio of red to green fluorescence reflects the mitochondrial membrane potential (Δψm).
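The JC-1 readout reduces to a red/green intensity ratio; a minimal Python sketch with hypothetical, background-corrected fluorescence values:

```python
# Hypothetical JC-1 analysis: the red (J-aggregate) to green (monomer)
# fluorescence ratio indexes the mitochondrial membrane potential; a drop
# in the ratio relative to untreated wells indicates depolarization.

def jc1_ratio(red_580nm: float, green_530nm: float) -> float:
    return red_580nm / green_530nm

control = jc1_ratio(1800.0, 900.0)   # untreated well, arbitrary units
treated = jc1_ratio(1100.0, 1000.0)  # shizukaol F-treated well
print(f"delta-psi-m relative to control: {treated / control:.2f}")
```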
Adenine nucleotide extraction and measurement. C2C12 cells were cultured in 60-mm dishes with shizukaol F or CCCP for the indicated time. Samples for cellular adenine nucleotide measurement were prepared and analyzed as described previously 17. The cells were washed with PBS (140 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4) and trypsinized. Next, the cells were suspended in 4% (vol/vol) perchloric acid and incubated on ice for 30 min. The pH of the lysates was adjusted to between 6 and 8 with 2 mol/l KOH and 0.3 mol/l MOPS. The precipitated salt was separated from the liquid phase by centrifugation at 13,000 rpm at 4 °C for 15 min. Adenine nucleotide measurements were conducted by HPLC (Agilent 1200 series) using a C18 column. The HPLC buffer contained 20 mM KH2PO4 and 3.5 mM K2HPO4·3H2O at pH 6.1, with a flow rate of 1.0 mL/min. The order of eluted nucleotides was ATP, ADP, and AMP. Standards (7.5 μM ATP, ADP, and AMP in ddH2O) were used to quantify the samples.
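Quantification against the 7.5 μM standards amounts to single-point calibration of each peak; a Python sketch with made-up peak areas:

```python
# Hypothetical single-point calibration: nucleotide concentration scales
# linearly with HPLC peak area relative to the 7.5 uM external standard.
STD_CONC_UM = 7.5

def conc_um(sample_peak_area: float, standard_peak_area: float) -> float:
    return STD_CONC_UM * sample_peak_area / standard_peak_area

atp = conc_um(5200.0, 4800.0)  # invented peak areas
amp = conc_um(310.0, 4650.0)
print(f"AMP/ATP ratio: {amp / atp:.3f}")
```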
Measurement of respiration in C2C12 cells and isolated mitochondria. Liver mitochondria were prepared from C57BL/6 J mice. Isolated liver was chopped, homogenized (Polytron Homogeniser, Switzerland) and centrifuged (10 g) as described in a previous study 34. Mitochondria were re-suspended in respiration buffer at a concentration of 60 mg/ml. Respiration measurements in C2C12 cells and isolated mitochondria were performed in a 782 two-channel oxygen system (Strathkelvin Instruments, UK). For mitochondria, the respiration medium contained 225 mM mannitol, 75 mM sucrose, 10 mM Tris-HCl, 10 mM KH2PO4, 10 mM KCl, 0.8 mM MgCl2, 0.1 mM EDTA, and 0.3% (wt/vol) fatty acid-free BSA, pH 7.0. The respiration medium used for the C2C12 cells consisted of 25 mM glucose, 1 mM pyruvate, and 2% (wt/vol) BSA in PBS, pH 7.4.
Gluconeogenesis in primary cultured mouse hepatocytes. Primary hepatocytes were isolated by collagenase digestion from mice that had been fasted for 24 h, as previously described 32. Cells were washed with PBS and cultured in glucose production buffer consisting of glucose-free, phenol red-free DMEM (pH 7.4) supplemented with 20 mM sodium lactate and 2 mM sodium pyruvate. After incubation with shizukaol F or DMSO, the medium was collected and the glucose concentration was measured with a colorimetric glucose (GO) assay kit (Sigma). The results were normalized to the total protein content.
Acute shizukaol F administration and pyruvate tolerance test. 75 mg/kg shizukaol F or vehicle (0.5% carboxymethyl cellulose, wt/vol) was administered by gavage to overnight-fasted male C57BL/6 J mice (10 weeks old) 2 h prior to an intraperitoneal pyruvate challenge (2 g/kg). Blood glucose was measured at 0, 15, 30, 60, 90 and 120 min after pyruvate loading from tail-clip blood drops using an ACCU-CHEK Advantage II glucose monitor (Roche, IN). The animals were killed, and livers were isolated immediately and frozen in liquid nitrogen for immunoblotting analysis 16.
Determination of lactate content. C2C12 cells were cultured in a 24-well plate and treated with shizukaol F or 50 μM rosiglitazone (positive control) in serum-free culture medium for 1 or 4 hours. The amount of lactate in the medium was measured using a lactate assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). | 5,392.8 | 2017-04-10T00:00:00.000 | [
"Biology"
] |
β‐Klotho sustains postnatal GnRH biology and spins the thread of puberty
Hypogonadotropic hypogonadism is a syndrome found to be isolated (IHH) or associated with anosmia, corresponding to the Kallmann syndrome (KS). It comprises a defect in gonadotropin‐releasing hormone (GnRH) secretion and absent or delayed puberty. Genetic causes have been identified with a high genetic heterogeneity. Fibroblast growth factor receptor 1 (FGFR1), a tyrosine kinase receptor, was one of the first genes whose mutations were identified as causative in KS. FGFR1 is responsible for the formation of the GnRH neuron system. Studying patients has not only allowed the identification of new etiologies for this syndrome but also helped to unravel the signaling pathways involved in the development of GnRH neurons and in GnRH control and function. The FGF21/FGFR1/Klotho B (KLB) signaling pathway mediates the response to starvation and other metabolic stresses. Preventing reproduction during nutritional deprivation is an adaptive process that is essential for the survival of species. In this work, Xu et al (2017), using a candidate gene approach, provide a description of the essential role played by this pathway in GnRH biology and in the pathogenesis of IHH and KS. They establish a novel link between metabolism and reproduction in humans.
Micheline Misrahi
See also: C Xu et al (October 2017)

The hypothalamic secretion of GnRH by GnRH neurons is essential for the onset and maintenance of reproduction. GnRH induces the synthesis and secretion of follicle-stimulating (FSH) and luteinizing (LH) hormones by the pituitary, which themselves act on the gonads to induce sex steroid production. A defect in GnRH secretion or action results in congenital hypogonadotropic hypogonadism (CHH), a rare genetic disorder characterized by lack of puberty and infertility. CHH exists in two forms: idiopathic hypogonadotropic hypogonadism (IHH) with anosmia (KS) or with a normal sense of smell (normosmic IHH: nIHH), both associated with deficient GnRH secretion. In addition, metabolic defects are present, but they are considered secondary to sex steroid deficiency, as they usually improve with steroid treatment. IHH and KS are genetically heterogeneous, with more than 30 causative genes identified to date (Boehm et al, 2015).
FGFR1 was one of the first two genes described as responsible for KS. This gene is also found mutated in IHH, showing the proximity of the two syndromes. FGFR1 is a member of the FGFR family of transmembrane receptors with intrinsic tyrosine kinase activity. FGFR1 is important for multiple biological processes, including cell growth and migration, organ formation, and bone growth. FGFR1 is highly expressed in central nervous system tissues and is involved in the development and migration of GnRH neurons (Boehm et al, 2015).
FGFs are growth factors that bind to FGFRs and act in a paracrine or endocrine manner. They control multiple biological processes such as proliferation, survival, migration, and differentiation of a variety of cell types. FGFs are involved in the formation and maintenance of GnRH neurons (Owen et al, 2015). FGF ligands may have paracrine or endocrine activity. Compared to paracrine FGFs, endocrine FGFs have poor affinity for their cognate FGFRs. They overcome this deficiency through parallel binding to α/β-Klotho coreceptors expressed in their target cells (Goetz et al, 2012; Owen et al, 2015). Klotho (KL) is a transmembrane protein discovered in 1997. The name of the gene comes from Greek mythology: Klotho means "spinner" in Greek, and Klotho was one of the Three Fates of Destiny, responsible for "spinning the thread of life". Mutation of the mouse klotho gene leads to a syndrome resembling aging and shortens lifespan. β-Klotho (KLB) was identified by homology with the KL gene (Ogawa et al, 2007). It is abundantly expressed in metabolic tissues, especially adipose tissue. FGF21 is an endocrine FGF, mainly secreted by the liver, that regulates major metabolic processes such as glucose and lipid metabolism and decreases body weight. Endogenous FGF21 plays a role in mediating the physiological response to starvation and a variety of other metabolic stresses (Owen et al, 2015). FGF21 signals primarily in a tissue-specific manner through the β-Klotho/FGFR1c receptor complex (Ogawa et al, 2007).
Because altered metabolism is associated with altered reproduction, Xu et al (2017) suspected that FGF21/KLB/FGFR1 signaling was involved in the pathogenesis of CHH. By using a candidate gene approach to study CHH, together with many different molecular and cellular approaches in vitro and in vivo, they convincingly demonstrate that β-Klotho is involved in postnatal GnRH biology (Xu et al, 2017). Compared to its homolog KL, KLB "spins the thread of puberty".
This genetic study of patients represents a leap forward in reproductive research, allowing, in particular, the unraveling of multiple steps of FGFR1 signaling and function in vivo that are altered in CHH.
The authors previously identified FGF8b as a key ligand for FGFR1c during embryonic development (Pitteloud et al, 2007). Specifically, they studied a patient with KS displaying an FGFR1 loss-of-function mutation, L342S, which alters FGF8b binding. The patient also had metabolic phenotypes with severe insulin resistance.
Pitteloud et al then identified missense mutations in FGF8 in IHH probands with variable olfactory phenotypes (Falardeau et al, 2008). Furthermore, mice homozygous for a hypomorphic Fgf8 allele lacked GnRH neurons in the hypothalamus and exhibited nasal cavity developmental defects and olfactory bulb dysgenesis, a phenotype similar to that observed in the Fgfr1 conditional knockout mouse. Heterozygous mice showed substantial decreases in the number of GnRH neurons and in hypothalamic GnRH peptide concentration. The authors conclude that FGF8 is implicated in GnRH deficiency in both humans and mice and report the exquisite sensitivity of GnRH neuron development to reductions in FGF8 signaling during development. Mutations in FGFR1 and FGF8 account for ~12% of cases of CHH (Falardeau et al, 2008).
Thus far, the natural ligand of FGFR1 in postnatal biology was unknown. The expression of FGF8 is restricted to embryonic development. FGF21 has been identified as a major peripheral and central metabolic regulator (Owen et al, 2013). Because of the link between metabolism and reproduction, Xu et al (2017) hypothesized that a defect in the FGF21/FGFR1/KLB pathway may underlie GnRH deficiency in humans and rodents.
In this work, Pitteloud et al show that the CHH-associated FGFR1 mutation p.L342S leads to decreased signaling of the metabolic regulator FGF21 by impairing the association of FGFR1 with KLB, the obligate coreceptor for FGF21. Interestingly, this KS patient also had metabolic defects.
The switch in FGFR1 signaling between embryonic and adult life is brought about by the loss of expression of the paracrine factor FGF8 and the expression of an endocrine ligand, FGF21, together with the mandatory, tissue-specific expression of the FGFR1 coreceptor KLB. Klb is expressed in the postnatal hypothalamus. KLB enhances FGF21-FGFR1c (an isoform of FGFR1) binding, and hence promotes FGF21 signaling, by simultaneously tethering FGF21 and FGFR1c to itself through two distinct sites. Furthermore, the competitive binding of FGF8 and β-Klotho to the same site of FGFR1 favors binding of endocrine FGF21 and inhibits paracrine FGF8 binding and signaling (Goetz et al, 2012). Indeed, the binding of KLB involves a conserved hydrophobic groove in the immunoglobulin-like domain III (D3) of FGFR1c (Goetz et al, 2012). Interestingly, this hydrophobic groove is also used by paracrine FGF8 ligands for receptor binding. Amino acid L342, highly conserved across species, is a key constituent of the hydrophobic groove of D3 (Fig 1; Pitteloud et al, 2007; Goetz et al, 2012). This leucine accounts for the unique binding specificity of FGF8b for the c splice isoforms of FGFR1-3.
To determine whether the FGF21/KLB/FGFR1 signaling pathway was involved in GnRH deficiency in humans, a candidate gene approach was performed in 334 patients with CHH. The majority of patients also exhibited metabolic syndrome. While no mutation of FGF21 was identified, mutations of KLB were detected. Seven heterozygous variants were identified in 13 CHH probands, including six missense variants and one in-frame deletion found in seven unrelated patients. The variants showed decreased signaling, decreased ligand-binding affinity, or decreased expression in vitro. Wide phenotypic variability was observed, from severe to mild forms such as CHH with reversal or fertile eunuch syndrome. Variable expressivity and incomplete penetrance were found in the families. This suggests that other genes or environmental factors may contribute to the phenotype. Indeed, 35% of patients were found to carry an additional heterozygous mutation in FGF8, PROKR2, or FGFR1, predicted or shown to be deleterious. A more severe phenotype was observed in families with two mutated genes, and partial GnRH deficiency was observed in patients with KLB mutations alone. This is compatible with an oligogenic model of inheritance, which might explain the differences in phenotype and expressivity of the syndrome within a single family. The same group has previously shown that IHH can be caused by a combination of genetic defects. In total, 4% of patients had heterozygous KLB mutations. The majority of CHH patients with KLB mutations exhibit metabolic defects; 17% of CHH patients carry mutation(s) in the FGF21/KLB/FGFR1 pathway, either as monoallelic or digenic combinations. Mutational analysis by next-generation sequencing will uncover a proportion of patients with oligogenicity and may lead to greater accuracy in phenotypic predictions. Oligogenicity also has implications for genetic counseling of IHH patients and their families.
In vivo models were studied by Xu et al (2017) to confirm the pathogenicity of the detected mutations. Complementation studies in Caenorhabditis elegans, in which the two homologs klo-1 and klo-2 were depleted, showed that the mutants failed to rescue, or had a decreased ability to rescue, the cyst phenotype of the double-deletion mutant. In addition, Xu et al (2017) studied the reproductive phenotype of Klb knockout (KO) mice and showed that they exhibit disrupted estrous cycles, blunted LH levels at the estrus stage, and impaired fertility due to a hypothalamic defect. Klb−/− mice do not have abnormalities in GnRH neuron differentiation, and there is a normal GnRH vesicular pool at the nerve terminals. Klb−/− mice respond to GnRH and kisspeptin stimulation. This excludes a pituitary defect and suggests that GnRH neurons are present and can respond to stimulation. These results support an implication of KLB in postnatal hypothalamic GnRH secretion, consistent with a contribution of KLB to the central regulation of reproduction (Fig 2). Interestingly, heterozygous Klb mice had a similar phenotype. A mechanism of haploinsufficiency is thus conceivable in patients with heterozygous loss-of-function mutations of KLB.
At the molecular level, Xu et al (2017) show that FGF21 stimulates neurite outgrowth in mature immortalized GnRH neurons in vitro and induces GnRH secretion/release in median eminence (ME) explants ex vivo. These results raise the possibility that peripheral FGF21 modulates GnRH secretion by acting directly on GnRH neuroendocrine terminals in the ME and suggest a novel role for FGF21 in controlling fertility by modulating GnRH neuron structural plasticity.
By using fluorescently labeled rFGF21 injected intravenously into GnRH::gfp mice, the authors show that peripheral FGF21 can reach the hypothalamic GnRH neuron terminals residing outside the blood-brain barrier (BBB) by extravasation through the fenestrated vessels of the vascular organ of the lamina terminalis and the ME. The authors speculate that GnRH neuron terminals outside the BBB may perceive peripheral FGF21 to adapt GnRH secretion to the metabolic state of the individual. They further demonstrate that FGF21/KLB/FGFR1 signaling plays an essential role in GnRH biology, which establishes a novel link between metabolism and reproduction in humans. Interestingly, a majority of patients with KLB mutations exhibit HH and, to some degree, metabolic defects (i.e., overweight, insulin resistance, and/or dyslipidemia), consistent with a metabolic role for this pathway. Persistence of these defects after sex hormone replacement therapy remains to be verified, since this treatment usually improves metabolic parameters in CHH.

[Figure legend: During embryonic development, the paracrine FGF8 binds FGFR1c. Deficient FGF8 signaling results in absence of GnRH neuron development in the hypothalamus. During postnatal life, the endocrine FGF21, secreted under the influence of metabolic stimuli, acts on the liver and adipose tissue (WAT, white adipose tissue; BAT, brown adipose tissue) and reaches the hypothalamus through fenestrated capillaries (FC) of the median eminence (ME) or of the organum vasculosum of the lamina terminalis. FGF21 binds FGFR1c and the obligate coreceptor KLB. Deficient KLB is associated with normal embryonic GnRH neuron development but with a block in GnRH secretion during postnatal development. There is an altered secretion of LH and FSH by the pituitary (P), leading to altered puberty and fertility.]
Previous studies had shown that FGF21 contributes to the neuroendocrine control of female reproduction (Owen et al, 2013). Preventing reproduction during nutritional deprivation is an adaptive process that is essential for the survival of species. Fgf21 transgenic mice exhibit GnRH deficiency with infertility through repression of the vasopressin-kisspeptin pathway at the level of the suprachiasmatic nucleus in the hypothalamus. Both deficiency and excess of FGF21 may thus lead to defects in GnRH function. Such alterations of the FGF21/KLB/FGFR1 signaling pathway, and especially mutations of KLB, should be sought in functional HH, such as hypothalamic amenorrhea and obesity-related HH. Indeed, a genetic basis for functional hypothalamic amenorrhea has been shown by the same group (Caronia et al, 2011).
The candidate gene approach used on CHH families is a winning strategy to identify new genes involved in IHH, allowing dissection of the FGFR1 molecular signaling pathway. It will gradually increase the proportion of patients for whom a genetic cause is identified. Interactome studies could be combined in the future to pinpoint potential candidate genes. To date, no pathogenic mutation is known for 50% of CHH patients, suggesting that additional mutations in currently unknown genes remain to be discovered. | 3,395 | 2017-08-04T00:00:00.000 | [
"Biology",
"Medicine"
] |
Do tax aggressiveness and capital structure affect firm performance? The moderating role of political connections
This study examines how tax aggressiveness, capital structure, and political connections affect company performance. In addition, we examine whether political connections moderate the effect of tax aggressiveness on firm performance, as well as the effect of capital structure on firm performance. Companies with aggressive tax strategies that are politically connected perform better than those that are not. In addition, companies with larger external capital structures perform better when the company's boards are politically connected. In order to avoid the disadvantages of an aggressive tax strategy and a heavily external capital structure, companies build connections through the board to obtain projects from the government and avoid the risk of oversight by the authorities. Therefore, we suggest that regulators conduct inspections and supervision of companies that have political connections through the board, to prevent them from using unconstitutional methods to obtain projects from the government.
Introduction
Tax avoidance is one of the problems faced by developing countries, including Indonesia. In addition, the high cost of debt, political interference, and the ineffective implementation of corporate governance have a negative impact on economic growth and company performance. Dyreng et al. (2016) revealed that aggressive tax actions by managers, carried out while ignoring transparency of financial reporting, increase agency conflicts between contracting parties, thus exacerbating information asymmetry problems. Consequently, Cook et al. (2017) revealed that companies face an increased risk of litigation, scrutiny by regulatory authorities, and reputational damage due to aggressive tax actions. These negative consequences increase risks for investors, creditors, and shareholders. Creditors are reluctant to invest in companies that have dubious accounting practices and are at high risk. Furthermore, if creditors are willing to fund these risky companies, they will impose more stringent provisions in the credit agreement clauses, including demands for high debt costs. In addition, the debt agreement can restrict companies from investing in risky projects that have an unfavourable impact on company performance.
In order to overcome the consequences of aggressive tax actions, companies try to establish political connections through the board and shareholders. This improves company performance (Yan & Chang, 2018). Khwaja & Mian (2005) and Faccio et al. (2010) argue that political relations are established by companies to obtain projects from the government and subsidies, and to avoid or obtain protection from authorities against supervision or inspection. Thus, various benefits can be obtained when a company has political connections, enabling it to fulfil its goals of gaining profits through projects obtained from the government and ease of access to funding.
Aggressiveness and tax evasion are severe problems in Indonesia. In 2020 and 2021, Indonesia had tax-to-GDP ratios of 8.33% and 9.11%, much lower than several ASEAN countries, which show figures above 11%. Dewanta & Machmuddah (2019) revealed that this reflects the opportunistic behaviour of taxpayers, expressed through aggressive tax strategies. Low tax revenues have affected public sector development spending and economic growth, and contributed to high inflation in the country. Given the importance of adequate tax revenues for development, policymakers, academics, industry stakeholders and the general public must make a concerted effort towards broader structural changes to Indonesia's tax system. Inger (2014) reports that tax aggressiveness positively affects company performance, arguing that when these aggressive actions are carried out transparently, they can improve the company's performance. Furthermore, Chen et al. (2010) also stated that the positive effect of tax aggressiveness on company performance can occur when companies have an effective corporate governance mechanism. However, several studies have also found a negative relationship between tax aggressiveness and financial performance (Hanlon & Slemrod, 2009; Zhang et al., 2017), attributed to high information asymmetry and agency costs weakening company performance.
Several previous studies have produced mixed results on the effect of capital structure on company performance. Velnampy (2014), Fathony & Syarifudin (2021), and Ritonga et al. (2021) found a negative effect of capital structure on company performance. In contrast, Ningsih & Utami (2020) and Mai & Setiawan (2020) revealed a positive influence of capital structure on company performance, in the property and real estate industry and the sharia manufacturing industry, respectively.
In order to improve performance, companies establish political relations through the board. Utamaningsi (2020) revealed that companies with political connections performed better than those without, during the term of office of the associated politician. Furthermore, political connections positively affect financial and environmental performance (Mustika et al., 2020), a result of a series of environmental management policies implemented by companies with political connections. However, Azizah and Amin (2020) found no influence of political connections on company performance as reflected in profitability, while Kristanto (2019) argues that political connections have a negative effect on firm performance, due to the high political costs incurred, which reduce the company's performance.
Based on the discussion above, this study analyzes the relationships between tax aggressiveness, capital structure, political connections, and company performance in manufacturing companies in Indonesia. Furthermore, the inconsistency of previous results means that there is not yet sufficient evidence about the relationship between tax aggressiveness, capital structure, political connections, and company performance in Indonesia. Therefore, this study has several research objectives. First, it analyzes the relationship between tax aggressiveness and firm performance. Second, it tests the influence of capital structure on company performance. Third, it analyzes whether political connections moderate the influence between (i) tax aggressiveness and firm performance and (ii) capital structure and firm performance.
This study uses tax aggressiveness and capital structure as independent variables. The literature provides several proxies to measure tax aggressiveness (Lanis & Richardson, 2011). The measurement of tax aggressiveness uses the effective tax rate, which is the proportion of tax expense to profit before tax, consistent with previous studies (Choi & Park, 2022; Guenther et al., 2017). A company is considered to carry out an aggressive tax strategy when the taxes paid are lower than the corporate tax rates determined by the government (Chen et al., 2010; Nurcahyono et al., 2022; Richardson et al., 2013). Furthermore, the capital structure is calculated as the proportion of total liabilities to total assets of the Company, consistent with Fathony & Syarifudin (2021), Fauzi et al. (2022), Nurcahyono et al. (2021), and Ritonga et al. (2021). In addition, this study uses political connections as an independent variable and a moderator. Political connections are measured through a dummy variable taking the value 1 if the company is politically connected and 0 if not (Faccio et al., 2010; Junaidi & Siregar, 2020). A company is considered politically connected if the board of directors is directly or indirectly affiliated with a political party: directly when a politician sits on the board, and indirectly when a board member has close ties to a political party or politician (Faccio et al., 2010).
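To make the proxy definitions above concrete, here is a minimal pandas sketch (ours, not the authors' code; the column names and numbers are illustrative assumptions) building the four variables from annual-report data:

```python
# Sketch: constructing the study's proxies from annual-report data.
import pandas as pd

df = pd.DataFrame({
    "tax_expense":       [120.0, 80.0],
    "pretax_income":     [500.0, 400.0],
    "total_liabilities": [900.0, 300.0],
    "total_assets":      [2000.0, 1500.0],
    "net_income":        [260.0, 250.0],
    "board_politically_connected": [True, False],
})

df["TA"]  = df["tax_expense"] / df["pretax_income"]        # effective tax rate
df["DTA"] = df["total_liabilities"] / df["total_assets"]   # capital structure
df["PC"]  = df["board_politically_connected"].astype(int)  # dummy: 1 or 0
df["ROA"] = df["net_income"] / df["total_assets"]          # firm performance

# A firm is flagged as tax-aggressive when its ETR falls below the 25%
# statutory corporate rate cited later in the paper.
df["aggressive"] = df["TA"] < 0.25
print(df[["TA", "DTA", "PC", "ROA", "aggressive"]])
```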
The purpose of this study is to fill the existing gaps. The results are expected to contribute to accounting science by developing theoretical understanding through the research variables, and to provide managerial implications concerning the factors that influence company performance. The findings also provide input to regulators in order to prevent politically connected companies from using unconstitutional methods to obtain projects from the government.
Hypothesis Development Tax Aggressiveness and Company Performance
Tax aggressiveness can be interpreted as the Company's efforts to reduce tax obligations through various accounting strategies (Hanlon & Heitzman, 2010). Tax aggressiveness has the potential to provide benefits but also disadvantages for the Company and stakeholders. Khuong et al. (2020) revealed that companies taking an aggressive tax strategy can generate higher free cash flow, providing good creditworthiness, low risk and a low cost of capital. Tax aggressiveness provides benefits when managers' interests align with those of shareholders (Desai & Dharmapala, 2006). Agency theory holds that when there is a conflict of interest between managers and shareholders as principals, agents can expropriate the excess free cash flows obtained from aggressive tax strategies. Conversely, several studies argue that shareholders and creditors consider companies that choose aggressive tax strategies riskier, because such companies face tax audits and legal action that can damage their reputation (Drake et al., 2019; Guenther et al., 2017). In addition, Hasan et al. (2014) revealed that this action exposes companies to tight credit agreement covenants from lenders, imposed because creditors face agency conflicts and low performance. Furthermore, an aggressive tax strategy exposes companies to a high cost of equity, because shareholders perceive them as lacking transparency and suffering from acute information asymmetry problems (Goh et al., 2016). Inger (2014) found a positive relationship between tax aggressiveness and company performance, arguing that tax aggressiveness will benefit companies if the tax strategy is transparent and avoids complex business transactions. Furthermore, implementing corporate governance can trigger increased performance in companies with an aggressive tax strategy (Desai & Dharmapala, 2006). Conversely, other studies reveal a negative effect of tax aggressiveness on firm performance (Chung et al., 2019; Hanlon & Slemrod, 2009; Khuong et al., 2020), attributing poor performance to high agency costs and information asymmetry. Cook et al. (2017) argue that tax aggressiveness also triggers complicated business transactions, giving managers opportunities to expropriate cash flows and reduce the company's performance. Therefore, the hypothesis is formulated as follows: H1: Tax aggressiveness has a negative effect on company performance.
Capital Structure and Company Performance
Empirical evidence explaining the relationship between capital structure and firm performance provides contradictory results. Most capital structure theories, and some empirical evidence, support a positive relationship between capital structure and firm performance, while other studies have found a negative influence of capital structure on firm performance. Specifically, the studies of Ngatno et al. (2021), Berger & Bonaccorsi (2006), and Gill (2011) revealed that a high debt ratio is directly proportional to company performance. This is because high debt can suppress agency costs, so that managers as agents have no room to take opportunistic actions and therefore act in the interests of shareholders. In contrast, Velnampy (2014), Fathony & Syarifudin (2021), and Ritonga et al. (2021) found that capital structure is negatively correlated with profitability. They argue that the risk of bankruptcy or liquidity problems can cause companies to seek fresh money through external funding, so that the debt ratio increases; this condition exposes the company to higher interest expenses, which decrease the company's performance. The high cash flow from such funding can also lead managers to behave in a discretionary manner, negatively impacting company performance. Thus, the hypothesis we propose is as follows: H2: Capital structure has a positive effect on company performance.
Political Connections and Corporate Performance
Political connections occur when a company's board consists of directors and commissioners who have political connections (Faccio et al., 2010). Agency theory is used in analysing the role of political connections in company performance. Agency theory suggests that politicians can expropriate the wealth of minority shareholders and pursue goals that may not maximize firm value. The literature shows that politically connected companies expropriate the wealth of minority shareholders in several ways. First, board members who have political connections directly expropriate minority shareholders. Second, they motivate majority shareholders to expropriate minority shareholders. Third, they influence management to recruit politically connected managers to pursue their own social and political goals. Utamaningsi (2020) revealed that companies with political connections perform better than those without during the term of office of the politician concerned. Furthermore, Sulistyowati & Prabowo (2020) found that political connections have a positive influence on financial as well as environmental performance, a result of a series of environmental management policies implemented by companies with political connections. However, Azizah & Amin (2020) found no relationship between political connections and financial performance, while Kristanto (2019) argued that there is a negative relationship between political connections and company performance due to the high political costs incurred, which reduce company performance. Based on this discussion, the formulation of the hypothesis is as follows: H3: Political connections have a positive effect on company performance.

Tax Aggressiveness, Political Connections and Corporate Performance

Dyreng et al. (2008) revealed that tax aggressiveness can damage the quality of financial reporting and trigger an increase in agency problems in companies. Previous research also argues that tax aggressiveness carries reputational consequences for the Company, due to supervision by the authorities and legal action by stakeholders (Cook et al., 2017). The results of studies on the influence of tax aggressiveness on company performance vary. Inger (2014), Sunengsih et al. (2021), and Sukesti et al. (2021) argue that tax aggressiveness can improve company performance, because companies choose an aggressive tax strategy to minimize the taxes paid, taxes being a burden to the Company. Conversely, other studies reveal that aggressive tax strategies have a negative effect on company performance (Chung et al., 2019; Hanlon & Slemrod, 2009; Khuong et al., 2020).
Aggressive tax actions can expose the company to problems when raising external capital: creditors will impose strict debt agreement conditions that restrict investment in risky projects, and will also ask for higher interest charges on loans to protect against risk. Several studies reveal that a high debt ratio is directly proportional to company performance (Berger & Bonaccorsi, 2006; Gill, 2011; Ngatno et al., 2021), because high debt can suppress agency costs, leaving managers as agents no room to take opportunistic actions and causing them to act in the interests of shareholders. However, Velnampy (2014), Fathony & Syarifudin (2021), and Ritonga et al. (2021) found that capital structure is negatively correlated with profitability, which they attribute to bankruptcy and liquidity risks.
Studies show that companies with political connections perform better than those without (Mustika et al., 2020; Utamaningsi, 2020), owing to the ease with which they obtain projects from the government. Given the detrimental consequences for firms of aggressive tax actions, we argue that these firms develop political connections to exploit the connections and influential status of politicians, to overcome reputational issues, and to gain convenient and cost-effective access to finance (Faccio et al., 2010; Khwaja & Mian, 2005). Therefore, political connections held by the Company's board can benefit companies with an aggressive tax strategy and high debt costs caused by the dominance of external funding, ultimately resulting in better company performance. Thus, we develop the following hypotheses: H4: Political connections moderate the influence between tax aggressiveness and firm performance. H5: Political connections moderate the influence between capital structure and firm performance.
Method
Purposive sampling was used in this study, with the criterion that companies were profitable during the observation period, both before and after tax. 108 firm-year observations from manufacturing companies listed on the Indonesia Stock Exchange over 2016-2021 were used to analyse the relationships between variables. Data were collected from the companies' annual reports, available on their websites. This study investigates the relationship between political connections, tax aggressiveness, capital structure, and firm performance. The variables are measured as follows:

- Firm Performance (ROA): the proportion of net profit to the company's total assets (Sulistyowati & Prabowo, 2020; Pattiruhu & Paais, 2020; Tangngisalu et al., 2020).
- Tax Aggressiveness (TA): the effective tax rate, i.e., the proportion of tax expense to profit before tax (Choi & Park, 2022; Guenther et al., 2017).
- Capital Structure (DTA): the proportion of total liabilities to total assets (Fauzi et al., 2022; Fathony & Syarifudin, 2021; Ritonga et al., 2021).
- Political Connections (PC): a dummy variable taking the value 1 if the company is politically connected and 0 if not (Faccio et al., 2010; Junaidi & Siregar, 2020).

The relationships between variables are analysed with multiple regression analysis. Model 1 is the basic model for testing H1, H2, and H3, which are supported if the coefficients of tax aggressiveness (TA), capital structure (DTA), and political connections (PC) are each statistically significant. Model 1 is presented below:

ROA = α + β1 TA + β2 DTA + β3 PC + ε

To test H4 and H5, the interaction terms TA×PC and DTA×PC are added to this specification.

Table 2 shows the descriptive statistics of the 108 observations of the research variables. Return on Assets (ROA), which measures company performance and is the dependent variable, shows a mean value of 0.12: the manufacturing companies under study earn a profit after tax of around 12% of total assets. The capital structure, proxied by total debt divided by total assets (DTA), shows a mean of 0.41, meaning that 41% of the companies' assets are financed through debt. Tax aggressiveness (TA), measured by the effective tax rate, shows a mean of 0.23, around 2 percentage points below the statutory tax rate of 25%, indicating a mildly aggressive tax strategy on average. The moderating variable, political connections, shows a mean value of 0.65, meaning that 65% of the manufacturing companies in the sample are politically connected.

The Effect of Tax Aggressiveness on Company Performance

Table 3 shows the results of the panel regression. Tax aggressiveness positively affects firm performance (β=0.53, p<0.05), signaling that companies with aggressive tax strategies generate higher profits than those with conservative tax strategies. This occurs because companies choose an aggressive tax strategy to minimize the taxes paid, since taxes are a burden to the Company. Efficient tax payments benefit the Company in the form of increased performance. These results align with agency theory, whereby companies carry out aggressive tax strategies to achieve better corporate performance in order to obtain incentives.
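As a concrete illustration of the specification above, the following is a minimal sketch of how the basic and moderated models could be estimated. This is our sketch, not the authors' code: the use of statsmodels, the synthetic data, and the coefficient values are assumptions for demonstration only.

```python
# Sketch: OLS estimation of the basic and moderated models on a firm-year panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_models(df: pd.DataFrame):
    base = smf.ols("ROA ~ TA + DTA + PC", data=df).fit()            # H1-H3
    moderated = smf.ols("ROA ~ TA * PC + DTA * PC", data=df).fit()  # H4-H5
    return base, moderated

# Synthetic demo with 108 observations, mimicking the sample size above
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "TA":  rng.uniform(0.1, 0.3, 108),
    "DTA": rng.uniform(0.2, 0.7, 108),
    "PC":  rng.integers(0, 2, 108),
})
demo["ROA"] = (0.1 + 0.5 * demo["TA"] - 0.4 * demo["DTA"]
               + 0.03 * demo["PC"] + rng.normal(0, 0.02, 108))

base, moderated = fit_models(demo)
# The signs of base.params["TA"], base.params["DTA"], and the interaction
# coefficients moderated.params["TA:PC"], moderated.params["DTA:PC"] are
# what the hypotheses H1-H5 are judged against.
print(moderated.summary().tables[1])
```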
In addition, this study strengthens the agency theory perspective that there are differences in interests between companies and tax authorities, whereby, to improve shareholder welfare through profits, the efficiency of tax payments is pursued through aggressive tax strategies. Zhang et al. (2017) revealed that to reduce the agency costs resulting from choosing an aggressive tax strategy, corporate governance mechanisms must be strengthened to suppress or prevent managerial rent extraction. This study supports Inger (2014), Sunengsih et al. (2021), and Caroline et al. (2023), who found that tax aggressiveness can improve company performance.
Effect of Capital Structure on Company Performance
The analysis shows that capital structure, as measured by debt to total assets, has a significant negative effect on firm performance (β=−0.423, p<0.05). Interest expenses increase in line with the high portion of funding through debt, which reduces the Company's profitability. This consequence results from management's inability to manage the significant funding from loans, possibly due to low margins from the company's operations or excess working capital that cannot be absorbed to support or develop operations. This study is in line with agency theory, which holds that high debt increases agency costs, so that the company's profits are ultimately eroded by interest expenses and the profits generated are low. Agency costs are a direct result of conflicts of interest between shareholders and creditors: shareholders must bear high interest expenses imposed by creditors when loans dominate the company's capital structure, as creditors' mitigation against the risk of financial distress, bankruptcy or liquidation. This study shows that the greater the funding through debt, the lower the company's performance. These results are in line with Fathony & Syarifudin (2021), Fauzi et al. (2022), Ritonga et al. (2021), and Velnampy (2014).
The Effect of Political Connection on Company Performance
The results show that political connections have a positive influence on firm performance (β=0.376, p<0.05). This implies that political connections enable a company to achieve higher profitability than it would without them. Political connections provide benefits in several ways, including ease of obtaining contracts from the government, preferential treatment in terms of taxation, ease of access to funding, and laxer supervision. This special treatment gives advantages to politically connected companies, encouraging better performance than companies without political connections (Faccio et al., 2010; Khwaja & Mian, 2005). The ease of obtaining funding enables companies to develop their business and increase their performance or profit. Furthermore, political connections make it easy for the company to obtain information about government projects, which increases revenue and improves performance relative to competitors. These results support the view of agency theory, whereby when politically connected individuals sit on the board of directors or commissioners, the company must be willing to provide more compensation in return for the convenience or special treatment received. This study is in line with Utamaningsi (2020) and Sulistyowati & Prabowo (2020), who reveal a positive influence of political connections on company performance.
Moderation of Political Connections on the Effect of Tax Aggressiveness on Company Performance
The analysis of the moderating variable shows that the TA*PC coefficient is positive and significant (β=2.27, p<0.05). These results illustrate that political connections moderate the effect of tax aggressiveness on firm performance. In other words, companies with an aggressive tax strategy that establish political connections show better performance than those without political connections. Companies with aggressive tax strategies face risks in the form of inspections and supervision by authorities, as well as low shareholder trust, which negatively affect performance (Drake et al., 2019; Guenther et al., 2017; Khuong et al., 2020). However, such companies can address these problems by establishing political connections, thereby avoiding supervision or inspection by the authorities without compromising investors, and thus improving their performance. Therefore, although an aggressive tax strategy is risky, it can provide benefits for the Company, reflected in better performance. To minimize this risk, the Company places individuals with political connections on the board of directors and commissioners, who are expected to protect the company so that it receives special treatment in terms of taxation, in the form of avoiding the risk of inspection and supervision by the tax office.
Moderation of Political Connections on the Effect of Capital Structure on Company Performance
Furthermore, the moderation of political connections on the relationship between capital structure and firm performance is positive and significant (β=0.816, p<0.05). These results illustrate that political connections moderate the influence of capital structure on firm performance: companies whose capital structure relies on larger loans perform better when they are politically connected than when they are not. A capital structure dominated by loans carries the consequence of high interest expenses. However, this can be offset by a politically connected board, which opens up opportunities to obtain larger projects from the government. Even with a high cost of capital, the additional income from government projects, on top of existing non-government projects, can increase the company's profitability. In conclusion, although a capital structure dominated by third-party funding entails interest expenses and exposes the company to the risk of financial distress, it can improve the Company's performance when these loans are managed properly. This is in line with the agency cost view, whereby a loan-based capital structure can suppress opportunistic actions by managers, benefiting the Company through better performance. In addition, political connections provide benefits in the ease of obtaining projects from the government, increasing revenue that can cover high interest expenses and raise company income.
Conclusion and Recommendation
This study analyzes the effect of tax aggressiveness and capital structure on company performance. The analysis found that tax aggressiveness and political connections have a positive influence on company performance, while capital structure has a negative influence. These results support the view of agency theory, in which companies adopt an aggressive tax strategy to deliver good performance and appoint politically connected boards.
The results of this study have several implications. First, regulators should monitor companies that are politically connected through their boards, to prevent them from using unconstitutional means to obtain projects from the government.
The second implication is to provide input to regulators regarding the importance of policies on the transparency of financial reporting and the protection of shareholders, especially when a company has political connections. Investors should be more careful in placing their funds in companies whose capital structure is dominated by loans. Third, this study advises shareholders to reduce agency costs and to use their voting rights to supervise aggressive tax behaviour and improve the quality of financial reporting.
However, this study has some limitations. First, the research was only conducted in the manufacturing sector; further studies should be carried out across industries to enrich and test the consistency of the results on these variables. Second, the study stops at the factors that influence company performance; we recommend that further research examine the consequences of company performance, namely for company value as reflected in stock prices and investor decisions. Third, this study only uses one measurement variable each for tax aggressiveness and capital structure, so further studies could use more than one measure, such as the book-tax gap, the cash effective tax rate, and long-term versus short-term debt, to test the consistency of the results. | 6,111 | 2023-04-11T00:00:00.000 | [
"Business",
"Political Science",
"Economics"
] |
Drop generation from a vibrating nozzle in an immiscible liquid-liquid system
INTRODUCTION
Inducing vibration in jets or drops can be used to control breakup, and thus drop size. Vibration is applied, for example, in ink jet printing, spray coating or vibrating cross-flow membrane emulsification, the latter having motivated our research. We focus on transversal vibrations, where drops undergo axial oscillations. For a membrane with a mean pore diameter of 0.8 µm, Arnaud 1 found a decrease in the peak of the volume-weighted drop size distribution (from 30 µm to 10 µm) at a forcing frequency of 15 to 20 kHz compared to without vibration. Thus, vibrating the membrane in this process impacts drop size, but the mechanisms of drop detachment were not explained 1,2. To obtain fine control over drop size, understanding the physics of drop vibration and detachment is necessary.
Oscillations of liquid drops have been extensively studied since the pioneering work of Lord Rayleigh 3. He calculated the eigenmodes of a free inviscid, incompressible drop in a vacuum, in the absence of gravity and for small-amplitude oscillations. The eigenmodes are characterized by two integers: a polar wavenumber l ≥ 2 and an azimuthal wavenumber m ∈ [−l; l]. In this study, we focus on axisymmetric modes (m = 0). Rayleigh 3 showed that the eigenfrequencies depend on l, the liquid density, the interfacial tension and the drop size. Drop eigenfrequencies scale as d^(−3/2), with d the drop diameter. Lamb 4 generalized this theory by calculating the eigenpulsation of a drop in a surrounding fluid and found the same relationship f ~ d^(−3/2).
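Lamb's generalization is a standard closed-form result, and a short numerical sketch makes the d^(−3/2) scaling concrete. The sketch below is ours (not from the paper); the property values are illustrative, loosely inspired by a dodecane drop in water.

```python
# Sketch: Lamb's eigenfrequency for an inviscid drop in an outer fluid,
# omega_l^2 = l(l+1)(l-1)(l+2) gamma / (R^3 [(l+1) rho_d + l rho_c]).
import numpy as np

def lamb_frequency(l, gamma, rho_d, rho_c, d):
    """Eigenfrequency (Hz) of polar mode l for a free drop of diameter d (m)."""
    R = d / 2.0
    omega2 = (l * (l + 1) * (l - 1) * (l + 2) * gamma
              / (R**3 * ((l + 1) * rho_d + l * rho_c)))
    return np.sqrt(omega2) / (2.0 * np.pi)

# Mode l = 2 of a 2 mm oil drop in water (gamma ~ 40 mN/m, illustrative):
f2 = lamb_frequency(2, gamma=0.040, rho_d=750.0, rho_c=1000.0, d=2e-3)
print(f"f_2 ~ {f2:.0f} Hz")  # halving d multiplies f_2 by 2**1.5
```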
Rodot et al. 5 and Bisch et al. 6 investigated drops partially bound to a rod and submitted to controlled vibration. The drop was immersed in an immiscible liquid of equal density, with its contact line pinned on the rod edge. A large range of liquid couples was examined and the first eigenfrequency was found to depend on the support diameter, drop diameter, drop density and surface tension. The first resonance frequency scales as d^(−2), and not as d^(−3/2) as for the free drop. Then, Strani and Sabetta 7 (hereafter denoted S&S) studied linear oscillations of a liquid drop in an outer fluid, in partial contact with a spherical bowl, under inviscid and zero-gravity assumptions. The presence of the support increases the eigenfrequencies for modes l ≥ 2, but an additional low-frequency mode appears. This is the l = 1 eigenmode, associated with the displacement of the bound drop center of mass. When the support reduces to a single point, the l = 1 mode degenerates to a zero-frequency rigid motion of the drop. S&S 7 noted that the mode l = 1 eigenfrequency may be approximated over intervals by a power law f ∝ d^α, with α varying between −2.9 and −1.75 for drop-to-support diameter ratios of 1.3 to 7, respectively. This is consistent with α = −2 proposed by Bisch et al. 6 Also, the frequencies computed by S&S 7 were in agreement with the data of Bisch et al. 6, but resonance frequencies were overpredicted by 20% (reduced to 10% by accounting for viscous effects 8). However, both models (inviscid 7 and viscous 8) overpredicted resonance frequencies for large support-to-drop diameter ratios, attributed to nonlinear effects not taken into account in the models. Smithwick and Boulet 9 studied the first resonance frequency of mercury drops on glass (pinned contact line) under partial vacuum and compared their data to the calculations of S&S 7. A maximum error of 3.3% was found.
Bostwick and Steen 10 and Vejrazka et al. 11 studied linear oscillations of a drop supported on a ring. Bostwick and Steen 10 noted that the center of mass motion is partitioned among all the eigenmodes, but the l = 1 mode is its main carrier. Vejrazka et al. 11 found that for small support-to-drop diameter ratios, the frequency response of the drop is independent of the constraint (bowl or ring). Abi Chebel et al. 12 and Vejrazka et al. 11 examined drop oscillations driven by imposed periodic volume variations. The frequency response is independent of the forcing type as long as the support-to-drop diameter ratio is small 11. Lastly, Noblin et al. 13 studied bound drop oscillations with mobile instead of pinned contact lines: a decrease in resonance frequency was found. The transition from a pinned to a mobile contact line occurred above a critical forcing amplitude; in that case, the variation of the contact angle exceeds the contact angle hysteresis.
The previous studies explored linear oscillations. Wilkes and Basaran 14 (hereafter denoted W&B) used computational fluid dynamics (CFD) to study large-amplitude axisymmetric oscillations of a viscous bound drop on a rod (pinned contact line). They found that the drop resonance frequency varies slightly with amplitude at high Ohnesorge numbers (Oh, expressed in section V.B.) but decreases significantly with amplitude at low Oh, Oh being the ratio of a viscocapillary to an inertial-capillary timescale. The resonance frequency also decreases as the Bond number (Bo, expressed in section II.C.) increases; Bo compares gravity to capillary forces. The maximum drop deformation, observed at resonance, increases with forcing amplitude and Bo and decreases with Oh. DePaoli et al. 15 experimentally studied pendant drops in air under high-amplitude forcing and observed hysteresis, characteristic of soft nonlinearities. At a set forcing amplitude (resp. frequency), a larger response amplitude appeared at lower frequencies (resp. amplitudes) when a downwards frequency (resp. amplitude) sweep was performed vs. an upwards sweep. W&B 16 numerically gave the critical forcing amplitude for the onset of hysteresis for different Oh. This value could be as low as 3% of the rod radius (drop and rod radii of the same order). Calculations were also performed for drops hanging from a tube: the first resonance frequency is slightly higher when the support is a tube, the hysteresis range is shifted to higher forcing frequencies, and the deformation at resonance is higher.
For high enough forcing amplitudes, drops detach from the support. W&B 17 used CFD to simulate drop ejection from a rod (pinned contact line). Above a critical amplitude, the bound drop ruptures: a primary drop is ejected from the liquid remaining on the rod. The variation of the critical amplitude as a function of the forcing pulsation has a V-shape (the minimum corresponds to drop resonance). For a set rod diameter, the critical amplitude increases when Oh decreases or when the bound drop volume decreases. Critical amplitudes range from 25% to 80% of the rod radius. Kim 18 experimentally studied the detachment of a pendant drop from a smooth vibrating plate in air (mobile contact line). The variation of the critical amplitude as a function of the forcing frequency has a W-shape. Kim 18 found that the minima concord well with the l = 1 and l = 2 modes of the bound drop as calculated by S&S 7; again, the minima correspond to drop resonance. The agreement between the data and the calculations of S&S 7 is remarkable, as the contact line mobility is different and the experimental oscillation amplitudes are beyond the linear regime.
Resonance also triggered drop detachment in previous work of the authors, where different pore diameters were studied for one system (dodecane-water without surfactant) 19. Drops were formed through a vibrating nozzle continuously fed with dodecane, immersed in the stationary immiscible water phase. This enabled us to gain insight into transversally vibrating membrane emulsification in a simplified configuration. We found that at a set forcing frequency, smaller drops were generated above a threshold forcing amplitude: a growing drop detached prematurely when its first resonance frequency (as given by Bisch et al. 6) and the forcing frequency coincided. However, the threshold was higher than expected, attributed to the fact that the bound drop did not spend enough time in the resonance range to reach steady-state resonance. The generation mode forming the smaller drops was named the "stretching mode". Below the threshold, larger drops were formed in dripping mode.
The aim of this work is to study the mutual effect of forcing parameters and system properties on drop generation from a vibrating nozzle. We also aim to further model drop generation modes by accounting for drop growth and motion as a function of time. We emphasize that studies on vibrated growing drops are rare 19 compared to those on constant-volume drops 5-18, and that our previous work concerned only one system and parameter 19. In the following, we first describe our setup. We present the dripping-to-stretching transition and propose a simple framework to approach it. Then, we discuss the effect of nozzle inside diameter, dispersed phase flow rate, interfacial tension and dispersed phase viscosity on the transition. We examine the effect of these parameters on (i) the threshold amplitude for the stretching mode and (ii) the resulting drop diameters. We further analyze our results by comparing them to S&S calculations 7. Finally, we propose a simple transient model to describe drop dynamics until detachment and compare the model predictions to experiments.
II. EXPERIMENTAL

A. MATERIALS
The reference continuous and dispersed phases are distilled water and dodecane (99%, Fisher Scientific), respectively. To study the impact of interfacial tension, a surfactant (SDS, 85%, Acros Organics) is added to the continuous phase at 0.1 wt% or 2 wt% (systems 1 and 2, respectively). To study the impact of dispersed phase viscosity, paraffin (Fisher Scientific) is added to the dispersed phase at 25 wt% or 50 wt% (systems 3 and 4, respectively). A system with an increased continuous phase viscosity was also tested (supplementary material D). The properties of the reference system and of systems 1 to 4 are given in table I. The viscosities and densities of the mixtures were measured in triplicate, the former with a Ubbelohde-type viscosimeter (AVS310, Schött-Gerade) at 25.1°C. The interfacial tension was measured in triplicate by the rising drop method with a tensiometer (Tracker, I.T. Concept, Teclis). For systems 1 and 2, the interfacial tension is determined by the method explained in supplementary material A. The values in table I report an intermediate plateau interfacial tension. We consider that the plateau value gives an adequate estimation of the interfacial tension when drops form (see supplementary material B).
B. EXPERIMENTAL SETUP
The setup, illustrated in prior work 19, is summarized in fig. 1. The forcing amplitude A is measured by a laser sensor (M5L/2, Bullier Automation) with a precision in the order of 10 µm. The forcing frequency f is set on the signal generator (33512B Arbitrary Waveform Generator, Agilent). Vibrations are parallel to the nozzle axis, so drops undergo axial oscillations. In the moving non-inertial frame of reference where the nozzle is still (axes in fig. 2), the forces exerted on the drop due to nozzle motion are the inertial force and the associated Archimedes' thrust. We note that the continuous phase above the nozzle and support is accelerated by the exciter, shown by Faraday waves at the free surface. The resulting excitation force is F(t) = (ρc − ρd) (π d³/6) A ω² sin(ωt), with d the bound drop diameter and A ω² sin(ωt) the nozzle acceleration in the laboratory inertial frame (ω = 2πf is the forcing pulsation). Drop formation is recorded with a high-speed camera (v310, Phantom) and a macro lens (AF Zoom-Micro Nikkor 70-180mm f/4.5-5.6D ED, Nikon). The acquisition frequency is ten times the forcing frequency, or 100 fps for trials without vibration. The resolution is 800 x 600 px². We extract data with ImageJ 20, including average detached drop diameters, axial drop elongations and the position of the drop center of mass relative to the nozzle surface. Images were calibrated (36 px/mm) using the outer diameter of the nozzle (7.86 ± 0.01 mm for the 0.32 mm nozzle). The main output data are summarized in fig. 2.
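As a quick order-of-magnitude check of the excitation force expression reconstructed above, the short sketch below evaluates its peak value. The sketch is ours, and the amplitude, frequency, drop size, and densities are illustrative assumptions, not values from the trials.

```python
# Sketch: peak excitation force on the bound drop in the nozzle frame,
# F_peak = (rho_c - rho_d) * (pi d^3 / 6) * A * omega^2.
import numpy as np

def excitation_force_amplitude(A, f, d, rho_c, rho_d):
    """Peak excitation force (N) for amplitude A (m), frequency f (Hz),
    drop diameter d (m), continuous/dispersed densities (kg/m^3)."""
    omega = 2.0 * np.pi * f
    volume = np.pi * d**3 / 6.0
    a_peak = A * omega**2           # peak nozzle acceleration, lab frame
    return (rho_c - rho_d) * volume * a_peak

# Illustrative values: A = 0.1 mm, f = 100 Hz, 2 mm oil drop in water
F = excitation_force_amplitude(1e-4, 100.0, 2e-3, 1000.0, 750.0)
print(f"peak excitation force ~ {F*1e6:.0f} uN")
```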
C. EXPERIMENTAL PROTOCOL
The tank is filled with the continuous phase, and the tube and syringe with the dispersed phase. The syringe pump is activated and drop diameters are measured without vibration. Drops are formed in dripping mode. We calculate Bond numbers Bo and Weber numbers We as Clanet and Lasheras [21]: we find Bo = Δρ g d_p²/γ (with g the gravitational acceleration) from 1.7×10⁻² to 1.5×10⁻¹ and We = ρ_d v² d_p/γ from 4.5×10⁻³ to 2.8×10⁻¹. These values are below the critical We for the transition to jetting (at the given Bo), confirming the setup operates in dripping mode. From the drop diameters obtained without vibration, we calculate the in situ interfacial tensions by Tate's law [22]:

(π/6) Δρ g d_d³ = φ π γ d_p.   (2)

Drop detachment occurs when buoyancy (left-hand side of Eq. (2)) exceeds the maximum capillary force that the drop neck can resist without breaking. d_d is the detached drop diameter and φ is the Harkins–Brown correction factor [23]: it accounts for the fraction of liquid volume which stays attached to the nozzle after drop detachment. We use the factor of Mori [24]. The in situ interfacial tension is then compared to the measured one (Table I) to ensure the setup is adequately cleaned.
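As an illustration of this dimensionless-number bookkeeping, the short script below computes Bo, We and the Tate-law drop diameter; it is a minimal sketch, and the fluid properties and Harkins–Brown factor used here are placeholder values, not the measured data of Table I.

```python
import math

# Hypothetical values for illustration (not Table I data)
d_p = 0.32e-3                  # pore (nozzle inside) diameter, m
gamma = 50.7e-3                # interfacial tension, N/m
rho_c, rho_d = 998.0, 750.0    # continuous / dispersed phase densities, kg/m^3
drho = abs(rho_c - rho_d)
g = 9.81
Q = 4.3e-9                     # dispersed phase flow rate, m^3/s

v = 4 * Q / (math.pi * d_p**2)        # mean velocity in the nozzle
Bo = drho * g * d_p**2 / gamma        # Bond number (Clanet & Lasheras form)
We = rho_d * v**2 * d_p / gamma       # Weber number

# Tate's law with a Harkins-Brown correction phi (placeholder value):
# (pi/6) * drho * g * d_d^3 = phi * pi * gamma * d_p
phi = 0.6
d_d = (6 * phi * gamma * d_p / (drho * g)) ** (1 / 3)  # detached drop diameter

print(f"Bo = {Bo:.2e}, We = {We:.2e}, Tate drop diameter = {d_d*1e3:.2f} mm")
```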
Then, vibration is applied. A forcing frequency is set and an upward amplitude sweep is performed. Measurements are made at different amplitudes, ensuring one is always made at the threshold where a transition in drop generation occurs (see section III). This is repeated for frequencies from 30 to 150 Hz. From 30 to 100 Hz, 10 Hz intervals are applied. Above 100 Hz, intervals vary depending on the pore diameter. The vibrating exciter limitations do not enable us to observe the stretching mode above 150 Hz for d_p = 0.32 mm and above 110 Hz for d_p = 0.11 mm. After trials, a cleaning agent at 3 vol% (Mucasol, Merz) fills the setup for 24 h, and the setup is then rinsed with distilled water, leading to a hydrophilic glass surface. The organic dispersed phase then does not wet the nozzle, and the outer nozzle diameter does not influence drop detachment.
For each test condition (i.e., physicochemical system, pore diameter, dispersed phase flow rate and forcing frequency), three trials are carried out to determine the transition threshold. For each trial, six detached drops are studied. For each drop, ten images are analyzed. We checked that the accuracy of the diameter measurement from ten different snapshots of a given drop is sub-pixel. For a given trial, we noted variations of up to 2 px in diameter from one drop to another. In the figures displaying drop diameters, the error bars correspond to the relative standard deviation in drop diameters: it ranges between 1% and 7% depending on the test conditions.
III. TRANSITION FROM DRIPPING TO STRETCHING MODE
Figure 3 shows typical variations of the drop diameter as a function of the forcing amplitude A at a set forcing frequency f. The drop diameter falls (here by 63%) at a threshold amplitude A_th. The same behavior occurs for all systems. For the reference system, a relative decrease in drop diameter of 45% to 76% at A_th compared to without vibration was found, depending on f, for all pore diameters. For system 2, similar values were found: 29% to 73%. This fall at A_th corresponds to a transition in the drop generation regime. For A < A_th, drops detach in dripping mode and their diameter is close to the diameter of the drop formed without vibration: detachment is buoyancy-controlled. For A > A_th, the drop detaches when its mode 1 eigenfrequency coincides with f and when it reaches a critical elongation ratio: detachment is controlled by the excitation force. Figure 2 shows how a drop elongates at resonance (characteristic mode 1 resonance shape) and detaches. We named this the "stretching mode" [19]. It should be noted that there is an amplitude interval where both modes coexist. The threshold amplitude A_th is defined as the upper bound of that interval, when all drops are generated in stretching mode.
[Fig. 3 caption: experimental data; simulation results of section V.C., Eq. (14) (dotted line); of section V.D., Eq. (17) with exponent 1.9 and coefficient 4.4 (solid line).]
We proposed a simple model [19] to describe the main features of the stretching mode. Below, we recall its main arguments and derive scaling laws to provide a framework for analyzing our experimental results in section IV. The vibrating bound drop is considered as a linearly forced harmonic oscillator (LFHO) with moderate damping [25]. Drop growth is considered slow enough for oscillations to reach steady state. We are aware that these assumptions are strong, since the system is probably no longer linear when oscillations are large enough for the drop to detach, and the process remains transient. However, they are required to develop the following scaling laws.
A simple analytical expression of the mode 1 eigenfrequency f₁ of a bound drop has been empirically established by Bisch et al. [6] for drop-to-pore diameter ratios of 1.3 to 7 and fluids of equal densities (Eq. (3)). It involves a constant that should depend on the fluid density ratio, equal to 9 for fluids of equal densities, and the resonating bound drop diameter, which we assimilate to the detached drop diameter. Bisch et al. [6] also propose an empirical expression for the damping coefficient λ (Eq. (4)). Equations (3) and (4) can be reasonably applied to our trials, as the density ratio is of order 1 (it ranges from 1.26 to 1.33). Also, we neglect the effect of buoyancy on f₁ and λ.
From Eqs. (2) and (3), omitting φ, we deduce the minimum forcing frequency above which a growing drop may detach in stretching mode (Eq. (5)). The minimum forcing frequency is around 8 Hz for d_p = 0.32 mm for the reference system and 14 Hz for d_p = 0.11 mm for system 2. This is smaller than the lower bound of the investigated frequency range. Consequently, drops may detach in stretching mode.
Whatever the mode (dripping or stretching), the drop detaches when the restoring capillary force exceeds the maximum capillary force. Under the LFHO assumption, the restoring force reads F_r = m ω₁² x, with ω₁ the eigenpulsation of the bound drop without damping and x the displacement of the drop center of mass with respect to its rest position (in the absence of buoyancy and excitation forces). The force-based detachment criterion can easily be recast into an elongation-based criterion (Eq. (7)). The displacement of the drop center of mass is made up of a stationary part due to buoyancy and an oscillatory part due to the excitation force. Assuming quasi-steady state, x reads

x(t) = x_b + X sin(ωt + ψ),   (8)

with ψ the phase shift and X the amplitude given by the well-known expression [25]

X = (F/m) / √((ω₁² − ω²)² + (ω₁ω/Q)²),   (9)

where F is the amplitude of the excitation force of Eq. (1), m the drop mass, and Q the quality factor, given by Q = ω₁/(2λ). Since λ is given by Eq. (4), Q depends only on the phase densities and viscosities.
X is maximum when ω = ω₁ √(1 − 1/(2Q²)), i.e., ω ≅ ω₁ for moderate damping. In that case, X simplifies to X ≅ Q F/(m ω₁²). In dripping mode, the drop has left the resonance range and its eigenfrequency is much lower than the forcing frequency. Then, buoyancy dominates: Eqs. (7) and (8) reduce to Eq. (2) (omitting φ). In stretching mode, the drop detaches at resonance (ω₁ ≅ ω); its diameter is thus obtained by inverting Eq. (3) at f₁ = f (Eq. (10)). An estimate of the threshold amplitude may be derived from the above equations by neglecting buoyancy and assuming that the detachment criterion is satisfied at an oscillation peak (Eq. (11)). As a result, the threshold amplitude and the detached drop diameter should both follow the same scaling with the forcing frequency, i.e., A_th ∝ d_d.
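To make the resonance argument concrete, the sketch below evaluates the standard steady-state amplitude of a linearly forced, damped harmonic oscillator (the form of Eq. (9)) and checks numerically that the peak sits near ω₁ and that the amplification at resonance is of order Q; all numerical values are illustrative placeholders, not fitted parameters of the paper.

```python
import numpy as np

def lfho_amplitude(omega, omega1, Q, F_over_m):
    """Steady-state amplitude X(omega) of x'' + (omega1/Q) x' + omega1^2 x = (F/m) sin(omega t)."""
    return F_over_m / np.sqrt((omega1**2 - omega**2)**2 + (omega1 * omega / Q)**2)

omega1 = 2 * np.pi * 100.0    # eigenpulsation, rad/s (placeholder)
Q = 14.6                      # quality factor (order of magnitude quoted in the text)
F_over_m = 1.0                # normalized forcing

omega = np.linspace(0.1, 3.0, 2000) * omega1
X = lfho_amplitude(omega, omega1, Q, F_over_m)
i_peak = np.argmax(X)
print(f"peak at omega/omega1 = {omega[i_peak]/omega1:.3f}")        # ~1 for moderate damping
print(f"amplification X_peak / X_low_freq = {X[i_peak]/X[0]:.1f}")  # ~Q
```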
IV. IMPACT OF PROCESS PARAMETERS AND SYSTEM PROPERTIES
In this section, we study the effect of process parameters (pore diameter, dispersed phase flow rate) and system properties (interfacial tension, dispersed phase viscosity) on the dripping-to-stretching transition. Threshold amplitudes are determined from an amplitude sweep, and drop diameters at the threshold from image analysis. Error bars are generally large for threshold amplitudes, partly due to measurement errors and partly due to the difficulty of repeatably estimating the threshold.
A. INFLUENCE OF PORE DIAMETER
Fig. 4 reports the variations of the threshold amplitude and of the generated drop diameter as a function of the forcing frequency for two pore diameters, d_p = 0.11 mm and d_p = 0.32 mm (more pore diameters were tested in another paper for the reference system [19]). Threshold amplitude variations with forcing frequency are monotonic (fig. 4(a)) and do not exhibit the V- or W-shape reported by W&B [17] or Kim [18], respectively. In those cases, the bound drop volume (and thus the eigenfrequencies) is fixed, independently of the forcing. On the contrary, in our setup, the drop grows until its eigenfrequency coincides with f.
The threshold amplitude decreases as the forcing frequency increases (fig. 4(a)), in accordance with Eq. (11). However, our data do not follow the exponent predicted by Eq. (11), and the fitted exponents differ between d_p = 0.11 mm and d_p = 0.32 mm. Also, thresholds are twice as high for d_p = 0.11 mm as for d_p = 0.32 mm, which contradicts the scaling with pore diameter expected from Eq. (11). We return to this in section V.
The drop diameter decreases with increasing forcing frequency (fig. 4(b)). The frequency scaling predicted by Eq. (10) was verified for four pore diameters ranging from 0.11 mm to 0.75 mm [19]; we specifically confirm it here for d_p = 0.32 mm. The larger the pore diameter, the larger the drops produced. However, from the experimental data [19], it is difficult to conclude on the relevance of the predicted pore-diameter scaling, as the deviation of the data from that scaling is large (0% to 31%, depending on f).
B. INFLUENCE OF DISPERSED PHASE FLOW RATE
Four dispersed phase flow rates were applied to the reference system, for d_p = 0.32 mm: 2.5 µL·s⁻¹, 4.3 µL·s⁻¹, 6.5 µL·s⁻¹ and 14.4 µL·s⁻¹. Threshold amplitudes and drop diameters do not vary significantly with these flow rates within the error bars (see supplementary material C). This is consistent with Eqs. (10) and (11). For higher flow rates, this parameter could become significant: a drop may no longer have time to reach large-amplitude oscillations at resonance for the stretching mode, and a transition to jetting would occur [21,26,27] (out of the scope of this paper).
When the flow rate increases from 2.5 to 14.4 µL·s⁻¹, the mean number of oscillations between two drops at 100 Hz decreases from 28 to 4. As the threshold is little affected by the flow rate in the investigated range, we infer that the steady-state oscillation regime is reached in just a few oscillations.
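The number of forcing cycles available during the growth of one drop can be estimated as N = f V_drop/Q, where V_drop is the detached drop volume. The snippet below reproduces the order of magnitude quoted above, using an assumed drop diameter of about 1 mm (an illustrative value, not a measured one).

```python
import math

f = 100.0                  # forcing frequency, Hz
d_drop = 1.05e-3           # assumed detached drop diameter, m (illustrative)
V_drop = math.pi * d_drop**3 / 6

for Q in (2.5e-9, 14.4e-9):          # dispersed phase flow rates, m^3/s
    t_growth = V_drop / Q            # time needed to grow one drop
    print(f"Q = {Q*1e9:.1f} uL/s -> {f * t_growth:.0f} oscillations per drop")
```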
C. INFLUENCE OF INTERFACIAL TENSION
Experiments were carried out for the reference system and systems 1 and 2 (γ = 50.7 mN·m⁻¹, 19.0 mN·m⁻¹ and 5.4 mN·m⁻¹, respectively) for d_p = 0.32 mm. The fitted power-law exponents of the threshold amplitude versus forcing frequency are similar for systems 1 and 2 and for the reference system (fig. 5(a)), so the threshold amplitude scaling is not significantly affected by γ. However, it is not in accordance with the scaling predicted by Eq. (11). Higher SDS concentrations result in lower interfacial tensions, leading to lower threshold amplitudes for a drop to detach in stretching mode (fig. 5(a)). This is in qualitative agreement with Eq. (11), but it is difficult to conclude on the relevance of the predicted γ-scaling, as the deviation of the data from this scaling is large (8% to 32%, depending on f).
The frequency scaling of the drop diameter is maintained when the interfacial tension is decreased from 50.7 mN·m⁻¹ to 5.4 mN·m⁻¹ (fig. 5(b)), with similar fitted exponents for systems 1 and 2. Smaller drops are generated at lower interfacial tensions. The drop diameter roughly scales with γ as predicted by Eq. (10) (deviations of 1% to 18%, depending on f).
D. INFLUENCE OF DISPERSED PHASE VISCOSITY
Experiments were carried out for the reference system and systems 3 and 4 (dispersed phase viscosities of 1.34 mPa·s, 1.79 mPa·s and 3.24 mPa·s, respectively) for d_p = 0.32 mm. Threshold amplitudes increase when μ_d increases (fig. 6(a)), as in the W&B calculations [17]. When changing the reference system for system 3 (resp. 4), μ_d increases by 34% (resp. 142%) and A_th increases by 28% to 79% (resp. 62% to 133%). The effect of μ_d is stronger than expected. Indeed, when the reference system is changed for system 4, the quality factor decreases from 14.6 to 13.9 (Eq. (4)), leading to a theoretical 5% increase in A_th (Eq. (11)). In addition, the frequency scaling found for the reference system is not conserved for systems 3 and 4, which exhibit different fitted exponents.
As mentioned above, a system with a greater continuous phase viscosity was also studied: results and analysis are reported in supplementary material D.
V. FURTHER ANALYSIS
In this section, we synthesize the drop diameter data of section IV. Then, we analyze the elongation ratio for detachment. Finally, we propose an LFHO model that better reflects the data.
A. MODE 1 RESONANCE
Drop diameters at the threshold are consistent with detachment at resonance, when the bound drop mode 1 eigenfrequency coincides with the forcing frequency. To quantify the discrepancy between our data and Eq. (3) of Bisch et al. [6], we plot the dimensionless forcing pulsation (our data) and the dimensionless drop eigenpulsation (Eq. (3)) against the drop-to-pore diameter ratio, for different pore diameters and interfacial tensions (fig. 7).
[Fig. 7 caption: Eq. (12) of S&S [7] for the reference system (solid line), system 3 (dash-dotted line) and system 4 (dotted line).]
Our data are well represented by Eq. (3) of Bisch et al. [6] (dashed line) up to d/d_p ≈ 5. For d/d_p > 5, our data are markedly above the Bisch et al. [6] curve. As stated, Eq. (3) was validated up to d/d_p = 7. As the validity of the Bisch et al. [6] law is restricted, we consider the theoretical results of S&S [7,8], established for any d/d_p and density ratio. They analyzed the axisymmetric vibrations of a liquid drop in an outer fluid, in partial contact with a spherical bowl (see fig. 8), under the assumptions of zero gravity, negligible viscous effects and small surface deformations. Their calculated eigenfrequency of mode n is

ω_n² = Λ_n γ / (ρ_d R³),   (12)

with R the bound drop radius and Λ_n the eigenvalue for mode n, a function of the support angle Θ and of the phase density ratio. Assuming fig. 8 is a reasonable simplification of our drop, we estimate Θ = arcsin(d_p/d). We calculate Λ_n with Smithwick and Boulet's method [9], derived from the work of S&S [7], using the densities and interfacial tensions of Table I. Our data are better fitted by the model of S&S [7] than by the law of Bisch et al. [6] (figs. 7(a) and (b)), notably for d/d_p > 7. As in Kim's work [18], the agreement between our data and the S&S [7] calculations is remarkable, as the binding constraint is different.
B. CRITICAL ELONGATION RATIO
A critical elongation ratio, which is a function of the drop-to-pore diameter ratio, leads to drop detachment [17,19]. We measured this elongation ratio for all parameters tested (fig. 9), the elongation being taken from the nozzle tip to the drop apex. The points lie roughly on the same curve (fig. 9), confirming that a drop detaches in stretching mode once a critical elongation is reached. Also, we see that the critical elongation ratio is essentially a function of d/d_p.
FIG. 9. Drop elongation ratio as a function of the drop-to-pore diameter ratio. Reference system (white); system 1 (grey); system 2 (black). (□) d_p = 0.11 mm; (◊) d_p = 0.32 mm; (⊲) system 3, d_p = 0.32 mm; (⊳) system 4, d_p = 0.32 mm. Curve from Eq. (13) (dashed and solid lines).
We recall that the physicochemical properties of systems 1 to 4 differ from those of the reference system.
The critical elongation ratio should depend on the viscosity ratio μ_d/μ_c and on the Ohnesorge number Oh. The viscosity ratios we tested (1.5 to 4.1) may not vary enough to have an impact on the critical elongation ratio. For a free drop submitted to shear, Stone et al. [28] found that breakup occurs above a critical elongation ratio that is a function of the viscosity ratio, but for ratios of 0.1 to 1 the critical elongation ratio did not vary significantly. Similarly, we find Oh from 1.7×10⁻² to 5.3×10⁻². These values may be too close to observe a difference in the critical elongation ratio, although, for a free drop submitted to shear, such values are sufficiently different to obtain a twofold increase in aspect ratio [29].

We return to the elongation-based criterion for stretching mode (Eq. (7)). Let us note x_c the displacement of the drop center of mass when the drop axial elongation is critical. In the limit d ≫ d_p, we may consider that drop deformation is entirely localized in the neck. In that case, an estimation of the critical elongation ratio follows (Eq. (13)). The curve from Eq. (13) (solid line) is plotted against our data (fig. 9). For high drop-to-pore diameter ratios, critical elongation ratios are well estimated by Eq. (13), in accordance with S&S [7] or Bostwick and Steen [10]: for large d/d_p, the bound drop essentially experiences a rigid motion with deformation localized at the neck. For low d/d_p, Eq. (13) is no longer valid (dashed line) and drop deformation is rather uniform. From fig. 9, we deduce that the transition from the uniform deformation regime to the localized deformation regime occurs around d/d_p ≅ 3. In our setup, the neck of the bound drop preexists (without vibration), contrary to the configuration of S&S [7]. Thus, deformation becomes localized sooner (they find transition values on the order of 10).
C. OSCILLATOR MODEL WITH THE TRANSIENT
The effects of process parameters and system properties on the drop diameter concord with the scaling laws of section III, but the threshold amplitude variations are not well predicted. Thus, we developed a finer model which describes drop growth and motion as a function of time. We still consider the drop as an LFHO, as it probably provides the simplest framework to study growing drop oscillations. In the moving non-inertial frame of reference where the nozzle is still, the differential equation of motion of the drop center of mass reads

ẍ + 2λẋ + ω₁²x = F_exc(t)/m,   (14)

with F_exc(t) the excitation force of Eq. (1) and m the drop mass. The drop mode 1 eigenpulsation ω₁ is given by Eq. (12) of S&S [7] and calculated using Smithwick and Boulet's method [9]. The damping coefficient λ is estimated from the empirical Eq. (4) of Bisch et al. [6]. We suppose drop growth is slow enough for Eq. (14) to hold at every moment. The drop diameter increases in time according to

d(t) = (6Qt/π)^(1/3),   (15)

where Q here denotes the dispersed phase flow rate (not the quality factor). ω₁ and λ depend on d, so they vary with time as well. Equation (14) is solved numerically by the fourth-order Runge–Kutta method. The integration time step is 0.01/f. We begin calculations with d(0) = 1.01 d_p, as the support angle Θ is not defined for d/d_p < 1. We fix the initial displacement at its static value and the initial velocity at zero. Figure 3 reports simulation results (dotted line) with A ranging from 0 to 0.325 mm (increment of 0.005 mm) for one dataset on the reference system (a typical simulated motion of the drop center of mass close to the threshold is shown in supplementary material E). Drop size at the transition is well predicted, but it is overestimated far from the transition. Moreover, the threshold amplitude is underestimated: in this example, the predicted value is half the experimental one.
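A minimal sketch of this transient integration is given below. It advances Eq. (14) with a fourth-order Runge–Kutta scheme while the drop grows at constant flow rate (Eq. (15)). The eigenpulsation law, damping, and detachment criterion are simplified placeholders — a generic capillary scaling ω₁ ∝ √(γ/(ρd³)) and a fixed critical elongation — not the S&S eigenvalues or the Bisch et al. damping used in the paper.

```python
import math

# Placeholder physical parameters (illustrative only)
d_p, Q_v = 0.32e-3, 4.3e-9          # pore diameter (m), flow rate (m^3/s)
gamma, rho = 50.7e-3, 1000.0        # interfacial tension (N/m), density scale (kg/m^3)
drho, Qfac = 248.0, 14.0            # density difference (kg/m^3), quality factor
f, A = 100.0, 0.2e-3                # forcing frequency (Hz), forcing amplitude (m)
omega = 2 * math.pi * f

def omega1(d):
    # Generic capillary scaling omega1 ~ sqrt(K*gamma/(rho*d^3)); K = 9 is a placeholder
    return math.sqrt(9 * gamma / (rho * d**3))

def rhs(t, x, v, d):
    w1 = omega1(d)
    forcing = (drho / rho) * A * omega**2 * math.sin(omega * t)
    return v, forcing - (w1 / Qfac) * v - w1**2 * x

t, x, v = 0.0, 0.0, 0.0
d = 1.01 * d_p                      # start just above d_p, as in the text
dt = 0.01 / f                       # integration time step, 0.01 forcing period
while d < 3e-3:
    d = (d**3 + 6 * Q_v * dt / math.pi) ** (1 / 3)   # growth at constant flow rate
    k1 = rhs(t, x, v, d)
    k2 = rhs(t + dt/2, x + dt/2*k1[0], v + dt/2*k1[1], d)
    k3 = rhs(t + dt/2, x + dt/2*k2[0], v + dt/2*k2[1], d)
    k4 = rhs(t + dt, x + dt*k3[0], v + dt*k3[1], d)
    x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    if abs(x) > 0.5 * d:            # placeholder critical-elongation criterion
        print(f"detachment at t = {t:.3f} s, d = {d*1e3:.2f} mm")
        break
```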
In the example of fig. 4(a), we see that threshold values from the model (dashed lines) are well below the experimental ones. Since the present model accounts for the transient (contrary to the model [19] briefly reported in section III), these discrepancies cannot be attributed to the time spent by the bound drop in its resonance range, as advanced earlier [19]. This is consistent with the experimental results of section IV.B, which show that the dispersed phase flow rate little affects the threshold amplitude. The effect of the frequency on the threshold amplitude variations is also not well predicted. Finally, the predicted effect of the pore diameter on the threshold amplitude is opposite to the experimental one. We infer that damping is underestimated in this model and that the effect of d/d_p is not well described.
D. OSCILLATOR MODEL WITH ADDITIONAL FRICTION TERM
We estimated the quality factor experimentally for bound drops at different amplitudes at 100 Hz (reference system). We find values around 3 times lower than those from Eq. (4). Damping is higher than expected, even for drop excitation amplitudes as low as 0.015 mm (9% of the drop radius, well below the threshold). The measured quality factor is constant below the threshold amplitude A_th, so we assume that, at and below A_th, nonlinear effects are weak and do not explain the higher damping. Our system undergoes additional friction compared to configurations in the literature [6,7,11]. As the dispersed phase does not wet the nozzle tip, there is a wedge between the drop and the nozzle surface, containing continuous liquid phase (fig. 8). Assuming d_p/d ≪ 1, an estimate of the wedge angle is θ ≅ d_p/d. When the drop oscillates, θ oscillates. The continuous phase in the wedge is driven outwards (resp. inwards) when θ decreases (resp. increases) with time. The viscous friction associated with the film flow leads to an extra friction term in the LFHO model of the oscillating drop. We note λ_film the damping coefficient associated with the friction in the film and propose the expression of Eq. (16) (see Appendix for details), in which a dimensionless coefficient k and an exponent b appear. We infer that k depends only on the viscosity ratio μ_d/μ_c and that b depends on the deformation regime ("uniform" or "localized"). The differential equation of motion of the drop center of mass now reads

ẍ + 2(λ + λ_film)ẋ + ω₁²x = F_exc(t)/m.   (17)

We solve Eq. (17) by the same procedure as for Eq. (14). b and k are identified from the experimental threshold amplitudes, since the slope of the A_th(f) curve is related to b, and the curve is translated up or down by increasing or decreasing k, respectively. The b and k values are summarized in Table II.
b was determined by fitting simulation results to the data for the reference system with d_p = 0.32 mm and d_p = 0.11 mm. We needed to introduce two distinct values of b depending on d/d_p. For d/d_p ≤ 5, the data (obtained with d_p = 0.32 mm) are well represented with b = 1.9. For d/d_p > 5, the data (d_p = 0.11 mm) are better represented with b = 1.4.
Table II. Identified values of the coefficients b and k for λ_film. When b = 1.9 (resp. 1.4), the viscous force per unit length of pore circumference that acts against the drop oscillations scales as θ^(−1.9) (resp. θ^(−1.4)). In the case where a viscous force opposes a contact line movement, the dependence on the wedge angle is weaker: the force per unit length of the contact line scales as θ^(−1). Indeed, in that case the wedge angle is constant and the wedge translates parallel to the surface, whereas in our case the wedge angle varies, inducing the liquid flow in the wedge.
The dispersed-to-continuous phase viscosity ratio is 1.5, 2 and 3.6 for the reference system, system 3 and system 4, respectively. We determined k for the different ratios by fitting simulation results to the data (with b previously identified). k increases monotonically from 4.4 to 7.4 when μ_d/μ_c increases from 1.5 to 3.6. We logically expect that, at a set wedge angle, the viscous friction in the film increases with μ_d/μ_c (the drop interface becomes less and less mobile). Figure 3 shows that the threshold amplitude is well reproduced by adding λ_film (solid line). The drop diameter at the transition is also well predicted. However, drop diameters are still overestimated far from the transition. Above A_th, this may be due to nonlinear effects: a downward shift in resonance frequency occurs when increasing the amplitude for softening nonlinear oscillators [15,16]. Therefore, at the set forcing frequency, smaller drops, which would normally resonate at higher frequencies, resonate and detach in stretching mode [19]. Below A_th, the overestimation is attributed to the excitation of a resonance mode higher than mode 1. This is not taken into account in Eq. (14) or (17).
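To visualize how steeply the fitted wedge-angle law amplifies friction as drops grow, the snippet below evaluates the per-unit-length viscous force scale k·μ_c/θ^b for a few drop-to-pore diameter ratios; μ_c and the (b, k) pairs are representative values, and the absolute scale is illustrative only.

```python
import numpy as np

mu_c = 1.0e-3                         # continuous phase (water) viscosity, Pa.s
ratios = np.array([2.0, 5.0, 10.0])   # drop-to-pore diameter ratios d/d_p
theta = 1.0 / ratios                  # wedge angle estimate, theta ~ d_p/d

for b, k in ((1.9, 4.4), (1.4, 4.4)):   # exponent/coefficient pairs (illustrative)
    scale = k * mu_c / theta**b          # friction force per unit length, per unit velocity
    print(f"b = {b}:", np.round(scale, 4), "Pa.s (scaled)")
```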
When λ_film is included, the effect of the frequency on the threshold amplitude variations is well described (fig. 4(a), solid lines). The effect of the pore diameter on the threshold is also well accounted for (fig. 4(a)). Below 60 Hz, no clear threshold appeared in the simulations for d_p = 0.11 mm (fig. 4(a)): we expect the drop to behave as an overdamped oscillator. Experimentally, the threshold is less sharp but does exist.
We note that simulations for systems 1 and 2 were performed with the b and k values identified on the reference system, since the viscosity ratios are the same. Threshold amplitudes and drop diameters are relatively well predicted for these systems by the model (grey and black points, fig. 10).
Overall, the theoretical threshold amplitudes from the modified LFHO model reproduce the experimental ones well (fig. 10(a)), and the drop diameters are also well accounted for (fig. 10(b)).
VI. CONCLUSIONS
In the present work, we studied drop growth and detachment from an axially vibrating nozzle. We studied the impact of forcing parameters as well as of nozzle inside diameter, dispersed phase flow rate, interfacial tension and dispersed phase viscosity. At a set forcing frequency, we observed a transition in drop diameter when increasing the forcing amplitude: above a threshold, drops detach at resonance, i.e., when the first eigenfrequency of the growing drop coincides with the forcing frequency. Below the threshold, larger drops detach in dripping mode, driven by buoyancy. The diameter of the drops formed above the threshold is very well correlated with the mode 1 eigenfrequency calculated by S&S [7]. We recall that this eigenfrequency depends on the support and drop diameters, the phase densities and the interfacial tension. The agreement between our results and the calculations of S&S [7,8] is remarkable, as the binding constraint is different.
We examined the critical elongation ratio for drop detachment, which depends on the drop-to-pore diameter ratio. We discerned two deformation regimes: for low d/d_p, a uniform deformation regime and, for larger d/d_p, a localized deformation regime (limited to the neck). The neck preexists, so the latter regime appears earlier than in the configuration of S&S [7,8]. We proposed a transient model to account for the threshold amplitude variations. To our knowledge, critical amplitudes for drop ejection have not been accounted for before. We modelled the growing drop as an LFHO, with the eigenfrequency of S&S [7,8]. Since the dispersed phase does not wet the nozzle, we introduced an extra damping coefficient to account for the viscous dissipation in the film of continuous phase between the drop and the nozzle surface. The friction force is described as a power law of the pore-to-drop diameter ratio. The exponent depends on the deformation regime and the multiplier constant on the viscosity ratio. Our model reproduces well the experimental threshold amplitudes and the resulting drop diameters.
In further work, it would be interesting to study drop generation when axial vibration is coupled to the shear stress exerted by a circulating phase, to approach vibrating membrane emulsification conditions.
SUPPLEMENTARY MATERIAL
See supplementary material for insight on: A, the interfacial tension at the intermediate plateau; B, the characteristic time to reach this plateau; C, the figures for the influence of dispersed phase flow rate; D, the figures and analysis for the influence of the continuous phase viscosity and E, the drop center of mass motion with respect to the nozzle surface.
APPENDIX
We consider a drop attached to the nozzle inner edge (fig. 11). We suppose that the drop diameter is large compared to the nozzle inner diameter and that the drop shape (at rest) can be approximated by a spherical cap. θ₀ is the angle of the wedge formed between the drop at rest and the nozzle surface, given by θ₀ = arcsin(d_p/d) ≅ d_p/d. When the drop is submitted to vibrations, we consider that it may be described by a truncated ellipsoid of revolution that oscillates between prolate and oblate shapes. The wedge angle θ varies with time as the drop oscillates; its instantaneous value follows from the drop deformation in the limit of small deformations. The continuous phase in the wedge is driven outwards (resp. inwards) when θ decreases (resp. increases) with time. The viscous friction associated with the film flow in the wedge leads to an additional friction term in the LFHO model of the oscillating drop. We note F_film the corresponding friction force that acts against the drop axial oscillations. Under the assumption d_p/d ≪ 1, we infer that F_film depends on the phase viscosities, the pore diameter, the wedge angle and the oscillation velocity ẋ. From dimensional arguments, we deduce

F_film = μ_c d_p ẋ · g(θ, μ_d/μ_c),   (A1)

with g a dimensionless function. In the case of a viscous force that opposes a contact line movement, the vicinity of the contact line is usually described as a wedge with a well-defined dynamic contact angle. The force per unit length of the contact line is proportional to the liquid viscosity and inversely proportional to the dynamic contact angle. In analogy to this, we seek a law of the generic form

F_film = k μ_c π d_p ẋ / θ^b,   (A2)

with θ ≅ d_p/d the wedge angle and k a function of μ_d/μ_c. We note that, for a moving contact line, the wedge angle is constant and the wedge translates parallel to the surface, whereas in our case the wedge angle varies, leading to the liquid flow in the wedge. We deduce the expression of the damping coefficient λ_film (associated with F_film) appearing in Eq. (17):

λ_film = F_film/(2mẋ) = k π d_p μ_c/(2m θ^b),   (A3)

where k depends on the dispersed-to-continuous phase viscosity ratio.
FIG. 1. A single glass capillary (nozzle) of inside diameter d_p emerges into a tank containing the stationary continuous phase. Two pore diameters are presently tested: d_p = 0.32 mm and d_p = 0.11 mm. The dispersed phase is supplied through the pore at a flow rate of 1.1 µL·s⁻¹ to 14.4 µL·s⁻¹ (PHD Ultra Syringe Pump, Harvard Apparatus), leading to mean flow velocities v = 4Q/(πd_p²). Reynolds numbers for the flow in the nozzle, Re = ρ_d v d_p/μ_d, range from 3.0 to 32.8 (laminar flow). The nozzle is fixed on a vibrating exciter (Brüel & Kjær 4810), which induces a sinusoidal motion in time: z(t) = A sin(2πft).
FIG. 2. Visual summary of the output data. Drop detaching in stretching mode in time t for the reference system, with d_p = 0.32 mm, a flow rate of 3.6 µL·s⁻¹, f = 100 Hz and A = 0.209 mm.
FIG. 8. Analogy between the present drop bound to a nozzle (left) and a drop in partial contact with a solid spherical cap (right), as defined by S&S [7,8].
FIG. 11. Sketch of an attached drop oscillating between prolate and oblate shapes.
Table I. Properties of the different systems investigated.
"Physics"
] |
Microsphere Coupled Off-Core Fiber Sensor for Ultrasound Sensing
A compact fiber ultrasound-sensing device comprising a commercially available Barium Titanate (BaTiO3) glass microsphere coupled to an open cavity off-core Fabry–Perot interferometer (FPI) fiber sensor is proposed and demonstrated. The open cavity is fabricated through splicing two segments of a single mode fiber (SMF-28) at lateral offsets. The lateral offset is matched to the radius of the microsphere to maximize their coupling and allow for an increased sensing response. Furthermore, the microsphere can be moved along the open-air cavity to allow for tuning of the reflection spectrum. The multiple passes of the FPI enabled by the high refractive index microsphere results in a 40 dB enhancement of finesse and achieves broadband ultrasound sensing from 0.1–45.6 MHz driven via a piezoelectric transducer (PZT) centered at 3.7 MHz. The goal is to achieve frequency detection in the MHz range using a repeatable, cost effective, and easy to fabricate FPI sensor design.
Introduction
Fiber optic interferometric sensors have gained interest in recent times due to their distinctive characteristics, such as compact size, repeatability, and multi-parameter sensing capabilities. Due to their diverse sensing applications and inexpensive designs, these sensors have applications in strain monitoring [1–3], refractive index measurement [4–6], and temperature sensing [7–9]. More specifically, Fabry–Perot interferometers (FPIs) have been investigated thoroughly due to desirable characteristics, such as the narrow spectral peaks associated with high finesse, high sensitivity, compact size, and immunity to electromagnetic interference. FPIs are typically formed by cascading reflective surfaces, or reflectors, along the light's propagation path and exploiting their interference. Fiber optic FPI sensing devices can be divided into two subcategories: intrinsic FPIs, where the light interactions are inside the fiber, which can be formed through techniques such as thin-film deposition [10], Bragg gratings [11], and micromachining [12]; and extrinsic FPIs, where light interacts with external cavities, such as air or other polymers [13]. These open-air cavity FPIs are typically fabricated through expensive and complex methods, such as fs-laser micromachining [14]. The sensing capabilities of these types of fiber optic sensors have been exploited and enhanced through new and old fiber optic technology innovations, such as fiber gratings [15–17], surface plasmon resonance [18], and specialty fibers [19].
To mitigate the cost of these open-air cavity sensing devices, splicing two sections of single-mode fiber at a lateral offset to form a simple and compact sensor has been explored [20–22]. Splicing more than two offset segments has also been explored [23]. At each silica–air interface there is a reflection, essentially turning the device into multiple cascaded FPIs. The large offset of this FPI allows light to spread from the main incoming fiber to both the offset cladding and the surrounding open-air cavity; this yields two distinct interferometer arms with a large refractive index difference.
These types of open cavity FPIs have been demonstrated to have a diverse range of sensing capabilities, such as temperature [24], refractive index [25], stress and strain [26,27], and ultrasound detection. For ultrasound detection, Fan et al. detail how high reflectivity, relatively large contrast, and narrow linewidth benefit the high-frequency ultrasound sensor response [28]. The displacement induced by ultrasound at MHz frequencies can be as small as sub-µm [29], which cannot be resolved directly by interferometers due to the diffraction limit of the laser wavelength. To increase the detection sensitivity for intensity detection, the sensor must therefore operate at a quadrature point, where the relative intensity change versus wavelength shift caused by the displacement of the ultrasound signal, defined as the intensity slope, is exploited. To maximize the sensitivity of ultrasound sensing at the highest frequency, the intensity slope should be at its maximum. For an FPI, the finesse is proportional to the intensity slope. For a fixed reflectivity and cavity length, the spectrum of an FPI exhibits a narrow linewidth with a high finesse F. Light propagating in more FP cavities yields a higher finesse F^N, contributed by the product of the finesses of N individual FPIs with the same resonant frequency; these individual FPIs are represented by curved arrows in Figure 1b. To reach this target, we added a microsphere with a high refractive index between the two sections of the off-core fiber (open cavity), as shown in Figure 1a. Through the front and back surfaces of the microsphere, the first FPI reaches a double pass, and the same principle applies to the second FPI: the front and back surfaces of the microsphere introduce a second double-pass FPI, which leads to a total of F^6, circled in red in Figure 1b. In addition to the FPI finesse enhancement, the microsphere itself is a resonator, which enables light to be stored and confined at a resonant frequency; this confined light circulates within the device through total internal reflection.
In this paper, an extrinsic FPI fiber sensor is proposed and fabricated by the lateral offset splicing of two single-mode fiber segments (Corning SMF-28) acting as the base structure, which is then modified and enhanced by utilizing the increased finesse from a Barium Titanate (BaTiO3) glass microsphere (refractive index of 1.94, commercially available from Cospheric LLC) coupled in the open-air cavity. The lateral offset and microsphere diameter are carefully matched to achieve the maximum contrast, which allows for an increase in sensitivity. Light is reflected at each air–silica boundary. From the reflection spectrum, quadrature points are selected for their wavelength-shift-induced maximum intensity change and are used for ultrasound detection. The fabricated sensor is tested with an ultrasound source generated from a piezoelectric transducer (PZT) centered at 3.7 MHz attached to a thin steel plate, with the excitation of its high-order harmonics used to increase the ultrasound frequency range.
Materials and Methods
A visual representation of the proposed sensing device is illustrated in Figure 1a. To begin, after prepping two SMF segments for splicing and loading them into the fiber splicer (Ericsson Cables, Sundbyberg, Sweden, FSU 995 FA), manual mode is selected. Using the two views in the splicer (side and top view), the top view is aligned (by lining up the SMF cores) while the other is set to an offset distance h (the distance from the center point of the incoming SMF core to the 1st segment SMF's edge), circled in red in Figure 1a. The offset is imperative, as it will affect the beam path through the air and microsphere interfaces, which is vital for characteristics such as contrast and reflectivity. Once the leading fiber and the 1st segment are fused, the segment is carefully cleaved under a confocal microscope to the desired length L1. This length will be the air cavity enclosed by two SMF segments (incoming SMF and end segment SMF). Similarly, the end segment is spliced and fused to the previous SMF segment, making sure all the cores align in the top view. Once the end segment is matched to the same offset, it is spliced and fused; it is then brought under the confocal microscope again for the final cleave at the desired length L2. Under the microscope, and shielded from environmental effects, a single BaTiO3 microsphere is selected and picked up using fabricated fiber half-tapers with a waist diameter of ~5 µm. Combined with translational stages at different positions, the microsphere is deposited onto the open-air cavity, and its position can be moved using the same fiber tapers. This process is illustrated in Figure 2.
In the most basic case (just the off-core structure), we can analyze the sensor as cascaded FPIs, derived with a two-beam approximation and three mirrors. The total reflected electrical field is approximated as E_r, given by [30]

E_r ≈ E_0 [r_1 + (1 − α)(1 − r_1²) r_2 e^(−jφ1) + (1 − α)²(1 − r_1²)(1 − r_2²) r_3 e^(−j(φ1+φ2))],  (1)

where the input field E_0, the transmission loss in the cavity α, the mirror reflection coefficients r_i, and the round-trip propagation phase shifts φ_{1,2} = 4π n_i L_i/λ are defined in Equations (2)–(4). Using Equations (1)–(4), the total reflection spectrum can be simplified to

R = |E_r/E_0|².  (5)

From the above equations, a change in refractive index n between n_smf and n_air would change the phase φ, and adding an extra path length in the air for the light to travel would change the spectrum. The reflection spectrum of an FPI is a wavelength-dependent intensity modulation, generally caused by the optical phase difference between the two beams. The optical path difference caused by the microsphere is given by [31]

OPD = 2(n − 1)d,  (6)

where n is the microsphere refractive index and d is the diameter of the microsphere.
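This two-beam, three-mirror model is straightforward to evaluate numerically. The sketch below computes a reflection spectrum R(λ) = |E_r/E_0|² using the form of Equation (1) with Fresnel reflection coefficients; the r_i, α, L1 and L2 values are assumptions chosen for illustration, not the fabricated device's parameters.

```python
import numpy as np

# Assumed parameters (illustration only)
n_smf, n_air = 1.468, 1.0
L1, L2 = 60e-6, 150e-6            # air cavity and second-segment lengths, m
alpha = 0.3                        # transmission loss in the cavity (assumed)
r1 = r2 = r3 = (n_smf - n_air) / (n_smf + n_air)   # Fresnel coefficients, ~0.19

lam = np.linspace(1520e-9, 1570e-9, 5000)
phi1 = 4 * np.pi * n_air * L1 / lam    # round-trip phase in the air cavity
phi2 = 4 * np.pi * n_smf * L2 / lam    # round-trip phase in the fiber segment

Er_over_E0 = (r1
              + (1 - alpha) * (1 - r1**2) * r2 * np.exp(-1j * phi1)
              + (1 - alpha)**2 * (1 - r1**2) * (1 - r2**2) * r3
                * np.exp(-1j * (phi1 + phi2)))
R = np.abs(Er_over_E0)**2              # total reflection spectrum, Eq. (5)
print(f"fringe contrast ~ {10*np.log10(R.max()/R.min()):.1f} dB")
```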
The initial FPI structure (without the microsphere) has been shown previously to be capable of enhancing high-order mode interference, which increases multi-mode interference and improves the intensity change with wavelength shift (i.e., the spectrum slope). The slope of the reflection spectrum is given by S = dR/dλ, where S is the slope and R is the reflectivity; a greater slope indicates stronger multimode interference, which leads to better sensitivity. This corresponds to the spectrum's quadrature points. The off-core FPI has the capability to add extra modes via the air and silica cladding to enhance the quality factor with an additional one or two round trips, which in turn leads to an improved ultrasound response. The light propagating from the fiber core is also coupled to the microsphere. The barium in the glass microsphere composition raises the refractive index to 1.94 (as previously mentioned). The higher refractive index slows the confined light within the microsphere, allowing more total internal reflections and, overall, more efficient refraction.
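Given a sampled reflection spectrum, the intensity slope S = dR/dλ and a quadrature point can be located numerically. A minimal sketch follows, applied here to a synthetic sinusoidal fringe standing in for a measured spectrum (it could equally reuse the R and lam arrays from the previous snippet).

```python
import numpy as np

def quadrature_point(lam, R):
    """Return the wavelength of maximum |dR/dlambda| (steepest slope = quadrature point)."""
    S = np.gradient(R, lam)       # intensity slope dR/dlambda
    i = np.argmax(np.abs(S))
    return lam[i], S[i]

# Synthetic fringe with a 2 nm free spectral range (placeholder for measured data)
lam = np.linspace(1540e-9, 1550e-9, 2000)
R = 0.5 + 0.4 * np.cos(2 * np.pi * (lam - lam[0]) / 2e-9)
lam_q, slope = quadrature_point(lam, R)
print(f"quadrature point near {lam_q*1e9:.2f} nm, slope = {slope:.2e} /m")
```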
Results and Discussion
The experimental setup can be seen in Figure 3. The reflection spectrum is obtained through an optical circulator using an erbium-doped fiber amplifier source (INO, Quebec, QC, Canada, FAF-50) and an optical spectrum analyzer (Yokogawa, Tokyo, Japan, AQ6375). The reflection spectrum of the microsphere-coupled off-core sensor is analyzed for different microsphere diameters, illustrated in Figure 4, at an offset distance of ~11.4 µm. Different microsphere diameters yield different reflection spectra for a specific offset. The microsphere diameter and core offset distance contribute directly to the reflection contrast. This enhanced contrast and reflectivity determine the dynamic range and signal strength of the device. When the input light from the main fiber core travels through the first air–silica boundary into the open cavity, it meets the microsphere; this leads to a correspondingly large number of reflections within the microsphere, which allows for high-order modes and a large contrast increase. This is heavily dependent on the alignment and core offset. When the microsphere diameter is too large compared to the core offset, it leads to large propagation loss and a diminished contrast, resulting in weak multi-mode interference. The offset distance h should be equal to the radius of the microsphere. This alignment is crucial because the light path from the incoming core can then directly interact with the apex point of the microsphere (facing the first silica/air interface), allowing for an increase in reflection and coupling. Since the offset distance is ~11.4 µm, the 5 µm, 10 µm, 16 µm, and 50 µm diameter microspheres are not ideal. The ideal microsphere for this specific offset distance would have a 22.8 µm diameter.
Figure 5a depicts the change in the reflection spectrum as the microsphere distance from the first silica–air interface increases (along the center) for a constant open cavity segment length L1. As the distance between the beginning of the cavity and the microsphere increases, the free spectral range of the reflection decreases. It is determined that the microsphere location with the highest contrast lies between 1/3 and 1/2 of the off-core open-air segment length. Figure 5b shows the test repeated with different samples (with relatively the same h), changing L1 to ensure consistency, where the distance is normalized with respect to the open-air cavity length. Once the microsphere is positioned longitudinally (along the fiber direction), it can be moved by a few micrometers (much smaller than the diameter of the microsphere, which is 22 µm, to ensure low loss from leaky modes) laterally (perpendicular to the fiber direction) to display a full FSR shift, which can be seen in Figure 5c. This shift generally maintains the same shape and FSR of the spectrum due to the FP cavity between the off-core fiber and the microsphere, allowing further tunability.
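The observed decrease in free spectral range with increasing fiber-to-microsphere separation is consistent with the usual Fabry–Perot relation FSR ≈ λ²/(2nL). The snippet below evaluates it for a few assumed gap lengths (placeholder values, not the measured geometry).

```python
lam0 = 1550e-9   # operating wavelength, m
n_air = 1.0

for L in (20e-6, 40e-6, 80e-6):          # assumed fiber-to-microsphere gaps, m
    fsr = lam0**2 / (2 * n_air * L)      # free spectral range of an air FP cavity
    print(f"L = {L*1e6:.0f} um -> FSR = {fsr*1e9:.1f} nm")
```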
Analyzing the data given in both Figures 4 and 5, a sample is fabricated, seen in Figure 6a, ensuring that the offset distance and the microsphere radius are comparable and that the microsphere is located between 1/3 and 1/2 of the total open cavity distance. A green light is sent through the fiber to confirm that the offset and microsphere are aligned (Figure 6b). The specific sensor parameters for the device under test can be seen in Table 1. As a result of the multi-beam interference, the enhanced finesse via an increased number of FPIs and the strong dips in the reflection spectrum allow for a broad tuning range, typically associated with higher sensitivity. The reflection spectrum and the initial spectrum before adding the microsphere are shown in Figure 7. A maximum contrast of ~44 dB can be achieved, which is a drastic increase when compared to the initial off-core case with no microsphere of ~4.2 dB, an increase of over 10 times. If we suppose a single-pass FPI (without the microsphere), the finesse is 4, corresponding to the blue spectrum in Figure 7; with 6 passes, the new finesse is 4^6, which is equivalent to 4096, 30 dB.
However, with the multiple surface reflections due to the high refractive index within the microsphere, the new finesse is 4^8, 45 dB, which has an enhancement factor close to that of the red spectrum.
To test the device's ultrasound sensing response, a piezoelectric transducer (PZT) fixed to a thin steel plate (0.25 mm thickness) is used as the ultrasound source, driven by a function generator (Agilent, Santa Clara, CA, USA, 33250A) (seen previously in Figure 1, boxed in red). A tunable laser source (Agilent 81940A) is set to the specific wavelength of 1545.7 nm, which corresponds to a quadrature point in the reflection spectrum with a steep slope and large contrast. It should be noted that, although a large contrast is desirable for improved sensitivity, a steep slope plays a more crucial role. The locked probe wavelength responds to the periodic modulation from the propagating ultrasound waves, which is analyzed through an electronic spectrum analyzer (ESA) (Rohde & Schwarz, Columbia, MD, USA, FSW Signal & Spectrum Analyzer) via a photodetector (Thorlabs, Newton, NJ, USA, PDB45QC-AC). The device under test is encapsulated inside an acrylic case for added protection from external disturbances, and the ESA is placed in a separate room, behind a concrete wall, to eliminate any antenna effect from the ultrasound generation. Figure 8a shows the sample's frequency response from 0.1–45.6 MHz. Selecting a proper quadrature point is crucial to the device's ultrasound detection sensitivity. Figure 8b depicts the ultrasound response of a non-ideal offset–microsphere pairing, tested to highlight this importance. Different quadrature points correspond to different maximum frequency responses, due to the different resonant modes in the microcavity and its multimode interference. This device can be compared to Fan et al. [26,28], where the microsphere is glued to the off-core fiber, or absent altogether, which limited the tuning range; numerous samples were required due to the fixed nature of the off-core FPI cavity to optimize the spectral contrast, which limited the achievable contrast, reflectivity, and linewidth. With the flexible lay sphere in our approach, one sample is adequate to optimize the spectrum for maximum contrast in the reflection spectra.
If the microsphere radius equals the offset distance, the spectrum can be aligned at quadrature points for ultrasound sensing measurements with the highest sensitivity.
Conclusions
This paper gave an overview and background of a proposed extrinsic FPI sensor enhanced by multiple-pass FPI due to a BaTiO3 glass microsphere coupled to the open cavity of a two-segment SMF off-core device. The light coupled into the microsphere allows for scattering and internal reflection, which enhances the finesse. Offset alignment of the main incoming SMF and the center of the microsphere is imperative for maximum slope and contrast in the reflection spectrum, which allows ideal quadrature points to be determined that are best suited for ultrasound detection. This simple and inexpensive sensor offers applications beyond ultrasound sensing, including refractive index sensing, temperature monitoring, and biological sensing.
Author Contributions: Conceptualization, methodology, formal analysis, investigation, resources, data curation, writing-original draft preparation, writing-review and editing and visualization, G.T. and X.B. (All authors); supervision, X.B. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author. | 7,497 | 2022-07-01T00:00:00.000 | [
"Physics"
] |
Simple, Low Cost, and Efficient Design of a PC-based DC Motor Drive
In industrial applications requiring variable speed and load characteristics, the DC motor is an attractive piece of equipment due to its ease of controllability. Pulse-width modulation (PWM), or duty-cycle variation, methods are commonly used in the speed control of DC motors. A simple, low-cost, and efficient design for a control circuit that uses PWM to adjust the average voltage fed to the DC motor is proposed in this paper. The objective of this paper is to illustrate how the DC motor's speed can be controlled using a 555 timer. This timer works as a changeable pulse width generator. The pulse width can be changed via relays that add or remove resistors in the timer circuit. Using relays enables the proposed circuit to drive higher-power motors. The designed circuit controls the speed of a permanent magnet (PM) DC motor by means of the parallel port of a PC; therefore, the user is able to control the speed of the DC motor. A C++ computer program is used to run the motor at four levels of speed. An interface circuit is used to connect the motor to the parallel port. PC-based control software is chosen for simplicity and ease of implementation.
Modern developments in science and technology have resulted in numerous applications of high-efficiency DC motor drives in fields such as electric trains, chemical processing, rolling mills, home electric appliances, and robotic manipulators, all of which need speed controllers to carry out their tasks.
For a long time, DC motors have been widespread in the industrial control field because they have numerous good characteristics, such as high starting torque, easily linearized control, and high response performance. The appropriate motor control method depends upon the required performance, and adequate peripheral control apparatus contributes to a more comprehensive achievement in the industrial control system. Hence, DC motor control is easier than that of other motor types. Nowadays, the control and measurement system can be implemented based on the computer [1].
Speed control represents an important advantage of DC motors. The motor speed is directly proportional to the armature voltage and inversely proportional to the magnetic flux of the poles; therefore, the rotor speed can be adjusted through the field current and the armature voltage. Speed control can be achieved by variable battery tapping, variable supply voltage, resistors, or electronic controls [2].
In the 21st century, computer systems have been applied in various applications because they are easy to monitor. To access a system, the user only interfaces with the PC software, without needing to explore the hardware or manually control the computer system. It is not practical, in the contemporary technological period, to use a manual controller, because it may waste cost and time. To minimize time and cost, it is necessary to propose a PC-based controller, because it is portable: users can monitor their system from a specific place without going to the plant (machine), especially in industrial applications. In addition, power can be minimized and preserved with a computer, which is more reliable and precise. A computer assisted by well-developed software is able to interface with the hardware system, making the computer system reliable [2].
One of the important advantages of using the PWM technique in the speed control of a DC motor is that the signal stays digital all the way from the processor to the controlled system, without the need for digital-to-analog conversion, which minimizes noise effects. The DC supply is therefore chopped into either fully ON or fully OFF states, and the voltage/current supply is fed into the analog load through a repeating series of ON/OFF pulses. Given sufficient bandwidth, any analog value can be encoded with PWM; for example, a 5 V supply switched at a 60% duty cycle encodes an average of 3 V. Another advantage of PWM is that the pulses extend to the full supply voltage and yield higher motor torque, making it easier to overcome the internal motor resistance [2].
PWM is an efficient way of digitally encoding analog signal levels. PC-based electrical appliance control is an interesting PC-based research area, mainly useful for industrial applications, home automation, and supervisory control applications. PC-based PWM speed controllers have become essential in many implementations, ranging from routines such as gate openers, window shutters, PC fire alarms, and metering, to automotive implementations such as remote keyless entry and tire pressure monitoring systems [1].
There is a considerable number of research works in the literature on utilizing solid-state devices in the PC-based control of DC drives. Huang and Lee [1] designed a PID controller to change the DC motor speed using the LabVIEW software program, and demonstrated the motor speed in real time to obtain the response of the PID-controller-based system. Sánchez and Valenzuela [2] proposed a real-time control scheme without using a data acquisition board; the design was based on using the PC parallel port and two microcontrollers to achieve data feedback. Meha et al. [3] investigated the speed control of a DC motor using the PWM technique; the desired speed of the motor was programmed in the C# language, communicating with an 8051 microcontroller through a standard PC serial port. A microcontroller-based closed-loop automatic DC motor speed control was introduced by Dewangan et al. [4]. A PMDC motor adjustable speed drive control was implemented by Ravindran and Kumar [5] with a software program in Visual Basic code and a hardware setup; the output of the proposed system is accomplished from the GUI of LabVIEW. Gupta and Deb [6] presented a cost-effective method to control the speed of a low-cost brushed DC motor used in electric cars by integrating an IC 555 timer with a high boost converter; this converter was used since electric cars need high voltages and currents.
Yadav et al. [7] presented an open-loop scheme for the speed control of a PMDC motor using an AVR microcontroller; the PC interfacing was done using a serial port (DB9 connector). Kumari et al. [8] made an attempt to control the axis motion in CNC machine tools by controlling the speed of both DC and stepper motors; a PC-to-motor interface and driver circuit board were designed and developed for the presented system, and the software was developed using the LabVIEW-based graphical programming language. Chauhan and Semwal [9] implemented a PWM-based speed control of a PMDC motor through an RS232 serial communication port with a PC; controlling the motor speed with a tachogenerator as speed feedback was executed using an ATmega8L microcontroller. Shah and Deshmukh [10] implemented a PWM technique with the help of an LM3524 for the speed control of a PMDC motor fed by a DC chopper. Petru and Mazen [11] presented an experimental setup for PWM control of the speed of a DC motor used to drive a conveyor belt; an H-bridge was used to supply the DC motor, permitting reversal of the direction of motor rotation, and an ARDUINO UNO board, controlled by a program written in the LabVIEW 2013 programming environment and combined with an ATmega328 microcontroller, was used to generate the PWM signal.
The purpose of digital DC motor control is to use a digital signal that describes the demanded average voltage needed to supply the DC motor. The operating and driving speed concepts of a DC motor need to be studied. Therefore, this paper sets out to design and develop a computer-based DC motor speed drive interface system. The paper is divided into two parts: the DC motor drive circuit design and the personal computer (PC) parallel communication interface.
The current paper provides an efficient, simple, and low-cost method for controlling the speed of a DC motor by interfacing it with a PC through the parallel port and using the PWM technique. The PWM signal is generated using an IC NE555 timer. The pulse width can be changed using relays that insert or remove resistors in the 555 circuit. These relays enable the proposed circuit to drive higher-power motors.
Permanent Magnet DC Motor [12]:
The permanent magnet (PM) DC motor is one of the most widely used prime movers in industry today. PMDC motors have become increasingly widespread in applications requiring relatively low torques and efficient use of space. PMDC motors have a construction that differs from that of other DC motors, in which the magnetic field of the stator is generated by suitably located poles made of magnetic materials. Consequently, these motors do not require field excitation, whether by self-excitation or by separate excitation techniques.
The equations that represent the PM motor operation are shown in Equations (1) through (7).
The motor torque generated is related to the armature current Ia by a torque constant kt, which is defined by the motor geometry: T = kt·Ia (1). Like the traditional DC motor, the rotation of the rotor generates a back emf, Eb, that is linearly related to the motor speed ωm by a voltage constant ke: Eb = ke·ωm (2). The PM motor equivalent circuit is quite simple, since it does not require modeling the field winding effects. The equivalent circuit and the torque-speed characteristic of a PM motor are shown in Fig. 1.
Fig. 1: Equivalent circuit and torque-speed characteristic of the PMDC motor
The circuit model shown in Fig. 1 can be used to extract the torque-speed characteristic as follows. For a fixed speed, and thus fixed current, the inductor may be considered a short circuit, giving the equation stated in (3) [12]: Vs = Eb + Ra·Ia = ke·ωm + Ra·(T/kt), where Vs is the motor input voltage source, Ra is the motor armature winding resistance, T is the motor torque, Eb is the motor back emf, ke is the voltage constant, and ωm is the motor speed. Solving for speed yields the speed-torque equations (4 to 7) [12]: ωm = Vs/ke − (Ra/(ke·kt))·T, with the zero-speed (stall) torque To = kt·Vs/Ra and the no-load speed ωmo = Vs/ke, where To and ωmo are the zero-speed torque and the speed at no load, respectively.
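As a quick numeric illustration of the reconstructed speed-torque relation above (the motor constants here are hypothetical, chosen only for illustration, not taken from the paper): taking Vs = 5 V, Ra = 2 Ω, and ke = kt = 0.01 V·s/rad gives
    ωmo = Vs/ke = 5/0.01 = 500 rad/s ≈ 4775 rpm (no-load speed)
    To = kt·Vs/Ra = 0.01 × 5/2 = 0.025 N·m (stall torque)
so the torque-speed characteristic is a straight line between these two intercepts, as sketched in Fig. 1.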
PWM DC Motor Drive:
There are different types of DC motor drives used to drive different types of loads at different speeds. Therefore, many speed-controlling devices are greatly needed. The most common speed control method is the PWM technique. This technique depends on switching the power device ON and OFF at a certain frequency, by changing the ON and OFF times, i.e., the "duty cycle".
Many applications employ a microcontroller to produce the required PWM signals. In contrast, the 555 PWM circuit proposed here is easy and inexpensive to build, and provides a suitable understanding of the pulse-width modulation idea. The main advantage of using the 555 timer is that it does not require coding. It is very cheap, and also useful in applications where the PWM setting only occasionally needs to be changed.
The PWM 555 timer circuit is configured as an astable oscillator. Once input power is applied, the 555 oscillates without requiring any external trigger.
The NE555 Timer [13], [14]:
The NE555 is a multipurpose integrated circuit (IC) that can execute both multivibrator functions: monostable and astable. This circuit is distinguished by its accuracy, repeatability, the flexibility provided in IC packages, and its ease of application. The NE555 timer circuit is able to produce precise pulses (time delays) or oscillation. In the time-delay "monostable" mode, the pulse duration or time delay can be adjusted using an external RC network. In the astable "clock generator" mode, the output frequency may be changed by adding two external resistors R1 and R2 and one capacitor C. Fig. 2 shows typical circuits for the NE555 in both modes of operation, monostable and astable. It can also be noted that the threshold and trigger levels can be externally controlled.
The pulse width in the monostable circuit can be computed as depicted in Eq. (10): t = 1.1·R·C. The positive pulse width for the astable timer circuit can be determined as in Eq. (11), t1 = 0.693·(R1 + R2)·C, and the negative pulse width as in Eq. (12), t2 = 0.693·R2·C.
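As a worked example of the monostable relation in Eq. (10) (the component values here are illustrative, not taken from the paper):
    t = 1.1·R·C = 1.1 × 100 kΩ × 10 µF ≈ 1.1 s
so a 100 kΩ/10 µF network yields a single pulse of roughly one second.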
Fig. 2: NE555 timer
In the proposed circuit, the astable mode is used, so the resistors R1 and R2 help in varying the frequency of the output from the comparator of the timer. This helps in generating a pulse train used to switch the transistor. The biasing voltage used in the circuit is VCC. The output of the comparator is a square wave with amplitude VCC, as shown in Fig. 3.
Astable Operation [13], [14]:
The astable (or multivibrator) circuit does not require a trigger for starting. Once the timer is powered, the output starts to oscillate between VCC volts and 0 volts, as shown in Fig. 3.
The astable circuit can oscillate very quickly (up to millions of cycles/sec) or slowly (down to many minutes/cycle). The time when the output is high is called the ON time, or charge time (or mark), while the time of low output is called the OFF time, or discharge time. The connection of the 555 in astable mode is shown in Fig. 4.
Fig. 4: Connections of 555 timer in astable mode
As the capacitor voltage reaches (2VCC/3), the discharge transistor is enabled (pin 7), and this point in the circuit is grounded. Capacitor C now discharges through R2 alone. Starting at (2VCC/3), it discharges towards ground, but again is interrupted halfway there, at (VCC/3). So, the discharge time will be t2 = 0.693·R2·C.
The astable timer circuit performance can be represented by Eqs. (13-16): T = t1 + t2 = 0.693·(R1 + 2R2)·C, f = 1/T = 1.44/((R1 + 2R2)·C), duty cycle D = (R1 + R2)/(R1 + 2R2), and Vav = D·VCC, where T is the total period of the pulse train, f is the output circuit frequency, and Vav is the average output voltage.
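A brief worked example of Eqs. (13-16), using illustrative component values (these are our own, not specified in the paper): take R1 = 4.7 kΩ, R2 = 47 kΩ, and C = 10 nF. Then
    T = 0.693 × (4.7k + 2×47k) × 10 nF ≈ 684 µs
    f = 1.44/((4.7k + 94k) × 10 nF) ≈ 1.46 kHz
    D = (4.7k + 47k)/(4.7k + 94k) ≈ 0.52 (52%)
    Vav = 0.52 × 5 V ≈ 2.6 V
Note that D stays above 50% for any choice of R1 and R2, which motivates the diode modification described later.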
PC Parallel Port as Analog I/O Interface:
A parallel port (or printer port) is an interface placed on computers and used to connect various peripherals. The parallel port data pins are transistor-transistor logic (TTL) outputs that generate a typical logic high of (3-5 V) DC and a logic low of 0 V [15]. PC interfacing is the art of connecting computers and peripheral devices. The controller designed in this paper utilizes the PC parallel port as an analog I/O interface. Only four bits are used as analog interfaces, through a PWM technique. This technique permits building an analog interface without using A/D or D/A converters. Analog voltages and currents could be used to control processes directly; however, although analog control may seem intuitive and simple, it is not always practical or economically attractive. Analog circuits tend to drift over time and are difficult to tune. When analog circuits are digitally controlled, system power consumption and costs can be drastically reduced. PWM is an efficient technique for controlling analog circuits via digital signals. PWM is a method of digitally encoding analog signal levels: the duty cycle of a square wave, as shown in Fig. 3, is modulated to encode a specific analog signal level.
Generation of PWM Waveform Using IC 555 Timer:
In controlling DC motors, it is possible to utilize a transistor, resistor, autotransformer, etc., to execute linear current control, but this method has very large power consumption. Nowadays, PWM controlling devices are most often used. A PWM circuit operates by producing a square wave with a changeable ON/OFF ratio. The average ON time can be changed from 0 to 100%. Consequently, an adjustable amount of electric power can be fed to the load. The PWM circuit is more efficient than a resistive power controller [13].
The PWM and motor-driving circuits are closely related to each other. The PWM is generated using an IC 555 timer so as to control the DC motor speed. The principle is based on varying the duty cycle of a square waveform to generate the motor drive signal. The torque delivered by the motor is determined by the PWM duty cycle, and the speed of the DC motor depends on the duty cycle of the PWM signal. PWM is also space saving, economical, and noise immune.
PWM control can be implemented by switching the power applied to the motor ON and OFF very rapidly. The DC voltage is converted into a square-wave signal. By changing the duty cycle of the signal (modulating the pulse width), the average input power, and thus the motor speed, can be controlled.
Generating PWM on the parallel (LPT1) port data pins (D0-D3) using C++ is very simple: for the ON period of the pulse, logic high (1, about 3.49 V) has to be applied to the data pin, and logic low (0, about 0.09 V) for the OFF period.
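A minimal Turbo C++ sketch of this idea is given below. It assumes the conventional LPT1 data register address 0x378 and an arbitrary 50% duty cycle on pin D0; the address, timings, and pin choice are illustrative assumptions, not values taken from the paper.

    #include <dos.h>    /* outportb(), delay() in Turbo C++ */
    #include <conio.h>  /* kbhit() */

    #define LPT1_DATA 0x378  /* assumed LPT1 data register address */

    int main(void)
    {
        int on_ms = 5, off_ms = 5;          /* ~50% duty, ~100 Hz */
        while (!kbhit()) {                  /* run until a key is hit */
            outportb(LPT1_DATA, 0x01);      /* D0 high: ON period */
            delay(on_ms);
            outportb(LPT1_DATA, 0x00);      /* D0 low: OFF period */
            delay(off_ms);
        }
        outportb(LPT1_DATA, 0x00);          /* leave the pin low */
        return 0;
    }

Changing the ratio of on_ms to off_ms changes the duty cycle and hence the average voltage seen by the driver stage.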
Design and Implementation
The proposed design can be divided into two parts: the first is the design of the astable mode using the 555 device, with modeling of its duty cycle, while the second is the design of the driving circuit for the PMDC motor.
Design of Astable
If R2 >> R1, then the ON time (t1) / OFF time (t2) ratio ≈ 1 and the output is a square wave with a duty cycle of approximately 50%.
If R2 << R1, then the ON time (t1) / OFF time (t2) ratio → ∞ and the output is approximately a constant DC level with a duty cycle of 100%.
If R1 = R2, then the ON time (t1) / OFF time (t2) ratio = 2 and the output is a square wave with a duty cycle of 2/3 = 66.67%. The duty cycle and corresponding output voltage of the timer are shown in Fig. 7; they can be calculated from Eqs. (15 and 16) and the value of the frequency. It can be seen that the duty cycle of the 555 timer circuit in astable mode cannot be less than 50%; Duty Cycle (50-100)%.
To extend the duty cycle from fully off (0% duty cycle) to fully on (100% duty cycle), some modification can be made to the timer circuit, as shown in Fig. 8. Diodes D1 and D2 are added to the circuit in Fig. 4 to provide forward and backward paths for charging and discharging the capacitor C, respectively.
Modified Duty Cycle (0-100) %
Selecting the ratio of R1 and R2 in Eq. (15) varies the duty cycle accordingly. If a duty cycle smaller than 50% is needed, then even with R1 = 0 the charging time cannot be made smaller than the discharging time, since the charge path is R1 + R2 while the discharge path is R2 alone. Hence, it is necessary to insert a diode D1 in parallel with R2, cathode toward the timing capacitor. Another diode, D2 (in series with R2, cathode away from the timing capacitor), is not mandatory. Thus, the charge path is through R1 and D1 into C, while the discharge path is through D2 and R2 to the discharge transistor. This scheme gives a duty cycle ranging from less than 5% to greater than 95%. It should be noted that, for reliable practical operation, a minimum value of 3 kΩ for R2 is needed to ensure that oscillation starts. When the capacitor C charges through R1 and D1 and the voltage on C rises to 2VCC/3, the threshold (pin 6) is activated, which makes the output (pin 3) and the discharge pin (pin 7) go low.
When the capacitor C starts to discharge through R2 and D2 and the voltage on C drops below VCC/3, the output (pin 3) and discharge (pin 7) pins go high, and the cycle repeats. Pin 5 is not used for an external voltage input; therefore, it is bypassed to ground by a 0.01 µF capacitor, as shown previously in Fig. 3.
Assuming the value of R2 to be fixed, the duty cycle varies only with respect to R1. Therefore, the charging time is t1 = 0.693·R1·C (high output) (17), the discharging time is t2 = 0.693·R2·C (low output) (18), and thus T = t1 + t2 = 0.693·(R1 + R2)·C (19), f = 1.44/((R1 + R2)·C) (20), Duty Cycle D = R1/(R1 + R2) (21), and Vav = D·VCC (22). All the values required for the desired duty cycle (or average output voltage) and oscillation frequency of the PWM output of the timer can be calculated from the modified equations (17-22), using the curves shown in Figs. 9 and 10. Therefore, the DC motor speed can be controlled using a 555 timer over the full range by changing the signal mark-space ratio across the full range, so it is possible to obtain any desired average output voltage in the range (0-5 V).
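To illustrate Eq. (21) with the resistor values used later in the drive circuit (R2 = 47 kΩ; the pairing with duty cycles below is our own arithmetic, not a table from the paper):
    R1 = 4.7 kΩ → D = 4.7/(4.7 + 47) ≈ 9% (low speed)
    R1 = 47 kΩ → D = 47/94 = 50% (medium speed)
    R1 = 470 kΩ → D = 470/517 ≈ 91% (high speed)
which matches the roughly 10%, 50%, and 90% duty cycles produced by the relay-selected charging resistors R1C, R1b, and R1a described below.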
Drive Circuit Design
The PC employs a software program to control the motor speed. The motor is connected to the PC via an interface circuit. The interface circuit shown in Fig. 11 includes IC1 (74LS244 buffer); IC2 (ULN2003 driver); IC3 (NE555 astable multivibrator circuit); relay switches S1, S2, S3, and S4; and T1 (2N2222) motor driver transistor. For load currents up to about 600 mA, a 2N2222A NPN transistor is advised; for higher-power motors, the BJT may be replaced by an IGBT or a power MOSFET. The 555 timer works as a changeable pulse width generator. A freewheeling diode, D1, is used to prevent the back emf induced by inductive loads, such as brushed motors, from destroying the switching transistor. The pulse width can be changed by utilizing relays to insert or remove resistors in the 555 timer circuit. IC3 outputs a square-wave voltage, which is applied to the base of transistor T1 through a current-limiting resistor R3. Transistor T1 is utilized to drive the DC motor.
The computer program controls these resistors. In the first case, the switching relays S1 and S2 are ON, and the charging resistor is R1C, where R1C ≈ 0.1×R2 = 0.1×47k = 4.7 kΩ; this reduces the ON time of the pulse signal and, therefore, the motor speed to its lower limit.
When relays S1 and S3 are ON, the IC3 555 generates a pulse signal with a duty cycle of 50%, since the charging resistor, R1b, is equal to the discharging resistor, R2. In the third case, when relays S1 and S4 are ON, the charging resistor is R1a, where R1a ≈ 10×R2 = 10×47k = 470 kΩ. This increases the ON time of the pulse signal, and thus the motor speed becomes about 90% of its maximum.
When S1 is ON while all other switches S2, S3, and S4 are OFF, the 555 timer output is held at logic one with a 100% duty cycle, driving the DC motor at its maximum speed. The ON/OFF states of the relays and their corresponding motor speeds are summarized in Table 1. The code prompts the user to choose a specific speed, stores the selection as an integer variable choice, produces the corresponding digital sequence, and stores it in another integer variable. By using the outportb function, the value of the integer variable is placed on the PC's parallel port. The program uses the kbhit function to stop the DC motor when any key on the PC keyboard is hit. The software was written in C++ and compiled using the Turbo C++ compiler. Initially, while the motor is switched off, the program prompts the user to press the "Enter" key to start the motor. Once the key is pressed, the motor begins running at low speed. After a few seconds, the program asks the user to press any key to go to the next screen for controlling the motor speed. This screen has options to increase and decrease the motor speed and also to exit from the program. To vary the motor speed, the user enters a choice (1-4) and presses the "Enter" key; this varies the motor speed one step at a time, and the message "Speed decreased" or "Speed increased" is shown on the screen. To return to the main menu, the user presses the "Enter" key again.
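The following is a minimal sketch of such a control program in Turbo C++ style. The relay bit patterns in the pattern array are hypothetical placeholders for the sequences of Table 1 (which is not reproduced here), and the port address 0x378 is the conventional LPT1 base; both are assumptions.

    #include <dos.h>    /* outportb() */
    #include <conio.h>  /* getch() */
    #include <stdio.h>

    #define LPT1_DATA 0x378  /* assumed LPT1 data register address */

    /* Hypothetical relay patterns for the four speed levels;
       the actual bit assignments are those of Table 1. */
    unsigned char pattern[4] = { 0x03, 0x05, 0x09, 0x01 };

    int main(void)
    {
        int choice;
        printf("Press Enter to start the motor at low speed...\n");
        getch();
        outportb(LPT1_DATA, pattern[0]);     /* start at low speed */
        for (;;) {
            printf("Select speed level (1-4), 0 to exit: ");
            scanf("%d", &choice);
            if (choice == 0)
                break;
            if (choice >= 1 && choice <= 4)
                outportb(LPT1_DATA, pattern[choice - 1]);
        }
        outportb(LPT1_DATA, 0x00);           /* all relays off: motor stops */
        return 0;
    }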
The circuit prototype was built on a PC board to experimentally validate the designed PWM speed-controlled DC motor drive, as demonstrated in Fig. 12.
Results and Discussion
The experimental results of the motor drive prototype are depicted in Figs. 13-15. Figs. 13(a)-15(a) show the no-load output voltage waveforms with 10%, 50%, and 90% duty cycle PWM signals, giving the speeds listed in Table 1.
With 0 V (i.e., no pulses), the output voltage waveform corresponds to 0% PWM duty. Raising the voltage from 0 to 5 V increases the PWM duty from 0 to 100% and turns the motor. As can be seen from the figures, a signal with a 10% duty cycle is ON for 10% of the period and OFF for 90%, while a signal with a 90% duty cycle is ON for 90% and OFF for 10%. These signals are applied to the DC motor at a frequency high enough that the pulsing has no visible effect on the motor. As a result, the overall power fed to the motor can be controlled from fully off (0% duty cycle) to fully on (100% duty cycle) with good efficiency and stable control.
In the PWM control technique, an input voltage with fixed period and magnitude but variable duty cycle is switched rapidly across the motor armature; however, because the motor current is influenced by the motor's internal inductance and resistance, the resulting motor current is as shown in Figs. 13(b)-15(b). These figures show the input current of the motor operated for 10%, 50%, and 90% of the time, respectively. As the duty cycle becomes higher, the average input motor current gets higher and the motor speed increases.
Conclusions
The aim of this paper was to present an efficient, flexible, simple, low-cost, lightweight, and accurate design method for PWM control of the speed of a DC motor using the voltage control method, by interfacing it with a PC via the parallel port. The program interface is user-friendly, enabling simple and flexible operation. The PWM circuit was developed using the NE555 timer. The user can adjust the timer by selecting different resistors and capacitors. In this paper, PWM acts as a tool to control the DC motor speed (in four levels). Since the average value of the armature voltage is controlled, the motor speed can be controlled only below the rated speed. The motor could be run as low as 230 rpm and as high as approximately 3000 rpm. The PC is utilized in the control process because of its simplicity and ease of programming, especially for a demonstrative prototype. The proposed circuit may be used in 5 V systems. This circuit has been used to control the motor speed of small DC fans of the type used in computer power supplies. Using relays enables the proposed circuit to drive higher-power motors.
CONFLICT OF INTERESTS.
There are no conflicts of interest.
Fig. 9: Frequency vs. R1 for different values of C for modified duty cycle
Fig. 11: Speed control of a 5V PMDC motor via the PC's parallel port
Table 1: Switch States and Generated PC Sequences
Fig. 12: Photograph of hardware: (a) overall setup; (b) DC motor driver | 5,996.2 | 2018-10-21T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
$\Lambda_b \to \Lambda_c$ Form Factors from QCD Light-Cone Sum Rules
In this work, we calculate the transition form factors of $\Lambda_b$ decaying into $\Lambda_c$ within the framework of light-cone sum rules with the distribution amplitudes (DAs) of the $\Lambda_b$-baryon. In the hadronic representation of the correlation function, we have isolated both the $\Lambda_c$ and the $\Lambda_c^*$ states so that the $\Lambda_b \rightarrow \Lambda_c$ form factors can be obtained without ambiguity. We investigate both the P-type and A-type currents used to interpolate the baryons, for comparison, since the interpolating current for a baryon state is not unique. We also employ three parametrization models for the DAs of $\Lambda_b$ in the numerical calculation. We present numerical predictions for the $\Lambda_b \rightarrow \Lambda_c$ form factors and for the branching fractions, the averaged forward-backward asymmetry, the averaged final hadron polarization, and the averaged lepton polarization of the $\Lambda_b \to \Lambda_c \ell\nu$ decays, as well as the ratio of branching ratios $R_{\Lambda_c}$; the predicted $R_{\Lambda_c}$ is consistent with the LHCb data.
Introduction
Λ c , there exist large power corrections from the expansion in 1/m c . In the present work, we do not perform the heavy quark expansion on the charm quark field, and instead take advantage of the charm quark field in full QCD to construct the interpolating current of the Λ c baryon in the correlation function.
• We will employ the full set of the three-particle DAs of Λ b up to twist-5, as accomplished in [51], where the projector of the DAs in momentum space is also presented. Since the models of the DAs of the Λ b -baryon are not well established, we will adopt three different models for comparison: the QCDSR model, which is constructed based on QCD sum rules, and the exponential model and the free parton model, which were proposed by mimicking the B-meson DAs.
• In previous studies, the heavy b quark was expanded in HQET and only the leading power contribution was considered. To improve the accuracy of our predictions, we will include the 1/m b corrections to the heavy quark field in HQET in the present work.
• When evaluating the correlation function in the hadronic representation, we insert not only the Λ c baryon but also the parity-odd counterpart of the Λ c baryon, which helps us to extract the form factors without ambiguity by solving the equations of the obtained sum rules.
This paper is organized as follows: in the next section, we calculate the analytic expression of the Λ b → Λ c form factors with the Λ b -LCSR at tree level, and investigate the power suppressed contribution from the power suppressed heavy quark field. In Section 3, we present the numerical results for the form factors and the experimental observables. We summarize this work in the last section.
Since in this paper we do not perform a heavy quark expansion with respect to the charm quark, and only take the heavy quark limit for the bottom quark, there exist two independent form factors in the Λ b → Λ c transition, denoted by ζ 1 (q 2 ) and ζ 2 (q 2 ). The form factors f i (q 2 ) and g i (q 2 ) can be expressed as f 1 = ζ 1 − ζ 2 and g 1 = ζ 1 + ζ 2 . In the next section, we will estimate the power suppressed contributions from the heavy quark expansion; the above relation still holds after including this power correction, since we have neglected the contribution from the four-particle LCDAs of Λ b . In the literature, there exists another widely used parameterization of the Λ b → Λ c form factors, i.e.
The form factors defined above are related to the f i , g i defined in Eq. (1); after taking the heavy bottom quark limit, the form factors F i and G i can be expressed in terms of the f i as follows.
Interpolating currents and correlation function
Following the standard strategy, we start with the construction of the correlation function, where the local current η a interpolates the Λ c and j µ,i stands for the weak transition current ū Γ µ,i b, with the index "i" indicating a certain Lorentz structure. For the interpolating current of the Λ c baryon, as discussed in [52], there exist three independent choices, where i, j, and k are the color indices and C is the charge conjugation operator. The correlation function vanishes if the S-type current is employed, so we only adopt the P-type and A-type operators in our study. The coupling of Λ c , as well as that of its parity-odd partner, to the interpolating current η a (the decay constant) is defined accordingly. At the hadronic level, the correlation function can be expressed in terms of the matrix elements of the currents sandwiched between the hadronic states, with ρ h ai (s) denoting the hadronic spectral densities of all excited and continuum states with the quantum numbers of Λ c and Λ * c . It is then a straightforward task to write down the hadronic representations of the correlation functions defined with the various weak currents. For the vector current we obtain the corresponding expression; for Π a µ,A (p , q), only the replacement f i → g i , f̃ i → g̃ i is required. Through the analysis of the Lorentz structures, the correlation function can be parameterized, and the scalar correlation functions can then be expressed in terms of the form factors. For the correlation function with the axial-vector part of the weak current, Π i → Π̃ i , the replacement f i → g i , f̃ i → g̃ i is needed in Eq. (14).
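Although the explicit expressions were lost in extraction, the hadronic representation referred to above has the generic single-pole-plus-continuum structure of baryonic LCSRs; a schematic form (our own notation, for orientation only) is
$\Pi_\mu(p', q) = \frac{\langle 0|\eta_a|\Lambda_c(p')\rangle \langle \Lambda_c(p')|j_\mu|\Lambda_b\rangle}{m_{\Lambda_c}^2 - p'^2} + \frac{\langle 0|\eta_a|\Lambda_c^*(p')\rangle \langle \Lambda_c^*(p')|j_\mu|\Lambda_b\rangle}{m_{\Lambda_c^*}^2 - p'^2} + \int_{s_{th}}^{\infty} ds\, \frac{\rho^h(s)}{s - p'^2},$
where the first two terms are the ground-state Λ c and its parity-odd partner, and the integral collects the excited and continuum states.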
Tree-level LCSR
Now, we turn to computing the correlation function Π iµ,a (p, q) at the partonic level, with space-like interpolating momentum satisfying |n · p| ∼ O(Λ) and n̄ · p ∼ m Λ b . The correlation function can be factorized into the convolution of the hard kernel with the LCDAs of the Λ b -baryon, i.e.
where the definition of the most general light-cone hadronic matrix element in coordinate space is given in [51]. Performing the Fourier transformation and including the next-to-leading order terms off the light cone leads to the momentum-space light-cone projector in D dimensions, where we have adjusted the notation of the Λ b -baryon DAs defined in [51]. Applying the equations of motion in the Wandzura-Wilczek approximation yields the relations used below. Evaluating the diagram in Fig. 1 leads to the leading-order hard kernel.
Figure 1: Diagrammatical representation of the correlation function Π µ,a (n · p , n̄ · p ) at tree level, where the black square denotes the weak transition vertex, the black blob represents the Dirac structure of the Λ c -baryon current, and the pink internal line indicates the propagator of the charm quark.
where k = k 1 + k 2 , with k 1,2 standing for the momenta of the two soft light quarks inside the Λ b -baryon. Inserting the hard functions and the DAs into the correlation functions, we arrive at the partonic expression of the correlation functions. We note that, in order to match the light-like vectors n and n̄ in the definition of the DAs of the Λ b baryon with the momenta p , q in the parametrization of the correlation function, we need to perform a replacement, where ψ̃(ω) = ∫ 0 ω ψ(η) dη. The obtained invariant amplitudes Π a i can be expressed through a dispersion integral. Taking advantage of the quark-hadron duality ansatz, namely equating the contributions from the continuum and higher states in the hadronic expression with the dispersion integral whose lower limit is the threshold s 0 in the partonic expression of the correlation function, and performing the Borel transform, we obtain the sum rules at leading power. For the P-type current, the sum rules for the form factors can be written down together with their nonzero spectral densities; for the A-type current, the corresponding nonzero spectral densities are given as well. The form factors g i can be obtained from f i directly, so we do not present the explicit expressions.
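For orientation, the sum rules obtained this way (whose explicit spectral densities were lost in extraction) take the standard Borel-transformed form; schematically, in our own notation,
$\lambda_{\Lambda_c}\, f_i(q^2)\, e^{-m_{\Lambda_c}^2/M^2} = \frac{1}{\pi} \int_{0}^{s_0} ds\, e^{-s/M^2}\, \mathrm{Im}\,\Pi_i(s, q^2),$
where M is the Borel mass, s 0 the continuum threshold, and λ Λc the coupling of Λ c to the interpolating current.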
Power suppressed contribution from heavy quark expansion
Now, we discuss the power suppressed contribution from the heavy quark expansion. To achieve this, we replace the leading power heavy quark field in the heavy-to-light current by the power suppressed one in the QCD calculation.
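The replacement in question is the standard 1/m b expansion of the QCD bottom-quark field in HQET; in the usual convention (quoted here for completeness, not transcribed from the paper),
$b(x) = e^{-i m_b v\cdot x}\left(1 + \frac{i\slashed{D}_\perp}{2 m_b} + \mathcal{O}(1/m_b^2)\right) h_v(x),$
so that keeping the second term generates the O(1/m b ) correction to the heavy-to-light current discussed below.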
Then the correlation function (we take the correlation function with the P-type interpolating current as an example) is modified accordingly. Contracting the charm quark field, we proceed as before. The QCD equation of motion indicates that the matrix element of the second term results in the convolution of a hard function with the four-particle LCDA of the Λ b -baryon, which has not been studied yet; thus we leave this part for future study. In addition, the derivative acting on the gauge link results in an additional gluon field, which is also neglected in the present study. The correlation function then simplifies. The first term can be evaluated directly: taking advantage of the definition of the heavy quark field in HQET, the partial derivative leads to a simple nonperturbative parameter, which to a good approximation is the mass difference between the Λ b -baryon and the b-quark, i.e., Λ̄ ≈ m Λ b − m b . For the second term, performing integration by parts yields an additional factor ω = v · (p − k) in the integrand. Combining these two parts, we arrive at the sum rules for the form factors at NLP. From this result, we can see that the power suppressed contribution considered in the present work amounts to adding a factor (Λ̄ − σm Λ b )/(2m b ) to the integrand of the leading power contribution when the P-type interpolating current is employed. For the A-type interpolating current, a more complicated modification is needed, since ψ̃ i (ω) appears in the integrand, which requires a corresponding modification of the spectral density.
Numerical analysis
DAs of the Λ b baryon are the fundamental ingredients of the LCSR for the form factors considered in the present paper, but they are not well established so far, due to our poor understanding of the QCD dynamics inside the heavy baryon system. In [51,53,54] several different models of the LCDAs of the Λ b baryon have been suggested up to twist-4 (not including the twist of the heavy quark field); we consider the following three. The first one is obtained from a calculation with QCDSR [53], and is thus named the QCDSR model; its normalization involves N = ∫ 0 s Λ b 0 ds s 5 e −s/τ , where τ is the Borel parameter, constrained to the interval 0.4 < τ < 0.8 GeV, and s Λ b 0 = 1.2 GeV is the continuum threshold. The other two phenomenological models were proposed in [51] and are called the exponential model and the free parton model, respectively. For the exponential model, ω 0 = 0.4 ± 0.1 GeV measures the average momentum of the two light quarks inside the Λ b baryon. The DAs in the free parton model involve the step function θ(2Λ̄ − ω), with Λ̄ = m Λ b − m b ≈ 1 ± 0.2 GeV. The first-order terms off the light cone are not significant numerically, but they are required to guarantee gauge invariance; in this work, the DAs of these terms are given with ω 0 = 0.4 ± 0.1 GeV. The numerical values of the other parameters, such as the masses of the relevant baryons, the quark masses, the coupling parameters of the baryons, the Borel mass, and the threshold parameters, are collected in Table 3. In this table, we use the MS-bar mass for the charm quark, which appears in the partonic evaluation of the correlation functions. For the bottom quark mass, we take advantage of the potential-subtracted (PS) mass, since it appears in the heavy quark expansion and the PS mass is less ambiguous than the pole mass.
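For concreteness, the exponential model mentioned above is commonly quoted in the literature on B-meson-inspired heavy-baryon DAs in a form like the following; we reproduce it here as an illustration rather than as the precise expression used in this paper:
$\psi_2(\omega, u) = \frac{\omega^2\, u(1-u)}{\omega_0^4}\, e^{-\omega/\omega_0},$
where ω is the total light-cone momentum of the light-quark pair, u the momentum fraction carried by one of the two light quarks, and ω 0 ≈ 0.4 GeV the parameter quoted above.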
Since the LCSR is valid only at small q 2 , we first present the results for the form factors f 1 and f 2 at q 2 = 0, displayed in Table 2. In order to highlight the power suppressed contribution from the heavy quark expansion, both the leading power contribution and the NLP contribution are listed for comparison; the NLP contribution evidently reduces the leading power contribution by about 20%, which will significantly change the results for the physical observables. Of course, we should note that the power corrections considered in this paper are very preliminary, and a more careful treatment of the NLP contributions is necessary. In this table, the form factors f 1 (0) and f 2 (0) are evaluated with both P-type and A-type interpolating currents, and the results indicate that the A-type current leads to larger results for all three models of the LCDAs of the Λ b -baryon; in general, they are consistent within the error ranges. The total uncertainties shown in this table are obtained by varying the separate input parameters within their ranges and adding the resulting separate uncertainties of the form factors in quadrature. The results from the QCDSR model and the free parton model of the Λ b LCDAs are well consistent with each other, and the results from the exponential model are smaller for both the A-type and P-type currents. The result for the form factor f 3 (0) still satisfies f 3 (0) = 0, as expected in the heavy b-quark limit; although we have considered the power correction from the heavy quark expansion, it does not yield a nonzero contribution to f 3 (0). The relations between the form factors displayed in Eq. (2), which follow from heavy quark symmetry, remain valid, as shown by the numerical results in Table 2.
Table 2: Form factors f P i (0) and f A i (0) at q 2 = 0.
In Table 3, we collected predictions for the form factor f i at q 2 = 0 from the light-front quark model [23,57], the relativistic quark model [21], the covariant constituent quark model [58], the QCD sum rule [35], and the Lattice QCD simulation [19], together with our results. The Lattice simulation is valid at large q 2 ; the prediction here depends on the extrapolation model and is smaller than the other predictions, which leads to too-small branching ratios for the semileptonic decays compared with the experimental measurement. The different predictions are in general consistent with each other once the uncertainties are taken into account, and in our calculation there are two preferable scenarios: the A-type interpolating current together with the exponential model of the DAs of Λ b , and the P-type interpolating current together with the QCDSR model or the free parton model of the DAs of Λ b . Therefore, it is hard to distinguish the different models or interpolating currents from the predictions of the form factors in the current calculation. The form factors g i are directly related to f i , so we will not discuss them further. The results for the form factors at q 2 = 0 of this work are thus compared with those of other methods. In order to predict the experimental observables, we extrapolate our results from small q 2 (0 ≤ q 2 ≤ 5 GeV 2 ) to the whole physical region. To this end, we employ the simplified z-series parametrization [59], based upon the conformal mapping which transforms the cut q 2 -plane onto the disk |z(q 2 , t 0 )| ≤ 1 in the complex z-plane.
We choose the parameters t ± = (m Λ b ± m Λc ) 2 and t 0 = t + − √(t + − t − ) √(t + − t min ), in order to reduce the interval of z after mapping q 2 to z over the interval t min < q 2 < t − . In the numerical analysis, we take t min = −6 GeV 2 . Keeping the series expansion of the form factors to the first power of the z-parameter, we propose parameterizations in which the masses of the B * c (1 − ) and B * c (1 + ) states appear in the pole factors; these masses have not been measured, but there are theoretical estimates, and here we adopt m B * c (1 − ) = 6.336 GeV and m B * c (1 + ) = 6.745 GeV [55]. The fitted results for a i 1 , b i 1 are given in Table 4. Since the form factors have been extrapolated to the whole physical region, we plot the q 2 -dependence of the form factors for the different DAs of the Λ b baryon in Fig. 2. The uncertainties shown in the bands are obtained by adding the separate uncertainties from f i (0), a i , and b i in quadrature.
Figure 2: The form factors f 1 (q 2 ) and f 2 (q 2 ) from the A-type interpolating current. The blue, yellow, and green bands denote the form factors with the QCDSR model, the exponential model, and the free parton model adopted, respectively.
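The conformal variable underlying this parametrization is standard; for reference (our transcription of the usual definition, consistent with the t ± and t 0 given above),
$z(q^2, t_0) = \frac{\sqrt{t_+ - q^2} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - q^2} + \sqrt{t_+ - t_0}},$
and a simplified z-series truncated at first order then takes a form such as $f_i(q^2) = \frac{1}{1 - q^2/m_{pole}^2}\left[a_i + b_i\, z(q^2, t_0)\right]$, which we quote as a sketch of the parametrization rather than as the paper's exact expression.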
In the following, we explore phenomenological applications of the obtained Λ b → Λ c form factors, which serve as fundamental ingredients in the theoretical description of the Λ b → Λ c ℓν̄ decays, regarded as a good platform for further investigating the R D(D * ) anomaly. In order to calculate phenomenological observables such as the branching ratios and the forward-backward asymmetries, it is convenient to introduce the helicity amplitudes, where λ Λ b , λ Λc , and λ W − denote the helicities of the Λ b baryon, the Λ c baryon, and the off-shell W − which mediates the semileptonic decays, respectively. The helicity amplitudes H V,A λ Λc ,λ W − can be expressed as functions of the form factors, where Q ± is defined as Q ± = (m Λ b ± m Λc ) 2 − q 2 and M ± = m Λ b ± m Λc . The negative helicities follow from parity relations, and the total helicity amplitudes are then assembled. In the differential angular distribution for the decay Λ b → Λ c ℓν̄, G F is the Fermi constant, V cb is the CKM matrix element, m ℓ is the lepton mass (ℓ = e, µ, τ), θ is the angle between the three-momentum of the final Λ c baryon and the lepton in the q 2 rest frame, and p is the three-momentum of the Λ c baryon. The differential decay rate is obtained by integrating over cos θ ℓ . In addition, the other observables, such as the leptonic forward-backward asymmetry (A F B ), the final state hadron polarization (P B ), and the lepton polarization (P ℓ ), are defined through the differential widths with definite polarization of the final states. The numerical results for the relevant observables in the semileptonic decays Λ b → Λ c ℓν̄ are presented in Table 5, where both the A-type and P-type interpolating currents are considered. Three different models of the Λ b baryon DAs are employed in the calculation so that they can be compared with the experimental results to determine which one is preferable. The central value of the lifetime of Λ b is taken as τ Λ b = 1.470 ps, and the CKM matrix element |V cb | has been given in Table 3. From Table 5, we can see that the integrated branching ratio for the semileptonic decay Λ b → Λ c ℓ − ν̄ from the P-type interpolating current is slightly smaller than that from the A-type current. Compared with the experimental data Br(Λ b → Λ c ℓν̄) = 6.2 +1.4 −1.3 %, the prediction of the A-type operators seems more consistent with the data if the exponential model is adopted. We note that our result comes from a tree-level calculation of the leading power contribution in the heavy quark limit, plus a rough estimate of the power corrections from the heavy quark expansion; this conclusion is therefore very preliminary, and a more careful study is required to distinguish the different models of the Λ b DAs and the interpolating currents. The numerical results for the leptonic forward-backward asymmetry (A F B ), the final hadron polarization (P B ), and the lepton polarization (P ℓ ) are also presented in Table 5. Since these observables are not very sensitive to the form factors in the small q 2 region, the predictions from the different models of the LCDAs and the different interpolating currents are very close. To compare our results with the predictions of other methods, we collect the numerical results from various studies in Table 6.
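For completeness, the lepton-flavor universality ratio and the forward-backward asymmetry referred to above are defined in the standard way (these are the generic definitions, not transcribed from the paper):
$R_{\Lambda_c} = \frac{\mathcal{B}(\Lambda_b \to \Lambda_c \tau \bar{\nu}_\tau)}{\mathcal{B}(\Lambda_b \to \Lambda_c \mu \bar{\nu}_\mu)}, \qquad A_{FB}(q^2) = \left(\int_0^1 - \int_{-1}^0\right) d\cos\theta_\ell\, \frac{d^2\Gamma}{dq^2\, d\cos\theta_\ell} \Big/ \frac{d\Gamma}{dq^2}.$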
We can see that the integrated branching ratios from the various studies do not deviate significantly from each other, while the other observables are more sensitive to the different approaches, which can serve as a basis for distinguishing between methods. We also present the ratio of branching ratios R Λc in Table 5; it is not very sensitive to the interpolating current or the model of the LCDAs of Λ b , and the central value of our prediction is a little smaller than that of some recent studies [60], but is consistent with the recently reported LHCb result R(Λ c ) = 0.242 ± 0.026 ± 0.040 ± 0.059 [61]. Our predictions for the branching ratios have large uncertainties; to improve the theoretical precision, one can make progress in two directions: reducing the uncertainty of the parameters inside the DAs of the heavy baryon by a global fit or a Lattice calculation, and including the loop corrections and more power corrections.
Summary
We have calculated the form factors of the Λ b → Λ c transition within the framework of LCSR with the DAs of the Λ b -baryon, and further investigated experimental observables such as the branching ratios, the forward-backward asymmetries, and the final state polarizations of the semileptonic decays Λ b → Λ c ℓν̄, as well as the ratio of branching ratios R Λc . Since the interpolating current of a baryon is not unique, we employed both P-type and A-type interpolating currents as a cross-check of our predictions. Following the standard procedure for the calculation of heavy-to-light form factors using the LCSR approach, we arrive at the sum rules for the Λ b → Λ c transition form factors. In the hadronic representation of the correlation function, we have included the Λ * c state in addition to the Λ c state, so that the Λ b → Λ c form factors can be evaluated without ambiguity. The LCDAs of the Λ b -baryon are not well determined so far; thus we employed three different models, i.e., the QCDSR model, the exponential model, and the free parton model, for comparison.
Since the DAs of the Λ b baryon are defined in terms of the large component of the b-quark field in HQET, a direct calculation leads to the form factors in the heavy b-quark limit, where only two of them are independent. To improve the accuracy of the predictions, we included the power suppressed contribution from the power suppressed bottom quark field in the heavy quark expansion. However, we neglected the contribution from the four-particle DAs of the Λ b baryon, since there are no studies of these DAs so far. As a result, the power suppressed contribution considered in this paper does not change the form factor relations of the heavy b-quark limit. Numerically, the power suppressed contribution reduces the leading power result by about 20%. The form factors from the P-type interpolating current are smaller than those from the A-type interpolating current; it is hard to determine which is preferable, since the result also depends on the DAs of the Λ b baryon. The LCSR is valid in the small q 2 region, so we extrapolated our results to the whole physical region using the z-series expansion, obtaining the q 2 -dependence of the form factors, which is important for predicting the experimental observables. We further obtained predictions for the total branching fractions, the averaged forward-backward asymmetry A F B , the averaged final hadron polarization P B , and the averaged lepton polarization P l of the Λ b → Λ c ℓν̄ decays, as well as the ratio of branching ratios R Λc . Our predicted branching ratios from the A-type interpolating current are closer to the experimental data when the exponential model of the DAs of the Λ b -baryon is adopted, and they are also consistent with the predictions of the relativistic quark model, the light-front quark model, etc. The ratio of branching ratios R Λc is not very sensitive to the interpolating current or to the model of the LCDAs of Λ b , and the central value of our prediction is consistent with the recent LHCb data. Moreover, we only performed a tree-level calculation of the correlation function, and the QCD corrections to the hard kernel in the partonic expression of the correlation function are needed to increase the accuracy. In the literature [47], the QCD corrections to the leading power form factors of Λ b → Λ have been calculated, and the method can be directly generalized to the Λ b → Λ c transition. The power suppressed contributions have been shown to be sizable, and a more careful treatment of the power corrections is of great importance. The above mentioned problems will be considered in future work.
Table 5: The predictions for the branching fractions, the averaged leptonic forward-backward asymmetry A F B , the averaged final hadron polarization P B , and the averaged lepton polarization P l for Λ b → Λ c l − ν̄ l under the two interpolating currents (A-type and P-type) with three different LCDA models of the Λ b baryon (QCDSR, Exponential, and Free-parton). | 6,189.4 | 2022-06-24T00:00:00.000 | [
"Physics"
] |
The Function of LmPrx6 in Diapause Regulation in Locusta migratoria Through the Insulin Signaling Pathway
Simple Summary: LmPrx6 of the insulin signaling pathway is significantly associated with diapause induction in Locusta migratoria L., according to our previous transcriptome data. In the current study, we first cloned and sequenced the gene and demonstrated its similarity to other Prxs using phylogenetic analyses. We then knocked down LmPrx6 using RNAi and showed that the phosphorylation of proteins associated with the insulin signaling pathway and the responses to oxidative stress were altered. Knockdown of LmPrx6 also resulted in a reduced ability to enter diapause; hence, we are of the opinion that this gene could serve as an effective target for RNAi-based control of L. migratoria L. The study provides helpful insights into the diversified roles of Prx6 in locusts and will be of interest for examining this relatively unexplored group of proteins in other insect pests as well. Abstract: Peroxiredoxins (Prxs), which scavenge reactive oxygen species (ROS), are cysteine-dependent peroxide reductases that group into six structurally discernible classes: AhpC-Prx1, BCP-PrxQ, Prx5, Prx6, Tpx, and AhpE. A previous study showed that forkhead box protein O (FOXO) in the insulin signaling pathway (ISP) plays a vital role in regulating locust diapause through phosphorylation, which can be promoted by a high level of ROS. Furthermore, an analysis of the transcriptomes of the diapause and non-diapause phenotypes showed that one of the Prxs, LmPrx6, which belongs to the Prx6 class, was involved. We presumed that LmPrx6 might play a critical role in the diapause induction of Locusta migratoria, and LmPrx6 may therefore provide a useful target for control methods based on RNA interference (RNAi). To verify our hypothesis, LmPrx6 was initially cloned from L. migratoria to make dsLmPrx6, and four important targets in the ISP were tested, including protein-tyrosine phosphatase 1B (LmPTP1B), insulin receptor (LmIR), RAC serine/threonine-protein kinase (LmAKT), and LmFOXO. When LmPrx6 was knocked down, the diapause rate was significantly reduced. The phosphorylation level of LmPTP1B significantly decreased, while the phosphorylation levels of LmIR, LmAKT, and LmFOXO significantly increased. Moreover, we identified effects on two categories of genes downstream of LmFOXO, involved in stress tolerance and the storage of energy reserves. The results showed that the mRNA levels of catalase and Mn superoxide dismutase (Mn-SOD), which enhance stress tolerance, were significantly downregulated after silencing of LmPrx6. The mRNA levels of glycogen synthase and phosphoenolpyruvate carboxykinase (PEPCK), which influence energy storage, were also downregulated after knockdown of LmPrx6. The silencing of LmPrx6 indicates that this regulatory protein may be an ideal target for RNAi-based diapause control of L. migratoria.
Introduction
Insects have evolved diapause to adapt to seasonally unfavorable environments [1]. Diapause not only enables insects to escape a harsh natural environment but also allows the insect population to develop to a consistent stage, such as the same instar, thereby increasing the possibility of male-female pairing and ensuring highly efficient reproduction [2]. Moreover, diapause is a process opposite to reproductive growth, involving arrest or slowing of cell division in response to anticipated stress, thereby reducing metabolism and enhancing stress tolerance [3]. Locusta migratoria is one of the most important agricultural pests worldwide; in autumn, adults of the Huanghua strain in Huanghua, Tianjin, China (38°49′ N, 117°18′ E) enter diapause during overwintering, though diapause varies across geographic locations [1]. Diapause induction in L. migratoria is a trans-generational process from maternal parents to their offspring, induced by short days (light:dark = 10:14) as a maternal effect, which makes L. migratoria one of the most important model insects for investigating the mechanism of insect diapause induction [4][5][6].
Our previous study showed that FOXO in the ISP plays a vital role in regulating locust diapause by phosphorylation [7], which can be promoted by a high level of ROS [8]. In C. elegans [9] and D. melanogaster [10], genetic screens identified FOXO as a key regulator of lifespan. Studies on the target of insulin signaling, daf-16/FOXO, suggest that dauer arrest and lifespan are regulated by FOXO activation [11]. The C. elegans FOXO is a critical target of the insulin/IGF-1 signaling pathway that mediates stress resistance [12]. After knockdown of the FOXO transcript by injection of dsRNA into diapausing mosquitoes, there is an immediate halt in the accumulation of lipid reserves [13]. A previous study showed that the presence of ROS regulates the insulin pathway during fat synthesis [14]. A large amount of ROS can activate NADPH oxidase 4 (Nox4) in early insulin-induced fat synthesis and inhibit the activity of protein-tyrosine phosphatase 1B (PTP1B) [14,15], which plays an essential role in balancing the insulin receptor (IR) and insulin receptor substrate (IRS) [16,17]. Furthermore, IRS can activate proteins downstream of the insulin signaling pathway (ISP), such as RAC serine/threonine-protein kinase (AKT) and forkhead box protein O (FOXO). Activated AKT phosphorylates FOXO downstream of the ISP [18] to mediate diapause, as shown in mosquitoes [19] and silkworms [20]. Therefore, we speculate that a similar molecular mechanism of diapause induction operates in maternal L. migratoria under short-photoperiod conditions (Figure 1). However, the major upstream genes that regulate ROS and FOXO remain unclear, and investigating these genes is important.
L. migratoria has facultative egg diapause, which spans from sensing a short photoperiod at the adult stage to anatrepsis at the egg stage. In this study, we focus on the diapause induction stage of females under a short photoperiod. In this stage, the female adults experience the short photoperiod and form diapause signals, which are transferred to the eggs in the ovary to control egg diapause [21]. Prior to the current experiment, a transcriptome comparison of diapause and non-diapause females was carried out [22]; the FPKM of LmPrx6 was 34.9124 in diapause females versus 9.9304 in non-diapause females, a log2 fold change of about 1.8, meaning that LmPrx6 expression in diapause females is significantly higher than in the non-diapause phenotype. However, the mechanism controlling diapause remained unclear [22].
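The quoted fold change can be reproduced directly from the two FPKM values given above (a trivial check; no other data are assumed):

```python
import math

fpkm_diapause, fpkm_nondiapause = 34.9124, 9.9304
log2_fc = math.log2(fpkm_diapause / fpkm_nondiapause)
print(round(log2_fc, 2))  # ~1.81, the "about 1.8" quoted above
```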
Prx6, a 1-Cys-type peroxide reductase with only one active Cys residue in its peptide chain, is oxidized to Cys-SOH while reducing peroxide; an electron donor then regenerates the enzyme, allowing the catalytic cycle to move forward and reducing the proportion of ROS [23].
Peroxiredoxins (Prxs), scavengers of reactive oxygen species (ROS) produced by active metabolism [24], are a more recently discovered class of antioxidant enzymes alongside superoxide dismutase (SOD) and glutathione peroxidase (GPX). Prxs have an active cysteine residue at the amino terminus, which can function as an electron donor to reduce oxides [25]. Despite limited functional differences, Prxs are classified into three types by their structures: typical 2-Cys (Prx I-IV), atypical 2-Cys (Prx V), and 1-Cys (Prx VI) [26]. Using the Deacon Active Site Profiler (DASP) tool, Prxs are classified into six groups: AhpC-Prx1, BCP-PrxQ, Prx5, Prx6, Tpx, and AhpE [27]. The AhpC-Prx1 subfamily is essentially synonymous with the "typical 2-Cys Prxs" and has also been referred to as the "A" group in the plant field [28]. Members of the AhpC-Prx1 subfamily have been linked to important roles in cellular signaling [29], and some appear to be regulated by phosphorylation [30]. The Prx6 subfamily takes its name from the first Prx to be crystallized, human PrxVI [31], formerly referred to as "ORF6". Prx6 proteins are most similar to the AhpC/Prx1 subfamily, containing a C-terminal extension and forming B-type dimers and, in some cases, higher oligomeric states; the members are predominantly 1-Cys, though 2-Cys representatives exist [26], and the direct reductant of Prx6 subfamily members is generally not known [32]. At present, studies on Prx6 in insects are limited, with most work conducted in humans [33], mice [34], nematodes [35], crustaceans [36], and fish [37].
Figure 1. The hypothetical molecular relationships of LmPrx6 (Prx6 of Locusta migratoria) in the ISP in an adult female: female adults sense the short photoperiod (SP), which enhances LmPrx6 and thereby inhibits ROS. The reduction of ROS leads to the phosphorylation of PTP1B. Phosphorylated LmPTP1B inhibits IR, which would otherwise phosphorylate AKT. The attenuation of AKT leads to the dephosphorylation of FOXO. The dephosphorylated FOXO then translocates to the nucleus to induce the expression of stress-tolerance genes (catalase and Mn-SOD) and energy-storage genes (phosphoenolpyruvate carboxykinase (PEPCK) and glycogen synthase).
Broad-spectrum chemical pesticides contaminate large areas of farmland and select for resistant pests, which is not sustainable. RNAi, a novel environmentally friendly approach, could therefore be a desirable alternative to pesticides. To study the role of LmPrx6, RNAi, a powerful approach for functional analysis of insect genes, was applied to analyze the related genes and proteins.
Insect Rearing
The Locusta migratoria L. colony was originally obtained from fields in Huanghua, Tianjin, China (38°49′ N, 117°18′ E) in November 2007. All insects were maintained until the present study in our lab at the State Key Laboratory for Biology of Plant Diseases and Insect Pests, Institute of Plant Protection, Chinese Academy of Agricultural Sciences. New first instars were kept in 40 cm × 40 cm × 40 cm rearing cages and transferred to 20 cm × 20 cm × 28 cm mesh cages until the fourth instar. The cages were then placed in artificial climate chambers (PRX-250B-30, Haishu Saifu Experimental Instrument Factory, Ningbo, China). The conditions were either 27 °C and 60% RH under a long photoperiod of 16:8 L:D to produce non-diapause eggs, or 27 °C and 60% RH under a short photoperiod of 10:14 L:D to produce diapause eggs.
cDNA Synthesis and LmPrx6 Cloning
The whole body of adult locusts was used to extract total RNA. TRIcom Reagent (Tianmo Biotech, Beijing, China) was used to extract RNA, and cDNA was synthesized with the PrimeScript™ 1st strand cDNA Synthesis Kit (TaKaRa, Dalian, China). By analyzing the transcriptome of the migratory locust, we obtained the sequence of LmPrx6 (GenBank accession: MT563098), and primers were subsequently designed in DNAMAN6. Using the cDNA of L. migratoria as a template, LmPrx6 was amplified with the specific primers LmPrx6-1F and LmPrx6-1R (Table 1). The PCR product was purified using a TIANgel Midi Purification Kit (TIANGEN Biotech, Beijing, China) and ligated into the pMD19-T vector (TaKaRa, Japan). The recombinant plasmid was then transformed into the Trans1-T1 strain of Escherichia coli. A total of 500 µL of LB liquid medium was added to the transformed E. coli, and the mixture was shaken at 200 rpm and 37 °C for 2 h. Bacterial solution (100 µL) was plated on LB solid medium containing 1% ampicillin and incubated at 37 °C for 12 h. Three replicates of a single recombinant colony were transferred into 5 mL of liquid LB culture medium with 1% ampicillin and shaken for 3-6 h at 37 °C; this culture was then used as a PCR template. The reconstructed plasmid was extracted from the transformed strains. Primers used in this experiment were synthesized, and the reconstructed plasmid was sequenced, by Sangon Biotech Company Ltd. (Table 1).
Table 1. List of specific primers used and synthesized for the current study.
Structure and Phylogenetic Analyses of LmPrx6
For subsequent sequence analysis, the Self-Optimized Prediction Method with Alignment (SOPMA) online server (https://npsa-prabi.ibcp.fr/cgi-bin/npsa_automat.pl?page=/NPSA/npsa_sopma.html) was applied to predict the secondary structure of the LmPrx6 protein, and the tertiary structure was predicted through the website https://swissmodel.expasy.org/interactive. Signal peptides were predicted using the hidden Markov model of SignalP 4 [38]. Prx3 of Drosophila busckii (ALC46298.1) was used as an outgroup.
Synthesis and Injection of dsLmPrx6
The dsRNA was generated by in vitro transcription using the T7 RiboMAX system (Promega, Fitchburg, WI, USA) as described in the manufacturer's protocol. Templates for the in vitro transcription reactions were prepared by PCR amplification from plasmid DNA of the cDNA clone of LmPrx6 using the primer pair LmPrx6-2F and LmPrx6-2R, each carrying the T7 polymerase promoter sequence at the 5′-end (Table 1). The length of dsLmPrx6 was 710 bp. A total of 5 µL of dsLmPrx6 (2 µg/µL) as the target-gene treatment, or water as a control, was injected into the ventral part between the 2nd and 3rd abdominal segments of female adults within 72 h after molting, under both short and long photoperiods. Details of the replicates and RNAi methods followed Hao et al. [7].
Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)
cDNA was synthesized from the RNA samples above using M-MLV reverse transcriptase and recombinant RNase inhibitor (Takara, Beijing, China). The expression levels of LmPrx6 and the other four genes were determined by qRT-PCR using the SYBR Premix Ex Taq kit (Takara) per the manufacturer's instructions on an ABI 7500 real-time PCR system (Applied Biosystems, Foster City, CA, USA). qRT-PCR was performed under the following conditions: 95 °C for 10 min; 40 cycles of 95 °C for 15 s and 60 °C for 45 s. Gene expression was quantified using the 2^−ΔΔCt method [39], with β-actin as the internal control for normalization of the data. The specific primers used for qRT-PCR are listed in Table 1.
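For clarity, a minimal sketch of the Livak 2^−ΔΔCt quantification used above; the Ct values are hypothetical, and β-actin serves as the internal control as in the text:

```python
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Livak 2^-ddCt: normalize target Ct to beta-actin, then to a reference sample."""
    ddct = (ct_target - ct_actin) - (ct_target_ref - ct_actin_ref)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: dsLmPrx6-treated sample vs. the ddH2O control
print(relative_expression(24.1, 18.0, 21.3, 18.2))  # fold change = 0.125
```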
Diapause Rate Detection
Locusts of each treatment and replicate were placed in new mesh cages (25 cm × 25 cm × 35 cm) and provided with wheat grown in a greenhouse. Subsequently, 30 adult males were introduced to each replicate for mating. The bottom of the cages was covered with a 5-cm layer of sieved sterile sand, replaced every two days. Mating proceeded for about 10 days until oviposition was observed, and eggs were collected at 48-h intervals for 10 days using a camel-hair brush and transferred into paper cups (10 mm × 5 mm), where the eggs were incubated on vermiculite before being shifted to 27 °C and 60% RH to slow down the development. Around 150 eggs from 3-4 pods were used in each experimental replicate. Eggs were kept at 27 °C for 20 days until eclosion of the 1st instar nymphs ceased (D1). To account for non-viable eggs, all remaining unhatched eggs were kept at 4 °C for 60 days to allow ample time to break diapause; afterwards, they were incubated at 27 °C for 20 days and any further 1st instar emergence was recorded (D2). The diapause rate (DR) was calculated as: DR (%) = D2/(D1 + D2) × 100%.
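The diapause-rate formula translates directly into code; the counts below are hypothetical and merely sum to roughly the 150 eggs per replicate mentioned above:

```python
def diapause_rate(d1, d2):
    """DR (%) = D2 / (D1 + D2) * 100, where D1 = nymphs hatched at 27 degC and
    D2 = nymphs hatched only after the 60-day chill at 4 degC."""
    return 100.0 * d2 / (d1 + d2)

print(diapause_rate(d1=13, d2=137))  # ~91.3% for a hypothetical SP replicate
```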
LmPTP1B, LmIR, LmAKT, and LmFOXO Phosphorylation Level Detection
Enzyme-linked immunosorbent assays (ELISA) were used to measure the quantities of LmPTP1B, LmP-PTP1B, LmIR, LmP-IR, LmAKT, LmP-AKT, LmFOXO, and LmP-FOXO in L. migratoria according to the manufacturer's instructions for catalogue nos. SU-B97219, SU-B97220, SU-B97124, SU-B97125, SU-B97136, SU-B97137, SU-B97140, and SU-B97141 (Collodi Biotechnology Co., Ltd., Quanzhou, China). The methods followed Hao et al. [7]. The samples were homogenized in 1 mL of phosphate-buffered saline (PBS), and the resulting suspension was subjected to ultrasonication to further disrupt the cell membranes. After the homogenates were centrifuged for 15 min at 5000 rpm, the supernatants were collected and stored at −20 °C until further analysis. All required reagents and samples, including the micro ELISA strip plate (12 × 4 strips), standards (0.5 mL × 6 vials), 3 mL of sample diluent, 5 mL of horseradish peroxidase (HRP)-conjugate reagent, 15 mL of 20× wash solution, 3 mL of stop solution, 3 mL of chromogen solution A, 3 mL of chromogen solution B, two closure plate membranes, and a sealed bag, were prepared and kept at room temperature (18-25 °C) for 30 min prior to the assay. We set up standard wells, sample wells, and blank (control) wells, and then added 50 µL of standard to each standard well, 50 µL of sample to each sample well, and 50 µL of sample diluent to each blank/control well. Then, 100 µL of HRP-conjugate reagent was added to each well; the plate was covered with an adhesive strip and incubated for 60 min at 37 °C. The microtiter plates were rinsed four times with wash buffer (1×), followed by gently mixed chromogen solution A (50 µL) and chromogen solution B (50 µL) added to each well in succession, protected from light, and incubated for 15 min at 37 °C. Finally, 50 µL of stop solution was added to each well. During the process, the well color changing from blue to yellow indicated proper uniformity; a colorless or green color is usually a sign of non-uniformity, in which case the plate was gently tapped to ensure thorough mixing. The optical density (OD) at 450 nm was read using a micro ELISA strip plate reader (Multiskan™ FC 51119000) within 15 min of adding the stop solution. Standard curves of LmPTP1B, LmP-PTP1B, LmIR, LmP-IR, LmAKT, LmP-AKT, LmFOXO, and LmP-FOXO were constructed to quantify each protein in the samples. The phosphorylation levels of LmPTP1B, LmIR, LmAKT, and LmFOXO were then calculated as: LmP-PTP1B level = (P-PTP1B)/(PTP1B + P-PTP1B), LmP-IR level = (P-IR)/(IR + P-IR), LmP-AKT level = (P-AKT)/(AKT + P-AKT), and LmP-FOXO level = (P-FOXO)/(FOXO + P-FOXO). The regression equation of the standard curve was used to determine specificity.
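A minimal sketch of the final ratio computed from the ELISA readouts; the protein amounts are hypothetical values as read off the standard curves, while the formula is taken directly from the text:

```python
def phosphorylation_level(p_form, unphos_form):
    # e.g. LmP-FOXO level = (P-FOXO) / (FOXO + P-FOXO)
    return p_form / (unphos_form + p_form)

# Hypothetical amounts (ng/mL) interpolated from the standard curves
print(phosphorylation_level(p_form=3.2, unphos_form=5.1))  # ~0.39
```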
ROS Activity Detection
Rapid ELISA-based quantification was used to detect ROS activities in the female bodies of L. migratoria according to the manufacturer's instructions for catalogue WLB-9124701 (Welab Biotechnology Co., Ltd., Beijing, China). The body samples were homogenized in 1 mL of PBS, and the resulting suspension was subjected to ultrasonication (power = 20%, 3 s on, 10 s off, 30 cycles) to further disrupt the cell membranes. The homogenates were then centrifuged for 15 min at 5000 rpm, and the supernatants were collected and stored at −20 °C until further analysis. All required reagents and samples were prepared and kept at room temperature for 30 min prior to the assay. We set up blank (control) wells, sample wells, and standard wells, then added 50 µL of sample diluent to each blank/control well, 50 µL of sample to each sample well, and 50 µL of standard to each standard well. Then, 50 µL of HRP-conjugate reagent was added to each well; the plate was covered with an adhesive strip and incubated for 30 min at 37 °C. The microtiter plates were rinsed five times with wash buffer, followed by gently mixed chromogen solution A (50 µL) and chromogen solution B (50 µL) added to each well in succession, protected from light, and incubated for 10 min at 37 °C. Finally, 50 µL of stop solution was added to each well. During the process, the well color changed immediately from blue to yellow, confirming uniformity. The optical density (OD) at 450 nm was read using a micro ELISA strip plate reader (Multiskan™ FC 51119000, Thermo Fisher Scientific Inc., Waltham, MA, USA) within 15 min of adding the stop solution. A standard curve of ROS was constructed to quantify the amount of ROS in each sample.
2.9. Catalase, Mn-SOD, Glycogen Synthase, and PEPCK Activities in the Adult Diapause Females
Spectrophotometry was used to detect the catalase, Mn-SOD, glycogen synthase, and PEPCK activities in the female bodies of L. migratoria according to the manufacturer's instructions for catalogues CAT-2-Y, SOD-2-Y, GCS-2-Y, and PEPCK-2-Y, respectively (Comin Biotechnology Co., Ltd., Suzhou, China). The female bodies of each treatment (0.1 g of tissue) were homogenized in 1 mL of PBS, and the resulting suspensions were subjected to ultrasonication (power = 20%, 3 s on, 10 s off, 30 cycles) to further disrupt the cells. The homogenates were then centrifuged for 10 min at 8000 rpm, and the supernatants were collected and stored at −20 °C until further analysis. We added 90 µL of sample diluent to each blank/control well, 90 µL of sample to each sample well, and 90 µL of standard to each standard well. In total, 240 µL of Reagent I, 6 µL of Reagent II, 180 µL of Reagent III, and 510 µL of Reagent IV were successively added to each well, and the plate was gently tapped to ensure thorough mixing. All required reagents and samples were prepared and kept at room temperature for 30 min. The optical densities (OD) at 240, 560, 340, and 340 nm for the catalase, Mn-SOD, glycogen synthase, and PEPCK assays, respectively, were read using a micro ELISA strip plate reader (Multiskan™ FC 51119000, Thermo Fisher Scientific Inc., Waltham, MA, USA). Standard curves of catalase, Mn-SOD, glycogen synthase, and PEPCK were separately constructed to quantify the amounts of each analyte in each sample.
LmPrx6 Cloning and dsLmPrx6 Synthesis
LmPrx6 (GenBank accession: MT563098) was cloned from L. migratoria cDNA. The cloned LmPrx6 sequence was identical to that from the transcriptome. The sequence of LmPrx6 (Figure 2A) contained 672 nucleotides, and the dsLmPrx6 template with the two T7 promoter sequences (a total of 38 nucleotides) (Figure 2B) contained 710 nucleotides.
Structure and Phylogenetic Analyses of LmPrx6
Phylogenetic analyses using 15 proteins demonstrated that LmPrx6 belongs to the Prx6 family, and only the motif TPVCT contains a Cys (Figure 3A). The secondary structure of the LmPrx6 protein was analyzed with the SOPMA online server (https://npsa-prabi.ibcp.fr/cgi-bin/npsa_automat.pl?page=/NPSA/npsa_sopma.html), and the predicted 3-D structure was analyzed via https://swissmodel.expasy.org/interactive (Figure 3B). The four secondary-structure elements (α-helix, extended strand, β-turn, and random coil) in LmPrx6 accounted for 28.18%, 19.55%, 5.00%, and 47.27%, respectively. Moreover, there was no signal peptide in the protein. Evolutionary analyses were conducted in MEGA6 [38]. The evolutionary history was inferred using the neighbor-joining method [40]; the optimal tree, with a sum of branch lengths of 3.94690523, is shown in Figure 3C. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches [41]. The evolutionary distances were computed using the Poisson correction method [42] and are in units of the number of amino acid substitutions per site. The analysis involved 21 amino acid sequences; all ambiguous positions were removed for each sequence pair, leaving a total of 220 positions in the final dataset. The multiple sequence alignment is shown in Figure 3C.
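As a rough sketch of the tree-building step, the snippet below runs a neighbor-joining reconstruction with Biopython; the alignment file name is hypothetical, identity distances are used instead of MEGA6's Poisson correction, and the 1000-replicate bootstrap is not reproduced here:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical aligned FASTA of the 21 Prx amino acid sequences
alignment = AlignIO.read("prx6_alignment.fasta", "fasta")

dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor-joining tree
tree.root_with_outgroup("Prx3_Drosophila_busckii")           # ALC46298.1 as outgroup
Phylo.draw_ascii(tree)
```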
Functional Identification of LmPrx6 by RNAi
A previous study [22] on the transcriptomes of adults under long (16:8 L:D) and short (10:14 L:D) photoperiods showed that LmPrx6 was involved in diapause regulation in L. migratoria. To verify the function of LmPrx6 in locust diapause, the baseline relative mRNA level of LmPrx6 was first determined in L. migratoria reared under both short photoperiods (SPs) and long photoperiods (LPs). LmPrx6 expression in SP-treated locusts was 3.5 times higher than in LP-treated locusts (Figure 4A). dsLmPrx6 was then injected into female L. migratoria adults to knock down LmPrx6 under LP and SP, followed by confirmation of RNAi efficiency via qRT-PCR; after knockdown, there was no significant difference in LmPrx6 expression between LP- and SP-treated locusts (Figure 4A). The significant (p < 0.05) change in LmPrx6 expression between the dsLmPrx6 treatments and CK (ddH2O) under both photoperiods indicated acceptable RNAi efficiency (Figure 4B). Under LP, the average egg diapause rate in the dsLmPrx6 treatment (2.6%) was significantly lower (Figure 4C) than in the CK (4.2%). Similarly, under SP, the average egg diapause rate in the dsLmPrx6 treatment (65.6%) was significantly lower (Figure 4D) than in the control (91.4%). This shows that knockdown of LmPrx6 can inhibit diapause of L. migratoria under both photoperiods.
Impact of LmPrx6 on the Phosphorylation Level of Downstream PTP1B, IR, AKT, and FOXO
A prior study in our lab showed that the ISP plays a vital role in regulating locust diapause [43]. The phosphorylation levels of proteins involved in the ISP, including LmIR, LmAKT, LmFOXO, and LmPTP1B, were therefore analyzed; ELISA was performed to determine the phosphorylation level of these four key proteins.
ROS Activity Regulated by LmPrx6
To identify the effect of dsLmPrx6 injection on ROS, the ROS activities of dsLmPrx6-treated and control locusts under both SP and LP were determined. The ROS activity of SP-treated locusts was significantly (t = 20.2633, p < 0.0001) higher than that of LP-treated locusts (Figure 6A). Under SP, the ROS activity in the dsLmPrx6 treatment was 632.222 IU/g, significantly (t = 18.3069, p < 0.0001) lower (Figure 6B) than that of the CK (ddH2O) (757.096 IU/g). Similarly, under LP, the ROS activity in the dsLmPrx6 treatment (535.007 IU/g) was significantly (t = 14.6086, p = 0.000128) lower than that of the control (611.516 IU/g) (Figure 6B). These results indicate that LmPrx6 probably positively regulates ROS activity and thereby promotes diapause induction under both LP and SP. All results are expressed as means ± standard error (SE) of three replicates. * Indicates a probability level of p < 0.05 by Student's t-test.
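The reported comparisons are two-sample Student's t-tests over three replicates per group; a minimal sketch with hypothetical replicate values chosen near the reported group means:

```python
import numpy as np
from scipy import stats

# Hypothetical ROS activities (IU/g), three replicates per group under SP
ck_sp = np.array([755.2, 758.4, 757.7])  # ddH2O control (mean ~757.1)
ds_sp = np.array([630.9, 633.5, 632.3])  # dsLmPrx6-treated (mean ~632.2)

t, p = stats.ttest_ind(ck_sp, ds_sp)
print(f"t = {t:.3f}, p = {p:.6f}")
```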
Discussion
Insects enter a quiescent stage by sensing changes in the external environment, such as photoperiod, temperature, and food availability [2], often accompanied by inhibition of metabolism, for example slow growth or even stagnation and a decreased respiratory rate [3]. Locusta migratoria exhibits facultative egg diapause, and egg diapause has been shown to be influenced by the maternal photoperiod in L. migratoria. Although diapause varied across geographic locations in a previous study [1], our Huanghua strain (collected in Huanghua, Tianjin, China; 38°49′ N, 117°18′ E) enters diapause under short-photoperiod (SP) conditions. Before the current experiment, transcriptome analysis of the diapause and non-diapause phenotypes showed that LmPrx6 was involved in diapause of maternal locusts, but the mechanism was not clear.
LmPrx6 was verified by alignment with Prx6 proteins from 14 other species, and the phylogenetic tree showed that Prx6 family members are highly conserved. All 15 proteins share the same TPVCT motif, indicating that the single conserved Cys is probably the active one.
Previous studies showed that the presence of Prx6 inhibits the activity of ROS in vivo [25,46], while excess ROS can directly inactivate the transcription factor FOXO downstream of the ISP by phosphorylation [47], causing weaker signaling and a lower diapause rate. In the current study, qRT-PCR was first applied to check the mRNA level of LmPrx6. LmPrx6 expression under SP was significantly higher than under the long photoperiod (LP) (Figure 4A), suggesting that LmPrx6 might positively regulate diapause. To test this hypothesis, RNAi was performed to probe LmPrx6 function. A previous study showed that RNAi sensitivity in the migratory locust varies among strains rather than among genes within one species [48]. Our results showed that LmPrx6 expression decreased by 93.6% and 66.1% under SP and LP, respectively, after knockdown of LmPrx6 (Figure 4B), suggesting that RNAi efficiency is acceptable in the Huanghua strain. Moreover, the diapause rate significantly decreased after dsLmPrx6 injection (Figure 4C). The RNAi result was consistent with the qRT-PCR outcome, revealing that LmPrx6 promotes diapause induction in L. migratoria under both photoperiods.
To confirm the relationship between LmPrx6 and the ISP at the protein level, the phosphorylation level of LmFOXO was determined in CK (ddH2O) and dsLmPrx6-treated locusts under both LP and SP. Results demonstrated that the phosphorylation level of LmFOXO, which reflects LmFOXO inactivity, increased after knockdown of LmPrx6, indicating that LmPrx6 could activate LmFOXO to induce diapause of L. migratoria. Moreover, we examined the upstream ISP protein LmPTP1B [49] and found that phosphorylation of the LmPTP1B protein was significantly reduced, while LmFOXO phosphorylation was significantly increased. Phosphorylated LmFOXO, moving from the nucleus to the cytoplasm, cannot regulate the expression of fat-synthesis genes, and the ISP is thereby inhibited. In contrast, when LmPrx6 was attenuated, the mRNA level of FOXO decreased sharply (Figure 5D), indicating that LmPrx6 probably also functions at the transcriptional level.
In contrast, a large amount of ROS can inhibit the activity of PTP1B [50], which plays an essential role in balancing IR and the insulin receptor substrate (IRS) [16,17], and ultimately increases FOXO activity [51]. A previous report also showed that the ROS level increased when diapause induction of L. migratoria occurred under SP [37]; thus, our recent findings are consistent with our previously verified results for rai1. Additionally, we found that ROS activities decreased after knockdown of LmPrx6 under both LP and SP, a pattern similar to that observed after knockdown of rai1 under SP [22]. These results indicate that LmPrx6 modulates LmFOXO activity, which ultimately induces locust diapause.
Enhanced stress tolerance is one of the important features of diapause and is essential for successful overwintering. Several lines of evidence suggest that genes encoding two antioxidant enzymes, catalase and superoxide dismutase-2, are critical in generating these characteristics during diapause in overwintering adults of the mosquito Culex pipiens [45]. Mn-SOD has already been identified as an important downstream target gene of FOXO in mice [52] and nematodes [53]. In L. migratoria, the expression of Mn-SOD and SOD was significantly higher in samples under SP than under LP [54]. The mRNA expression levels of catalase and Mn-SOD in the whole body of L. migratoria and the enzyme activities of these two proteins showed consistent trends (Figures 5C and 7A). After knockdown of LmPrx6, the mRNA levels of these genes and the enzyme activities of the proteins both decreased (Figures 5D and 7B). This suggests that LmFOXO positively regulates catalase and Mn-SOD, which may be delivered to eggs, thereby enhancing the stress tolerance of diapause eggs; it also suggests that catalase and Mn-SOD are probably critical links between the ISP and adult diapause induction. Efficient storage of energy reserves during diapause is crucial not only for surviving prolonged periods of developmental arrest but also for maximizing reproductive success once diapause has terminated and development resumes [45]. Glycogen synthesis is regulated by glycogen synthase kinase-3 (GSK-3) in response to insulin, which is involved in diapause processing in Bombyx mori eggs [55]. Insulin stimulation results in the phosphorylation and inactivation of GSK-3, rendering it incapable of inhibiting glycogen synthase activity and thus leading to increased glycogen synthesis [55][56][57][58]. In Aphidius gifuensis, glycogen synthase involved in trehalose synthesis was differentially expressed between diapause and non-diapause individuals [59]. In this paper, the mRNA expression level and the enzyme activity of glycogen synthase (Figures 5D and 7B) in female adults were both significantly downregulated along with the attenuated expression of LmFOXO after LmPrx6 interference. These results suggest that dsLmPrx6 might indirectly prevent the delivery of sufficient nutrition to eggs for completing the diapause process. Phosphoenolpyruvate carboxykinase (PEPCK) is part of the gluconeogenesis pathway and shows higher expression during diapause in Sarcophaga crassipalpis [60]. Low expression of PEPCK is related to reduced pyruvate biosynthesis, and the resulting low pyruvate level in diapause-destined pupae induces lifespan extension or diapause via low metabolic activity [61]. In our results, the mRNA expression level and the enzyme activity of PEPCK (Figures 5D and 7B), which might contribute to energy storage, were also significantly downregulated along with the reduction of LmFOXO after LmPrx6 interference.
Conclusions
We successfully cloned LmPrx6 for the first time from L. migratoria. Structure and phylogenetic analyses showed that LmPrx6 is highly conserved across species, with the motif TPVCT probably containing the only active Cys. The expression of LmPrx6 was higher in female locusts under diapause-inducing conditions than in non-diapause females, consistent with the transcriptome data. Female adults sensing the short photoperiod upregulate LmPrx6, which acts on ISP proteins through ROS. Finally, the activities of diapause-related enzymes (catalase, Mn-SOD, glycogen synthase, and PEPCK) were also significantly enhanced by LmPrx6. The regulatory mechanism by which LmPrx6 promotes diapause in L. migratoria is associated with decreased activities of ISP proteins and enhanced activities of FOXO and diapause-related proteins. When autumn locusts lay eggs, transgenic approaches or application of dsLmPrx6 could be used as a green pesticide to control locusts. Targeting LmPrx6 in this manner could provide an efficient and environmentally sustainable approach to reduce the agricultural damage caused by L. migratoria.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,576.8 | 2020-11-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
The Upper Bound Estimation of Abelian Integral for a Class of Quadratic Reversible System under Small Perturbations
In this article, using the Riccati equation method, we investigate the maximal number of isolated zeros of the Abelian integral for a class of quadratic reversible systems (r11) belonging to genus one, under arbitrary 3rd-, 2nd-, and 1st-degree polynomial perturbations. Specifically, we aim to find an upper bound for the number of the system's limit cycles (a special dynamic behavior in a stable state, characterized by the existence of isolated periodic orbits). The Abelian integral is a function of h, so when studying its maximal number of zeros we consider not only the highest degree of the relevant function but also the parity of the function and the range of values of h. Through variable substitution, a smaller upper bound can then be obtained: our findings show that the maximal number of isolated zeros under 3rd-, 2nd-, and 1st-degree polynomial perturbations is 12, improving upon previous results where the upper bound was 34 for the 3rd-degree perturbation and 22 for the 2nd- and 1st-degree perturbations. This study thus represents an improvement upon previous research.
Introduction and Main Conclusion
Consider the quadratic reversible system

ẋ = P(x, y) + ε u(x, y),  ẏ = Q(x, y) + ε v(x, y),   (1)

where ε is a small parameter, P(x, y) and Q(x, y) are quadratic polynomials in x, y, and u(x, y), v(x, y) are polynomials of degree n (n = 1, 2, 3) in x, y. System (1) is a quadratic reversible system with a center and is integrable when ε = 0. In this case, the function H(x, y) serves as a first integral of the system with an integrating factor G(x, y), which permits the construction of a continuous periodic domain {Γ_h : H(x, y) = h, h ∈ Σ}, where Σ is the largest open interval on which the closed orbits exist. The primary focus of this article is to determine how many limit cycles can arise from the periodic domain Γ_h of system (1) for any sufficiently small value of ε. It is well established that the number of isolated zeros of the Abelian integral

I(h) = ∮_{Γ_h} G(x, y) [u(x, y) dy − v(x, y) dx]   (2)

sets a ceiling on the maximum number of limit cycles of system (1) that can exist in any compact region of the periodic orbits [1][2][3][4].
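To make the role of I(h) concrete, the sketch below numerically evaluates an Abelian integral for a toy center (not system (r11)): H = (x² + y²)/2 with G ≡ 1 and the perturbation u = x(1 − x² − y²), v = y(1 − x² − y²). A simple zero of I(h) signals a limit cycle bifurcating from the corresponding periodic orbit; here it sits at h = 1/2, i.e., on the circle x² + y² = 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def abelian_integral(h):
    """I(h) = integral of (u dy - v dx) over the level curve x^2 + y^2 = 2h."""
    r = np.sqrt(2.0 * h)
    def integrand(theta):
        x, y = r * np.cos(theta), r * np.sin(theta)
        u = x * (1.0 - x**2 - y**2)
        v = y * (1.0 - x**2 - y**2)
        # dx = -r sin(theta) dtheta, dy = r cos(theta) dtheta
        return u * r * np.cos(theta) + v * r * np.sin(theta)
    value, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return value

h_star = brentq(abelian_integral, 0.1, 1.0)  # locate the isolated zero of I(h)
print(h_star)  # ~0.5, the limit cycle on x^2 + y^2 = 1
```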
Reference [5] was the first to use the Riccati equation method to study the maximal number of zeros of the Abelian integral of system (1). Reference [6] divided quadratic reversible systems of genus one into 22 classes, denoted (r1)-(r22). Using the Riccati equation method, reference [7] explored the maximal number of zeros obtainable for the Hamiltonian systems (r1) and (r2) when n is small; reference [8] studied systems (r3)-(r6); references [9][10][11][12] studied systems whose integrating factor makes nearly all trajectories sixth-degree curves. Reference [9] provides the following theorem on the upper bound of the Abelian integral for system (r11).

Theorem 1.1 [9]. For arbitrary n-degree polynomials u(x, y) and v(x, y), the maximal number of isolated zeros of the Abelian integral I(h) of system (r11) depends linearly on n; in particular, the maximum is 34 when n = 3 and 22 when n = 1, 2.

The central finding of this paper can be summarized in the following theorem.

Theorem 1.2. For arbitrary n-degree polynomials u(x, y) and v(x, y), the maximal number of isolated zeros of the Abelian integral I(h) of system (r11) is 12 when n = 1, 2, 3.

When u(x, y) and v(x, y) are arbitrary polynomials, equation (2) shows that the Abelian integral I(h) of Theorem 1.2 can be expressed in terms of the basic integrals introduced below.
Simple Expression for the Abelian Integral I(h)
To simplify the derivation, the following basic integrals will be used throughout the calculations: I_{r,s}(h) = ∮_{Γ_h} x^r y^s dx. Since the periodic orbit Γ_h is symmetric about the x-axis, it can be concluded that I_{r,s}(h) = 0 when s is even; therefore, only odd values of s need to be taken into account. Then I(h) can be written in terms of the I_{r,s}(h), following formulas (21) and (22) of [9].
Riccati Equation
In [9], the following lemma is established for system (r11).

Lemma 3.1 [9]. If n ≥ 1, then I(h) obeys a Riccati equation of the form

B(h)E(h) W′(h) = A(h) W²(h) + D(h) W(h) + G(h),

where A(h), B(h), D(h), E(h), and G(h) are explicitly computable functions of h given in [9], with G(h) expressed through B(h), E(h), F(h) and their derivatives. In order to fully demonstrate the validity of Theorem 1.2, we will use the following lemma.

Lemma 4.1 [7]. [Zero-counting criterion for solutions of Riccati equations; see [7] for the precise hypotheses on the functions involved.]

Finally, we utilize the technique of Riccati equations to prove Theorem 1.2. Proof. Using Lemma 3.1 and Lemma 4.1, and from equation (21), we can obtain ♯I(h) ≤ 2 + 0 + 1 + 3 + 6 = 12.
Conclusion
For system (r11), this paper applies the Riccati equation method to study the upper bound of the number of isolated zeros of the Abelian integral under arbitrary 3rd-, 2nd-, and 1st-degree polynomial perturbations. When studying the maximal number of zeros of the function of h, we consider not only the highest degree of the relevant function but also the parity of the function and the range of values of h. We obtained the following result: when n = 1, 2, 3, the upper bound is 12. These results improve upon the original ones.
4. The Upper Bound Estimation of the Abelian Integral
In this paper, better results are obtained by considering not only the degrees of the functions of h, but also the range of values for h and the parity of the functions. The purpose of this section is to utilize the Riccati equation method to demonstrate Theorem 1.2. Let ♯I(h) denote the number of zeros of the Abelian integral that lie within the interval Σ.
Table 1. Comparison between the new results and the original ones. | 1,526 | 2023-10-01T00:00:00.000 | [
"Mathematics"
] |
Encoding Human Visual Perception Into Deep Hashing for Aerial Image Classification
Accurately predicting the labels of high-resolution images is an indispensable technique in remote sensing. In this article, we propose a novel image classification model that represents each aerial image by optimally encoding a gaze shifting path (GSP), while remaining robust to incorrect semantic labels. More specifically, for each aerial image, we first detect visually/semantically salient object patches. To encode their spatial attributes, we construct small graphs (graphlets) composed of spatially adjacent object patches and extract GSPs on them using an active learning algorithm. A GSP approximates how human gaze sequentially attends to the salient regions of each aerial image. Subsequently, a deep hashing framework is proposed to exploit the semantics of these GSPs, in which three attributes are seamlessly integrated: label-noise reduction, visual-feature-invariant semantics, and adaptive data-graph updating. The proposed framework is solved iteratively, converting each graphlet into binary hash codes. Finally, the GSP-based codes of each aerial image are quantized into feature vectors for visual understanding. To qualitatively and quantitatively assess how GSPs affect aerial image classification, we observe that the proposed classifier is more accurate than its competitors, and that GSPs produced by Alzheimer's patients are clearly distinguishable from those produced by typical observers, confirming that the encoded gaze behavior contributes to the competitive classification performance.
Aerial images are widely used for monitoring disasters such as fires, floods, earthquakes, and land subsidence. In practice, human gaze allocation can be naturally characterized by a path, wherein each edge connects pairwise sequentially perceived objects or their parts. In computer vision, dozens of shallow/deep visual classification/parsing methods have been proposed to describe aerial photos. Representative approaches include the following: 1) multiple instance learning/convolutional neural network (CNN)-based object localization using weak labels [1], [2]; 2) graphical-model-based semantic propagation for aerial photo parsing [3], [4]; and 3) carefully designed deep architectures for semantic annotation of aerial pictures [5], [6], [7]. Experiments and commercialized systems support their accuracy, robustness, and extensibility. To our best knowledge, however, the existing models cannot optimally characterize aerial images, for the following reasons.
1) In practice, each aerial image may contain tens to hundreds of ground objects with varied spatial distributions. Efficiently and effectively exploiting their underlying semantics is difficult. Potential challenges include: a) how to mathematically model the complex spatial interactions among ground objects, and b) how to design a deep architecture that transforms the modeled spatial interactions into fixed-length visual features. Besides, encoding the diverse spatial interactions within each aerial image into a standard classifier (e.g., SVM or softmax [8]) is another challenge. The large number of objects within each aerial image makes it impossible to exhaustively annotate all the ground objects at pixel level. Owing to the remarkable progress in weakly supervised learning, only image-level labels are required for deriving region-level semantics. Thus, to uncover the regional semantics inside each aerial image, we have to exploit the weakly supervised user-provided labels associated with it. However, these user-provided labels might be subjective and even corrupted. 2) In practice, constructing a noise-tolerant label purification framework is a challenging undertaking; for an effective aerial image classification pipeline, it is necessary to characterize the label distributions in the feature space exactly. Nevertheless, owing to the imperfect user-provided labels, the initially fitted sample distribution might be suboptimal. Ideally, we require an accurate model that adaptively updates the sample distribution during label refinement. Apparently, constructing such a multi-attribute optimization model requires non-trivial expertise. To handle, or at least alleviate, these challenges, we propose a biologically inspired aerial image classification framework.
The key novelties are twofold: 1) sequentially selecting multiple visually/semantically prominent graphlets to establish gaze shifting paths (GSPs), and 2) a binary matrix factorization (MF) that deeply converts the GSP of each aerial image into binary hash codes, wherein the potentially incorrect semantic labels are jointly optimized. More specifically, given a large number of images, each of which may be associated with one or multiple corrupted semantic labels, we first extract a set of appearance-aware image patches (namely, object patches) from each aerial image. Next, we link sets of spatially adjacent object patches to form multiple graphlets, based on which an active learning algorithm [9] is leveraged to construct a GSP that captures how humans sequentially attend to visually/semantically salient regions within each aerial image. Noticeably, GSPs are more descriptive than conventional visual saliency maps, since gaze shifting sequences can be encoded. Thereafter, a noise-tolerant MF converts the graphlets into the corresponding binary hash codes, based on which pairwise graphlets can be compared quantitatively and rapidly. The MF seamlessly combines three attributes, e.g., optimal label matrix refinement and image-level to patch-level semantic transfer. Based on the calculated binary codes of each graphlet, the Boolean codes of each GSP can be obtained accordingly. By calculating the binary hash codes of the entire set of training GSPs, we convert the graphlets inside each aerial image into a kernel-induced feature vector, based on which a multi-class SVM is trained for aerial image classification. Extensive quantitative comparisons with state-of-the-art deep recognition models have demonstrated the competitiveness of our classifier.
In addition, to qualitatively and quantitatively show the usefulness of GSPs in aerial image classification, we compare the GSPs predicted by our approach with those recorded from 37 normal observers. We observe that the predicted GSPs are over 90% consistent with those recorded from humans. We also record GSPs from 33 Alzheimer's patients, whose GSPs differ greatly from both our predicted ones and those of the normal observers. Correspondingly, the accuracy obtained with the patients' GSPs is far from sufficient, indicating that visual perception impairment degrades aerial image classification.
In total, this work makes the following three contributions: 1) a weakly supervised aerial image classification model that intelligently avoids incorrect image-level labels; 2) an upgraded MF that seamlessly encodes three attributes for calculating the binary codes of each graphlet; and 3) a large user study involving 70 participants (normal observers and Alzheimer's patients) that quantitatively analyzes the usefulness of GSPs in aerial image classification.
II. RELATED WORK
Many graphical models [10] have been proposed to encode the sophisticated topologies of multiple image patches.
Demirci et al. [11] proposed to infer the many-to-many correspondences between vertices of two noisy, weakly annotated graphs. Felzenszwalb and Huttenlocher [12] modeled the deformable high-order relationships of object parts by springs and further established image-to-image matchings by cost-function minimization. In [13], the graph vertices represent both the predictable and unpredictable object parts; thereby, each object's category label is inferred from those of its spatial neighbors. Duchenne et al. [14] proposed a graph kernel machine that derives graph matchings for labeling object categories. Lin et al. [15] formulated a semantic parsing algorithm using an object-aware layered graph; it dynamically updates the graphical model, which progressively fuses a pre-defined stochastic grammar. Furthermore, Lin et al. [16] designed a hierarchical graphical model by decomposing compositional objects into different parts; the multiple object parts, coupled with their relationships, are delineated by an AND-OR graph encoding the stochastic attributes. Zhang et al. [17] proposed a deep graph matching architecture that inspects the keypoints derived from human poses; based on a graph-matching algorithm, this method can recover the keypoints in images and 6-D human poses. To aid graph matching, Tang et al. [18] integrated a topology-aware quadratic constraint into a unified framework, aiming to enhance the unary geometric prior and pairwise textural context. Notably, the abovementioned graphical models are all dataset specific. Actually, we need a principled method that describes all types of aerial images without any prior knowledge.
Bronstein et al. [19] proposed the well-known cross-modality metric learning, based on which they extended unimodal hashing to the multimodal setting. Kumar et al. [20] generalized the standard unimodal spectral hashing algorithm [21] to the multimodal scenario. Zhu et al. [4] modeled each feature modality by a low-rank anchor graph; afterward, a shared Hamming space is derived in the anchor graph space, and the intra- and intermodality correlations are simultaneously exploited using a generative model. Yu et al. [22] designed a discriminative coupled dictionary hashing framework for multi-source media retrieval; they characterized multiple feature modalities by sparse codes learned from a shared, semantically discriminative dictionary. Song et al. [23] constructed a Hamming space by hypothesizing that the inter- and intramodality similarities are consistent; correspondingly, the hash function is calculated via linear regression. Zhu et al. [4] represented each sample by a linear combination of its multiple neighbors; afterward, they projected each sample onto the latent space by MF, wherein the latent semantic features can be implicitly uncovered. However, only a small fraction of the samples is used for hashing model learning in [4]. By hypothesizing that each sample shares unified hash codes across different feature modalities, collective MF [24] was proposed for hashing. Liu et al. [25] employed fusion similarity to form a Hamming space that reflects multimodal similarity. More recently, a stream of deep hashing algorithms [26], [27], [28], [29], [30] has been designed; they typically focus on formulating objective functions to calculate discriminative and compact hash codes, based on which promising performances have been achieved.
Fig. 1. GSPs recorded from five volunteers, marked by differently colored arrows, and the GSP predicted by our adopted active learning method [9].
Conclusively, the abovementioned shallow/deep hashing models cannot thoroughly handle noisy labels (as shown in Fig. 2). Moreover, the data distribution cannot be adaptively updated for discriminatively learning hash codes.
A. GSP Extraction
Practically, there are many objects (or object parts) inside each aerial image. According to recent biological and psychological studies [31], humans tend to attend to a small number of visually/semantically prominent objects during visual perception. When interpreting an image, the human vision system perceives the foreground salient regions first, e.g., salient landmarks, while the remaining regions are left almost unprocessed. Apparently, we should incorporate such human visual perceptual experience into aerial image interpretation. In our work, an off-the-shelf object proposal extractor combined with a geometry-preserving active learning algorithm is employed to select the foreground salient object patches. In aerial image categorization, it is significant to robustly recognize complicated road networks, e.g., *-like, tree-like, and grid-like topologies, as exemplified in Fig. 1. In practice, these topologies can be naturally represented by small graphs, wherein each edge connects pairwise spatially neighboring streets. In our work, these small graphs are referred to as graphlets. We employ the well-known BING [32] operator as the objectness measure. Noticeably, after applying the BING detector, there are still many object patches inside each aerial image. In practice, humans typically attend to fewer than ten regions within each aerial image. To imitate this, a powerful active learning method (for the geometry-preserving active learning, refer to [9]) is utilized to discover K (K < 10) representative object patches from each aerial image. It incorporates two features: 1) each aerial image's spatial layout and 2) image-level semantics of object patches, as shown in Fig. 3.
Fig. 3. Illustration of spatially adjacent object patches. The red box denotes object patch (3,2,3) while the green one represents object patch (2,2,1); they are spatially adjacent. In our work, if cell (i, j, k) is over 90% covered by an object patch, then we define this object patch's location as (i, j, k), where i denotes the pyramid level and j and k represent the xy-coordinates, respectively.
Based on the top K object patches, each graphlet is built by random walk [33] over the spatially adjacent object patches. By leveraging a three-layer spatial pyramid, pairwise object patches are deemed adjacent when their cells (determined by their locations) are bordering. Next, a starting object patch is randomly selected, and a random walk process is conducted to construct each graphlet. Based on the vector representation of each graphlet [34], a well-known active selection scheme [9] is adopted to select the K representative graphlets from each aerial image. The selection criterion is that the K chosen graphlets should maximally reconstruct the remaining ones within the aerial image. In practice, the active learning objective [9] is solved by an iterative algorithm due to the intrinsic nonconvexity of its objective function; i.e., the K representative graphlets are selected sequentially based on their representativeness scores. Accordingly, we sequentially connect the K representative graphlets to form a gaze shifting path, as illustrated on the right of Fig. 1.
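A minimal sketch of the graphlet-growing step described above: patches carry pyramid-cell coordinates, and a random walk collects spatially adjacent ones. The patch list is hypothetical, and adjacency is simplified to a same-level 4-neighborhood (the paper's three-layer pyramid also allows cross-level adjacency, as in Fig. 3):

```python
import random

# Hypothetical top-K object patches as (pyramid_level, x_cell, y_cell)
patches = [(3, 2, 3), (3, 2, 4), (3, 3, 3), (2, 2, 1), (2, 3, 1)]

def adjacent(a, b):
    # Simplification: same pyramid level and bordering cells
    return a[0] == b[0] and abs(a[1] - b[1]) + abs(a[2] - b[2]) == 1

def random_walk_graphlet(patches, size):
    """Grow one graphlet by randomly walking over spatially adjacent patches."""
    current = random.choice(patches)
    graphlet = [current]
    while len(graphlet) < size:
        neighbors = [p for p in patches if p not in graphlet and adjacent(current, p)]
        if not neighbors:
            break  # walk is stuck; return the partial graphlet
        current = random.choice(neighbors)
        graphlet.append(current)
    return graphlet

print(random_walk_graphlet(patches, size=3))
```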
B. Deep Graphlet Hashing
To compactly and accurately extract graphlet features from aerial images associated with noisy image-level labels, we propose a binary MF (matrix factorization)-based deep hashing that can intelligently remove label noise. It preserves the most significant numerical properties of the binary label matrix, which can be mathematically expressed as minimizing J(T, QP^T) + Θ(Q, P), where Q ∈ R^{c×t} and P ∈ R^{n×t} denote the image-level labels and the aerial images in the latent space, respectively; J quantifies the loss of the MF, while Θ(·) represents the regularization term. As aforementioned, the observed image-level label matrix T might be contaminated. Apparently, this will lead to suboptimal factorization results. To handle this issue in a principled way, we attempt to learn an optimal image-level label matrix L from the observed one by sparse learning. Based on the construction of the label matrix, entry L_ij is an indicator representing the relevance between the ith aerial image and the jth image-level label. In this way, we obtain an objective that additionally contains a term J_l, which penalizes the reconstruction of the optimal label matrix from the observed, noisy one. During the hashing process, the importance of preserving the underlying data structure [9], e.g., the local structure between neighboring samples, is generally recognized. Simultaneously, the hash function should be learned so as to make graphlet-to-graphlet comparison scalable. The binary hash codes of each aerial image are calculated by the hash function h = sgn(f(x)Z). In total, we formulate objective function (3), which can be reorganized into matrix form (4), where β and γ are non-negative parameters weighting the contributions of the respective terms; this leads to formulation (5), where R counts the aerial image categories. It is worth emphasizing that the optimization task (5) concentrates on learning the hash function and binary hash codes with a pre-fitted data graph, which is constructed using possibly noisy image-level labels. Such a pre-fitted data graph remains unchanged during the learning process, which might be suboptimal. Ideally, we want to continuously update the data graph during learning. Aiming at this, we propose to jointly learn the data graph. More specifically, when refining the noisy labels, we want the data graph M to be highly consistent with the learned model. We require that the sum of the similarities between one graphlet and the other graphlets be one, and M_ii = 0. Therefore, the objective function in (5) can be upgraded into (6). In the learning procedure, the Laplacian matrix is updated by K = A − (M + M^T)/2, where M_0 denotes the initial data graph constructed from T. The above objective seamlessly integrates hash code learning, semantics encoding, and optimal data graph updating in a unified framework.
To solve the objective function in (6), we must specify J, J_l, and Θ. Herein, the least-squares loss J(x, y) = (1/2)(x − y)² is employed. To suppress the contaminated image-level labels, we choose J_l(x, y) = μ|x − y|. For the regularization terms, we set Θ(X, Y) = (λ/2)||X||²_F + (η/2)||Y||²_F. In this way, the objective function can be upgraded into the final formulation (7), minimized over L, Q, H, Z, M. We note that objective function (7) is non-convex over all the variables jointly; in our implementation, an alternating algorithm is developed to optimize it (details are provided in the Supplementary Material). Beyond the aforementioned shallow feature engineering, to incorporate deep features into our hash learning framework, a multi-layer deep architecture is adopted to naturally extend (7). More specifically, f(x) is defined as the output of the topmost layer, and Z_i denotes the transformation matrix of the ith hidden layer [34]. Different deep networks, e.g., CNNs [8], can be employed to learn deep features from raw image pixels. In detail, L, Q, H, Z_i, and M are fitted iteratively, and the parameters of our deep network are updated by back-propagation. The training of our proposed deep hashing framework is summarized in the following; the final optimization is implemented following our previous work [34]. Once the deep network is trained, given a new graphlet x*, its binary hash code is calculated by h* = sgn(f(x*)Z), where f(·) stacks F hidden layers. Based on the binary codes calculated for each graphlet, given a GSP containing K graphlets, we concatenate the graphlet-level binary codes into a long binary vector that describes the GSP.
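The inference step h* = sgn(f(x*)Z) is easy to mock up. The sketch below stands in for the trained network with random weights purely to show the shapes involved; it is not the paper's architecture or its learned projection:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, hidden_weights):
    """Stand-in for the trained deep network f(.): stacked tanh layers."""
    for W in hidden_weights:
        x = np.tanh(x @ W)
    return x

d, hidden_dim, code_len = 128, 64, 32
hidden_weights = [rng.standard_normal((d, hidden_dim)) * 0.1]   # mock layer
Z = rng.standard_normal((hidden_dim, code_len)) * 0.1           # mock projection

x_new = rng.standard_normal((1, d))          # feature vector of a new graphlet
h = np.sign(f(x_new, hidden_weights) @ Z)    # binary hash code h* = sgn(f(x*)Z)
print(h.astype(int))
```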
C. Image Kernel Calculation
As aforementioned, many graphlets are extracted from each aerial image and subsequently converted into binary hash codes. We observe that: 1) the numbers of graphlets from different aerial images are generally inconsistent, and 2) the dimensionalities of two hash codes calculated from differently sized graphlets are different. Thus, it is infeasible to directly input them into a standard classifier such as an SVM for visual classification. To handle this problem, we employ a kernel-induced quantization scheme to compute an image-level representation, that is, a fixed-length feature vector for each aerial image.
Given an aerial image, we first extract the BING [32]-based object patches to construct graphlets, which are subsequently converted into binary hash codes using our deep hashing. Finally, the graphlets within the ith aerial image are aggregated into a kernel-induced vector v_i = [v_i1, v_i2, . . . , v_iN], where N counts the training aerial images. In detail, the jth component of v_i is computed as in (8), where R_i and R_j denote the numbers of equally sized graphlets from the ith and jth aerial images, respectively, and d_J(b_u, b_v) computes the Jaccard similarity between binary hash codes. Given N testing aerial images, following (8), we can obtain an N × N kernel matrix at the training stage and an N × N kernel matrix at the testing stage. By leveraging the abovementioned quantized feature vectors, a multi-class SVM is learned. Mathematically, when training an SVM to discriminate between aerial images from the ath and the bth categories, a binary SVM classifier can be formulated as follows, where l_i is the class label (that is, "+1" or "−1") of the ith training aerial image, β determines the hyperplane that separates aerial images in the ath category from those in the bth category, C > 0 trades the model complexity off against the number of nonseparable aerial images, and N_ab counts the training aerial images from either the ath or the bth category. Given a quantized feature vector obtained from a test aerial image, its label is calculated as follows, where b is the bias and v_s denotes the support vectors whose class is labeled "+1." In the testing stage, we perform binary classification C(C − 1)/2 times. The final decision is obtained by voting, that is, v* is assigned to the category receiving the maximum number of votes.
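The kernel quantization and one-vs-one voting can be sketched as follows. The Jaccard similarity between ±1 codes is computed on their positive bits. Since the paper's exact aggregation formula (8) is not recoverable from this extract, the sketch uses one plausible choice (the mean of the best graphlet-to-graphlet matches between two images), and the C(C − 1)/2 voting loop is shown with placeholder decision functions; all names here are illustrative.

import numpy as np

def jaccard(bu, bv):
    """Jaccard similarity between two binary (+1/-1) hash codes,
    computed on the sets of positive bits."""
    u, v = bu > 0, bv > 0
    union = np.logical_or(u, v).sum()
    return np.logical_and(u, v).sum() / union if union else 0.0

def kernel_vector(codes_i, train_codes):
    """v_i[j]: one plausible aggregation -- for every graphlet of
    image i, take its best match among image j's graphlets, then
    average. Yields a fixed-length (N-dim) vector per image."""
    return np.array([
        np.mean([max(jaccard(b, bj) for bj in codes_j) for b in codes_i])
        for codes_j in train_codes])

def ovo_predict(v, classifiers, n_classes):
    """One-vs-one voting over C(C-1)/2 binary decisions; `classifiers`
    maps a category pair (a, b) to a decision function f(v), with
    f(v) > 0 meaning 'class a'. The class with the most votes wins."""
    votes = np.zeros(n_classes, dtype=int)
    for (a, b), f in classifiers.items():
        votes[a if f(v) > 0 else b] += 1
    return int(np.argmax(votes))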
A. Comparative Performance
In this section, we evaluate our aerial image classification method by comparing its effectiveness and efficiency with a generous set of counterparts. We first compare our method with deep architectures specifically designed for aerial image classification. Subsequently, we employ state-of-the-art generic object/scene recognition models for comparison.
Meanwhile, many modern deep generic visual recognition models can be trained on our compiled aerial images. In this experiment, we first compare our method with the following deep generic image categorization models: the spatial pyramid pooling CNN (SPP-CNN) [42], CleanNet [43], the discriminative filter bank (DFB) [44], the multi-layer CNN-RNN (ML-CRNN) [45], the multi-label graph convolutional network (ML-GCN) [46], the semantic-specific graph (SSG) [47], and the multi-label transformer (MLT) [48]. Moreover, since aerial image classification can be considered a sub-topic of scene classification, we also compare our method with three state-of-the-art scene classification models. For these models, only the source code of [49] is unavailable; thus, we reimplement it in C++. For the visual recognition models implemented by ourselves, the experimental settings are summarized as follows. For [35], we exploit ResNet-152 [50] as the backbone, which is subsequently upgraded into a multi-label variant. Except for the last fully connected layer (whose output number is set to 17), the layers are initialized by the ResNet-152 trained on ImageNet [51]. For [36], the weights in the 2048-D LSTM layer are initialized by random values between −0.2 and 0.2. Meanwhile, Nesterov Adam is utilized as the optimizer, wherein the learning rate is set to 1e-4. For [41], the model is transferred from the RSSCN7 dataset [40] to our compiled aerial image set. ResNet-108 [50] is employed as the backbone, and stochastic gradient descent fine-tunes the entire network. The learning rate and weight decay are set to 1e-3 and 0.05, respectively. The network loss is computed by the mean squared error. For [49], we retrain the deep model [52] using our compiled 18 aerial image categories, wherein the average-pooling strategy is adopted. LIBLINEAR is utilized as the SVM solver, and seven-fold cross validation is applied, as shown in Table I.
B. Componentwise Model Justification
In this experiment, we validate the usefulness and indispensability of the two essential modules in our aerial image classification framework: GSP construction and deep hashing for binary code generation, respectively. We replace each module [36] by a functionally degraded one and report the categorization accuracy on the well-known SUN dataset.
To quantitatively show the effectiveness of the first module, three alternatives are adopted. We first replace the BING [32] object patches by the well-known objectness measure [53] (denoted by "S11"), the multiscale combinatorial grouping (MCG) object proposal method [36] (S12), and AttentionMask [?] (S13), respectively. Next, in order to quantify the contributions of object patches' appearance and topology in aerial image modeling, we abandon the terms G_1 (S14) and G_2 (S15), respectively. Third, we replace our adopted geometry-preserved active learning by RankNet [54] (S16) and graph-based ranking [55] (S17), respectively. We present the variation of classification accuracy in Table II, where the intersection of column "Si" and row "Oj" corresponds to experimental configuration "Sij". We see that using the objectness measure [53] instead of our adopted BING [32] results in a sharp classification accuracy drop. Moreover, discarding the graphlet topology clearly hurts the classification accuracy. These observations demonstrate the necessity of exploiting graphlets to distinguish different aerial image categories. Subsequently, to evaluate the performance of our deep hashing, three different setups are designed to test the usefulness of its three attributes. We first abandon the noise reduction term in (6) (S21); more specifically, we remove the term μ||L − T||_1 and replace L by T. Second, we relax the binary code constraint on H while keeping the other terms unchanged (S22). Finally, we degrade the deep feature learning module F to a shallow one (S23); mathematically, we set the transformation matrices Z_i = Z, which characterizes only a single layer. As shown in Table II, the noise reduction and deep feature learning attributes are the most critical, as abandoning each of them incurs an over 3.1% categorization accuracy decrement. In addition, the learned binary code constraint causes a 4.573% drop in categorization accuracy; simultaneously, however, the testing speed is significantly increased (by 316%). In theory, the key advantage of applying our learned binary hash codes to describe each graphlet is the ultrafast speed of computing the image-level similarity between aerial images. This is because, in modern computer systems, comparing two binary codes is much faster than comparing floating-point numbers. Notably, restricting the graphlet representation to binary hash codes is not free: practically, it will degrade the feature descriptiveness, and in turn the categorization accuracy will decrease somewhat.
C. Comparative GSPs Study on Alzheimer's Patients
In this experiment, we evaluate GSPs produced by both normal observers and Alzheimer's patients [18], [56], [57], [58], [59], [60], based on which classification performances are analyzed carefully. In total, we employed 37 normal observers and 33 Alzheimer's patients for this study. The normal observers are all PhD/master's students from our Computer Science Department. There are 25 males and 12 females, aged between 22 and 31; they are all experienced in photography and composition. Meanwhile, the 33 Alzheimer's patients are from Hangzhou Seventh People's Hospital. There are 11 patients in the early Alzheimer's disease stage, 13 in the medium stage, and nine in the late stage. These Alzheimer's patients are aged between 51 and 68, and there are 23 males and ten females. Herein, human gaze allocations are recorded by a head-mounted eye tracker, as shown in Fig. 4.
As shown in Figs. 5 and 6, our calculated GSPs are highly consistent with those recorded by the five normal observers, which clearly demonstrates the effectiveness of the adopted active learning in modeling human visual perception. Noticeably, GSPs produced by Alzheimer's patients are apparently different from those generated by normal observers. This observation indicates the lower visual perceptual capacity of Alzheimer's patients, i.e., they are less effective at capturing the visually/semantically salient aerial image regions than the normal observers.
To quantitatively compare the GSPs generated by different sources, we propose to calculate the proportion of pairwise GSPs L_1 and L_2 overlapping with each other. Specifically, the similarity between two GSPs is determined by s(L_1, L_2) = nP(L_1 ∩ L_2)/nP, where nP counts the pixels inside each aerial image, and nP(L_1 ∩ L_2) measures the shared region between the GSPs. On this basis, it is observable that the overlapping percentage between GSPs produced by normal observers and Alzheimer's patients is 63.324% on average. This demonstrates their significantly different visual perceptual capacities.
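This overlap measure reduces to a few lines of code. The sketch below assumes each GSP is rasterized as a boolean pixel mask over the aerial image; the function name and mask representation are illustrative.

import numpy as np

def gsp_overlap(L1, L2):
    """Overlap percentage between two GSPs rasterized as boolean
    pixel masks of the same aerial image:
    100 * nP(L1 and L2) / nP, with nP the total pixel count."""
    assert L1.shape == L2.shape
    return 100.0 * np.logical_and(L1, L2).sum() / L1.size

# Example: two random scan-path masks over a 100 x 100 image.
rng = np.random.default_rng(0)
L1, L2 = (rng.random((100, 100)) < 0.3 for _ in range(2))
print(f"{gsp_overlap(L1, L2):.1f}% overlap")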
V. CONCLUSION
This work is motivated by the pervasively popular biologically inspired models [3], [61], [62], [63], [64], [65], [66], [67]. We propose a novel aerial image classification pipeline that can robustly binarize human gaze shifting paths (GSPs), regardless of potentially corrupted category labels. By extracting the BING [32] object patches, we construct graphlets to model the spatial layouts of visually/semantically salient objects in each aerial image. Based on this, GSPs are computed by an active learning algorithm. Afterward, a noise-tolerant MF algorithm is designed to convert image-level labels into deep GSP hash codes, wherein label noise can be intelligently mitigated. Finally, the binarized GSPs are merged into a kernel feature for categorizing aerial images. Comprehensive experiments on our compiled large-scale aerial image dataset have shown the competitiveness of our method. Furthermore, to confirm the usefulness of the computed GSPs, we collected GSPs from both normal observers and Alzheimer's patients. This comparative study has demonstrated that accurately predicting GSPs is the key to accomplished aerial image classification.
"Computer Science"
] |
The burden and costs of sepsis and reimbursement of its treatment in a developing country: An observational study on focal infections in Indonesia
Objectives: This study aimed to determine the burden of sepsis with focal infections in the resource-limited context of Indonesia and to propose national prices for sepsis reimbursement. Methods: A retrospective observational study was conducted from 2013-2016 on the costs of surviving and non-surviving sepsis patients from a payer perspective, using inpatient billing records in four hospitals. The national burden of sepsis was calculated and proposed national prices for reimbursement were
Introduction
Sepsis is estimated to involve 31.5 million cases each year worldwide (Fleischmann et al. 2016). Of these cases, 19.4 million are characterized by severe sepsis, accounting for 5.3 million deaths annually (Fleischmann et al. 2016). These estimates are derived from data compiled for high-income countries. However, the highest mortalities occur in low-income countries, followed by low-middle income countries (LMICs) (Cheng et al. 2008). There is a surprising lack of data on mortality and costs among sepsis patients in LMICs such as most African and Asian countries, including Indonesia (Fleischmann et al. 2016; Rudd et al. 2018). Indonesia, which is the most populated country in Southeast Asia and the fourth most populated country globally, has a high incidence of communicable diseases (Gupta and Guin 2010; The World Bank 2018). Ascertaining the granularity of the sepsis burden in Indonesia has become essential in light of the government's introduction of a new national health insurance system (Jaminan Kesehatan Nasional) (Health Ministry of the Republic of Indonesia 2014). In 2018, universal health coverage (UHC), provided by a single national payer, became available for 203 million people (Agustina et al. 2019). During the period 2019-2020, coverage will be extended to the entire Indonesian population (approximately 264 million people) (The World Bank 2018; Agustina et al. 2019). Accordingly, a national reimbursement price for each disease will need to be accounted for within the reimbursement system (Pisani et al. 2017; Mboi et al. 2018; Agustina et al. 2019).
The economic burden of sepsis, which includes providing medication and fluid resuscitation during hospitalization, has been reported to be very high (McLaughlin et al. 2009). In the United States, hospitalization costs for sepsis patients were approximately US$20 billion in 2011 (Pfuntner et al. 2006). A previous systematic review, which mostly included studies performed in the United States, revealed that an essential analysis of the economic burden of sepsis concerned a comparison between survivors and non-survivors because of a major difference in the mean total hospital costs per day (US$351 vs. US$948, respectively) (Arefian et al. 2017). The difference in burden between survivors and non-survivors is unknown in LMICs. International budgetary guidelines for sepsis management mostly apply to developed countries and may therefore require cost adjustments of service bundles relating to sepsis management in resource-limited settings (Becker et al. 2009; Tufan et al. 2015).
The focal infection terminology was first introduced in 1910 by William Hunter, who elaborated on the relationship between focal infections and systemic diseases (Reimann and Havens 1940). A focal infection is a potential source of microorganisms that may disseminate into deep tissue and spread to the bloodstream. A further impact of the dissemination of the microorganisms and their toxins in the bloodstream is activation of the inflammatory mediators and worsening organ dysfunction due to sepsis (Babu and Gomes 2011). According to the third consensus definitions for sepsis and septic shock (Singer et al. 2016), sepsis has at least one underlying focal infection as an entry point of the pathogen into the systemic circulation. Each focal infection causing sepsis comes with different complications, with a wide range of costs. Therefore, the reimbursement of sepsis needs cost adjustments according to the underlying focal infection. In Indonesia, sepsis and the associated focal infections are not coded together when calculating the national price of diseases, resulting in possible under-budgeting for sepsis-related expenditure (Health Ministry of the Republic of Indonesia, 2016). Therefore, a reevaluation of the costs for sepsis, including the handling of underlying focal infections, has become urgent for countries like Indonesia. This study analyzed costs for surviving and deceased sepsis patients, explicitly considering underlying focal infections. In addition, it then estimated national prices for reimbursement under UHC based on the analyzed burden and costs of sepsis.
Study design
A retrospective observational study was conducted on patients with sepsis in four Indonesian medical centers: (1) Dr. Soetomo General Academic Hospital in Surabaya, a national healthcare referral center, with 1,514 beds, serving eastern Indonesia; (2) Universitas Airlangga Hospital in Surabaya, a teaching medical center with 180 beds in Surabaya; (3) The Prof. Dr. Sulianti Saroso National Center for Infectious Diseases Hospital, with 180 beds in Jakarta; and (4) Dr. M. Djamil Hospital in Padang, a national referral center with 800 beds, serving western Indonesia. Inpatient registries and hospital discharge data were obtained from the Department of Medical Records for the period 01 January 2013 to 31 December 2016. The dataset covered patients' demographics, diagnoses, hospital-discharge mortalities, laboratory tests, and medications.
Criteria for selecting patients
All patients with sepsis and aged ≥ 18 years were included. The diagnosis of sepsis was confirmed by the physicians. The criteria for sepsis diagnosis followed the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) adopted by the Indonesian Ministry of Health (Singer et al. 2016) and the diagnostic criteria for sepsis entailed in the Sequential Organ Failure Assessment (SOFA) score, including at least two of the following three 'quick' SOFA (qSOFA) criteria: systolic blood pressure ≤ 100 mmHg, respiratory rate ≥ 22 breaths per minute, and altered mentation (Glasgow Coma Scale score < 15) (Health Ministry of the Republic of Indonesia 2017). The study categorized single focal infections per site of the infection as cardiovascular infections (CVIs), gastrointestinal tract infections (GTIs), lower-respiratory tract infections (LRTIs), neuromuscular infections (NMIs), urinary tract infections (UTIs), and wound infections (WIs). WIs recognized at the sites of surgery were subclassified as surgical site infections (SSIs). The physicians confirmed SSI diagnoses according to the Centers for Disease Control and Prevention criteria (Horan et al. 1992). Focal mouth and dental infections were included in the NMI category since those infections anatomically involve soft tissues such as nerves and muscles. Sepsis patients with two or more focal infections were grouped into sepsis with multifocal infections. Moreover, an unspecified focal infection was labeled as an unidentified focal infection (UFI). The International Classification of Diseases version 10 was applied to determine and record focal infections (see Supplement 1).
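The qSOFA screening rule used above is simple enough to express directly. The sketch below encodes the three criteria and the "at least two of three" threshold; function and variable names are illustrative.

def qsofa_positive(systolic_bp_mmhg, resp_rate_per_min, gcs):
    """Screen for sepsis per the qSOFA rule used in this study:
    positive when at least two of the three criteria are met."""
    criteria = [
        systolic_bp_mmhg <= 100,   # low systolic blood pressure
        resp_rate_per_min >= 22,   # elevated respiratory rate
        gcs < 15,                  # altered mentation (GCS < 15)
    ]
    return sum(criteria) >= 2

# Example: BP 95 mmHg and respiratory rate 24/min already qualify.
print(qsofa_positive(95, 24, 15))   # True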
Cost calculation
Cost was analyzed from a payer perspective using billing records that included the costs of beds, drugs, laboratory and radiology procedures, other medical facilities, and total costs. Bed costs encompassed hospital administration fees, daily room services, nursing and medical staff care, and technicians' services. Drug costs were extracted from the pharmacy department's budget, which covered expenses relating to drugs, fluids, blood products for transfusion, disposable devices, mechanical ventilators, oxygen therapy, and pharmacy services. Physiotherapists' (as rehabilitation specialists) consultancy costs were recorded and considered under patients' bed service costs. Costs for administration, patient transfer and ambulance, and other expenses were included in the costs for other medical facilities. The hospitalization costs per admission were analyzed, considering the days spent in an intensive care unit (ICU), the presence of SSIs, the type of focal infection, and whether the patient survived or not. The 2016 currency exchange rate (US$1 = 13,308.33 IDR) was used, as applied by the Organization for Economic Cooperation and Development (OECD), to convert Indonesian Rupiahs (IDR) into US Dollars (US$) (Organization for Economic Cooperation and Development 2016), with inflation rates of 6.40% for 2013, 6.42% for 2014, 6.38% for 2015, and 3.53% for 2016 (Worldwide Inflation Data 2020). The economic burden of sepsis was assessed according to the distribution of disease incidence over focal infections and the mean cost of each focal infection, using a denominator of 100,000 patients with sepsis (The World Bank 2016a).
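To make the cost standardization concrete, the sketch below inflates a billing amount from its admission year forward to 2016 IDR using the stated annual rates, then converts at the stated OECD exchange rate. Whether the study compounded the rates exactly this way is an assumption of this sketch.

IDR_PER_USD_2016 = 13308.33
INFLATION = {2013: 0.0640, 2014: 0.0642, 2015: 0.0638, 2016: 0.0353}

def cost_in_2016_usd(amount_idr, year):
    """Inflate an IDR billing amount from `year` to 2016 price levels
    (compounding the annual rates of the intervening years), then
    convert to US$ at the 2016 OECD exchange rate."""
    adjusted = amount_idr
    for y in range(year + 1, 2017):
        adjusted *= 1.0 + INFLATION[y]
    return adjusted / IDR_PER_USD_2016

# Example: a 10,000,000 IDR admission billed in 2013.
print(f"US${cost_in_2016_usd(10_000_000, 2013):,.0f}")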
Extrapolation of the cost to the national level
The national costs for sepsis were analyzed based on the rates defined by the Indonesian Health Ministry for Indonesia Case Base Groups (INA-CBGs). The INA-CBGs' rates were used as national projections for extrapolating the sepsis costs (obtained from patients' billing records) into Proposed National Prices (PNPs) for sepsis reimbursement, by considering the following four aspects (Health Ministry of the Republic of Indonesia, 2016).
The first aspect concerned the room classes in the hospital, which were divided into three classes: in Class I, patients had more privacy, with one room accommodating up to two patients; Class II accommodated three or four people; and Class III accommodated five or six people in a room (Health Ministry of the Republic of Indonesia, 2016; President of Republic of Indonesia 2016). This study provided the PNP in Class III as the reference. It calculated the actual costs from Classes I, II and III (CP, obtained from patients' billing records) and divided them by the specific factor (α) according to the INA-CBGs, at 1.4, 1.2, and 1.0, respectively (Health Ministry of the Republic of Indonesia, 2016).
The second aspect concerned private or public sector ownership of the hospital. In the INA-CBG system, reimbursement provided by the government through subsidies was 1.03 (β) times higher for private healthcare services compared with the public healthcare services (Health Ministry of the Republic of Indonesia, 2016).
The third and fourth aspects concerned the type of hospital and the region where the hospital is located, corresponding to the specific INA-CBG prices (ICP_j). The ICP for hospital type A in Region I was used as the denominator reference for the ICP in the calculation of a PNP, since the actual costs were obtained from type A hospitals located in INA-CBG Region I. Eventually, for an inpatient with a particular focal infection, in a given room class, in a specific type of hospital, in a certain region under the private or the public sector, a PNP for sepsis with focal infection x was defined by the following formula: PNP_x = (CP_x / α) × (ICP_j / ICP_(A,Region I)) × β, with β applied for private hospitals. In brief, the four aspects for developing a PNP were the mean actual costs reflecting the single mean class price (CP), the specific factor (α) of each room class, the specific INA-CBG prices (ICP_j), and the government subsidy factor (β). This study developed 280 PNPs (seven focal infections, four types of hospitals, two sectors, and five regions) for the reimbursement of sepsis with particular focal infections in the five INA-CBG regions. To compare with the reference ICPs, the PNPs were categorized into three groups: those with a small difference from the ICP of < US$500, a medium difference of US$500-1,000, and a major difference of > US$1,000.
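The PNP composition and the three difference bands can be sketched as follows, under the reading given above (class normalization, scaling by the region- and type-specific ICP relative to the type A/Region I reference, and the 1.03 private-sector subsidy factor). The exact formula in the paper may differ, and all names and the example figures are illustrative.

ALPHA = {"I": 1.4, "II": 1.2, "III": 1.0}   # room-class factors
BETA_PRIVATE = 1.03                          # government subsidy factor

def proposed_national_price(mean_cost, room_class, icp_target,
                            icp_ref_a1, private=False):
    """PNP = (mean actual cost / alpha) * (ICP_j / ICP_A,RegionI),
    times beta for private hospitals (sketch of the stated reading)."""
    pnp = (mean_cost / ALPHA[room_class]) * (icp_target / icp_ref_a1)
    return pnp * BETA_PRIVATE if private else pnp

def difference_band(pnp, icp):
    """Classify the PNP-ICP gap into the paper's three bands."""
    gap = abs(pnp - icp)
    if gap < 500:
        return "small"
    return "medium" if gap <= 1000 else "major"

# Example: a Class III admission costing US$1,253 on average, reimbursed
# in a type D public hospital whose ICP is US$298, against an assumed
# type A / Region I reference ICP of US$1,000.
pnp = proposed_national_price(1253, "III", icp_target=298, icp_ref_a1=1000)
print(round(pnp), difference_band(pnp, 298))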
Statistical analyses
Data were analyzed using IBM SPSS Statistics 25, providing descriptive data on baseline characteristics in percentages. Chi-square tests were performed to determine the differences between surviving and deceased sepsis patients. One thousand samples were bootstrapped and, in cases where the data were overly skewed, the standard error (SE) of the mean cost was adjusted. An independent-samples t-test was applied to evaluate the statistical cost difference between the surviving and deceased patient groups. Subgroup analyses of hospitalization costs relating to ICU treatment, the presence of SSIs, and the type of focal infection were performed. Statistical significance was defined as a p-value < 0.05.
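A compact sketch of the bootstrap SE and the between-group test follows, using NumPy/SciPy stand-ins for the SPSS procedures (1,000 resamples as stated); the synthetic lognormal samples are purely illustrative.

import numpy as np
from scipy import stats

def bootstrap_se_of_mean(costs, n_boot=1000, seed=0):
    """SE of the mean cost estimated from 1,000 bootstrap resamples,
    which is more robust than the analytic SE for skewed cost data."""
    rng = np.random.default_rng(seed)
    costs = np.asarray(costs)
    means = [rng.choice(costs, size=len(costs), replace=True).mean()
             for _ in range(n_boot)]
    return float(np.std(means, ddof=1))

# Example: compare surviving vs. deceased cost samples (synthetic).
rng = np.random.default_rng(42)
survivors = rng.lognormal(mean=6.8, sigma=0.6, size=300)
deceased = rng.lognormal(mean=7.1, sigma=0.6, size=200)
t, p = stats.ttest_ind(survivors, deceased)   # independent-samples t-test
print(bootstrap_se_of_mean(survivors), t, p)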
Hospitalization costs
The costs per admission for surviving and deceased sepsis patients were, respectively, US$1,011 (± 23.4) and US$1,406 (± 27.8) (i.e., a difference of US$396, p < 0.001). The mean cost for all sepsis cases was US$1,253 (± 19.4). Among non-ICU sepsis patients, the average cost per admission was lower for surviving patients (US$960 ± 24.3) compared with that of deceased patients (US$1,189 ± 23.6) (p < 0.001). For ICU sepsis patients, the cost per admission was US$1,618 (± 47.9), with respective mean costs of US$1,187 (± 61.7) and US$1,785.5 (± 56.3) for surviving and deceased patients (p < 0.001). The cost incurred for patients with sepsis who had SSIs was higher compared with that incurred for patients who did not have SSIs (US$2,938 vs. US$926). Table 2 shows these costs divided into unit costs for beds, laboratory and radiology, pharmacy, and other medical facilities.
The prospective national price for sepsis patients
The lowest price within the INA-CBG system (ICP) was for UFI sepsis, with the ICP at US$298 in a type D public hospital in Region 1, for which a PNP of US$803 was estimated (difference: US$505). The highest PNP was for sepsis with CVIs in type A private hospitals in Region 5 (US$4,256) compared with the ICP of US$2,270 (difference: US$1,986). A remarkable difference between the PNP and ICP was evident for healthcare services relating to sepsis with WIs in type A private hospitals in Region 5 (US$3,995 vs. US$1,421; difference: US$2,574). Reimbursement levels under the overall PNP for sepsis were higher for all types of private hospitals compared with those for public hospitals (all types) in all INA-CBG regions. Out of 280 PNPs, 87 (31.1%) had major differences from the reference ICPs (> US$1,000). PNPs with a major difference were predominantly for reimbursement of sepsis with WIs (Table 3). Supplement 4 presents the details between the PNPs and the rates specified for the ICPs for sepsis with focal infections in all five regions of Indonesia.
Discussion
In this study, the economic burden of focal infections associated with sepsis was comprehensively determined in the resource-limited setting of Indonesia. Sepsis was mostly induced by LRTIs, accounting for the high associated total cost per patient. Besides LRTIs, the findings indicated a strong correlation between high costs and the presence of SSIs. The costs especially increased for patients with multifocal infections. On a broader scale, the economic burden of sepsis with focal infections was higher for deceased patients than for surviving patients. In the new Indonesian UHC system, the reimbursement for sepsis entails four aspects: the class of the patient's room, government subsidies, the type of hospital, and the INA-CBG region. Moreover, the current findings show a great difference in costs between the PNPs and ICPs, especially for sepsis-related costs with the focal infections of WIs and CVIs.
There is convincing evidence of a positive correlation between LRTIs and sepsis with regard to mortality outcome (Jaja et al. 2019). Over the last decade, LRTIs have been the most prevalent focal infections underlying sepsis (2018). The economic burden of sepsis with LRTIs in ICUs in a developing country such as Turkey was estimated at US$2,722 per patient (Gumus et al. 2019). In addition, LRTIs such as community-acquired pneumonia contribute to high morbidity in terms of more hospitalizations, ICU admissions requiring mechanical ventilators, and further sepsis complications (Sligl and Marrie 2013; Remington and Sligl 2014; Montull et al. 2016). Furthermore, elevated hospitalization costs for ICU patients with LRTIs were strongly associated with the use of a mechanical ventilator and the presence of severe sepsis and septic shock (Gumus et al. 2019). Confirming these results, some studies have reported that in addition to being induced by LRTIs, sepsis also originates from WIs, GTIs and UTIs (approximately 16.5%, 16.7% and 28.3%, respectively) (Mayr et al. 2014; Jaja et al. 2019; Shankar-Hari et al. 2019). Sepsis arising from GTIs and WIs is mostly associated with surgical wounds (Muresan et al. 2018; Jaja et al. 2019).
Infections at the site of surgery after elective and emergency procedures that contribute to sepsis account for 5.8% and 24.8% of cases, respectively (Shankar-Hari et al. 2019). A previous study covering 6.5 million elective surgeries performed in the United States reported an incidence of 1.2% of post-surgical sepsis cases, with a high mortality rate of 26% (Vogel et al. 2010). The current data revealed a high case fatality rate for sepsis with SSIs. SSI-related costs, which include medicines, prolonged length of stay and readmission, could rise to US$22,130 per patient (Purba et al. 2018).
In the current study, sepsis with CVIs presented the highest cost per inpatient but accounted for the lowest national economic burden among sepsis focal infections, given its relatively low case numbers. In a previous systematic review, endocarditis was reported to be a rare disease with costly consequences (Abegaz et al. 2017). Sepsis with UTIs, or urosepsis, commonly causes kidney dysfunction, leading to high mortality rates. In the current study, the urinary tract ranked third in incidence as an infection site associated with sepsis. The incidence of urosepsis in the United States is about 30% and is higher among women compared with men (Esper et al. 2006; Kumar et al. 2019). This is in line with the current findings, in which the female-to-male ratio among UTIs was 2:1. The incidence of sepsis associated with multifocal infections remains unknown, particularly in developing countries, but it was found here that they are the costliest. Identifying multisource infections with sepsis prior to the occurrence of organ dysfunction is thus an urgent task (Zhou et al. 2019).
The further impacts of sepsis-related costs should be considered when formulating a national budget to support private and public healthcare services. In 2016, Indonesia's health expenditure was approximately US$111.6 billion or 3.1% of its GDP (The World Bank 2016b). Thus, establishing sufficient healthcare facilities to support the care of sepsis patients is a challenge. According to the National Health Account data published by the OECD in 2016, Indonesia's inpatient expenditure amounted to IDR158,499.2 billion (or US$11.9 billion) (Organization for Economic Cooperation and Development 2016; The World Bank 2016b). This expenditure accounts for 40.9% of the country's total national health expenditure of IDR387,648.5 billion or US$29.1 billion (The World Bank 2016b). Regarding sepsis inpatient expenditure, the current findings suggest that the prices in the current INA-CBGs should be adjusted upward as well as made specific to infection sites. As a specific item in the INA-CBGs, each individual pays for health coverage according to the class of service selected. The service class categories merely relate to the provision of rooms with specific numbers of beds. Therefore, this categorization is ineffective, as all patients receive the same medical services, even when they are placed in ICUs or isolation rooms. Additionally, community healthcare centers, which play an essential role in resource-limited settings in preventing infection complications such as sepsis, could potentially serve as a budget control mechanism by averting hospital infections and thereby reducing inpatient costs (Kumar et al. 2019).
It is believed that this is the first study to assess the burden of disease, incorporating the costs and mortality outcomes of sepsis with focal infections, in a resource-limited setting. Notably, it offers a robust methodology for calculating the national price for sepsis based on a consideration of particular focal infections. However, the study had several limitations. First, it did not assess the costs associated with losses in productivity during hospitalization, and indirect costs were not recorded. Moreover, infrastructure costs (such as security systems, parking and transportation) were not included. Second, the post-sepsis impact on individual patients' occupational or educational trajectories, and those of their relatives, was not assessed because the data obtained from the hospitals were not linked to the socioeconomic statuses of individual patients. Third, the national price was modeled with reference to four referral centers. Nevertheless, the resulting national model seemed reasonable. Fourth, it was a retrospective study, and potential bias could have existed, such as misdiagnosis and under-reported focal infections. However, the study was conducted with a large sample size to provide the epidemiological and health economic findings that are needed by the Indonesian government for improving the new health insurance system in a resource-limited setting. Last, it did not consider follow-up after hospital discharge, particularly for ICU patients. Evidently, the higher mortality rate among sepsis patients after being discharged was a late-onset outcome of their ICU stays (Aguiar-Ricardo et al. 2019; Biason et al. 2019; Freitas et al. 2019).
Conclusions
It is essential to consider mortality and focal infections in an assessment of the burden of sepsis. Each underlying focal infection determines the particular course of sepsis. In a resource-limited context such as that of Indonesia, where a new UHC system has been introduced, the adequate provision of healthcare services requires a reevaluation and recalculation of the price for sepsis. Furthermore, in this context, sepsis cases with multifocal infections and LRTIs should be categorized as high-burden sepsis cases, reflecting the most obvious examples requiring adjustments to the national price for private and public healthcare service reimbursement.
Contributions
AKRP, NM, GA, RRW and MJP initially contributed to developing the concept and the design of the work. AKRP, NM, GA, SHW, UH, HH, and CWN provided patients, collected and confirmed the clinical data. AKRP, NM, GA, RRW, JvdS, and MJP conducted data analyses and synthesis. All authors wrote and revised the work and approved the final draft before submission.
Table 3. The proposed national price per patient for sepsis with focal infections in all five regions of Indonesia (in 2016 US$). *Including surgical site infections. Note: The colors indicate the difference between the PNP for sepsis with focal infections and the rates specified for the INA-CBGs (green indicates a group of low PNPs with a small difference (< US$500), blue indicates a group of middle PNPs with a medium difference (US$500-1,000), and red indicates a group of high PNPs with a major difference (> US$1,000)). The comparison between PNP and INA-CBG rates is provided in Supplement 3. Abbreviations: CVI, cardiovascular infection; GTI, gastrointestinal tract infection; ICU, intensive care unit; INA-CBGs, Indonesia Case Base Groups; LRTI, lower-respiratory tract infection; NMI, neuromuscular infection; PNP, proposed national price; UFI, unidentified focal infection; UTI, urinary tract infection; WI, wound infection.
Ethical approval
The study was approved by the ethical committees of Dr. Soetomo General Academic Hospital, Surabaya (No. 418/Panke.KKE/VII/2017), Airlangga University Hospital (No. 114/KEH/2017), and the National Center of Infectious Diseases at Prof. Dr. Sulianti Saroso Hospital, Jakarta (No. 02/xxxviii.10/5/2018). The study met the Indonesian governmental requirements on conducting research and the ethical principles for medical research involving human subjects under the Helsinki Declaration (World Medical Association 2013). All data were deidentified to guarantee patient anonymity.
Conflict of interest
MJP received grants and honoraria from various pharmaceutical companies, none of which are related to this study. The other authors declare no conflict of interest.
"Medicine",
"Economics"
] |
Microorganisms, the Ultimate Tool for Clean Label Foods?
Clean label is an important trend in the food industry. It aims at washing foods of chemicals perceived as unhealthy by consumers. Microorganisms are present in many foods (usually fermented), they exhibit a diversity of metabolisms, and some can bring probiotic properties. They are usually well regarded by consumers and, with progress in the knowledge of their physiology and behavior, they can become very precise tools to produce or degrade specific compounds. They are thus an interesting means to obtain clean label foods. In this review, we discuss some current research on using microorganisms to produce clean label foods, with examples improving sensorial, textural, health and nutritional properties.
Introduction
Clean label is a marketing concept aiming at giving confidence to consumers. Indeed, in the last few decades, consumers may have come to perceive the food industry as a source of potential poisons, in which all possibilities are used to do business at the expense of consumers, society and the environment. Applying the clean-label concept to food consists in washing the label of additives, especially those perceived as chemical and artificial, to go back to traditional foods reminding us of "Grandma's cooking".
Whereas in some fields biotechnology is only limited by technical possibilities, in the food domain, in which consumers are pushing the debate on ethical concepts, naturality and sustainability, biotechnology grows between many constraints that have arisen to protect people and the environment. As a result, the food biotechnologist is used to trying to bring innovation within these constraints.
Technology Additives
Foods are usually very complex structures including all nutritional components, whatever their hydrophobicity, solubility, or physicochemical status. Their textural organisation is thus prone to modification during shelf life, and many chemical agents can be added to stabilise them. However, this domain of technology additives is very controversial, as good quality products in terms of texture/structure and physico-chemical stability often fall into the category of over-processed food, which results in bad marks in food score applications. In this context, microorganisms can bring a lot of functionalities without the addition of chemicals. In this part, we present examples of how microorganisms can be used to avoid starch retrogradation in bread products and how microbial biosurfactants can bring interesting textural properties to food.
Staling
Starch retrogradation occurs in bread and starch products [1]. It is an issue in this field as it is responsible for stale bread, but it also brings desirable properties to other products like breakfast cereals or rice vermicelli. It is the result of a rearrangement of amylose and amylopectin molecules from gelatinised starch upon cooling [2]. During cooling, amylose forms a network around amylopectin granules. This network is reinforced by the rearrangement of amylose into double-helix crystalline structures. Later during storage, amylopectin also rearranges to form crystalline structures, contributing to the hardness of the system. Several additives can interact with amylose, mobilising the molecules out of the network. For instance, monoglycerides, coded as E471 additives in the European system, can decrease amylose crystallisation. However, these E471 additives are typically a target of the clean-label strategy.
In the microorganism-based clean-label strategy, microbial catalysts hydrolyse the triglycerides present in natural plant oil into diglycerides, monoglycerides and free fatty acids. In contrast with the use of enzymes, they can be labelled in the well-accepted "starter" category. One microorganism we have tested is the yeast Yarrowia lipolytica. This species is well known and studied for its capacity to degrade hydrophobic compounds [3]. It possesses a wide family of lipases, including extracellular ones that are produced depending on the fatty substrate present in the medium [4]. From a technological point of view, mutants altered in the regulation of lipase synthesis or in lipase production would be more attractive, as they can be more efficient in the precision catalysis required. However, one of the constraints on microorganisms for foods is that, in almost all world markets, microorganisms for food usage cannot be genetically modified and only natural mutants are usable. This constraint is often not insurmountable, even if no examples are available of producing specific lipases in Yarrowia lipolytica. Indeed, the difficulty is to find the right and easy-to-use screening procedure. Natural improvement of the tolerance of Y. lipolytica to toxic alcohols has already been achieved [5]. Another constraint is that Y. lipolytica must not exhibit any sensorial impact other than decreasing staling. This yeast species is well known for its ability to degrade lipids and proteins, thereby producing aroma compounds [3,6]. In the case of this aerobic yeast, this point can be relatively easily overcome through sequential utilisation of the yeast in the production process and inactivation after use. Eventually, the yeast must not pose any risks to consumers' health; this yeast, which is Generally Recognized As Safe (GRAS), has been studied for its applications as a starter, showing high benefits [7].
Another family of additives popular for limiting staling is composed of glucidic hydrocolloids. These compounds can have an impact on the plasticity of the amorphous regions of the crumb, where they can increase water retention or inhibit gluten-starch interactions [8]. Lactic acid bacteria can produce several products of this family in the form of exopolysaccharides [9]. Dextran is one such bacterial compound whose effect on starch retrogradation has been studied [10,11].
Microbial Biosurfactants
Emulsifiers are amphipathic compounds, i.e., compounds possessing both hydrophobic and hydrophilic parts and exhibiting surface activity. They tend to accumulate at interfaces, making them suitable to stabilise emulsions. These molecules can come from diverse origins, including the petroleum industry, and they can also exhibit many bioactivity properties. They could thus play a role in many modern food-related diseases [12]. Research has thus been oriented towards the development of new natural emulsifiers [13]. Biosurfactants are produced by living cells, especially microorganisms like bacteria, molds and yeasts. As emulsifiers, they are, like chemical synthetic surfactants, amphiphilic compounds [14] consisting of hydrophilic and hydrophobic moieties, and they can reduce surface and interfacial tensions [15]. In biosurfactants, the hydrophilic moieties can be carbohydrates, carboxylic acids, phosphates, amino acids, cyclic peptides, and alcohols, whereas the hydrophobic moieties are usually long-chain fatty acids, hydroxyl fatty acids and α-alkyl-β-hydroxyl fatty acids [16]. Based on their chemical structures, microbial biosurfactants are classified into four groups: glycolipids; phospholipids and fatty acids; lipopeptides; and polymeric biosurfactants [17,18], as shown in Table 1. Biosurfactant agents also show potential properties such as emulsification, use as functional additives, detergency, lubrication, phase dispersion, foaming, and solubilisation in many industries [29,30]. They show unique advantages, including lower toxicity, better environmental compatibility, higher biodegradability, and specific activity, when compared with chemical agents [31]. Mouafo et al. (2018) [32] reported that a glycolipid biosurfactant produced by Lactobacillus spp. could be used as an emulsifier in the food industry. Varvaresou and Iakovou (2015) [33] reviewed that sophorolipid esters were of interest as ingredients in cosmetic products such as rouge, lip cream, and eye shadow. Furthermore, a trehalose lipid produced by Rhodococcus erythropolis 3C-9 showed potential for oil spill cleanup applications [34]. In food, it can be noted that the bacteria themselves can exhibit surface-active properties, as shown by the use of Lactococcus strains to stabilise or destabilise emulsions [35][36][37].
Several studies are currently being carried out to develop the use of microbial biosurfactants instead of chemical ones in food. However, biosurfactants not only show the aforementioned properties; they can also exhibit biological activities such as antimicrobial, anti-adhesion, and anti-biofilm activities. These properties can be of interest, but they also require a complete check before using a biosurfactant-producing microorganism.
Sensorial Additives
A major quality of food is to be attractive to consumers. This is true whether a company wants consumers to buy its products again or whether the goal is to maintain a good nutritional state for patients losing their appetite. In the food transition towards a more sustainable system, sensorial properties are particularly important when new products are formulated with plants bringing off-flavours or off-colours. The traditional strategy in this case consists in using flavours or flavour-masking compounds that lengthen the list of ingredients, while the microorganism-based clean-label strategy proposes to select microorganisms able to produce flavour or colour and to degrade off-flavours. Some examples concerning the bitterness of naringin and green off-notes in legumes are given in this section.
Naringin
Naringin (4′,5,7-trihydroxyflavanone 7-rhamnoglucoside) is a flavanone glycoside that is abundant in citrus fruits, mostly in the albedo and the peel [38]. With the limonin glycoside, naringin is considered the molecule responsible for their bitterness, a major off-flavour when processing juice from citrus [39]. The naringin content is closely linked to the maturity of the fruit, being reduced as the fruit matures [40]. Because of their high processing rates, citrus industries generally use immature fruits containing high contents of naringin. Thus, researchers have put effort into finding ways to decrease the naringin content of citrus. To do so, some physico-chemical methods have been developed, generally implying the use of resins, affinity polymers or cyclodextrins [41][42][43]. But these techniques involve the inclusion of additives and tend to impact the organoleptic characteristics of the processed juice [43,44]. Naringin can also be converted into naringenin by naringinase, an enzyme containing both α-L-rhamnosidase (E.C 3.2.1.40) and β-D-glucosidase (E.C 3.2.1.21) activities [43,45]. First, the enzyme breaks the bond between the rhamnose and glucose moieties of naringin, producing prunin. Prunin is then hydrolysed, producing both D-glucose and naringenin, a non-bitter compound. This enzyme can be added directly to the juice (free or immobilised) [42,43] and can easily be produced by microorganisms, mostly filamentous fungi [43,[46][47][48]. The enzyme production is generally induced by the addition of naringin, from 0.1 to 0.5% of the total medium nutrients [49]. The purified enzymes have a maximum activity temperature around 50 °C but are more thermally stable at 40 °C [50,51]. The range of pH stability is generally from 4 to 8 [45,50,51]. In 2016, Srikantha et al. [52] reached an activity as high as 449.58 U/g of dry matter in solid state fermentation with Aspergillus flavus. Some studies focused on the capacity of bacteria to produce naringinases, such as Bacillus spp. [53][54][55], Lactiplantibacillus (L.) plantarum [56], Clostridium stercorarium [57] or Pseudomonas paucimobilis [58]. Under optimum conditions for submerged culture, the production of naringinase reached 12.05 U/L for Bacillus methylotrophicus [54]. Similarly, Zhu et al. [55] characterized an enzyme produced by Bacillus amyloliquefaciens, which could reduce 97% of the initial naringin in a pomelo juice. These results clearly indicate that both filamentous fungi and bacteria have the capacity to debitter citrus in the juice processing industry. The goal now is to find a microorganism able to degrade multiple phenolic glycosides, which could be used for different applications. Indeed, most enzymes have an activity highly specific to the nature of the bond between the glycosidic and aglycone moieties (rutinoside-7-O-hesperetin versus rutinoside-3-O-quercetin, for example) and to the nature of the bond between the two sugar moieties (2-O- versus 6-O-α-L-rhamnosyl-D-glucose, for example). Information about enzymes showing activities independent of the nature of the bond is scarce but highly interesting for future screening of glycosidase-producing microorganisms, which could possibly be used for a wide variety of applications.
Green-Notes in Legume Products
Legume-based products represent an interesting source of non-animal proteins due to their rich amounts and diversity of essential and non-essential amino acids [59]. In Europe, the main issue for the development of such products is sensory acceptance by consumers. Indeed, legume-based products are linked to "green", "grassy" or "leafy" descriptors [60,61]. Removing or masking undesirable tastes by means of biotechnology is a way of developing new alternative food products without using additives or heavy processes. The development of green-note flavours is linked to the oxidative degradation of fatty acids by enzymatic and non-enzymatic pathways during processing and storage [62,63]. Green notes are related to many volatile compounds such as aldehydes, alcohols, esters, or ketones [64]. Hexanal and its derivatives have been widely associated with green characteristics such as cut-grass and leafy descriptors [65,66]. Nevertheless, green characteristics appear to depend not on the presence of isolated molecules but on the association of multiple compounds, leading to various green descriptions. Moreover, each modification of the aromatic mix leads to changes in the green perception, balancing between green fruity and green grass/leafy [67]. Reducing the green characteristics of legume-based products might thus be complex, given their multiple origins and their evolution during the making process. Fermentation appears to be a safe, cheap, and natural way to try to improve the aromatic properties of legume-based products. This process has been widely used for thousands of years in order to preserve and improve food quality. Fermentation by lactic acid bacteria (LAB) of legume-derived products such as protein extracts, legume-based milks or raw legumes has been investigated in the literature. Fermentation of pea and lupin protein extracts by L. plantarum and Pediococcus pentosaceus separately leads to a modification of green marker quantities, such as a diminution of the hexanal content [68,69]. Fermentation of soy milk and peanut milk by L. acidophilus, L. (Lacticaseibacillus) casei, L. delbrueckii and Streptococcus thermophilus also demonstrates the ability to decrease and even eliminate hexanal from milk [70,71]. The elimination of hexanal is a good start for improving the organoleptic quality of legume-based products, but it is not enough to completely eliminate green notes due to other compounds. Fermentation by co-cultures of L. delbrueckii ssp. bulgaricus and S. salivarius ssp. thermophilus leads to a modification of the aromatic profile of peanut milk, decreasing green flavour and enhancing creamy flavour and sourness [71]. Transformation by LAB thus allows the aromatic profile to be modified by decreasing green-related compounds and enhancing other flavours. Moreover, the anti-green-note effect provided by some microbial cultures can be sufficient in one food matrix but not in another. Investigations are still needed to apply this clean label means of inactivating off-flavours in all conditions, but reaching this goal might be possible by selecting strains exhibiting precise metabolic activities. Our recent results have shown that, when screening LAB activities towards aldehydes, it was possible to discriminate between strains reducing all aldehydes and strains preferentially reducing a class of aldehydes depending on carbon chain saturation or length [72].
Bio-Preservation and Bioremediation Agents
The use of microorganisms for bio-preservation purposes has already been the subject of several review papers and will not be developed in this section. Bacteria able to produce antifungal weak acids are already used in bread applications to avoid the use of chemical preservatives [73], and bacteria able to produce antimicrobial peptides such as bacteriocins are used as starters in several products [74]. In this section, we will review the use of biosurfactant-producing microorganisms in bio-preservation strategies.
However, all these strains are hardly usable as clean label starters in food because of potential hazards or sensorial impacts. Fortunately, lactic acid bacteria, which are often Qualified Presumption of Safety species used in foods, are also reputed to produce biosurfactants [97]. Biosurfactants derived from Lactococcus lactis showed microbial inhibition against multi-drug resistant pathogens including E. coli and methicillin-resistant S. aureus [98]. A Lacticaseibacillus paracasei biosurfactant presented antibacterial activity against E. coli, Streptococcus agalactiae and S. pyogenes at a concentration of 25 mg/mL [87]. Sharma and Saharan (2014) [99] also reported that biosurfactants from L. casei MRTL3 showed antimicrobial activity against several pathogens, including S. aureus ATCC 6538P, S. epidermidis ATCC 12228, B. cereus ATCC 11770, Listeria monocytogenes MTCC 657, L. innocua ATCC 33090, Shigella flexneri ATCC 9199, S. typhi MTCC 733 and P. aeruginosa ATCC 15442. A biosurfactant produced by L. plantarum CFR 2194 also showed antimicrobial activity against E. coli ATCC31705, E. coli MTCC 108, S. typhi, Yersinia enterocolitica MTCC 859 and S. aureus F 722, as assessed by the well diffusion method [100]. Gudina et al. (2015) [101] reported that 5 mg/mL of a biosurfactant from L. agilis CCUG31450 inhibited the growth of S. aureus, P. aeruginosa and S. agalactiae.
This activity can also concern pathogenic molds. This is of course less related to food processing but can contribute to decreasing the number of pesticides in food. For instance, Phytophthora cryptogea, causing rotting of fruits and flowers, was inhibited by a lipopeptide produced by strains of P. fluorescens [107]. Mnif et al. (2015) [102] revealed that Fusarium solani, a potato pathogenic fungus, underwent 78% inhibition by the B. subtilis SPB1 lipopeptide biosurfactant after 20 days of incubation. Moreover, at 0.02 and 3.3 mg/mL, the SPB1 lipopeptide biosurfactant also inhibited the seed-borne pathogenic fungi R. bataticola and R. solani, respectively [83]. Furthermore, Joshi et al. (2008a) [81] studied the antifungal activity of the B. subtilis 20B lipopeptide biosurfactant using the disc diffusion method. The results of this study showed that the B. subtilis 20B lipopeptide biosurfactant has antifungal activity against several naturally contaminating fungi such as Fusarium oxysporum, Alternaria burnsii, Chrysosporium indicum and R. bataticola. The antifungal activity of biosurfactants was explained by González-Jaramillo et al. (2017) [108]. They studied the effect of fengycin C, a lipopeptide biosurfactant from B. subtilis EA-CB0015, on Mycosphaerella fijiensis mycelium and spore morphology changes using dipalmitoylphosphatidylcholine (DPPC) as a fungal membrane model. The results revealed that fengycin C was able to alter the fungal membrane model by dehydrating the polar head groups of the cell membrane bilayer, causing the loss of its permeability properties. Moreover, repulsion of charges between amino acids and the polar bilayer might also be involved in the destabilisation of the cell structure [108].
In conclusion, many microbial biosurfactants are efficient against food spoilage or pathogenic strains. LAB biosurfactants can be used against food bacteria, whereas bacilli often produce antifungal compounds. However, it is important to check whether these surface-active compounds exhibit other properties that could limit their use in food.
Bioremediation
Apart from bio-preservation, numerous microorganisms can also exhibit some ability to degrade toxic substances. This is referred to as a "bioremediation process", a bioprocess that can convert toxic substances (e.g., pesticides), toxic contaminants (e.g., mycotoxins), anti-nutrients such as phytates (which cause a decrease in iron availability), or biogenic amines. Nowadays, mycotoxins are a serious worldwide agricultural threat, recognized as an unavoidable risk. Many factors that influence the contamination level are environmental (such as weather and insect infestation) and are difficult or impossible to control. Therefore, this section attempts to review and discuss mainly mycotoxin bioremediation.
Mycotoxins, a large group of toxic secondary metabolites, are produced primarily by a group of filamentous fungi mainly in the genera Fusarium, Penicillium, Aspergillus and Alternaria. They can contaminate food and feedstuffs at pre- and post-harvest stages. Currently, approximately 60-80% of all global agricultural commodities are contaminated with mycotoxins [112]. The most frequently found are aflatoxins, ochratoxins, zearalenone, deoxynivalenol, fumonisin B1, and the T-2 and HT-2 toxins. There are numerous strategies, based on either physical or chemical treatments, that can be applied to mitigate this problem. However, the application of biological means of mycotoxin reduction using microorganisms has received increasing interest from scientists due to its low cost, the broad spectrum of mycotoxins that can be targeted, the minimal side effects regarding the nutrient status of the food, the minimal training requirements for those applying the microorganisms, and its suitability for a wide range of liquid and solid food types [113]. The mechanism of action involves either adsorption by the cell wall or degradation by enzymes, depending on the species and strain of microorganism. Watanakij et al. (2020) [114] demonstrated the application of an extracellular fraction from Bacillus subtilis BCC42005, with water as a soaking agent, to maize. The results revealed that aflatoxin B1 was reduced after 120 min of contact time without any change in the appearance of the corn kernels. Table 2 summarises some microorganisms which exhibit the potential to reduce mycotoxin loads.
Nutritional Additives and Properties
As the population ages, consumers are becoming more interested in health issues, and large industrial food groups are transforming their strategy and communication around health [142]. However, removing compounds that some consumers find undesirable can be difficult, and adding "healthy" ingredients often still relies on additives. In this section, some examples of the use of microorganisms to selectively destroy antinutritional factors or to produce vitamins are given.
Cleaning Food of Their Antinutritional Factors (ANF)
Antinutritional factors (ANF) are present in cultivated legumes, seeds and cereals [143]. The term regroups multiple compounds that lower the nutritional value of foods by inhibiting protein digestion and nutrient uptake, have deleterious effects on the digestive tract and health, or cause gut disorders such as flatulence [144,145]. According to the literature, protease inhibitors, tannins and phytic acid are the main molecules responsible for decreased proteolytic activity, through the inactivation of gut proteases and the denaturation of proteins (protease inhibitors and tannins, respectively) and the chelation of positively charged mineral ions (phytic acid). Lectins are glycoproteins characterised by their ability to interfere with the intestinal epithelium, leading to an inflammatory state and impaired nutrient absorption. Flatulence is linked to the digestion of α-galactosides such as raffinose, stachyose and verbascose by the microbiota. The development of legume-based diets as protein sources, together with the demand for healthy products, poses the challenge of developing processes that preserve the nutritional benefits while clearing products of ANF. The first approach consists of thermal processes such as boiling, microwaving or pressurised cooking; these have shown great efficiency in decreasing trypsin inhibitors, phytic acid, haemagglutinin (lectin) activity, saponins and some oligosaccharides in chickpeas [146]. The second approach supplements cooking with germination or fermentation. The germination of seeds has shown significant results, eliminating flatulence-linked oligosaccharides [147] and decreasing the levels of phytic acid, tannins and trypsin inhibitors [148]. The combination of germination and cooking makes it possible to significantly decrease or eliminate ANF in seeds and cereals; nevertheless, few legume-based foods are produced via germination. Fermentation could be a safe way to tackle ANF in ungerminated legumes. Lactic acid fermentation of bean flour by L. plantarum showed multiple effects on ANF, such as the elimination of oligosaccharides and a significant reduction in lectin levels [149]. Fermentation by L. brevis also greatly improved soybean digestibility through the reduction of protease inhibitors and oligosaccharides [150]. Significant decreases in raffinose, stachyose, trypsin inhibitors and tannins have been reported for lactic acid fermentation of black bean by L. casei and L. plantarum [151], and similar results have been reported for lactic acid fermentation of pearl millet [152]. Fungal fermentation can also eliminate ANF: Rhizopus oligosporus has shown significant activity against oligosaccharides and protease inhibitors [147]. However, fungal fermentation must be well characterised to avoid the production of any toxic compounds. As reported in the literature, fermentation can help reduce or eliminate some ANF without heavy processes or chemical treatments, and it can be applied to raw products or at a later stage of transformation. More investigation is needed, given the variability of fermentation effects caused by strain and legume specificity: lactic acid fermentation of plant-based products can lead to the production of biogenic amines [153], and this production is highly dependent on the strain and the variety of legume.
The combination of thermal processes, germination and fermentation appears to be an effective way of improving the nutritional quality of plant-based products, but studies must be carried out to avoid any deleterious effects. Characterising the composition of plant cultivars, and the activity of microorganisms on them, is the only way to develop clean and healthy plant-based products.
Vitamins Like Folate
Vitamins are organic compounds involved in several metabolic functions, including energy production and red blood cell synthesis. They fall into two main groups: lipid-soluble vitamins (A, D, E, K) and water-soluble vitamins (vitamin C and the eight B vitamins) [154].
The vitamin A group comprises the retinoids: retinol, retinal, retinoic acid and retinyl esters. Pro-vitamin A is composed of various carotenoids (β-carotene, α-carotene, and β-cryptoxanthin), which are converted into their active forms in the body [154].
The vitamin E group comprises different chemical forms: four tocopherols and four tocotrienols. Tocopherols are often used as dietary supplements for humans, as food preservatives, and in the manufacture of cosmetics and sunscreens; α-tocopherol is the most predominant and active form in most human and animal tissues [155].
Vitamin K can be divided into phylloquinone (vitamin K1), which bears a phytyl group and is obtained from plants, and the menaquinones (vitamin K2) [154]. Vitamin C, or ascorbic acid, is an essential dietary component that humans are unable to synthesize.
The absence of adequate amounts of these compounds in the diet can cause several health problems, not only in humans but also in animals. They are therefore produced industrially and used widely not only as food and feed additives, but also in cosmetics, as therapeutic agents, and as health and technical aids [154]. However, these processes require the use of solvents, which are undesirable pollutants harmful to the environment. To overcome this drawback, several studies have focused on the selection of microorganisms able to produce vitamins (Table 3) [165]; for example, vitamin C (antioxidant activity; biosynthesis of collagen, L-carnitine and certain neurotransmitters; protein metabolism) can be produced by Gluconobacter spp., Acetobacter spp., Ketogulonicigenium spp., Pseudomonas spp., Erwinia spp., and Corynebacterium spp. [166,167]. Presently, several studies are focusing on vitamin B9 (folate), since it plays very important roles in human health, including amino acid metabolism and DNA replication and repair, and is thus essential for cell division. In pregnant women, daily intake of folic acid is recommended since it reduces the risk of low birth weight, maternal anemia and neural tube defects (NTD) such as spina bifida and anencephaly [168]. There are many forms of vitamin B9, called vitamers, which differ in their resistance to technological processes. Folic acid, the synthetic form of vitamin B9, carries only a single glutamate moiety, while naturally occurring forms are characterized by a polyglutamate chain. In addition, folic acid has a fully oxidized pteridine ring, while the other vitamers are generally either partially reduced (at the 7,8-position), in the case of dihydrofolate forms, or fully reduced (at the 5,6,7,8-position), in the case of tetrahydrofolate compounds [169].
Humans do not synthesize folate de novo, and folate deficiency is a problem worldwide; in fact, several countries have adopted mandatory fortification programs for foods of mass consumption such as flours and rice [169]. The main strategies used to address vitamin deficiencies are (i) supplementation, (ii) food fortification, and (iii) dietary diversification [170]. Unfortunately, folate-rich foods are not always available, depending on the season and on the geographic, agro-ecological and socio-economic context, and the intake of folic acid can exert adverse secondary effects, such as masking symptoms of vitamin B12 deficiency and possibly promoting colorectal cancer. These side effects are not observed when natural folates, such as those found in foods or produced by certain microorganisms, are consumed [169].
The main producers of folate are LAB and bifidobacteria (Table 4). Folate production is strain-dependent and is influenced by growth kinetics and medium composition. Several studies reviewed in [169] highlighted that bacterial folate production occurs during the exponential growth phase or at the beginning of the stationary phase, after which the folate is consumed. The majority of studies on folate production by eukaryotic microorganisms were carried out on S. cerevisiae and A. gossypii [173], but other yeast genera, such as Candida, Debaryomyces, Kodamaea, Metschnikowia and Wickerhamiella, are also reported as folate producers [174]. A. gossypii naturally synthesizes 40 µg/L of folates and, after metabolic engineering, can reach 6,595 µg/L. This result was obtained by overexpressing three genes involved in folate production (FOL1, FOL2, FOL3) and deleting the gene MET7, which encodes a folylpolyglutamate synthetase (FPGS) that catalyses the polyglutamylation of folates at their gamma-carboxyl residue [173]. The elimination of competing pathways, such as those for riboflavin and adenine, also favours folate production [173].
Despite the efforts undertaken so far, microbial folate production is still low and not competitive, in terms of cost and final concentration, with industrial processes. Possibilities for increasing folate production include developing co-cultures of folate-producing strains and selecting for folate vitamers that are resistant to oxidation, acid pH, and heat treatments. Finally, the use of probiotic strains could be an advantage, since folate could then be produced directly in the gut. Future research should also focus on understanding the complex regulatory mechanisms governing the enzymatic activities of the folate pathway, on optimizing fermentation conditions, and on further developing downstream processes for the recovery and purification of the product.
Use of Taste-Active Microbial Amino Acids, and Peptides in Food Fermentation
Finally, we present some examples concerning inactivated microorganisms that can supply compounds contributing to food properties.
Salt is an irreplaceable additive for flavouring foods. Culinary salt is a chemical compound consisting of the elements sodium and chlorine, and the salty taste is given mainly by Na+. Ions of the other alkali metals also taste salty, but they elicit a weaker sensation than Na+; Li+ and K+ are close in size to Na+ and create an almost similar salty taste. The salinity of substances is assessed against a sodium chloride standard [181,182]. KCl is the main ingredient used to replace salt, with a salinity index of 0.6 (where the salinity of NaCl is 1).
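To make the salinity-index arithmetic concrete, the following minimal Python sketch estimates the NaCl-equivalent saltiness of an NaCl/KCl blend from the indices quoted above; the blend ratios are illustrative assumptions, not values from the text.

```python
# NaCl-equivalent salinity of a salt blend, using the relative salinity
# indices quoted above (NaCl = 1.0, KCl = 0.6). Blend ratios are illustrative.
SALINITY_INDEX = {"NaCl": 1.0, "KCl": 0.6}

def nacl_equivalent(grams_by_salt):
    """Grams of pure NaCl giving roughly the same perceived saltiness."""
    return sum(SALINITY_INDEX[salt] * g for salt, g in grams_by_salt.items())

# Replacing 30% of a 10 g NaCl dose with KCl:
blend = {"NaCl": 7.0, "KCl": 3.0}
print(nacl_equivalent(blend))   # 7.0 + 0.6 * 3.0 = 8.8 g NaCl-equivalent
print(blend["NaCl"] / 10.0)     # sodium reduced to 70% of the original
```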
Monosodium glutamate (MSG) gives the meaty, umami taste, one of the five basic tastes along with sourness, sweetness, saltiness and bitterness. In 1909, Kikunae Ikeda isolated MSG from seaweed. The taste strength of glutamate is considerable: the sensory threshold of MSG is about 1/3000 (one gram in three liters of water), a much stronger intensity than salt or sugar. In addition, glutamate enhances the perception of salty taste and can therefore help to reduce the amount of salt added to food. Reducing salt in daily meals is a goal for avoiding diseases such as high blood pressure and kidney failure, but reducing salt leads to food with poor taste, and using KCl as a substitute for culinary salt creates a bitter, metallic taste. Research has shown that MSG combined with culinary salt significantly improves the sensory properties of foods: Yamaguchi [183,184] reported that the addition of MSG to broth could decrease the level of sodium chloride needed for a similar sensory result. Thus, MSG can partly replace culinary salt while preserving the deliciousness of food.
MSG is present in varying amounts in most natural food sources, such as tomatoes, fish, meat and oysters. It can be present in free form or bound to other amino acids in peptides and proteins. The free glutamate content of foods has been determined [185,186]; the highest values (per 100 g) are found in Parmesan cheese (1,680 mg), seaweed (1,608 mg), Japanese fish sauce (1,323 mg), tomatoes (246 mg), and oysters (140 mg).
In the human body, approximately 70% of body weight is water and 20% is protein, of which glutamate accounts for about 2%. MSG is a natural part of metabolism, and about 50 g per day is formed by the human body. The average person consumes 10-20 g of bound glutamate and about 1 g of free glutamate per day. Dietary glutamate is a main source of energy for the intestine.
Saccharomyces yeast is a protein-rich source (protein accounts for 48-50% of dry matter), and yeast hydrolysates are rich sources of amino acids and peptides. They have many applications in foods such as salad dressings, ice creams, crackers and meat products, where they are used as flavour-enhancing additives. Beer production can be a source of yeast: in a country like Vietnam, with beer consumption of about 4.6 billion litres in 2019 according to data from the World Bank and Euromonitor, production can generate around 7,000 tons of spent yeast that can be used for either food or feed. Utilising this large source of protein from brewer's yeast to produce hydrolysates for food and food-additive applications therefore has substantial practical value. The amino acid composition of brewer's yeast hydrolysates (BYH) varies with the hydrolysis technique; a continuous-circulation hydrolysis method with heat shock and autolysis gives the highest total amino acid content, with glutamate accounting for 3.14 g/100 g BYH (55% dry matter) out of a total amino acid content of 32.3 g/100 g BYH.
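As a quick plausibility check of the figures just quoted, a short Python snippet can relate them; all inputs come from the text, while the grams-per-litre yield is merely the ratio they imply, not an independent datum.

```python
# Back-of-envelope checks of the BYH and spent-yeast figures quoted above.
glutamate_g_per_100g = 3.14   # g glutamate per 100 g BYH (55% dry matter)
total_aa_g_per_100g = 32.3    # g total amino acids per 100 g BYH
print(glutamate_g_per_100g / total_aa_g_per_100g)  # ~0.097: ~10% of amino acids

beer_litres = 4.6e9           # Vietnam, 2019
spent_yeast_tons = 7000
print(spent_yeast_tons * 1e6 / beer_litres)        # ~1.5 g spent yeast per litre
```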
However, bitterness in hydrolysates is one of the major undesirable aspects for various food-processing applications. It has been reported that the bitterness of brewer's yeast hydrolysate obtained using Flavourzyme is the lowest, and that this product retains a good umami taste [187].
The second limitation in the use of yeast and its hydrolysates is the high nucleic acid content of the yeast. There are many methods for reducing or separating nucleic acids in hydrolysed products, such as extracellular ribonuclease enzymes, chemical agents, thermal shock and autolysis. Using an extracellular ribonuclease for hydrolysis of the nucleic acids gives good efficiency but suffers from high production costs, while chemical agents negatively affect the quality of hydrolysed products intended for the food industry. It has been reported that a method combining heat-shock treatment, autolysis and continuous-circulation hydrolysis gave the lowest nucleic acid content in brewer's yeast hydrolysate, compared with batch and continuous overflow processes [188].
In addition to the contribution of inactivated yeast to the taste of products, this popular microorganism can also provide health-active compounds. One of the most economically important components of yeast biomass is ergosterol which, as discussed in the previous paragraph, can be used as a precursor of vitamin D2 and other sterol drugs [189]. Thanks to advances in biotechnology, modified yeast strains have been developed to enhance the production of ergosterol or its co-production with other products [190-192]. In Vietnam, the National Institute of Nutrition has investigated the production of ergosterol from S. cerevisiae and its application in functional foods. From 50 yeast samples from bakeries and 50 samples of fresh grapefruit from markets in Hanoi, two yeast strains, MB14.2.2 and N42.2.2, were found to have the highest ergosterol concentrations relative to dry biomass (3.7% and 3.5%, respectively), and optimized conditions and an apparatus system for ergosterol production from these strains were established. For functional-food applications, cookies (for children) and soya milk powder (for adults) were supplemented with vitamin D2 (1,600 IU/100 g and 2,261 IU/100 g, respectively) obtained from ergosterol by an irradiation method. After consuming the products, the children's group showed better improvement in height-for-age z-scores and body mass index (BMI), while the adult group showed improved bone health and improved blood biochemical indicators. Concentrations of 25-(OH)D in both vitamin D2 groups were significantly higher than in the control groups (p < 0.001), and the percentage of vitamin D deficiency decreased noticeably in both intervention groups.
Furthermore, brewing yeast is a great source of β-glucan. When yeasts are grown for seasoning purposes, molasses from sugar production is used as the raw material for fermentation, and there are currently three products: spray-dried whole-cell yeast, yeast extract in paste form, and spray-dried yeast extract. The yeast cell wall separated after centrifugation goes to wastewater and adds complications and costs to wastewater treatment. There is therefore a great opportunity to add value by using the cell wall as a source for the production of β-glucan, a functional food ingredient.
Conclusions
With the growing concern of consumers about the food they eat, the clean-label strategy has become widespread in many companies. From first efforts that could often be assimilated to greenwashing, some companies have now developed a systematic effort against additives, and in this cleaning effort microorganisms can be an efficient tool. This review illustrates what microorganisms can bring to the clean-label concept through examples of recent strategies. Besides the use of microorganisms producing antifungal weak acids in bread products or exopolysaccharides, of strains able to consume lipids or sugars to decrease the caloric content of foods, or of compounds with a positive effect on human health, the efficacy of microbial strains for obtaining good foods without additives must always be evaluated. The use of microorganisms could help to reduce the employment of additives, since some strains are able to transform food components; degrade off-flavors, antinutritional factors, toxins, and chemical pollutants; or bring new molecules that are active for taste or health. Further studies are necessary to improve this "clean label" approach and reduce the list of ingredients used in food products. | 8,519 | 2021-04-30T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology"
] |
Identification of possible non-stationary effects in a new type of vortex furnace
The article presents the results of an experimental study of pressure and velocity pulsations in a model of an improved vortex furnace with distributed air supply and vertically oriented nozzles of the secondary blast. The aerodynamic characteristics of the swirling flow at different regime parameters were investigated in an isothermal laboratory model (1:25 scale) of the vortex furnace using a laser Doppler measuring system and a pressure pulsation analyzer. The results reveal a number of features of the flow structure, and the spectral analysis of pressure and velocity pulsations indicates the absence of large-scale unsteady vortical structures in the studied design.
Introduction
Currently, coal combustion is the most important source of electrical and thermal energy, and pulverized coal combustion in a vortex flow is one of the promising technologies. Flow swirling solves a number of problems: it increases the residence time of fuel particles in the combustion chamber, which reduces mechanical underburning; reduces harmful emissions to environmental standards; allows the design of boiler equipment to be optimized; and ensures effective control of the combustion process. One stage in the development of furnaces is the study of their internal aerodynamics in laboratory models: a detailed study of the main features of the isothermal flow structure allows the structural and operating parameters of the furnace to be optimized. It is known that intensely swirling flows, under certain conditions, lose the stability of the stationary regime, which may manifest as a precessing vortex core (PVC). The intensive pressure pulsations associated with the PVC cause wear of power plants, degrade the performance of vortex devices, and have a negative impact on furnace processes. Therefore, to improve the efficiency and reliability of the vortex combustion devices under development, possible unsteady effects in the working areas must be examined.
In previous works [1,2], the authors visualized the vortex structure of the flow in a model of the improved vortex furnace with distributed fuel-air mixture supply and vertically oriented nozzles of the secondary blast. Those results were based on time-averaged flow characteristics and did not allow an unambiguous conclusion about the vortex core dynamics. The aim of this work is to experimentally study the pulsation characteristics of the flow in this model of the vortex furnace.
Experimental setup and measurement techniques
The flow structure and its pulsation characteristics were studied on an automated experimental stand. The scheme of the experimental setup with the mounted three-component (3D) LDA system is presented in figure 1-a. The main elements of this setup are: the compressed-air supply line with control and regulating devices; the model of the improved vortex furnace; a fog generator (for seeding the investigated flow with tracer particles); and measurement devices with a computer running specialized software. Air flowing through the supply line is injected into the isothermal model of the vortex furnace, and the flow is studied with a variety of modern optical methods.
The main elements of the vortex furnace are (Fig. 1-b): the combustion chamber; a diffuser; and a cooling chamber ending in a horizontal flue. The model dimensions (X×Y×Z) are 320×1200×256 mm (scale 1:25), and the diameter of the vortex combustion chamber is 320 mm. A distinctive feature of this design, compared with the previously studied one [3] (where the additional tangential inlet is located in the lower part of the combustion chamber), is the vertical arrangement of the secondary-blast nozzles and the presence of a "visor" inside the vortex chamber to prevent entrainment of fuel particles from the combustion chamber. The advantages of this design of the vortex furnace are described in [1,2]. Pressure fluctuations were measured using a Bruel & Kjaer noise analyzer (pressure measurement up to 103.5 kPa, frequency range 4.2 Hz-20 kHz, sensitivity 54.9 mV/Pa). The sensor was placed inside the vortex combustion chamber with a metal sampler, a thin-walled tube 2.2 mm in diameter and 160 mm long. The transfer function of the sampler is presented in [4], which shows its applicability without correction up to frequencies of ~100 Hz. The measurements were carried out close to the conventional center of the vortex chamber (x = y = 160 mm, z = 126 mm) at various operating parameters (the ratio of the flow rates through the main and additional nozzles was varied from 1 to 4). The signal, digitized by the ADC (L-CARD E14-440), was expanded into a spectrum using the fast Fourier transform.
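The spectral-analysis step, digitizing the pressure signal and expanding it with the fast Fourier transform, can be sketched as follows; the sampling rate and the synthetic test signal are assumptions for illustration, not parameters reported here.

```python
import numpy as np

fs = 5000.0                          # assumed sampling rate, Hz
t = np.arange(0, 10.0, 1.0 / fs)     # 10 s record
rng = np.random.default_rng(0)
# Synthetic stand-in for the digitized microphone voltage: broadband noise
# plus one acoustic-resonator peak (cf. the multi-peak spectra of Fig. 3).
u = 0.05 * np.sin(2 * np.pi * 100.0 * t) + 0.02 * rng.standard_normal(t.size)

amp = np.abs(np.fft.rfft(u)) / t.size        # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = freqs[1:][np.argmax(amp[1:])]         # skip the DC bin
print(f"dominant pulsation frequency: {peak:.1f} Hz")
```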
Flow velocity pulsations were diagnosed with the two-component (2D) laser Doppler anemometer LAD-05 (frequencies up to 3 kHz), developed at IT SB RAS. Laser Doppler anemometry is based on measuring the velocity of particles (tracers) suspended in the flow. The measurements were performed in the range of Reynolds numbers 3×10^5 < Re < 6×10^5, calculated from the diameter of the vortex chamber (320 mm) and the velocity module in the top burners (V0 = 5-25 m/s). These conditions (Re > 10^4) ensure the self-similarity regime and the applicability of the physical-modeling results to the analysis of the isothermal flow structure in a full-size furnace. Each measurement lasted long enough (about 1 min) to obtain 16384 (2^14) samples for each velocity component, facilitating further processing with the fast Fourier transform (FFT).
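The self-similarity argument rests on Re = V0·D/ν; the sketch below reproduces the quoted order of magnitude under an assumed textbook value of the kinematic viscosity of air (the viscosity is not reported in the text).

```python
# Re = V0 * D / nu for the quoted geometry; nu is an assumed textbook value
# for air at room temperature, not a number reported in the paper.
nu_air = 1.5e-5      # kinematic viscosity of air, m^2/s (assumed)
D = 0.32             # vortex-chamber diameter, m

for V0 in (5.0, 15.0, 25.0):     # nozzle velocity range from the text
    Re = V0 * D / nu_air
    print(f"V0 = {V0:4.1f} m/s -> Re = {Re:.2e}, self-similar: {Re > 1e4}")
```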
Measurement results and their analysis
Earlier, using the independent measurement techniques of 3D-LDA [1] and Stereo PIV (Particle Image Velocimetry) [2], the authors studied the aerodynamics of a model of this vortex furnace with distributed tangential air supply. Based on the "minimum total pressure criterion" [5], the vortex flow structure was visualized. Figure 2-a presents the vector velocity field (a section through the "center of the nozzle") obtained by PIV measurements [1], as well as the isosurface of dynamic pressure (p_dyn = 0.25 Pa, Fig. 2-b) visualizing the vortex flow structure [2]. It has a W-shaped form, typical of the well-known CBTI vortex furnace [6]. For comparison, the figure also shows the isosurface of total pressure and the distribution of the Q-criterion (Fig. 2-c), obtained by numerical simulation [7]. Fig. 2. The vector velocity field in the cross-section "over the nozzle center" (a); isosurface of dynamic pressure (b); isosurface of total pressure (c).
To analyze the stationarity of the revealed vortex structure, pressure fluctuations were measured in the central part of the combustion chamber. Spectra of the microphone signal (U, volts) obtained for different ratios of flow rates through the main and additional nozzles are presented in figure 3. They have a complex form with multiple peaks; however, the positions of these peaks depend neither on the flow-rate ratio nor on the magnitude of the total flow, which indicates the absence of unsteady vortex structures (such as a precessing vortex core or other structures [8]). The observed peaks instead characterize the model as an acoustic resonator [6]. Since the noise analyzer used in the measurements is limited in frequency range, the analysis of pulsations at low frequencies (<5 Hz) was performed with the laser Doppler anemometer LAD-05. The measurements were carried out at different points of the model, in the vicinity of the conventional axis of the curved vortex core and near the inlet nozzles, for different values of the initial velocity. Figure 4 shows typical spectra of pulsations of the horizontal velocity component (normalized by the initial velocity V0) at the point (x = 150 mm; y = 100 mm, z = 64 mm) in a plane passing through the centers of the nozzles. The complete absence of peaks confirms the earlier conclusion about the absence of the PVC; these spectra are characteristic of all measurement points. In summary, using modern measurement methods, the pulsation characteristics of the swirling flow have been investigated in the improved model of the vortex furnace with distributed fuel-air supply and vertically oriented secondary-blast nozzles. The spectral analysis of pressure and velocity pulsations of the turbulent swirling flow at different flow-rate ratios proves the stability of the stationary vortex core structure in the studied improved model. The absence of negative effects associated with vortex core precession is one of the important practical advantages [9] of the studied furnace design.
Research was supported by the Russian Science Foundation (Project No. 14-19-00137). | 1,981 | 2017-10-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Orthogonal Chaotic Binary Sequences Based on Bernoulli Map and Walsh Functions
The statistical properties of chaotic binary sequences generated by the Bernoulli map and Walsh functions are discussed. The Walsh functions are based on a 2^k × 2^k Hadamard matrix. For general k (= 1, 2, ⋯), we will prove that 2^{k−1} Walsh functions can generate essentially different balanced and i.i.d. binary sequences that are orthogonal to each other.
Introduction
The simplest way to generate chaos is to use a one-dimensional (1D) nonlinear difference equation with a chaotic map. Chaotic sequences can be used as random numbers in several engineering applications, and there have been many works on chaos-based random number generation [1][2][3][4][5][6][7][8][9][10][11]. In general, truly random numbers should be a sequence of i.i.d. (independent and identically distributed) random variables with a uniform probability density; that is, they give maximum entropy. Their typical model, for example, is a sequence obtained by trials of fair coin-tossing or dice-throwing. The design of many chaotic sequences of i.i.d. binary (or p-ary) random variables from a single chaotic real-valued sequence generated by a class of 1D nonlinear maps was established in [1][2][3], where it was shown that some symmetric binary (or p-ary) functions can produce i.i.d. binary (or p-ary) sequences if the map satisfies certain symmetry properties.
In some engineering applications of chaos-based random numbers (e.g., communication, cryptography, the Monte Carlo method), their statistical properties, such as distributions and correlations, are very important. Whereas there are several indices for defining chaos, such as Lyapunov exponents, we concentrate on statistical properties in this paper. Thus, we discuss the statistical properties of orthogonal chaotic binary sequences generated by the Bernoulli map and Walsh functions based on Hadamard matrices, as already discussed in [12]. As is well known, Walsh functions are the most famous orthogonal binary functions and are used in many applications (e.g., signal processing) [13][14][15][16][17]. In [12], we proved that the Bernoulli map and Walsh functions based on the 2^k × 2^k (1 ≤ k ≤ 4) Hadamard matrix can generate 2^{k−1} different balanced i.i.d. binary sequences that are orthogonal to each other. Here, "balanced" means that the probability of "1" (or "0") in the binary sequence is equal to 1/2. We conjectured that this holds for general positive integers k; in this paper, we give a rigorous proof for general k (= 1, 2, ⋯).
Preliminaries
Definition 1. For a nonlinear map τ(·), a chaotic sequence {x_n}_{n=0}^∞ can be generated by the 1D difference equation
$$x_{n+1} = \tau(x_n), \quad n = 0, 1, 2, \ldots,$$
where x_n = τ^n(x) and x = x_0 is called an initial value or seed. For an integrable function G, the average (expectation) of the sequence {G(τ^n(x))}_{n=0}^∞ is defined by
$$\langle G(\tau^n(x)) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} G(\tau^n(x)),$$
which is very important in evaluating the statistics of chaotic sequences under the assumption that τ(·) is mixing on I with respect to an absolutely continuous invariant measure, denoted by f*(x)dx.
Definition 2.
The Perron-Frobenius (PF) operator P_τ of the map τ with an interval I = [a, b] is defined by
$$P_\tau h(x) = \frac{d}{dx} \int_{\tau^{-1}([a, x])} h(y)\, dy,$$
which can be rewritten as
$$P_\tau h(x) = \sum_{i} h(g_i(x)) \left| \frac{d g_i(x)}{dx} \right|,$$
where g_i(x) is the i-th preimage of the map τ(·) [18].
Remark 1.
The PF operator given in Definition 2 is very useful for evaluating correlation functions because it has the following important property [18]:
$$\int_I P_\tau h(x)\, g(x)\, dx = \int_I h(x)\, g(\tau(x))\, dx;$$
then we have
$$\langle G(\tau^n(x))\, H(\tau^{n+\ell}(x)) \rangle = \int_I P_\tau^{\ell} \{ G(x) f^*(x) \}\, H(x)\, dx,$$
which is obvious from Equations (3) and (6).
Hadamard Matrix and Walsh Functions
We introduce the 2^k × 2^k Hadamard matrix H_k, defined recursively by [13][14][15]
$$H_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad H_k = \begin{pmatrix} H_{k-1} & H_{k-1} \\ H_{k-1} & -H_{k-1} \end{pmatrix},$$
which is one of the orthogonal matrices whose rows (or columns) are orthogonal 2^k-tuples. For example, H_3 is given by
$$H_3 = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{pmatrix}.$$
Furthermore, H_k can be expressed as
$$H_k = H_1 \otimes H_{k-1},$$
where ⊗ denotes the Kronecker product.
Proof. From the recursive definition above, H_k H_k^T = (H_1 H_1^T) ⊗ (H_{k-1} H_{k-1}^T) = 2 I_2 ⊗ 2^{k-1} I_{2^{k-1}}, which leads us to obtaining H_k H_k^T = 2^k I_{2^k}, i.e., the orthogonality of the rows of H_k.
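The Kronecker-product construction and this orthogonality property translate directly into a few lines of numpy; the sketch below (names are illustrative) builds H_3 and asserts H_3 H_3^T = 2^3 I.

```python
import numpy as np

H1 = np.array([[1, 1],
               [1, -1]])

def hadamard(k):
    """2^k x 2^k Sylvester-type Hadamard matrix: k-fold Kronecker power of H1."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(H1, H)
    return H

H3 = hadamard(3)
# Row orthogonality: H3 @ H3.T must equal 2^3 * I.
assert np.array_equal(H3 @ H3.T, 8 * np.eye(8, dtype=int))
print(H3)
```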
Orthogonal Chaotic Binary Sequences
For chaotic binary sequences {B_i^{(k)}(τ^n(x))}_{n=0}^∞ (i = 1, 2, ⋯, 2^k − 1) generated by a nonlinear map with I = [0, 1] and f*(x) = 1, it is obvious that
$$\langle B_i^{(k)}(\tau^n(x)) \rangle = \int_0^1 B_i^{(k)}(x)\, dx = \frac{1}{2},$$
that is, the binary sequences are balanced. (Note that B_0^{(k)}(x) ≡ 0.) Furthermore, we have
$$\int_0^1 B_i^{(k)}(x)\, B_j^{(k)}(x)\, dx = \frac{1}{4} \quad (i \neq j),$$
which gives
$$\Big\langle \big( B_i^{(k)}(\tau^n(x)) - \tfrac{1}{2} \big) \big( B_j^{(k)}(\tau^n(x)) - \tfrac{1}{2} \big) \Big\rangle = 0 \quad (i \neq j).$$
This implies that the binary sequences {B_i^{(k)}(τ^n(x))}_{n=0}^∞ are orthogonal to each other. In this paper, we employ the Bernoulli map τ_B(x), defined by
$$\tau_B(x) = 2x \bmod 1,$$
which has the uniform invariant density f*(x) = 1 on the unit interval I = [0, 1]. Figure 2 shows the map. For B_i^{(k)}(x) and the Bernoulli map τ_B(x), a further relation (Equation (14)) is satisfied, and an analogous equation holds for a threshold function Θ_t(x) and the Bernoulli map τ_B(x).
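A small Python sketch of the generator is given below, with two caveats: the binary function B_i^(k) is taken as the i-th row of H_k sampled on the 2^k equal subintervals of [0, 1), with +1/−1 mapped to bits 0/1, which is a plausible reading of the Walsh-function construction rather than a verbatim transcription of [12]; and because iterating x → 2x mod 1 in floating point sheds one mantissa bit per step, the orbit is drawn from the equivalent shift on a random binary expansion.

```python
import numpy as np

def bernoulli(x):
    """One step of the Bernoulli map tau_B(x) = 2x mod 1."""
    return (2.0 * x) % 1.0

def bernoulli_orbit(n_steps, n_bits=52, rng=None):
    """Orbit of tau_B for a 'typical' (random) seed.

    Naively iterating bernoulli() in floating point collapses to 0 after
    ~52 steps, so we exploit the fact that tau_B is the shift map on binary
    expansions and read the orbit off a random bit string instead.
    """
    rng = rng or np.random.default_rng(1)
    bits = rng.integers(0, 2, size=n_steps + n_bits)
    weights = 0.5 ** np.arange(1, n_bits + 1)
    return np.array([bits[n:n + n_bits] @ weights for n in range(n_steps)])

def walsh_bit(H, i, x):
    """B_i^(k)(x): bit from row i of the 2^k x 2^k Hadamard matrix H,
    sampled on the subinterval of [0, 1) containing x (+1 -> 0, -1 -> 1)."""
    cell = min(int(x * H.shape[0]), H.shape[0] - 1)
    return (1 - H[i, cell]) // 2
```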
Each of the sequences {B_i^{(k)}(τ_B^n(x))}_{n=0}^∞ is a balanced i.i.d. binary sequence, and they are uncorrelated (orthogonal) with each other for any time shift ℓ, including ℓ = 0 (Equation (34)). It should be noted that Equation (34) implies that the 2^{k−1} binary sequences {B_i^{(k)}(τ_B^n(x))}_{n=0}^∞ (i = 2^{k−1}, 2^{k−1} + 1, ⋯, 2^k − 1) are essentially different; that is, none of them are time-shifted versions of the others. Table 1 shows the evaluation results for the case k = 4.
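Under those assumptions, the balance and orthogonality claims can be checked empirically (cf. Table 1), reusing the hadamard, bernoulli_orbit and walsh_bit helpers sketched above; k = 3 and the sample length are illustrative choices.

```python
import numpy as np

k, N = 3, 100_000
H = hadamard(k)                  # from the Kronecker-product sketch above
orbit = bernoulli_orbit(N)
seqs = {i: np.array([walsh_bit(H, i, x) for x in orbit])
        for i in range(2 ** (k - 1), 2 ** k)}   # i = 4, ..., 7

for i, s in seqs.items():
    print(f"B_{i}: mean = {s.mean():.4f}")       # ~0.5 -> balanced
for i in seqs:
    for j in seqs:
        if i < j:
            r = np.corrcoef(seqs[i], seqs[j])[0, 1]
            print(f"corr(B_{i}, B_{j}) = {r:+.4f}")  # ~0 -> orthogonal
```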
Conclusions
We theoretically evaluated the statistical properties of chaotic binary sequences generated by the Bernoulli map and Walsh functions. For a given k, it was shown that the 2^{k−1} binary sequences {B_i^{(k)}(τ_B^n(x))}_{n=0}^∞ (i = 2^{k−1}, 2^{k−1} + 1, ⋯, 2^k − 1) are essentially different, in the sense that none of them are time-shifted versions of the others. Furthermore, we showed that each of the 2^{k−1} binary sequences is a balanced i.i.d. sequence, and that they are uncorrelated (orthogonal) with each other for any time shift.
As in [12,19], the Bernoulli map can be approximated by nonlinear feedback shift registers (NFSRs) [20] with a finite number of bits, and the binary functions corresponding to B_i^{(k)}(x) can be easily realized by combinational logic circuits. We will discuss applications of the orthogonal binary sequences using such NFSRs in future work. | 1,436.4 | 2019-09-24T00:00:00.000 | [
"Computer Science"
] |
Sex-related differences in retinal function in Wistar rats: implications for toxicity and safety studies
Introduction: Wistar Han rats are a preferred strain of rodents for general toxicology and safety pharmacology studies in drug development. In some of these studies, visual functional tests that assess retinal toxicity are included as an additional endpoint. Although the influence of gender on human retinal function has been documented for more than six decades, preclinically it is still uncertain whether there are differences in retinal function between naïve male and female Wistar Han rats. Methods: In this study, sex-related differences in retinal function were quantified by analyzing electroretinography (ERG) in 7-9-week-old (n = 52 males and 51 females) and 21-23-week-old Wistar Han rats (n = 48 males and 51 females). Optokinetic tracking responses, brainstem auditory evoked potentials, ultrasonic vocalizations and histology were evaluated in a subset of animals to investigate potential compensation mechanisms for spontaneous blindness. Results/Discussion: Absence of scotopic and photopic ERG responses was found in 13% of 7-9-week-old males (7/52) and 19% of 21-23-week-old males (9/48), but in none of the female rats (0/51). The averaged amplitudes of rod- and cone-mediated ERG b-wave responses obtained from males were significantly smaller than those from age-matched females (−43% and −26%, respectively) at 7-9 weeks of age. There were no differences in retinal or brain morphology, brainstem auditory responses, or ultrasonic vocalizations between the animals with normal and abnormal ERGs at 21-23 weeks of age. In summary, male Wistar Han rats had altered retinal responses, including a complete lack of responses to test flash stimuli (i.e., blindness), compared with female rats at 7-9 and 21-23 weeks of age. Sex differences should therefore be considered when interpreting data from retinal functional assessments in toxicity and safety pharmacology studies using Wistar Han rats.
Introduction
Due to their longevity, small body size, slow growth rate, and low incidence of spontaneous tumors, Wistar Han (WH) rats are currently one of the most used strains in biomedical research (Weber et al., 2011; Gauvin et al., 2019). This strain has also been recommended for toxicological testing in drug development in the United States (Son et al., 2010; Gauvin et al., 2019) and Europe (Gauvin et al., 2019). Sometimes visual functional tests, e.g., electroretinography (ERG) or visual discrimination behavioral tests, are included as add-on endpoints for assessing the potential retinal toxicity of new molecules (Rosolen et al., 2005; Brock et al., 2013). Ophthalmologic and histopathologic examinations have shown a higher incidence of corneal opacities and mineralization in WH rats compared with Sprague-Dawley rats (Hayakawa et al., 2013). Spontaneous microscopic lesions have also recently been reported in the retinas of this strain, with 5.0%-45.7% of examined rats displaying retinal degeneration and retinal rosettes/folds (Cloup et al., 2021). In previous pilot studies, as many as 11%-12% of adult male WH rats were identified as having virtually no ERG responses to a series of test light flashes, indicating a loss or decrease of visual function. Although these animals behaved normally during cage-side observations and had no findings on standard eye examinations, some of them were found to be blind based on our ERG assessments. In pharmacology or neurological studies, rats with significant photoreceptor loss (O'Steen et al., 1995) and rats with reduced visual acuity (Prusky et al., 2000) are impaired in Morris water task experiments, compromising the interpretation of experimental data that depend on visual function. It is therefore essential for toxicologists to be familiar with spontaneous ocular morphological and functional alterations that may occur in WH rats used in safety assessment studies. Although visual responses at the retina (Heiduschka and Schraermeyer, 2008) and brain (Thomas et al., 2005), visual acuity thresholds (Prusky et al., 2002) and susceptibility to light damage (De Vera Mudry et al., 2013) have been compared between pigmented and albino rats, no comparative study has quantified the visual or retinal function of WH rats in large groups of males and females.
Visual impairment or blindness can alter sensory, memory, social, and survival behavior through various compensatory mechanisms. Since the 1980s, evidence has accumulated showing that blind individuals can have better hearing than those with normal vision, due to intramodal plasticity in the cortex and subcortical auditory structures (Niemeyer and Starlinger, 1981; Liotti et al., 1998; Bavelier and Neville, 2002). Alterations in auditory brainstem responses have also been observed in blind adults (Jafari and Malayeri, 2014) and children (Jafari and Malayeri, 2016). However, these forms of intramodal compensation have not been documented in blind or vision-impaired rodents.
To fill these knowledge gaps, the current study screened male and female adult WH rats using regular ophthalmic examinations and ERG. Additionally, optokinetic tracking responses (OKR), brainstem auditory evoked potentials (BAEP), and ultrasonic vocalizations (USV) were recorded to compare potential differences between normal-sighted and blind animals. The resulting structural plasticity in the retina and in the visual and auditory pathways of the brain was also examined using conventional histology.
2 Materials and methods
Animals
All activities involving animals conformed to the guidelines established by the Association for Research in Vision and Ophthalmology (ARVO) Statement for the Use of Animals in Ophthalmic and Vision Research, and the animal use protocol was approved by the Pfizer Institutional Animal Care and Use Committee (IACUC). Adult male and female WH rats (Crl:WI [Han], Charles River Laboratories, Raleigh, NC) were obtained at approximately 6-10 weeks of age. The animals were group-housed (2-3/cage) in Techniplast cages with Enrich-n'Pure bedding (The Andersons Inc., Maumee, OH), at a room temperature of 20°C-26°C and humidity of 30%-70%, under a 12 h:12 h light-dark cycle. They were provided ad libitum with reverse-osmosis-purified water and a regular irradiated Teklad Global Rodent Diet (Envigo, 2916C). ERG and OKR tests were performed on all animals between 8:00 a.m. and 3:00 p.m. Three cohorts of WH rats were ordered (see Supplementary Table S1) and assigned to four groups, as summarized in Table 1. Groups 1 and 3 consisted of two separate sets of male rats, one 7-9 weeks of age (n = 52) and the other 21-23 weeks of age (n = 48). Groups 2 and 4 consisted of the same set of female rats (n = 51), evaluated at 7-9 weeks of age and again at 21-23 weeks of age.
Ophthalmologic examination
A standard qualitative ophthalmic examination was conducted either prior to ERG testing, for group 2 animals (females) at 7-9 weeks of age, or after the ERG assessment, for group 1 and 3 animals (males) at 7-9 and 21-23 weeks of age, respectively. The visible ocular and adnexal anatomy was evaluated. Mydriacyl (1.0% tropicamide, Akorn Operating Company LLC, Lake Forest, IL) was applied topically to each eye to assist the examination. In ambient lighting, indirect ophthalmoscopy was used to examine the retina, optic disc, and blood vessels, and a handheld slit-lamp biomicroscope was used to examine the anterior chamber.
Electroretinography
Full-field ERGs were tested at 7-9 and 21-23 weeks of age using an LKC system (LKC Technologies, Gaithersburg, MD), as previously described (Liu et al., 2015). Briefly, the male and female rats were kept in the dark for 2-8 h prior to ERG testing in order to enhance retinal sensitivity (Behn et al., 2003). The animals were anesthetized with 2.0%-2.5% isoflurane in oxygen. A dim red light, generated by an Energizer red LED 315 headlamp (intensity: ~5 μW/cm²; wavelength: 620-645 nm; Energizer Holdings, Inc., MO), was briefly used to aid in animal manipulation and electrode placement.
FIGURE 1
Representative abnormal scotopic ERG waveforms from some of the 7-9- and 21-23-week-old male WH rats in response to a series of light flashes (−20 to 5 dB). Each waveform is the average of 3-9 responses to the same intensity flash. Following flash stimulation of the eyes, no standard a-wave could be elicited; instead, long (about 300 ms) negative waveforms were seen once the flash stimuli reached −20 dB, with no further increase as the flashes were intensified. No standard b-wave was present in these animals.
FIGURE 2
Representative normal dark-adapted ERG waveforms from 7-9-week-old female (left) and male (right) Wistar Han rats in response to flashes from −36 to 5 dB. Note that both a- and b-waves are larger in females than in males at all tested flash levels.
Frontiers in Toxicology frontiersin.org maintained using a heated pad connected to the ground. One drop of local anesthetic was administered to prevent blinking, and 1% tropicamide was applied to induce pupil dilation. ERG lens electrodes (Medical Workshop, Groningen, Holland) were placed on both eyes using artificial tears (GenTeal Tears, Alcon, Geneva, Switzerland) as a coupling agent. After disinfecting the skin with an alcohol pad, a platinum needle reference electrode (Natus Neurology, West Warwick, RI) was inserted subcutaneously between the eyes on the forehead. After scotopic testing, the animals were exposed to standard facility lighting (~250 lux) for 10 min to allow for light adaptation prior to photopic ERG testing. A UTAS BigShot Visual Electrodiagnostics System was used to evoke and acquire ERG signals that were high-pass filtered at 0.3 Hz and low-pass filtered at 500 Hz. ERG protocols were adapted from Rosolen et al. (2005) to test scotopic and photopic luminance responses of the retina. Photopic responses were obtained with the background Ganzfeld illumination of 30 cd/m 2 (white light generated by the BigShot system and calibrated by LKC Inc.). ERG waveforms were analyzed using LKC Technologie's software and the guidelines of the International Society for Clinical Electrophysiology of Vision (ISCEV) (Rosolen et al., 2005). The amplitude of the a-wave was measured from baseline to trough and its latency was measured from stimulus to a-wave trough. The amplitude of the b-wave was measured from a-wave trough to b-wave peak and its latency was measured from the stimulus to b-wave peak.
Optokinetic tracking response
Visual acuity was measured in the animals with normal (n = 9, male) and abnormal (n = 9, male) ERGs in group 3 (Table 1) using an optokinetic testing apparatus (OptoMotry; Cerebral Mechanics, Inc., Lethbridge, AB, Canada) at 21-23 weeks of age. The test determined whether the animal showed reflexive head movements in response to rotating stripes displayed on four computer monitors surrounding it (optokinetic reflex) (Chowers et al., 2017). A standard stepwise protocol was adapted, the final score was calculated by the program, and the test videos were captured for post-experiment review and confirmation.
FIGURE 3
Comparison of scotopic a- and b-wave luminance responses between male and female WH rats with normal ERG signals at 7-9 weeks. Male Wistar Han rats had lower mean amplitudes of rod-mediated ERG a-waves (A) and b-waves (B) tested with −36 to +5 dB flashes, which were statistically significant when compared with the subset of female Wistar Han rats with normal ERGs, but there were no statistically significant differences in the latencies of rod-mediated luminance-response a- or b-waves (C, D) between the two groups of animals. SEM = standard error of the mean. * Indicates significant differences between males (filled circles) and females (open circles) at the same flash intensities [2-way ANOVA, F(1,94) = 36.98, ****p < 0.0001 for (A) and F(1,94) = 56.02, p < 0.0001 for (B)].
Three observers' judgments were pooled to determine each animal's OKR responses.
Brainstem auditory evoked potential
Rats with normal (n = 9, male) and abnormal (n = 9, male) ERGs from group 3, previously used for the OKR test, were also tested for BAEP at 21-23 weeks of age. The animals were anesthetized with 2.5% isoflurane and placed on a heated pad to maintain a body temperature of approximately 37.5°C. Acoustic stimuli were created using a digital stimulator (WPI DS8000, World Precision Instruments, Sarasota, FL) in the form of click stimuli with a 100 μs duration and a monopolar waveform. The stimuli (75 dB) were delivered bilaterally to the rat's external auditory canals via earplugs.
Six hundred and fifty stimuli were administered at a frequency of 5 Hz. Auditory potentials were recorded from the right side only, through subcutaneous Grass® platinum needle electrodes (F-E2, Natus Neurology, Galway, Ireland) placed at the vertex (active) and the parietal-occipital area ventrolateral to the right ear (Alvarado et al., 2012). The signals were amplified 10,000 times, band-pass filtered between 300 and 3,000 Hz, and sent to an Axon digitizer (1550B, Molecular Devices Corp, Sunnyvale, CA) for analog-to-digital conversion. The responses were averaged over the 650 sweeps, and the averaged waveforms were analyzed within a 20 ms post-stimulus window. Clampfit software (Molecular Devices, ver. 10.6) was used to measure and analyze the amplitude and latency of the evoked auditory responses. The peak amplitudes and latencies of waves II, III, IV, and V were determined relative to the onset of the acoustic stimulus (Alvarado et al., 2012).
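The acquisition arithmetic just described, band-pass filtering each digitized sweep and averaging the 650 stimulus-locked responses, can be sketched as follows; the digitizer rate, filter order and synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20000.0                             # assumed digitizer rate, Hz
n_sweeps, win = 650, int(0.020 * fs)     # 650 stimuli, 20 ms analysis window
b, a = butter(4, [300.0, 3000.0], btype="bandpass", fs=fs)

rng = np.random.default_rng(0)
t = np.arange(win) / fs
# Synthetic stand-in for the evoked wave (uV) buried in much larger noise.
evoked = 2.0 * np.exp(-((t - 0.004) / 0.0003) ** 2)
sweeps = evoked + 20.0 * rng.standard_normal((n_sweeps, win))

# Filter every sweep, then average: noise shrinks by ~sqrt(650).
avg = filtfilt(b, a, sweeps, axis=1).mean(axis=0)
print(f"peak latency ~ {1000 * t[np.argmax(avg)]:.2f} ms")
```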
Ultrasonic vocalization
Rats with normal (n = 8, male) and abnormal (n = 8, male) ERGs from group 3, previously used for the OKR and BAEP tests, were also tested for USV at 21-23 weeks of age. To reduce social-isolation effects on USVs (Brudzynski and Ociepa, 1992), rats were pair-housed in 8 cages for 24-h continuous recording of USVs. In the test cage, an ultrasound microphone was inserted and fixed in the center of the short wall to capture USV signals emitted by the rats. The emissions were captured by an UltraSoundGate condenser ultrasonic microphone (CM16, Avisoft Bioacoustics, Berlin, Germany), which is sensitive to frequencies between 15 and 180 kHz and has a flat frequency response between 25 and 140 kHz (±6 dB). The microphone was connected to a computer via an UltraSoundGate IH8 (Avisoft Bioacoustics), and acoustic data were recorded by Avisoft Recorder software (version 2.95, Avisoft Bioacoustics), using a sampling rate of 250,000 Hz in 16-bit format and a recording range of 0-125 kHz (Hwang et al., 2022). The 50-kHz and 22-kHz signals were analyzed off-line.
Histology
One to 3 weeks after behavioral testing (OKR, USV and BAEP tests), the 18 male rats were selected for necropsy and tissue collection. These animals were deeply anesthetized using isoflurane and then euthanized by exsanguination. The brains were rapidly and carefully removed, sliced in half coronally, and fixed overnight in 4% neutral buffered formalin. The following day, the specimens were trimmed coronally at the level of the striatum, corpus callosum, and motor cortex, as well as at the level of the mid-cerebellum and medulla oblongata (levels 2 and 6, as described in Bolon et al., 2013). The two most rostral sections of each brain level were processed and embedded into the same paraffin block, and 5 μm sections were taken. The eyes were enucleated immediately after the brain was collected, fixed in Davidson's fixative, and then processed into slides for microscopic evaluation. For each eye, a horizontal section was taken just below the optic nerve, and at least five step sections were taken at 100 μm intervals, starting from below the optic nerve and proceeding toward the optic disc.
FIGURE 4
Comparison of photopic b-wave luminance responses between male and female Wistar Han rats with normal ERG signals at 7-9 weeks of age. Male Wistar Han rats had lower mean amplitudes of cone-mediated ERG b-waves (A) tested with −8 to +5 dB flashes, which were statistically significant [2-way ANOVA, F(1,657) = 19.14, ****p < 0.0001] when compared with the female Wistar Han rats, but there were no statistically significant differences in the latencies of cone-mediated luminance-response b-waves (B) between the two groups.
All brain and eye sections were stained with hematoxylin and eosin (H&E) for microscopic evaluation.
Data analysis and statistics
For the ERG data, a two-way analysis of variance (ANOVA) with repeated measures was performed to compare the luminance responses to light flashes (Inamdar et al., 2022), using GraphPad Prism (version 9.0.0, GraphPad Software, San Diego, CA). Student's t-test was used to compare differences in ERG, OKR, USV, and BAEP parameters between normal-sighted animals and those with abnormal ERGs. Fisher's exact test was used for rate or incidence comparisons. Statistical significance was determined at a level of α = 0.05 (Liu et al., 2015).
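As one concrete instance of these tests, the incidence comparison reported below (7/52 abnormal males vs. 0/51 females at 7-9 weeks; the text reports p = 0.0126) can be reproduced with a Fisher exact test; scipy is used here as an illustrative tool, not the software named in the text (GraphPad Prism).

```python
from scipy.stats import fisher_exact

# Counts from the text: abnormal vs. normal ERGs at 7-9 weeks of age.
table = [[7, 52 - 7],   # males:   7 abnormal, 45 normal
         [0, 51]]       # females: 0 abnormal, 51 normal
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"two-sided Fisher exact p = {p:.4f}")   # matches the reported 0.0126
```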
Abnormal ERG in male Wistar Han rats
The scotopic and photopic luminance responses to a series of flashes were tested in four groups of WH rats aged 7-9 and 21-23 weeks; for the female animals, ERGs were tested at both ages in the same animals (groups 2 and 4). Interestingly, some animals in both age groups displayed abnormal ERG waveforms, characterized by a large negative inflection followed by a flat line (Figure 1), without the clear a- or b-waveforms seen in normal-sighted animals (Figure 2). In addition, these waveforms did not increase in amplitude as the flash stimuli were intensified (Figure 1). Notably, this type of abnormal ERG waveform was observed only in males in groups 1 and 3 (13% and 19%, respectively), and not in age-matched females (0%, p = 0.0126, Fisher exact test, Table 1).
Normal ERG response comparison between male and female Wistar Han rats
Since only male WH rats manifested abnormal ERG waveforms, we wondered whether there were also differences between males and females with normal ERG responses. We therefore compared scotopic a-wave, b-wave, and photopic b-wave parameters between male and female animals at 7-9 weeks (45 males vs. 51 females) and 21-23 weeks (39 males vs. 51 females) of age. At 7-9 weeks, male WH rats (group 1) had lower mean amplitudes for rod-mediated scotopic ERG a-waves (Figures 2, 3A) and b-waves (Figures 2, 3B), which were statistically significant when compared with female WH rats. However, there were no differences in the latency of scotopic a- or b-waves between males and females (Figures 3C, D).
FIGURE 5
Comparison of scotopic a- and b-wave luminance responses between male and female Wistar Han rats with normal ERG signals at 21-23 weeks. Female Wistar Han rats had lower mean amplitudes of rod-mediated ERG a-waves (A), but not b-waves (B), tested with −36 to +5 dB flashes, which were statistically significant [2-way ANOVA, F(1,88) = 8.210, **p = 0.0052] when compared with a subset of male Wistar Han rats, but there were no statistically significant differences in the latencies of rod-mediated luminance-response a- or b-waves (C, D) between the two groups of animals.
The mean amplitudes of the scotopic oscillatory potentials were significantly lower in males compared with females, with a 43% difference (p < 0.0001, t-test). The male WH rats also had lower mean amplitudes for the cone-mediated photopic b-wave (Figure 4A), but no differences in the latency of photopic b-waves between males and females (Figure 4B). At 21-23 weeks, in contrast to the comparative results obtained at weeks 7-9, male WH rats (group 3) had slightly but significantly larger mean a-wave amplitudes (Figure 5A), similar a-wave latency (Figure 5C), and similar b-wave amplitude and latency of rod-mediated scotopic ERG responses. Likewise, cone-mediated b-wave amplitude was larger in males compared with females (Figure 6A). There were no differences in the latencies of the b-wave ERG parameters tested at this time point (Figure 6B).
Normal ERG responses comparison between 7-9 and 21-23 weeks in female Wistar Han rats
Given the opposite direction of the ERG differences between male and female animals at 7-9 and 21-23 weeks, we longitudinally compared ERG responses within the cohort 2 female animals (group 4 vs. group 2). Interestingly, the amplitudes of both scotopic ERG a- and b-waves, though not their latencies, were significantly decreased at 21-23 weeks compared with 7-9 weeks (a-wave: 213.0 µV vs. 335.9 µV at 5 dB; b-wave: 498.3 µV vs. 698.6 µV at 5 dB; all p < 0.0001; Figures 3A vs. 5A and Figures 3B vs. 5B).
Visual acuity in weeks 21-23
To evaluate whether animals with abnormal ERGs would exhibit normal vision-dependent behavior, we performed a visual acuity behavior test (OKR) in male rats at 21-23 weeks of age, measuring and comparing visual acuity between 9 animals with normal and 9 animals with abnormal ERG waveforms. In the nine rats with normal ERG waveforms, the average visual acuity was 0.165 ± 0.102 cycles/degree (mean ± SD), whereas the mean visual acuity was only 0.040 ± 0.073 cycles/degree in the 9 animals with abnormal ERG waveforms. Thus, the animals with abnormal ERG waveforms had statistically significantly smaller mean visual acuity scores compared with the animals with normal ERG waveforms (Figure 7).
FIGURE 6
Comparison of photopic b-wave luminance responses between male and female Wistar Han rats with normal ERG signals at 21-23 weeks of age. Female Wistar Han rats had lower mean amplitudes of cone-mediated ERG b-waves (A) tested with −8 to +5 dB flashes, which were statistically significant [2-way ANOVA, F(1,88) = 11.21, **p = 0.0012] when compared with the male Wistar Han rats, but there were no statistically significant differences in the latencies of cone-mediated luminance-response b-waves (B) between the two groups.
FIGURE 7
Comparison of spatial frequency thresholds of acuity measured with OKR between animals with and without normal ERG waveforms at 21-23 weeks. The animals with abnormal ERG waveforms had significantly smaller grating thresholds (0.04 cycles/degree) compared with the animals with normal ERGs (0.17 cycles/degree; t-test, t(16) = 2.978, **p = 0.0089).
BAEP in week 23
To determine whether the animals with abnormal ERGs had altered hearing function to compensate for poor vision, we measured and compared brainstem auditory evoked potentials in the same group of animals tested for visual acuity (Section 3.4). There were no statistically significant differences in the amplitudes or latencies of waves II, III, IV, and V between the animals with normal ERGs and those with abnormal ERG waveforms (Supplementary Table S2; Supplementary Figure S1).
USV comparison in weeks 21-23
Ultrasonic vocalization, an important means of communication between rats, was evaluated to investigate whether the blind animals compensated through USVs. We recorded USVs continuously for 24 h from 8 rats with normal and 8 with abnormal ERG, previously used for the BAEP test. The poor-sighted animals had circadian patterns and 24-h total USV counts similar to the normal-sighted animals in both 50-kHz and 22-kHz call counts (all p > 0.05, Supplementary Figure S2).
Clinical and ophthalmic observations
No signs of abnormal behavior or morbidity were observed in any animal throughout the 3-month period. The ophthalmological analyses revealed no abnormalities in the retina and other components of the eyes in group 2 animals (females) at 7-9 weeks of age or groups 1 and 3 animals (males) at 7-9 weeks of age and 21-23 weeks of age, respectively.
Histology
There were no abnormal microscopic findings in the retina, brainstem, and visual and auditory-related areas in the rats with abnormal ERG responses at 21-23 weeks of age (Supplementary Table S3).
Discussion
In this study, we evaluated retinal function in male and female WH rats at two ages to determine the presence of spontaneous retinal functional deficits in this albino strain, and to explore any potential compensations in other sensory systems. Our results showed that a fraction of male WH rats had abnormal ERG signals and poor visually mediated tracking responses at both 7-9 and 21-23 weeks of age, without any changes in retinal or brain morphology. Even in normal-sighted rats with normal ERG signals, we found that the scotopic and photopic luminance responses were smaller in male WH rats than in age-matched females at 7-9 weeks, but not at 21-23 weeks. We chose these ages to mimic the duration of regular 3-month subchronic toxicity studies (Galijatovic-Idrizbegovic et al., 2016), which usually start at 5-9 weeks of age (Baldrick, 2008). We did not observe evidence of compensation in brainstem auditory potentials, ultrasonic vocalizations, or the morphology of the auditory and visual pathways of blind rats at 21-23 weeks of age (mimicking the time point at which histopathology is routinely evaluated). In conclusion, these findings confirm the presence of spontaneous retinal ERG deficits in 13% of adult male WH rats at 7-9 weeks of age, and of ERG and OKR deficits in 19% of adult male WH rats at 21-23 weeks of age.
The most notable outcome of our study is that a subset of naïve male WH rats showed abnormal ERG responses when their eyes were stimulated with flashes. As depicted in Figure 1, the amplitude of the scotopic ERG barely increased as the stimuli grew brighter, a phenomenon similar to the waveforms reported previously in 8.5-week-old albino rats with retinal dystrophy (Dowling and Sidman, 1962). The missing amplitudes of both a- and b-waves in these rats could result from weak or absent photoreceptor activity, or from minimal photoreceptor input (Dowling and Sidman, 1962) into the post-photoreceptor circuits of the neuroretina, such as the bipolar cells. In addition to the abnormal ERGs, we also evaluated the visual acuity of rats with and without normal ERG waveforms at 21-23 weeks of age. The sighted animals had an average acuity of 0.17 c/d, which is lower than the value (0.36 c/d) reported for male WH rats at 7-9 weeks (Redfern et al., 2011); this difference might be due to observer bias. Despite this apparent decrease, the animals with abnormal ERGs had significantly lower acuity values, providing further evidence of vision impairment in this subset of male WH rats. Our analysis of the other animals, those with normal ERG waveforms, confirmed that the average ERG luminance responses in male WH rats were significantly smaller than those of females (Figures 2, 3) at 7-9 weeks of age (p < 0.01), similar to the results reported for 8-26-week-old Sprague-Dawley rats (Chaychi et al., 2015), suggesting functional differences in the photoreceptors, particularly the rod photoreceptors. Interestingly, our microscopic evaluations showed no noticeable thinning or reduction of the photoreceptor nuclear layer in 21-23-week-old male rats with abnormal ERGs compared to those with normal ERGs. This is consistent with a recent review, which found retinal degeneration in control WH rats only after 52- and 104-week toxicity studies (Cloup et al., 2021). Likewise, routine ophthalmic examination did not find any abnormality in the eyes of male and female WH rats at 7-9 weeks of age or of male WH rats at 21-23 weeks of age. For the female animals in group 4, the ophthalmic examination was not repeated at 21-23 weeks of age, since ophthalmic examination is less sensitive than histopathology or ERG in spontaneous (Taradach et al., 1981), light-induced (Jaadane et al., 2015) or systemically administered drug-induced (Huang et al., 2015) retinal damage. We hypothesize that visual functional impairment occurs before any morphological changes can be seen in these animals. We also do not attribute the current observation to the well-documented light-induced retinal damage often seen in albino rats [see review (De Vera Mudry et al., 2013)], since our vivarium applied 12 h on/12 h off cyclic illumination throughout the study, which is less damaging to the retina than constant illumination (De Vera Mudry et al., 2013). We and the animal vendor also used ~300 lux lighting 1 m above the floor (Supplementary Table S4), a level regarded as safe for rats with no phototoxic retinopathy concern (Bellhorn, 1980). Furthermore, the animal cages were rotated vertically in the rack on a weekly basis, as suggested (Rao, 1991). Rather, the deficit may be inherited and related to albinism.
It is well established that albino rats, such as Sprague-Dawley and WH, have impaired visual acuity (Prusky et al., 2002) and altered visual signal transmission latency from the retina to the superior colliculus (Thomas et al., 2005) compared with pigmented strains; these investigators, however, did not differentiate by sex. In humans, the influence of biological sex on retinal function as measured with the ERG has been known for over 60 years (Karpe et al., 1950); ERGs are typically reported to have larger amplitudes in women than in men (Birch and Anderson, 1992; Brule et al., 2007). Estrogens have been demonstrated to be neuroprotective against a variety of insults in both in vitro and in vivo models of neurodegenerative diseases, and it is believed that the differences in retinal function and structure between the sexes may be governed by differences in sex hormone profiles. The presence of estrogen receptor mRNA (Wickham et al., 2000) and protein (Kobayashi et al., 1998) in various layers of the rat retina (Kobayashi et al., 1998; Kumar et al., 2008) suggests that this hormone plays an important role in maintaining normal retinal function in females (Yamashita et al., 2010; Yamashita et al., 2011). Additionally, the menstrual cycle and its accompanying hormonal fluctuations, particularly in estrogen, have been observed to potentially modulate several ocular structures, including the human retina (Barris et al., 1980; Bassi and Powers, 1986). Preclinical experiments demonstrated that estrogen protects against post-ischemic tissue damage in the rat retina (Nonaka et al., 2000), and against glutamate-induced cytotoxicity in retinal photoreceptor cells (Nixon and Simpkins, 2012) and ganglion cells (Kumar et al., 2005). In a light-induced photoreceptor degeneration rodent model, estrogen reduced rod and cone photoreceptor damage both functionally and structurally (ARVO Annual Meeting Abstracts, March 2012). Other sex-dependent differences, such as in the retinal pigment epithelium or in retinal neurotransmitters (glutamate and GABA (Blaszczyk et al., 2004)), might play a role in our observation, but none of these has been compared between the retinas of male and female albino rats. The next intriguing question is how blind animals handle communication and orientation without the use of their major sensory function; in other words, whether the blind animals had altered sensory functions as compensation. To answer this, we recorded USVs continuously for 24 h; the animals with abnormal ERGs appeared to have circadian patterns similar to those with normal ERGs in both 50-kHz and 22-kHz call counts. The data suggest that in these blind rats the eye may still retain its ability to detect light cues for coordinating circadian rhythms, similar to blind mole-rats (Hetling et al., 2005). It was not known, however, whether there was compensation in other sensory channels, such as USV or auditory function. According to our 24-h recording, the spontaneous USV call count per 30 min and the total counts of 50-kHz (Schwarting, 2018) and 22-kHz (Simola, 2015) calls over 24 h did not show any significant group difference between these animals and the normal-sighted animals. For BAEPs to click stimuli, the sources of waves I, II, III, IV, and V are the cochlear nerve, the cochlear nuclei, the superior olivary complex, the dorsal and rostral olivary nuclei, and the lateral lemniscus, respectively (Shaw, 1988; Chen and Chen, 1991).
BAEPs increase during the postnatal period and are sensitive to brainstem lesions such as tumors, trauma, hemorrhage, ischemia and demyelination (Legatt, 2002). Our results indicate that auditory function at the brainstem level in animals with abnormal retinal or visual function appears the same as in normal-sighted animals. This study is the first to investigate compensatory mechanisms in WH rats with impaired vision. We did not observe compensatory responses in USVs or BAEPs, nor in the histology of the auditory and visual pathways, in these animals. Further studies are needed to explore additional systems or functions potentially altered in these animals. The mechanisms underlying the retinal functional differences and potential compensation remain to be elucidated. Transcriptomic analysis might provide more detail (e.g., immune response, inflammation, apoptosis, Ca2+ homeostasis or oxidative stress (Kozhevnikova et al., 2013)). Other sensory modalities, for example olfactory function, which is known to be age-related (Kraemer and Apfelbach, 2004), might be worth exploring for possible sensory compensation in blind rats.
In conclusion, our study shows a 13%-19% incidence of spontaneous retinal functional deficits in naïve male WH rats at 7-23 weeks of age. Sex differences should therefore be considered when interpreting retinal functional assessments in toxicity and safety pharmacology studies using Wistar Han rats. In addition, pigmented rats, such as Long-Evans rats, which show less spontaneous (Heiduschka and Schraermeyer, 2008) or light-induced (Wasowicz et al., 2002) visual impairment, could be considered for stand-alone retinal toxicity tests (Heiduschka and Schraermeyer, 2008; Perlman, 2009; Liu et al., 2015; Shibuya et al., 2015), although Long-Evans is not a standard toxicity study strain and less information is available for its other, non-ocular tissues. Pre-screening male WH rats with an ERG endpoint in the pre-dose phase of planned toxicity studies is also recommended.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by the Pfizer Institutional Animal Care and Use Committee (IACUC).
Author contributions
C-NL, KW, and MB contributed to conception and design of the study. CT and S-KH collected and analyzed data. BJ performed the ERG data statistical analysis. RS performed eye examination and data analysis. BM performed and interpreted histologic evaluation. C-NL wrote the first draft of the manuscript. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Funding
The authors declare that this study received funding from Pfizer. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article, or the decision to submit it for publication. | 7,770 | 2023-05-23T00:00:00.000 | [
"Biology",
"Psychology"
] |
LAND COVER CLASSIFICATION FROM FULL-WAVEFORM LIDAR DATA BASED ON SUPPORT VECTOR MACHINES
In this study, a land cover classification method based on multi-class Support Vector Machines (SVM) is presented to predict the types of land cover in the Miyun area. The recorded backscattered full waveforms were processed through a workflow of waveform pre-processing, waveform decomposition and feature extraction. The extracted features, consisting of distance, intensity, Full Width at Half Maximum (FWHM) and backscattering cross-section, were corrected and used as attributes of the training data to generate the SVM prediction model. The SVM prediction model was applied to predict the land cover types of the Miyun area as ground, trees, buildings and farmland. The classification results for these four land cover types were evaluated against ground truth information derived from the CCD image data of the Miyun area. The proposed classification algorithm achieved an overall classification accuracy of 90.63%. To put the SVM results in context, they were compared with those of an Artificial Neural Network (ANN) method, and the SVM method achieved better classification results.
INTRODUCTION
In the last decade, Light Detection And Ranging (LiDAR) has become an important source for acquiring 3D information about targets. It has been widely applied in many fields of remote sensing, such as environment monitoring, disaster assessment and land cover classification. For land cover classification, traditional LiDAR usually relies on the 3D coordinates of targets, since it records only a few echoes and obtains limited information about the targets. Compared with traditional LiDAR systems, full-waveform LiDAR systems record the entire backscattered waveform of the targets. Waveform features reflecting the properties of targets can be retrieved from the waveforms and are now extensively used for a large variety of land cover classification tasks (Mallet and Bretar, 2009; Heinzel and Koch, 2011). This paper studies land cover classification using full-waveform LiDAR data. Several studies have addressed land cover classification based on full-waveform features. In 2008, Straub et al. presented a processing procedure for the automated delineation and classification of forest and non-forest vegetation using full-waveform laser scanner data as the sole input. An overall accuracy of 97.73% was reached; however, only forest and non-forest vegetation were classified (Straub et al., 2008). Also in 2008, Reitberger et al. described an unsupervised species classification method based on features derived by waveform decomposition of full-waveform LiDAR data. The classification grouped the data into two clusters (deciduous, coniferous), which led to an overall accuracy of 80% in a leaf-on situation; the results clearly showed the potential of full-waveform data for the comprehensive analysis of tree structures (Reitberger et al., 2008). In 2010, various statistical waveform parameters, such as standard deviation, skewness, kurtosis and amplitude, were used as inputs to an unsupervised classification method, Kohonen's Self-Organizing Map (SOM), to separate vegetation (trees and grass) from non-vegetation (pavement and roof) surfaces; however, there was no quantitative evaluation of the classification results (Zaletnyik et al., 2010). These studies were based on unsupervised classification methods, but supervised classifiers are preferred since they offer higher flexibility. In remote sensing, the Support Vector Machine (SVM), a supervised classifier, has been used for classification in different applications, including multispectral measurements, DEM generation from aerial LiDAR data and Synthetic Aperture Radar (SAR) images, and has thus played a major role in classification problems (Yang and Lunetta, 2012). Some scholars have investigated the potential of full-waveform data for land cover classification using the SVM classification method. In 2009, Bretar showed that LiDAR amplitude and width contained enough discriminative information for badlands to be classified into land, road, rock and vegetation. A 3-D land cover classification was performed using an SVM classifier; however, the classification accuracy was only 79.1% when the amplitude, width and Digital Terrain Model (DTM) information were combined (Bretar et al., 2009). In 2011, Mallet et al.
used an SVM classifier to label the point cloud according to various scenarios based on the rank of the features. The results showed that echo amplitude, cross-section and backscatter coefficient contributed significantly to the high classification accuracies (around 95%); however, only three land cover types (building, ground and vegetation) were classified, and adding redundant features to the same set prevented conclusions on the contribution of each feature (Mallet et al., 2011). In 2015, Tseng et al. combined LiDAR waveform data, orthoimage data and the spatial features of waveform data with SVM to classify land cover point clouds; however, the highest overall accuracy was achieved only when the fused waveform and orthoimage information were used (Tseng et al., 2015). In this paper, a classification method based on multi-class SVM using full-waveform features, i.e. distance, intensity, FWHM and backscattering cross-section, is presented to predict the land cover types of the Miyun area as ground, trees, buildings and farmland, and the method is compared with ANNs. The remainder of this paper is organized as follows. Section 2 introduces the waveform processing methodology, including waveform decomposition and feature extraction, and then presents the multi-class SVM classifier theory. Section 3 introduces the workflow of the SVM classification method; four land cover types in the Miyun area are classified with SVM and the results are compared with ANNs. Conclusions are given in Section 4.
Waveform processing methodology
A full-waveform LiDAR system records the entire backscattered waveform signal from targets, which is actually a sum of partial scattering responses convolved with the scanner's system waveform. Thus it not only provides 3D point clouds, but also yields abundant information about the targets. In the workflow of processing full-waveform data, waveform decomposition is the most important step.
Waveform decomposition
Waveform decomposition comprises these parts: pre-processing of the waveform data, waveform decomposition, and component detection. Before waveform decomposition, noise must be removed from the waveforms. Widely used filtering methods include the Wiener filter and Gaussian smoothing. However, the Wiener filter is very sensitive to noise (Jutzi and Stilla, 2006), and for Gaussian smoothing it is difficult to select an appropriate kernel width for each echo pulse reflected from complex terrain. By analysing the characteristics of the waveform intensity, the Median Absolute Deviation (MAD) method was used for waveform filtering, with good effect on the original waveform (Persson and Mallet, 2005). Figure 1 shows the raw waveform of an echo and the waveform filtered by MAD; the MAD method has a clear smoothing effect on the raw echo waveform. Since the transmitted laser pulse is modulated as a Gaussian pulse, and the scattering of the laser pulse by most targets can be approximated by Gaussian reflection, the backscattered waveform component can be modeled as a Gaussian function. Indeed, most waveforms are very similar to an ideal Gaussian function, whereas other laser impulse responses are slightly asymmetric. Consequently, approximating the waveforms by a sum of Gaussians may not be an accurate representation, depending on the targets. Therefore, a generalized Gaussian function was used for waveform modeling in this paper, which can better represent the backscattered patterns from different targets. In this way, the fitting of asymmetric, peaked or flattened echoes located in different areas can be improved (Chauve et al., 2007); a sketch of such a fit is given below.
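A minimal sketch of this decomposition step is given below, fitting a single generalized-Gaussian component with SciPy. The model form follows Chauve et al. (2007) (a shape exponent alpha, with alpha = 2 recovering the ordinary Gaussian); the synthetic waveform, sampling and variable names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of single-component waveform fitting with a generalized
# Gaussian model. alpha = 2 recovers the ordinary Gaussian; alpha != 2 lets
# the fit absorb slightly peaked or flattened echoes.
import numpy as np
from scipy.optimize import curve_fit

def gen_gauss(t, A, mu, s, alpha):
    """Generalized Gaussian echo component."""
    return A * np.exp(-(np.abs(t - mu) ** alpha) / (2.0 * s ** alpha))

t = np.arange(0, 60.0)                      # sample times (ns, illustrative)
wf = gen_gauss(t, 120.0, 30.0, 4.0, 2.3)    # synthetic echo
wf += np.random.default_rng(0).normal(0, 2.0, t.size)   # additive noise

p0 = [wf.max(), t[wf.argmax()], 3.0, 2.0]   # peak-based initial guess
popt, _ = curve_fit(gen_gauss, t, wf, p0=p0, maxfev=5000)
A, mu, s, alpha = popt
print(f"amplitude={A:.1f}, position={mu:.2f}, width={s:.2f}, shape={alpha:.2f}")
```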
Waveform feature extraction
The waveform features are determined from the component parameters. In this paper, the extracted waveform features comprise distance, intensity, FWHM and backscattering cross-section. The distance indicates the range from the laser transmitter to the target and is determined by estimating the position of the waveform component; ideally the peak position is taken as the component position, and the time lag is used to calculate the distance (Mallet and Bretar, 2009). Intensity is a combination of emitted energy, distance, atmospheric attenuation and the reflective properties of the illuminated targets; in practice, the echo amplitude is most commonly regarded as the intensity (Wagner et al., 2008). The FWHM denotes the extension of the waveform in the incident direction, shown in Figure 2, and is closely related to the geometry of the targets, the terrain slope and the target material (Wagner et al., 2006). The backscattering cross-section delineates the backscattering ability of the targets and is a comprehensive indicator of distance, intensity and FWHM (Wagner, 2010). Factors such as angle of incidence, atmosphere, range and surface characteristics influence the waveform features; therefore, these features can hardly be used without radiometric calibration (Lehnera and Briesea, 2010). To reduce such influence and further improve the effectiveness of the waveform features for land cover classification, this work applied a comprehensive correction to the extracted waveform features. The detailed methodology is given in a published article (Zhou et al., 2015).
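The fitted parameters map onto the four features as sketched below. The FWHM expression follows from the generalized-Gaussian model above; the cross-section expression is a commonly used calibration form (cf. Wagner, 2010) with an instrument calibration constant whose value here is a placeholder assumption, not a figure from this paper.

```python
# Sketch of turning fitted component parameters into the four waveform
# features: distance, intensity, FWHM and backscattering cross-section.
import numpy as np

C_LIGHT = 0.299792458   # speed of light, m/ns

def waveform_features(A, mu, s, alpha, c_cal=1.0):
    distance = 0.5 * C_LIGHT * mu                 # two-way time lag -> range (m)
    intensity = A                                 # echo amplitude as intensity
    # FWHM of a generalized Gaussian: solve exp(-d**alpha / (2 s**alpha)) = 1/2
    fwhm = 2.0 * s * (2.0 * np.log(2.0)) ** (1.0 / alpha)
    # Common calibration form: sigma = C_cal * R^4 * amplitude * echo width;
    # c_cal is an assumed instrument constant from reference targets.
    cross_section = c_cal * distance ** 4 * A * fwhm
    return distance, intensity, fwhm, cross_section

print(waveform_features(120.0, 30.0, 4.0, 2.3))
```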
Multi-class SVM Classifier
SVM is a supervised classifier. For a supervised classification algorithm, the data are usually separated into training and testing sets. Every instance in the training set comprises one "target value" (i.e. the class label) and several "attributes" (i.e. the features or observed variables). In this paper, the attributes are the extracted waveform features: distance, intensity, FWHM and backscattering cross-section, as described in Section 2.1.2. Based on the training data, the goal of SVM is to produce a model that predicts the target values (class labels) of the test data given only the test data attributes. In our experiments, the model discriminates the four classes of interest: buildings, trees, farmland and ground. For SVM-based binary classification, given a training set of instance-label pairs $(x_i, y_i)$, $i = 1, 2, \ldots, l$, with $x_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, SVM solves the primal optimization problem

$$\min_{w,\, b,\, \xi} \; \tfrac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \quad \text{subject to} \quad y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i, \; \xi_i \ge 0. \qquad (2)$$

Here the training vectors $x_i$ are mapped into a higher-dimensional space by the function $\phi$, and SVM searches for a linear separating hyperplane in that space; $C > 0$ can be regarded as the penalty parameter of the error term. Due to the possibly high dimensionality of the vector variable $w$, one usually solves the dual problem

$$\min_{\alpha} \; \tfrac{1}{2} \alpha^T Q \alpha - e^T \alpha \quad \text{subject to} \quad y^T \alpha = 0, \; 0 \le \alpha_i \le C, \qquad (3)$$

where $e$ is the vector of all ones and $Q$ is an $l \times l$ positive semidefinite matrix with $Q_{ij} = y_i y_j K(x_i, x_j)$, the kernel being $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$. The instances with nonzero $\alpha_i$ become the support vectors; moreover, in the dual formulation explicit knowledge of $\phi$ is not necessary, since the kernel $K$ may be applied instead, which is not possible in the primal problem. Here the RBF kernel is used, so there are two parameters, C and γ. To find the best C and γ for a given problem, a parameter search is required; a grid search using cross-validation is applied here. In x-fold cross-validation, the training set is divided into x equally sized subsets; sequentially, x−1 subsets are used to train the model, which is then tested on the remaining subset. Various pairs of (C, γ) values growing exponentially (grid search) are tried, and the pair with the best cross-validation accuracy is selected for the model. SVMs are designed to solve binary problems; when there are n ≥ 3 classes of interest, various approaches are possible, usually combining a set of binary classifiers. In this paper, we use the "one-against-one" approach with a voting strategy to determine the multi-class label: for each instance, k(k−1)/2 binary classifiers are invoked (k: number of classes), each classifier votes for one class, and the final label is the class with the most votes (Hsu and Lin, 2002). In case several classes have identical votes, though it is not an ideal strategy, we simply select the one with the smallest index.
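The grid search with cross-validation described above can be sketched as follows. scikit-learn's SVC (which uses the same one-against-one multi-class strategy internally) is an assumed implementation choice; the paper does not name its SVM library, and the feature matrix and labels below are placeholders.

```python
# Hedged sketch of the (C, gamma) grid search with 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))        # placeholder: distance, intensity, FWHM, cross-section
y = rng.integers(0, 4, size=400)     # placeholder: ground, trees, buildings, farmland

# Exponentially growing (C, gamma) grid, as recommended for RBF kernels.
param_grid = {"C": 2.0 ** np.arange(-3, 8),
              "gamma": 2.0 ** np.arange(-7, 4)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      cv=StratifiedKFold(n_splits=5))
search.fit(X, y)
print(search.best_params_, search.best_score_)
```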
Experiment data
The data captured over the Miyun area, Beijing, were used in this paper. The full-waveform LiDAR data were acquired by the LiteMapper 5600 airborne LiDAR system, and CCD images were acquired by a DigiCAM-H/22 Hasselblad. The Miyun experimental area was about 14 km², the flying height was about 700 m, the average density of the point clouds was 4 points/m², and the typical land covers were buildings, trees, farmland, ground, etc. A piece of the experimental area was selected to study SVM classification using the extracted waveform features. The size of the selected area was about 330 m × 390 m, containing about 338,174 points. The CCD image of the selected area is shown in Figure 3(a).
Experiment procedure
The flow chart of the experiment is shown in Figure 4. First, the returned waveforms were filtered by MAD as described above, then decomposed using the enhanced component detection algorithm. Features including distance, intensity, FWHM and backscattering cross-section were extracted and corrected; these features serve as the attributes of each instance, representing the waveform reflected from a type of land cover. Second, the multi-class SVM model was generated to classify the land cover types, dividing the received waveforms reflected from typical land cover into ground, trees, buildings and farmland. In this paper, 1,000 feature vectors for each typical land cover type were selected as training data to generate the SVM model, based on the CCD image of the experiment region. Following the SVM procedure of Section 2.2, the selected data were trained ten times, and the SVM model with the highest cross-validation accuracy was selected to predict the land cover types of the Miyun area. Finally, a pseudo-color classification image depicting the land cover types of the Miyun area was generated and the results were evaluated.
Experiment results
The classification result for the Miyun area using corrected features and the SVM method is given in Figure 5; the corresponding CCD image is shown in Figure 3(a). The brown, yellow, red and green areas represent farmland, ground, buildings and trees, respectively. It can be seen that full-waveform features can effectively distinguish different types of land cover. 5,248 instances of these four land cover types, excluding the training data, were selected to calculate the classification accuracy, as shown in Table 1. The ground truth information was acquired manually from the CCD image data of the experiment region. The confusion matrix for the classification results of these four land cover types of the Miyun area is shown in Table 1.
The overall classification accuracy using corrected features reached 90.63% and the classification Kappa was 0.8741. To better interpret the SVM classification results, a nonlinear Artificial Neural Network (ANN) was also applied to classify the land cover types in the Miyun area, and the classification results of the two methods were compared. ANNs imitate the brain's model of an interconnected system of neurons, enabling computers to detect patterns and learn complex relationships within data (Anderson, 1995); they basically provide a "black box" model. The ANN used in this paper consisted of a single hidden layer and was trained for 500 cycles by back-propagation with a learning rate of 0.2. The classification result is shown in Figure 6; the brown, yellow, red and green areas represent farmland, ground, buildings and trees, respectively. The confusion matrix for the ANN classification results is shown in Table 2; the overall ANN classification accuracy reached 87.69% and the classification Kappa was 0.8349.
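The evaluation step (overall accuracy, Kappa and confusion matrix) can be sketched as below; the predicted and ground-truth label arrays are placeholders standing in for the 5,248 manually labelled instances.

```python
# Sketch of the accuracy/Kappa/confusion-matrix evaluation with scikit-learn
# (an assumed tool choice); labels here are random placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 4, size=5248)
# Simulate ~90% agreement for illustration.
y_pred = np.where(rng.random(5248) < 0.9, y_true, rng.integers(0, 4, 5248))

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
```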
Analysis
The overall classification accuracy of the SVM method was 90.63%, versus 87.69% for the ANN, showing that the SVM classification method can indeed produce higher accuracy. From Figures 5 and 6, some areas on the left side of the figure that are in fact "ground" were classified as "farmland" by the ANN, as the black ellipse shows; some areas on the right side that are in fact "ground" were likewise classified as "farmland" by the ANN, as the blue ellipses show. Additionally, the most confusion in the SVM predictions was between "building" and "tree", and between "farmland" and "ground", as shown in the fourth row, second column and the third row, fifth column of Table 1; this possibly resulted from the similar distance features of "building" and "tree", and of "farmland" and "ground". Prediction errors were also generated between "tree" and "farmland", as shown in the fifth row, fourth column of Table 1, because "tree" and "farmland" have similar properties.
CONCLUSIONS
In this paper, the returned waveforms were filtered by MAD and waveform decomposition was implemented using the enhanced component detection algorithm. Waveform features including distance, intensity, FWHM and backscattering cross-section were then extracted and corrected, and the classification ability of the corrected features was analysed. A multi-class SVM model was generated to classify the land cover types of the Miyun area as ground, trees, buildings and farmland. The classification accuracy reached 90.63% and the classification Kappa was 0.8741. Furthermore, the SVM classification was compared with a classical ANN, and the SVM method achieved better classification results.
In future work, further improvement in land cover classification may be achieved by using more waveform features, and the weighting of the features will be studied.
Figure 1. Raw waveform and the filtered waveform.
Figure 2. The diagram of intensity and FWHM.
Figure 4. Flow chart for land cover classification of the Miyun area based on full-waveform LiDAR data.
Figure 5. Land cover classification results of the Miyun area based on the SVM method.
Figure 6. Land cover classification results of the Miyun area based on the ANN method.
Table 1. Confusion matrix of the classification results based on the SVM method.
Table 2. Confusion matrix of the classification results based on the ANN method. | 3,977.4 | 2016-06-09T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Persistent shallow micro-seismicity at Llaima volcano, Chile, with implications for long-term monitoring
Identifying the source mechanisms of low-frequency earthquakes at ice-covered volcanoes can be challenging due to overlapping characteristics of glacially and magmatically derived seismicity. Here we present an analysis of two months of seismic data from Llaima volcano, Chile, recorded by the permanent monitoring network in 2019. We find over 2,000 repeating low-frequency events split across 82 families, the largest of which contains over 200 events. Estimated locations for the largest families indicate shallow sources directly beneath or near the edge of glaciers around the summit vent. These low-frequency earthquakes are part of an annual cycle in activity at the volcano that is strongly correlated with variations in atmospheric temperature, leading us to conclude that meltwater from ice and snow strongly affects the seismic source mechanisms related to glacier dynamics and shallow volcanic processes. The results presented here should inform future assessments of eruptive potential at Llaima volcano, as well as other ice-covered volcanoes in Chile and worldwide.
Introduction
network. Window lengths of 0.7 and 8 s were used for the short- and long-term windows, respectively, with a ratio threshold of 3.5 used to define a detected event at each station. These parameters were decided using manual inspection of events detected over 24 hours of seismic data recorded at station AGU, and differ from those used in 2015 (minimum 2 stations, 1 and 9 s for short- and long-term windows, and a ratio threshold of 5). Seismic data during this step were pre-filtered with a 1-10 Hz bandpass filter to improve the […] Each event is cross-correlated with all other events within each day, using a minimum cross-correlation coefficient of 0.8 to define two events as closely matching; this is higher than the 0.7 used for repeating events in 2015. The first 5 s of each event waveform is used, which is sufficient to maximise the SNR of each event. As station AGU had the highest number of detected events, waveforms from this station were used to build the catalogue of families. Families of repeating waveforms were defined using a hierarchical clustering method similar to that used by Buurman and West (2010) and Lamb et al. (2017); the scipy.cluster.hierarchy Python package is used for this step (for more details, see https://docs.scipy.org/doc/scipy/reference/, last accessed October 2021). In this approach, branches within the hierarchy are joined at nodes whose height is the mean cross-correlation value between event pairs spanning the two groups. These nodes may join individual events or clusters of events, depending on which linkage has the highest mean cross-correlation, and families are defined by nodes whose values are higher than a threshold. Next, for each day a median waveform stack is computed for each family of two or more events; these stacks are then compared with all other stacks across the whole time period to find larger, multi-day families. This last step ensures a more complete repeating-event catalogue by using a frequency-domain approach with an overlap-add method. […]

$$\hat{x} = \arg\max_{x_{\mathrm{grid}}} \sum_{i<j} C_{ij}\!\left(\frac{\lVert x_{\mathrm{grid}} - r_i \rVert - \lVert x_{\mathrm{grid}} - r_j \rVert}{v}\right), \qquad (1)$$

where x_grid is a target source position and C_ij is the normalized cross-correlogram between stations i and j, evaluated at the differential travel time predicted for x_grid.
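A minimal sketch of the family-building step described above is given below: average-linkage hierarchical clustering on (1 − cross-correlation) distances, cut where the mean correlation drops below the 0.8 threshold. The toy correlation matrix is an illustrative assumption; in practice it would hold the maximum normalized cross-correlation between each pair of 5-s event waveforms.

```python
# Hedged sketch of grouping events into families with scipy.cluster.hierarchy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

n = 6
cc = np.eye(n)
cc[0, 1] = cc[1, 0] = 0.92          # toy correlations for illustration
cc[2, 3] = cc[3, 2] = 0.85

dist = squareform(1.0 - cc, checks=False)   # condensed distance vector
Z = linkage(dist, method="average")         # node height = mean pairwise distance
families = fcluster(Z, t=1.0 - 0.8, criterion="distance")
print(families)                             # same label = same family
```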
The above calculation was performed with the 'enveloc' Python package (Wech and Creager, 2008). The optimization was carried out on a grid with 0.005° lateral and 100 m depth intervals (down to 5 km), centered on the summit of the volcano.
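Generically, the search of Eq. (1) can be sketched as below. This is not the 'enveloc' API, whose internals are not reproduced here; the station geometry, the assumed envelope velocity and the cross-correlogram container are all illustrative.

```python
# Generic grid search maximizing the stacked cross-correlogram values of
# Eq. (1); all inputs here are toy placeholders.
import numpy as np
from itertools import combinations

stations = np.array([[0.0, 0.0, 0.0], [4.0, 1.0, 0.0], [1.0, 5.0, 0.0]])  # km
v = 2.0           # assumed envelope propagation velocity, km/s

def cc_value(i, j, lag, ccfs, lags):
    """Sample the (i, j) cross-correlogram at the predicted lag."""
    return np.interp(lag, lags, ccfs[(i, j)])

def locate(grid, ccfs, lags):
    best, best_x = -np.inf, None
    for x in grid:
        d = np.linalg.norm(stations - x, axis=1)
        score = sum(cc_value(i, j, (d[i] - d[j]) / v, ccfs, lags)
                    for i, j in combinations(range(len(stations)), 2))
        if score > best:
            best, best_x = score, x
    return best_x, best

# A real grid would use ~0.005-degree lateral and 100 m depth steps; a coarse
# toy grid keeps the example short.
grid = np.array(np.meshgrid(np.linspace(0, 5, 11),
                            np.linspace(0, 5, 11),
                            np.linspace(0, 5, 6))).reshape(3, -1).T
lags = np.arange(-5, 5, 0.05)
ccfs = {(i, j): np.exp(-lags**2) for i, j in combinations(range(3), 2)}
print(locate(grid, ccfs, lags))
```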
However, the 'enveloc' package was originally developed for searching for locations across a much larger area than is used here, so topography was previously not taken into account. […] Table S1 in the Supplementary Materials.
To estimate source amplitudes for each of the regional events, we first remove the site amplification and instrument response, before filtering to 1-10 Hz. The maximum of the smoothed Hilbert transform was used as the maximum amplitude A_i at each station i.
The amplitude at the source, A_0, was then calculated at each station based on (Battaglia and Aki, 2003)

$$A_i = \frac{A_0}{r}\, e^{-\pi f r / (Q \beta)}, \qquad (2)$$

where r is the source-to-receiver distance, f is the central frequency, β is the wave velocity and Q is the quality factor. […] and 73; Figs. S11, S13, S17, S20 and S21, respectively). The remaining 12 families are all located at shallow depths (<500 m) around the summit of the volcano (Fig. 5).
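A sketch of this correction, solved for A_0, is given below. The quality factor Q belongs to the standard Battaglia and Aki (2003) formula, but the parameter values used here are illustrative assumptions.

```python
# Hedged sketch: invert A_i = A0 * exp(-pi*f*r/(Q*beta)) / r for A0.
import numpy as np

def source_amplitude(A_i, r, f=5.0, beta=1500.0, Q=50.0):
    """A_i: station amplitude; r: source-receiver distance (m);
    f: central frequency (Hz); beta: wave velocity (m/s); Q: quality factor.
    All parameter defaults are assumptions for illustration."""
    return A_i * r * np.exp(np.pi * f * r / (Q * beta))

print(source_amplitude(A_i=2.0e-6, r=3000.0))
```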
Note that 5 of these families are located at, or within close proximity to, each other […] could be calculated (Fig. 5). Local magnitudes for all located repeating events fell within the 0.9-1.5 range, with little distinct difference between families (Fig. 6).

Figure 3: Catalogue of family occurrence in our dataset. Each plotted point represents the time of an event, and lines join events from the same family. The total number of events in each family is noted with grey numbers before the first event. Families containing 30 or more events are highlighted using blue diamonds for the individual events.

As we assumed straight wave propagation between source and receiver, we underestimate A_0 for regional events, which in turn implies that local magnitudes for all repeating events are overestimated; the values calculated here therefore represent maximum feasible values. […] to be continuing in 2019, albeit with higher numbers of detected events (Fig. 2a, b). […] and depths for most families support the hypothesis that these were generated by glacial rather than volcanic activity.
Qualitative event magnitudes for each located family suggest little distinct difference in energies between families (Fig. 6). The lack of obvious differences between families suggests they may be generated by a similar source mechanism, but at different locations. A review of glacial seismicity has suggested that source mechanisms may be identified […]
Only temperature has a peak (0.32) which exceeds the maximum correlation value that could be obtained randomly (0.3). There is also a strong diurnal cycle in event rates from 2013 to 2020, with lower detection rates from 1000 to 1800 local time (Fig. 7d). Visual comparison with hourly wind and temperature data suggests this is possibly due to increased seismic noise from wind drowning out smaller-magnitude events (Fig. 7d). […] | 1,479.8 | 2021-11-01T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Reliability of an automatic classifier for brain enlarged perivascular spaces burden and comparison with human performance
In the brain, enlarged perivascular spaces (PVS) relate to cerebral small vessel disease, poor cognition, inflammation and hypertension. We propose a fully automatic scheme that uses a support vector machine (SVM) to classify the burden of PVS in the basal ganglia (BG) region as low or high. We assess the performance of three different types of descriptors extracted from the BG region in T2-weighted MRI images: 1) statistics obtained from the Wavelet transform's coefficients, 2) local binary patterns and 3) bag of visual words (BoW)-based descriptors characterising local keypoints obtained from a dense grid with scale-invariant feature transform characteristics. When the latter were used, the SVM classifier achieved the best accuracy (81.16%). The output from the classifier using the BoW descriptors was compared with visual ratings done by an experienced neuroradiologist (Observer 1) and by a trained image analyst (Observer 2). The agreement and cross-correlation between the classifier and Observer 2 (κ = 0.67 [0.58, 0.76]) were slightly higher than between the classifier and Observer 1 (κ = 0.62 [0.53, 0.72]) and comparable between both observers (κ = 0.68 [0.61, 0.75]). Finally, three logistic regression models using clinical variables as independent variables and each of the PVS ratings as dependent variable were built to assess how clinically meaningful the predictions of the classifier were. The goodness-of-fit of the model for the classifier was good (AUC values 0.93 (model 1), 0.90 (model 2) and 0.92 (model 3)) and slightly better (i.e. AUC values 0.02 units higher) than that of the model for Observer 2. These results suggest that, although it can be improved, an automatic classifier to assess PVS burden from brain MRI can provide clinically meaningful results close to those from a trained observer.
Introduction
Perivascular spaces, also known as Virchow-Robin spaces, are fluid-containing spaces that surround the walls of small vessels and capillaries in the brain as they go through the grey or white matter. Perivascular spaces are microscopic, filled with interstitial fluid, act as drainage pathways for fluid and metabolic waste from the brain and, when enlarged, are visible in structural magnetic resonance imaging (MRI) sequences (Potter et al., 2015b). A high number of enlarged perivascular spaces (PVS) has been reported to be associated with worse cognition (MacLullich, 2004), active inflammation in multiple sclerosis plaques (Wuerfel et al., 2008), ageing (Aribisala et al., 2014), depression at older ages (Patankar et al., 2007), Parkinson's disease (Laitinen et al., 2000) and cerebral small vessel disease (Doubal et al., 2010).
The term Small Vessel Disease (SVD) refers to a group of pathological processes that affect the small arteries, veins and capillaries of the brain (Pantoni, 2010). It is the most common cause of vascular dementia and a cause of about a fifth of strokes worldwide, and has significant and strong associations with vascular risk factors (Staals et al., 2014). A moderate to severe burden of PVS in the basal ganglia (BG) is one of the markers of SVD, along with lacunes, cerebral microbleeds and white matter hyperintensities (WMH).
PVS can be better identified on T2-weighted (T2w) MRI, where they appear as linear or dot-like structures with intensities close to those of the cerebrospinal fluid (CSF) and less than 3 mm diameter in cross-section. Therefore, PVS can potentially be quantified. Visual counting and/or manual delineation of PVS can be time consuming, and the development of computational methods to assess them is challenging, partly due to inconsistencies within the literature regarding PVS diameter and overlap in shape, intensity, location and size with those of lacunes (Valdés Hernández et al., 2013). Recently, Wang et al. (2016) and Ramirez et al. (2015) presented computational methods to obtain quantitative measurements of PVS and validated the usefulness of their procedures in clinical research, but both approaches are semi-automatic and therefore prone to inter-observer variation, and can be time consuming. Cai et al. (2015) also proposed a method for quantifying PVS using high-resolution 7 T MRI scanners, but such field strengths, although providing good spatial resolution and signal-to-noise ratio, have limited clinical use. Ballerini et al. (2016) use a Frangi filter whose parameters are optimised by means of the ordered logit model to enhance the differentiation between PVS and the background, but the approach is unsuitable for images with very anisotropic voxels commonly used in clinical settings (e.g. voxel sizes of 0.5 × 0.5 × 6 mm) and still requires visual rating of the PVS.
As an alternative to quantitative measurements, several visual rating scales providing a qualitative assessment of PVS burden have been proposed in recent years. Potter et al. reviewed the ambiguities of these scales and combined their strengths to develop one that proved to be robust (Potter et al., 2015a). However, as with any visual recognition process, it is subject to observer bias. Making the PVS rating automatic (e.g. replicating the visual rating scale using image processing and pattern recognition) could potentially overcome this, as well as the drawbacks of the current PVS segmentation methods. Computer vision and pattern recognition have already been successfully applied to computer-aided diagnosis using MRI (Munsell et al., 2015; Beheshti and Demirel, 2015) and to the segmentation of brain structures or lesions (Ithapu et al., 2014; Roy et al., 2015; de Brebisson and Montana, 2015). They have also been used to assess markers of SVD qualitatively; for example, Chen et al. proposed a framework based on multiple instance learning to distinguish between absent/mild and moderate/severe SVD in computed tomography (CT) scans (Chen et al., 2015).
However, to the best of our knowledge, only two papers have addressed the task of assessing automatically the PVS rating in brain MRI using computer vision and pattern recognition techniques (González-Castro et al., 2016). They explored the use of different descriptors for this task, but did not analyse agreement with a human observer other than the one who provided the ground-truth ratings, nor whether the predictors of the classification were clinically meaningful. Moreover, each of these two works evaluates different descriptors to characterise the brain region selected for classifying PVS burden, and they report similar levels of accuracy for the preferred schemes, albeit having validated the schemes differently (i.e. González-Castro et al. (2016) uses cross-validation and compares results on randomly divided train and test subsets). An overall evaluation of the schemes proposed so far for classifying the burden of PVS from brain MRI is lacking.
In this paper we build upon the work presented in those two studies, comparing the performance of the descriptors proposed by both for classifying automatically the burden of PVS using a Support Vector Machine (SVM) (Vapnik, 1995). We focus on the PVS in the basal ganglia (BG), since moderate to severe PVS in this region (i.e. ratings 2-4) is a marker of cerebral SVD. We evaluate three different types of descriptors: 1) statistics obtained from the Wavelet transform's coefficients (Alegre et al., 2012), 2) local binary patterns (Ojala et al., 2002) and 3) bag of visual words (BoW)-based descriptors, using keypoints obtained from a dense grid characterised with scale-invariant feature transform (SIFT) characteristics. Moreover, we validate the results by comparing the predictions made by the automatic method (i.e. the classifier using the descriptors that achieve the best performance) with the ratings from two observers. Finally, we also investigate the applicability of this classifier to clinical studies, to assess whether its outcome is clinically meaningful. The paper is organised as follows: Section 2 explains the dataset and proposed methods; Section 3 introduces the experimental setup and the results, which are discussed in Section 4; finally, conclusions and possible future lines of work are presented in Section 5.
Subjects and MRI protocol
We used data from 264 patients who gave written informed consent to participate in a study of lacunar stroke mechanisms (Valdés Hernández et al., 2015). The study included patients with clinically evident lacunar or minor cortical strokes, and did not consider diabetes, hypertension or other vascular risk factors as exclusion criteria. However, it excluded patients with other non-vascular neurological disorders, major medical conditions including renal failure, contraindications to MRI, inability to give consent, haemorrhagic stroke, or symptoms that resolved within 24 hours (i.e. transient ischaemic attack). It was approved by the Lothian Ethics of Medical Research Committee (REC 09/81101/54) and the NHS Lothian R+D Office (2009/W/NEU/14), and was conducted according to the principles expressed in the Declaration of Helsinki.
Brain MRI was conducted at baseline (i.e. a maximum of 8 days between the stroke and the scan) on a 1.5 T GE Signa LX clinical scanner (General Electric, Milwaukee, WI), equipped with a self-shielding gradient set and a manufacturer-supplied eight-channel phased-array head coil. For our analyses we used the T2w images, acquired with TE 147 ms, TR 9002 ms, field of view 240 × 240 mm, acquisition matrix 256 × 256, slice thickness 5 mm, 1 mm inter-slice gap and voxel size 0.469 × 0.469 × 6 mm. The reconstructed image size (in voxels) is 512 × 512 × 28. For tissue segmentation, diffusion-weighted and structural T1-weighted (T1w), T2w and gradient echo images, acquired as specified in (Valdés Hernández et al., 2015), were also used.
PVS visual rating scale
The visual rating scale proposed by Potter et al. was used for assessing the burden of PVS in the sample (Potter et al., 2015a). It rates the PVS separately in three major anatomical brain regions, i.e. midbrain, basal ganglia (BG) and centrum semiovale (CS) - shown in Figure 1 - using T2w MRI. The rating is done separately for the left and right hemispheres, but a combined score representing the average PVS burden is given.
All visual ratings were made by two observers: a neuroradiologist (Observer 1) with more than 25 years of experience, and a trained image analyst (Observer 2). In this paper, we focus only on the PVS in the BG, since moderate to severe PVS in this region (i.e. ratings 2-4) is a marker of cerebral SVD, which has been associated with cognitive decline (Staals et al., 2014), vascular dementia and stroke (Potter et al., 2015b). An example of each of the ratings for the BG is shown in Figure 2. We dichotomise the BG PVS scores into two classes as per Potter et al. (2015b): scores 0-1 (i.e. none or mild PVS burden) and scores 2-4 (i.e. moderate to severe), our classes 0 and 1, respectively.
Image preprocessing
The guidelines for the visual rating of PVS according to this scale state that the rating should be done on the slice with the highest number of PVS, so as to minimize inconsistencies and intra-/inter-observer variations due to inter-slice variations in PVS visibility, varying numbers of PVS on different slices and double counting of linear PVS (Potter et al., 2015a). For the BG region, this slice should be chosen amongst the slices with at least one characteristic BG structure, as indicated by Wang et al. (2016). A pipeline was developed to extract the BG region and find the axial slice (within the BG) with the highest number of PVS for each subject.
The first step of this pipeline is to automatically segment the intracranial volume and cerebrospinal fluid (CSF) on the T1w images, achieved using optiBET (Lutkenhoff et al., 2014) and FSL-FAST (Zhang et al., 2001), respectively. The second step is to extract, also automatically, all subcortical structures, achieved using other tools from the same FMRIB Software Library (FSL) as described in Valdés Hernández et al. (2015). Thereafter, from the slices that contained BG structures, we selected those in which the total area of these structures was more than 5% of the intracranial area defined on the slice.
On each of the BG slices initially selected, a polygon enclosing the BG, internal and external capsules and thalami was automatically drawn by joining anatomical points in the insular cortex, the closest points to them in the lateral ventricles (frontal and occipital horns) and the intercept of the genu of the corpus callosum with the septum, and subtracting from it the region occupied by the CSF. These steps are illustrated in Figure 3.
From this subset of slices, the slice on which our classifier operated was selected after applying contrast-limited adaptive histogram equalisation (CLAHE) (Zuiderveld, 1994) to the polygonal regions, thresholding them at 0.43 times the maximum intensity level (Valdés Hernández et al., 2013; Wang et al., 2016) (Fig. 3(d)), and counting the number of thresholded hyperintense regions on each candidate slice with area between 3 and 15 times the in-plane voxel dimensions (Wang et al., 2016). Although this procedure overestimates the number of PVS in the presence of other SVD markers (e.g. small lesions and lacunes) (Valdés Hernández et al., 2013), it provides a good estimate of the number of PVS on each candidate axial slice, so as to select the one with the most PVS.

Wavelet-based descriptors

Wavelets have been widely used for texture description with successful results (Arivazhagan and Ganesan, 2003). Due to its frequency-domain localization capability, we have applied the discrete Wavelet transform (DWT) to each selected region to characterise its texture. We have used the Haar family of wavelets, which has already been used successfully in other medical image classification applications (Alegre et al., 2012). The DWT extracts the low and high frequency components of a signal so they can be analysed separately. When the transform is applied to an image, four matrices of coefficients are obtained, namely LLi, LHi, HLi and HHi, where i stands for the level of decomposition; they represent the approximations and the details in the vertical, horizontal and diagonal directions, respectively. They can be seen in the example that Figure 4 illustrates.
The first level of decomposition is applied to the original image, while each subsequent level i is applied to the matrix of approximations of level i − 1, as Figure 5 shows.
One of the descriptors we used is based on the DWT and is built using the means and standard deviations of the histograms of the original image and of each of the matrices of coefficients yielded after three DWT levels (i.e. LL1, LH1, HL1, HH1, LL2, LH2, HL2, HH2, LL3, LH3, HL3 and HH3). Hence we represent each region by a vector of 26 features. This descriptor is known as Wavelet Statistical Features (WSF) (Arivazhagan and Ganesan, 2003; Alegre et al., 2012); a sketch is given below.
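A minimal sketch of the WSF computation follows; PyWavelets is an assumed implementation choice, and the random region stands in for an extracted BG region.

```python
# Hedged sketch of the 26-dimensional WSF descriptor: mean and standard
# deviation of the original region and of the 12 Haar-DWT coefficient
# matrices over three decomposition levels (2 x 13 = 26 features).
import numpy as np
import pywt

def wsf(region):
    feats, mats = [], [region]
    approx = region
    for _ in range(3):
        ll, (lh, hl, hh) = pywt.dwt2(approx, "haar")
        mats += [ll, lh, hl, hh]
        approx = ll                      # the next level decomposes LL_i
    for m in mats:
        feats += [float(np.mean(m)), float(np.std(m))]
    return np.array(feats)               # 26-dimensional descriptor

region = np.random.default_rng(3).random((64, 64))
print(wsf(region).shape)                 # (26,)
```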
The other DWT-based descriptor is built using the features proposed by Haralick et al. (1973), derived from the grey-level co-occurrence matrix (GLCM) of the original image and of each of the coefficient matrices obtained after the first DWT level (i.e. LL1, LH1, HL1 and HH1). The features extracted from each GLCM are concatenated to form the final descriptor; a diagram depicting this process is shown in Figure 6. To achieve some invariance to rotation, we averaged the features extracted from GLCMs computed with orientations 0°, 45°, 90° and 135°. These descriptors are called Wavelet Co-occurrence Features (WCF) (Arivazhagan and Ganesan, 2003; Alegre et al., 2012). In this work, we assess two variants of the WCF descriptors, WCF4 and WCF13, depending on whether 4 or 13 features were extracted from the GLCMs, respectively. WCF4 is built using the Haralick features Contrast, Correlation, Energy and Homogeneity, and WCF13 is formed using all features proposed by Haralick et al. (1973) except the Maximal Correlation Coefficient. These two descriptors showed good performance in Alegre et al. (2009).
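The WCF4 variant can be sketched as below with scikit-image's GLCM utilities (an assumed tool choice). The quantization of the real-valued DWT coefficients to 32 grey levels is an added assumption, since GLCMs require discrete levels and the paper does not state the level count used.

```python
# Hedged sketch of WCF_4: contrast, correlation, energy and homogeneity from
# GLCMs of the region and its first-level DWT subbands, averaged over angles.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

PROPS = ("contrast", "correlation", "energy", "homogeneity")

def quantize(m, levels=32):
    m = (m - m.min()) / (np.ptp(m) + 1e-12)
    return (m * (levels - 1)).astype(np.uint8)

def wcf4(region, levels=32):
    ll, (lh, hl, hh) = pywt.dwt2(region, "haar")
    feats = []
    for mat in (region, ll, lh, hl, hh):
        glcm = graycomatrix(quantize(mat, levels), distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=levels, symmetric=True, normed=True)
        for p in PROPS:                      # mean over the four angles
            feats.append(graycoprops(glcm, p).mean())
    return np.array(feats)                   # 5 matrices x 4 features = 20

print(wcf4(np.random.default_rng(4).random((64, 64))).shape)
```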
Local binary patterns
Local Binary Patterns (LBP) were introduced by Ojala et al. (2002). The original version worked with a 3 × 3 pixel block, but LBPs were later generalised so that the size of the neighbourhood and the number of sampling points are parameters of the method. Given a pixel c with coordinates (xc, yc), a pattern code is calculated by comparing its value with the values of its P neighbours located at a distance R (in our case R = 1), as per Equation (1):

$$\mathrm{LBP}_{R,P}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p, \qquad (1)$$

where g_c and g_p are the grey-level values of pixel c and of its p-th neighbour, and the function s is defined as

$$s(x) = \begin{cases} 1 & \text{if } x \ge 0, \\ 0 & \text{otherwise.} \end{cases}$$

Finally, the whole image is described by means of a histogram of the LBP values of all pixels, given by Equation (1). As the position of the first neighbour (i.e. p = 0) is fixed, being the pixel to the right of c, the LBP_{R,P} operator is not invariant to rotation. We remove the effect of rotation using the rotation-invariant local binary pattern, LBP^{ri}_{R,P}, defined in Ojala et al. (2002).
As certain local binary patterns represent fundamental properties of texture and provide the vast majority of patterns present in textures (Ojala et al., 2002), while others are less descriptive of the texture, Ojala et al. introduced a measure of 'uniformity', U(LBP_{R,P}), which counts the number of spatial transitions (i.e. bitwise 0/1 changes) in a binary pattern. Patterns with U(LBP_{R,P}) of at most 2 are called uniform, and the rotation-invariant uniform operator LBP^{riu2}_{R,P} is defined as expressed in Equation (2):

$$\mathrm{LBP}^{riu2}_{R,P} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c) & \text{if } U(\mathrm{LBP}_{R,P}) \le 2, \\ P + 1 & \text{otherwise.} \end{cases} \qquad (2)$$
As the BG regions and the PVS are not very big, we tried to keep the texture analysis as local as possible, so in this work we have used the values R = 1 and P = 8. The final descriptors we use are the histograms of the accumulated outputs of LBP_{1,8}, LBP^{ri}_{1,8} and LBP^{riu2}_{1,8} operating on each BG region; a sketch follows.
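These three histograms can be sketched with scikit-image (an assumed implementation choice), whose 'default', 'ror' and 'uniform' methods correspond to LBP_{1,8}, LBP^{ri}_{1,8} and LBP^{riu2}_{1,8}, respectively.

```python
# Hedged sketch of the three LBP histogram descriptors over a BG region.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histograms(region, P=8, R=1):
    out = {}
    for name, method, bins in (("lbp", "default", 2 ** P),
                               ("lbp_ri", "ror", 2 ** P),
                               ("lbp_riu2", "uniform", P + 2)):
        codes = local_binary_pattern(region, P, R, method=method)
        hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
        out[name] = hist / hist.sum()        # normalized histogram
    return out

region = np.random.default_rng(5).random((64, 64))
print({k: v.shape for k, v in lbp_histograms(region).items()})
```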
Bag of visual words
The Bag of Visual Words (BoW) model (Sivic and Zisserman, 2003) represents each image as a function of the frequency of appearance of certain visual elements, called visual words. The set of visual words is called the dictionary or codebook. To build the dictionary, a set of keypoints is sampled from each image. Around each keypoint a small square region (i.e. patch) is extracted and characterised by means of descriptors that capture information about the distribution of its pixel intensities. The descriptors of the patches are then clustered into K groups, each having a prototype feature vector, called a visual word. This process is depicted in Figure 7.
In this work, we use a dense grid for sampling the keypoints and the k-means clustering method (MacQueen, 1967) for forming the visual words. The dictionary is created in each iteration of the cross validation using the subsets of images used for training. We assessed different numbers of visual words to evaluate their impact on the classification. Once the dictionary is built, each image of the dataset is described by means of a process called image representation. This consists of repeating, for each image, the same process of keypoint selection and characterisation used in the creation of the dictionary, using the same methods. Then, for each "new" patch, we find the visual word of the dictionary that is most similar to it by computing the Euclidean distance between their descriptors.
The histogram of the visual words representative of all patches in an image is used as its final descriptor. The image representation process is illustrated in Figure 8. In this work, the patches are described using the Scale Invariant Feature Transform (SIFT) (Sivic and Zisserman, 2003). Basically, SIFT descriptors are based on histograms of oriented gradients computed from the intensities of the regions that result from dividing a 16 × 16 pixel square patch around each keypoint into 16 subregions of 4 × 4 pixels each. More details about SIFT can be found in Sivic and Zisserman (2003). Although SIFT consists of two different parts, a keypoint detector and a patch descriptor, we only use the patch descriptor, as we sample the keypoints on a dense grid.
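For illustration, a minimal sketch of this dense-SIFT BoW pipeline is given below, assuming OpenCV for the SIFT patch descriptor and scikit-learn's k-means (which, like the paper, assigns patches by Euclidean distance). The grid step is an assumption; the patch size of 16 pixels and k = 175 match values reported in the paper.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def dense_sift(img, step=8, size=16):
    # Sample keypoints on a dense grid and keep only the SIFT patch descriptor.
    # img is expected to be an 8-bit grayscale image.
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(step, img.shape[0] - step, step)
           for x in range(step, img.shape[1] - step, step)]
    _, desc = sift.compute(img, kps)
    return desc

def build_dictionary(train_images, k=175):
    # Cluster all training patch descriptors into k visual words.
    all_desc = np.vstack([dense_sift(im) for im in train_images])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bow_histogram(img, kmeans):
    words = kmeans.predict(dense_sift(img))           # nearest visual word
    h = np.bincount(words, minlength=kmeans.n_clusters)
    return h / max(h.sum(), 1)                        # frequency histogram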
Classification
In this work, we use a Support Vector Machine (SVM) classifier, which is a supervised machine-learning approach that adjusts internal "weights" by means of a training process (i.e. an optimization phase), minimising the error between its calculated response and a "ground truth" provided by an expert. This type of classifier has attracted attention in the last few years for analysing MR images (Nam et al., 2015; Tong et al., 2014; Feis et al., 2013).
SVM tries to find the optimal hyperplane that maximizes the distances (i.e. margins) to the instances of the positive and negative classes in the training dataset. One of the parameters of SVM is the cost parameter C, which controls the trade-off between allowing training errors and forcing rigid margins.
SVM is a linear classifier: it tries to separate the data using a linear hyperplane. There are cases where the data are not linearly separable. In those cases, SVM may use the kernel trick: a kernel function K(x', x) transforms the data into a higher-dimensional space where a linear separation is possible. After evaluating different kernels (i.e. linear, radial basis function, sigmoid), the best results were achieved with the radial basis function (RBF) kernel:

K(x', x) = exp(−γ ‖x' − x‖²)

We refer the reader interested in more details about SVM to Schölkopf and Smola (2001).
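As a minimal scikit-learn sketch of this classifier stage (a stand-in, not the authors' code), using the RBF kernel and the parameter values reported later in the Results (C = 5, γ = 10⁻⁴):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Descriptors are normalised to zero mean and unit standard deviation before
# training, as described in the validation section below.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel='rbf', C=5, gamma=1e-4))
# clf.fit(train_descriptors, train_labels)
# predictions = clf.predict(test_descriptors)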
Validation of the classifier
We validated the classification with a stratified 5-fold cross validation as follows. The whole set, represented by the descriptors explained in Sections 2.4.1, 2.4.2 and 2.4.3, was randomly partitioned into 5 equally sized subsets with the same class distribution as the original set. Of the 5 subsets, 4 were used to train the classifier and the remaining one was used as the test set. This process was repeated 5 times, using a different subset each time as the test set.
The 5 results from the 5 folds were averaged to provide the final results. This cross validation process was repeated 10 times, and the 10 results were averaged to avoid possible bias due to a random separation of the folds. Data were normalised so that they had mean 0 and standard deviation 1.
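A compact stand-in for this validation protocol, assuming the scikit-learn pipeline from the previous sketch (so the normalisation is refit inside each fold); descriptors and labels are placeholder names.

import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# 5 stratified folds, repeated 10 times, as in the protocol above.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10)
scores = cross_val_score(clf, descriptors, labels, cv=cv)  # clf from above
print(f"accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")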
The overall results were validated in terms of accuracy, sensitivity and specificity, using the dichotomised ratings of Observer 1 as ground truth.
Statistical analyses
The descriptors that achieve the best performance would be the ones used in a real automatic visual rating application. Therefore, we analysed the agreement of the visual ratings between the automatic classifier based on those descriptors and each observer. We also analysed the association between the outcome of each PVS rating (i.e. from each observer and from the automatic classifier) and clinical parameters known to be related to PVS burden in the patients that comprise this sample (see Section 2.1). We determined the weighted Kappa coefficient of the PVS ratings in the BG region (scale 0-4) between observers as per http://vassarstats.net/kappa.html (Copyright Richard Lowry 2001). We also performed marginal homogeneity tests of the basal ganglia PVS visual ratings (scale 0-4) using the software application mh.exe ver. 1.2 (2016-03-01) (by John Uebersax).
Inter-observer agreement
After dichotomising the BG PVS visual ratings produced by both observers, we determined the Kappa coefficient between observers and between the automatic classifier and each observer, using the function kappa in MATLAB R2015a (Copyright (c) 2007, Giuseppe Cardillo, updated 23 Dec 2009, http://uk.mathworks.com/matlabcentral/fileexchange/15365-cohen-s-kappa/content/kappa.m). We also conducted McNemar's test between the ratings produced by the expert (i.e. Observer 1) and the automatic classifier to investigate whether the marginal frequencies of both were equal or not.
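For illustration, the same statistics can be obtained with open-source stand-ins for the MATLAB and VassarStats tools cited above; the variable names below are hypothetical.

from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Agreement between the dichotomised ratings of Observer 1 and the classifier.
kappa = cohen_kappa_score(observer1_dichotomised, classifier_output)
# McNemar's test on the 2x2 contingency table of the same two ratings.
result = mcnemar(contingency_2x2, exact=False, correction=True)
print(kappa, result.pvalue)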
Clinical validation
The following clinical and demographic parameters were available for each study participant: age, hypertensive (or not) classification, stroke subtype (lacunar or cortical) classification, and scores of white matter hyperintensity (WMH), atrophy and SVD burden. WMH were coded using Fazekas scores, for periventricular (PV) and deep lesions separately in the left and right hemispheres, and a combined score for both hemispheres was recorded (Fazekas et al., 1987). Brain atrophy was coded using a validated age-relevant template (Farrell et al., 2008), with superficial and deep atrophy coded separately from none to severe on a scale from 1 to 6 according to the centiles into which the template is divided: 1 (< 25th), 2 (25th-50th), 3 (50th-75th), 4 (75th-95th), 5 (> 95th), with 6 used for atrophy far beyond the 95th centile. Total atrophy was calculated as the average of the deep and superficial atrophy scores. SVD was coded as per Staals et al. (2015) (0-4), which confers a point for each of the following conditions: if 1 or more cavitated old lacunar lesions are present, if the Fazekas PV score is >= 3 and/or the Fazekas deep score is >= 2, if the BG PVS score is >= 2 as per Potter et al. (i.e. moderate-to-extensive), and if more than 1 brain microbleed is present; a minimal sketch of this scoring is given below.
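A minimal sketch of the 0-4 SVD score coding just described (Staals et al., 2015); the argument names are illustrative, not from the study data.

def svd_score(lacunes_present, fazekas_pv, fazekas_deep, bg_pvs_score, microbleeds):
    score = 0
    score += 1 if lacunes_present else 0                        # >= 1 cavitated old lacune
    score += 1 if (fazekas_pv >= 3 or fazekas_deep >= 2) else 0 # WMH burden
    score += 1 if bg_pvs_score >= 2 else 0                      # moderate-to-extensive PVS
    score += 1 if microbleeds > 1 else 0                        # more than 1 microbleed
    return score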
We calculated the non-parametric bootstrapped correlations between BG PVS scores (before and after dichotomisation, from the observers and from the automatic classifier) and each clinical variable. We also performed binomial multivariable logistic regression to evaluate the clinical usefulness of our machine-learning scheme as per Potter et al. (2015a) and its sensitivity in various models. The latter was evaluated by comparison of correlated receiver operating characteristic (ROC) curves obtained from three models that have as outcome variable the dichotomised PVS rating from A) the automatic classifier, B) Observer 1, and C) Observer 2. The first model (i.e. Model 1) had the following predictors: age, total atrophy, hypertension, Fazekas score, whether the patient had a previous lacunar infarct or not, index stroke subtype, and SVD score. The second model (i.e. Model 2, implemented in Potter et al. (2015a)) had the same predictors as Model 1 with the exception of the SVD score. The third model (i.e. Model 3) also had the predictors of Model 1, with the exception of the Fazekas score and whether the patient had a previous lacunar infarct or not, as these two parameters are contemplated within the SVD score. These analyses were done using MATLAB R2015a. Of note, the PVS outcome variable is also a contributor to the SVD score.
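As a sketch only, Model 3 could be fit as below with statsmodels; the data-frame column names are hypothetical stand-ins for the study variables, with hypertension and stroke subtype assumed to be coded as 0/1.

import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Model 3 predictors: age, total atrophy, hypertension, index stroke
# subtype and SVD score (Fazekas and previous lacunar infarct excluded).
X = sm.add_constant(df[['age', 'total_atrophy', 'hypertension',
                        'stroke_subtype', 'svd_score']])
model = sm.Logit(df['pvs_dichotomised'], X).fit()
auc = roc_auc_score(df['pvs_dichotomised'], model.predict(X))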
Analysis of the robustness against imaging confounds
All scans of the primary study that provided data for this analysis underwent quality checks. None of the T2-weighted sequences were corrupted by visible movement artefacts that could affect the automatic PVS rating procedure presented. However, there are other confounds that could influence the results. We calculated the number of scans misclassified in each of the 10 iterations that contributed to the final result, in the absence and presence of the following imaging confounds, visually identified by Observer 2 in the basal ganglia region blind to the neuroradiological reports: white matter hyperintensities found either bilaterally and scattered throughout the region or as a single cluster possibly indicative of a recent or old subcortical infarct, lacunes (symptomatic or asymptomatic), recent or old cortical strokes that partially affect the region, a partially or totally hyperintense globus pallidus, partial volume effects of the cerebrospinal fluid, and a combination of two or more of these factors.
We also counted the number of scans misclassified in each iteration for those people who had a neuroradiologically determined lacunar infarct, regardless of whether it was visible on T2-weighted imaging in the basal ganglia region or not. This analysis would allow us to discuss whether the occurrence of a recent lacunar infarct influenced the descriptors used by the classifier.
Results
The PVS ratings made by the experienced neuroradiologist (Observer 1), used to train the classifier, were distributed across the sample as Table 1 shows. The dichotomisation of these ratings into none-mild vs. moderate-severe resulted in 133 and 131 datasets for each class, respectively. The best descriptor in terms of overall accuracy was the descriptor based on the Bag of Visual Words model (81.15%) using a dictionary with 175 visual words, followed by the fusion of WCF4 and LBP^{riu2}_{1,8} (78.84%). Moreover, the former reached a sensitivity only slightly worse than the latter. The highest sensitivity was achieved by LBP^{riu2}_{1,8}, but its specificity was much worse than that of the BoW-based descriptor. It is also remarkable that, whereas WCF4 does not achieve a good accuracy on its own, its accuracy improves by 7% when it is fused with the LBP^{riu2}_{1,8} descriptor. The automatic classifier used in the following sections will be the SVM based on the descriptors that achieved the best overall accuracy (i.e. the dense-SIFT-based Bag of Visual Words model, with the SVM parameters C = 5 and γ = 10⁻⁴, using a dictionary of 175 visual words). Once the visual dictionary is created and the classifier is trained, this method took 0.0477 seconds to describe and classify each image.
The maximum possible linear-weighted kappa, given the observed marginal frequencies, was 0.8486. Table 3 shows the agreement (i.e. kappa coefficient, standard error, 95% CI and maximum possible linear-weighted kappa given the observed marginal frequencies) between each observer and the ratings assigned by the SVM classifier that yielded the best accuracy (see Table 2). Since the classification experiment was repeated 10 times, the reported agreements are the average of the corresponding 10 agreements. The marginal proportions between the ratings from the expert (i.e. Observer 1) and the automatic classifier were not significantly different from each other (McNemar's test p=0.1086); see the 2x2 frequency Table 4. The ratings from Observer 2 (dichotomised and not dichotomised) and from the automatic classifier were equally significantly and positively correlated with age, PVS ratings in the centrum semiovale (dichotomised and not), atrophy (deep and superficial), Fazekas scores (deep and periventricular), hypertension, old lacunar infarcts and SVD score. None of the BG PVS ratings correlated with index stroke subtype (lacunar or cortical), and all were highly and significantly correlated with each other, as Table 5 shows. Table 5: Non-parametric bootstrapped cross-correlation matrix for PVS ratings in the basal ganglia region. All correlations shown were significant with p<0.0001.
                    Observer 1   Observer 1     Observer 2   Observer 2     Automatic
Parameter           Scale 0-4    Dichotomised   Scale 0-4    Dichotomised   Classifier
Observer 1 (0-4)    1            0.9317         0.8130       0.6828         0.6588
Observer 1 (0-1)                 1              0.7341       0.6901         0.6464
Observer 2 (0-4)                                1            0.9057         0.7127
Observer 2 (0-1)                                             1              0.7030

Table 6 shows the results of the binomial multivariable logistic regression. Age, Fazekas periventricular scores and the presence of old lacunar infarcts were significantly and negatively associated with all BG PVS scores (i.e. those done by both observers and by the automatic classifier), as in Potter et al. (2015a). The coefficient estimates tabulated (B) express the effects of each predictor variable on the log odds of being in one class (i.e. 1 or 0) versus the reference class (i.e. 1 or 0 as per Observer 1). Figure 9 shows the predicted probabilities of the outcome variables for each model. The distributions of the predicted "0"s and "1"s being 0 and 1, respectively, for the classifier and Observer 2 were similar across all models. All outcomes (i.e. PVS ratings from the classifier, Observer 1 and Observer 2) were consistently poorer for Model 2, which does not include the SVD score as predictor, than for the other two models. The PVS ratings from Observer 1 were particularly sensitive to the presence -and absence- of the SVD score as predictor in the model, being exceptionally high when more components of the SVD score (including it) were included (i.e. Model 1).
Sensitivity analysis
Figure 9: Boxplots showing the distributions of the predicted probabilities of the outcome variable "1" (a) and "0" (b) (i.e. PVS ratings from the automatic classifier, from Observer 1 or from Observer 2) for each logistic regression model.
Figure 10 shows the correlated ROC curves for each outcome variable (i.e. automatic classifier, Observers 1 and 2), also for each model. The area under the curve (AUC) from the automatic classifier experiences the least variation across the three curves: 0.93, 0.90 and 0.92 for models 1, 2 and 3, respectively (maximum variation 3%), indicating the highest consistency in model accuracy, followed by Observer 2 (maximum variation 5%).
Performance on the presence/absence of imaging confounds
As Table 7 shows, only 9.6% to 16.6% of the scans that have a small T2-weighted hyperintense lesion, such as lacunes, white matter hyperintensities or new or old subcortical infarcts, in the basal ganglia region of size comparable with those of the PVS were misclassified, versus 16% of the scans that have two or more of these confounds, and 13.6% of those that had none. These percentages were higher when the T2-weighted hyperintensity covered a larger region (i.e. cortical stroke or hyperintense globus pallidus), but the number of scans that had these confounds was very small (7 and 5, respectively, out of 264). The number of patients who had a recent lacunar infarct (neuroradiologically determined) and for whom the PVS rating was miscalculated was the same as the number of patients who did not have any imaging confound and for whom the PVS rating done by the classifier was wrong (compared to the ratings of the neuroradiologist).
Discussion
We developed an automatic framework to classify T2-weighted MRI as having none or few PVS in the basal ganglia region versus having many of them, in response to the need for such a tool given the role of PVS in SVD and vascular dementia progression. Our framework uses a conventional SVM classifier based on the information from SIFT descriptors that operate on patches from the basal ganglia region sampled on a dense grid, following the "bag of words" model. These descriptors provided the highest classification accuracy (81.16%) among those evaluated. This accuracy is slightly lower than the one reported previously with the same descriptors (82.34%).
The reason is the different validation of the classifier used in the two works: previously, the classification was carried out by randomly splitting the dataset into train (70%) and test (30%) sets, whereas in this case we have used 5-fold cross validation. This classifier took an average of 0.0477 seconds to describe and classify each image. The framework proved to be useful in clinical settings and outperformed the visual classification done by a trained observer.
The image processing pipeline that pre-processed the data from which the descriptors were extracted was designed following the visual rating guidelines for PVS from Potter et al. (2015a) (http://www.sbirc.ed.ac.uk/documents/epvs-rating-scale-user-guide.pdf), which are based on assessing the PVS from a region of interest on the axial MRI slice with the most visible PVS. All agreements between the automatic classifier, the dichotomised ratings of the experienced neuroradiologist (Observer 1) and those of the trained observer (Observer 2), as shown in Section 3.2, were above 0.6. However, the agreement between the dichotomised ratings of both observers (kappa = 0.6822) was slightly higher than the agreement between the classifier and either of the observers (0.6228 with Observer 1 and 0.6743 with Observer 2). The fact that the classifier agreed better with Observer 2 than with Observer 1 may be because Observer 2 followed the same guidelines used to design the pipeline for the automatic classifier, whereas Observer 1 may have also applied their individual experience and neuroradiological knowledge when rating the PVS. The cross-correlation between the classifier output and the dichotomised ratings of both observers, shown in Table 5, followed the same pattern: the correlation of the classifier with Observer 2 was higher than with Observer 1 (0.7030 and 0.6464, respectively). This cross-correlation between the output of the classifier and the dichotomised ratings of Observer 2 (0.7030) was comparable to, and even slightly higher than, that between the dichotomised ratings of both observers (0.6901).
The statistical model built to evaluate the applicability of the automatic classifier to clinical research showed excellent and similar goodness-of-fit irrespective of whether the outcome variable was the automatic classifier (AUC=0.90), Observer 1 (AUC=0.84) or Observer 2 (AUC=0.86). Also, age, the burden of periventricular white matter hyperintensities (i.e. Fazekas PV) and the presence of old lacunar infarcts were associated with the PVS burden irrespective of whether it was rated automatically or visually by either of the observers, proving the usefulness of the proposed automatic framework. A separate sensitivity analysis of this and similar correlated models showed that the automatic classifier was the least susceptible to being influenced by the overall burden of SVD shown in the MRI scan, whilst the ratings from the neuroradiologist better captured the full picture of the SVD features. The degree to which this result was favoured by the single-slice approach adopted by the classifier (Potter et al., 2015a; Wang et al., 2016) is not known. Further evaluation on the whole extent of the three anatomical regions defined by Potter et al. (2015a), with added scrutiny to exclude lacunes, is needed.
Nevertheless, given that the accuracy of the classifier in the presence of imaging confounds was not different from that in their absence, and that the output was quite robust against the whole SVD burden, we do not foresee any problem in applying this automatic classification scheme to longitudinal or multicentre studies, as long as the training and testing datasets have similar acquisition protocols.
A possible limitation of this work is the fact that the segmentation of the basal ganglia region is not always accurate (due to, for example, not finding the anatomical points described in Section 2.3), causing potential misclassifications. As we wanted to assess the validity of a fully automatic method, we kept those suboptimal segmentations. Another limitation of the study may be the dichotomisation of the visual ratings used in the automatic classification. Due to limitations in the sample size, we needed to simplify the classification, so we dichotomised the visual rating scale as was done in previous studies (Potter et al., 2015b): a reliable 5-class classification model cannot be trained with so few instances in some classes (e.g. out of 264 subjects there were only 5 with rating 0 and 19 with rating 4). Further analyses using larger samples and considering the full ratings (i.e. 0-4) need to be done.
In this paper we have proposed an automatic framework based on image analysis and machine learning to predict the burden of enlarged perivascular spaces in the basal ganglia as "none or few" or "moderate to severe", based on the PVS visual rating scale of Potter et al. (2015a). We compared different descriptors computed from the basal ganglia region. The bag-of-visual-words-based descriptors achieved the best accuracy (81.16%) in the classification, carried out using a support vector machine trained using the visual ratings provided by an experienced neuroradiologist (i.e., Observer 1) as ground truth.
We also compared the predictions of the classifier with the visual ratings done by Observer 1 and also with those done by a trained image analyst (i.e., Observer 2). The agreement of the classifier with Observer 2 (kappa=0.6743) was higher than with Observer 1 (kappa=0.6228) and comparable to that between both observers (kappa=0.6822). The cross-correlation with Observer 2 (0.7030) is also higher than with Observer 1 (0.6464), and slightly higher than that between both observers (0.6901). Finally, we built three correlated logistic regression models with some clinical variables as independent variables and the ratings predicted by the automatic method and both observers as outcome variables, and demonstrated that, although the automatic classifier does not capture the overall SVD severity, it can be used in clinical research as it consistently gives a meaningful output in relation to clinical parameters.
For future work we will try to improve the classification performance by extracting the whole basal ganglia region and using the information from all slices where the extracted region appears (i.e. 3D analysis), as it may provide information that we are currently not taking into account. We will also try to use data from patients from other studies to increase our sample size and perform a 5-class classification (i.e. ratings from 0 to 4). Supervised machine-learning schemes like the one presented here would require ground-truth PVS counts or segmentations, done by an expert on a large number of datasets, to be able to count and/or segment PVS. Such data are currently unavailable. However, the output from this classifier could be used as input to the fully automatic unsupervised PVS segmentation approach developed by Ballerini et al. (2016) (mentioned in the Introduction), which needs the PVS ratings to tune its algorithm and make it fully automatic. Finally, the classifier presented here could be adapted to obtain the visual rating of the PVS in the centrum semiovale.
Figure 10 (caption; panels (a), (b) and (c) correspond to models 1, 2 and 3): In model 1 (a) the AUCs of the classifier, Observer 1 and Observer 2 were 0.9265, 0.9813 and 0.9074, respectively. In model 2 (b) they were 0.9041, 0.8395 and 0.8622, respectively. In model 3 (c) they were 0.9152, 0.9411 and 0.8934, respectively.
| 9,658.6 | 2017-07-01T00:00:00.000 | ["Computer Science", "Medicine"] |
Exploration of the combined vibration parameters and external magnetic field in diagnosing asynchronous electric motors
This paper presents the results of the development and application of a software-hardware complex for assessing the vibration level and external magnetic field intensity of asynchronous electric motors, with the aim of diagnosing emerging defects. The issues of increasing the reliability and durability of asynchronous electric motors, as the most critical components in technological equipment complexes, are of utmost importance. Theoretical calculations describing the relationship between changes in the intensity of the external magnetic field and the presence of defects in asynchronous electric motors were conducted. Experimental measurements were performed using a compact portable device developed by the authors, equipped with a built-in Hall sensor. Experiments to determine the parameters of the external magnetic field were conducted on several types of electric motors, for which preliminary vibration measurements had been conducted. Based on the results of vibration analysis and the distribution of the external magnetic field of the motor, a detailed list of defects detectable using this comprehensive diagnostic method has been compiled.
Introduction
In the Republic of Uzbekistan, the cotton raw material processing industry holds significant importance within the agricultural sector. With 145 cotton-textile clusters, over 250 cotton receiving points, and more than 21 self-employed enterprises, the cotton ginning industry plays a vital role in the country. The industry operates a wide range of equipment, including approximately 60,000 conditional units, over 160 power transformers, and more than 75,000 electric motors of varying capacities. Notably, nearly 95% of these motors are asynchronous electric motors [1]. Due to their simple and technologically advanced design, high energy efficiency, operational reliability, and resistance to overloads, asynchronous motors are widely used in the agricultural and industrial complex. They find applications in various sectors such as transportation, machinery drives, pumps, fans, compressors, and lifting mechanisms. Therefore, the issues of increasing the reliability and durability of asynchronous motors, as the most critical components in technological equipment complexes, are of utmost importance. The assessment of the technical condition of electric machines is an important task that allows for the early detection of emerging defects and helps prevent potentially serious negative consequences. Therefore, the development of inexpensive, user-friendly, and accurate methods for monitoring and diagnosing the condition of electric motors is crucial. The most effective methods for this purpose include vibration diagnostics [2][3], thermal monitoring [4][5], and analysis of external magnetic field (EMF) parameters of electric machines [6]. Vibration diagnostics is particularly effective in detecting most mechanical and some electrical defects, while thermal monitoring can identify defects related to overheating. In the future, monitoring EMF parameters will enable the detection of electrical defects in electric motors [7]. These methods complement each other well, as they are based on the measurement and analysis of phenomena with different physical characteristics. The development of these methods will allow for a comprehensive assessment of the technical condition of electric motors and increase the reliability of defect diagnosis [8].
Methods
In this study, we explore the method of monitoring and analyzing the parameters of the external magnetic field to diagnose emerging defects in asynchronous electric motors. As per theoretical principles, the overall rotating magnetic field generated by three-phase alternating current electric machines can be decomposed into two magnetic moment components along mutually perpendicular directions within the main plane of the machine [9].
M_x = 0;  M_y = M_M cos(ωt + φ_M);  M_z = M_M sin(ωt + φ_M)    (1)

Here M_M, ω, and φ_M represent the amplitude, frequency, and phase of the magnetic moment, respectively. Let us consider the projection of the external magnetic field (EMF) onto one of the axes, such as the Z-axis [10]. The current value of the magnetic induction in the air gap of the machine can be expressed as:

B_δ = F_φ · λ_φ    (2)

Here F_φ and λ_φ represent the current values of the magnetizing force and the magnetic permeance in the air gap. In the case of a symmetrical arrangement of the rotor with respect to the stator, we obtain [9]:

λ_0 = μ_0 / (k_δ · k_μ · δ_0)    (3)

Here k_δ is the air-gap coefficient (Carter's coefficient), k_μ is the coefficient that accounts for the saturation of the tooth zone, δ_0 is the air-gap distance between the rotor and stator, and μ_0 is the magnetic permeability constant.
The fundamental harmonic of the magnetizing force in the alternating current machine can be expressed as:

F = F_m cos(ωt + p·α)    (4)

Here ω is the frequency of rotation of the stator's magnetic field, p is the number of pole pairs, and α is the angular coordinate along the air gap.
We obtain the expression:

B_δ = B_m cos(ωt + p·α)    (5)

Here B_m = F_m·λ_0 is the amplitude of the fundamental harmonic of the magnetic induction for a symmetrical air gap. The presence of a spectrum of harmonics in the magnetic induction in the air gap leads to the appearance of a similar spectrum in the external magnetic field of the machine. The induction of the external magnetic field decreases with distance r from the source according to the law B_n = B_1 / r^(n+2), where n is the order of the magnetic harmonic. Therefore, for the fundamental external magnetic field of the machine, we can neglect the harmonics of order (p+k) due to their small magnitude. Since the external magnetic field of the machine is shielded by the housing, this should be taken into account, for example, by using the shielding coefficient k_e. Then the radial induction of the external magnetic field of the machine can be expressed as:

B_r = B_0 cos(ωt + p·α)    (6)

Here B_0 = k_e·B_m. Thus, the induction of the external magnetic field of a defect-free motor varies sinusoidally in time (Figure 1). The power supply provides voltage to the low-pass filters with a cut-off frequency F_cp = 1000 Hz, designed to filter out high-frequency components and noise from the input voltage. The filtered voltage then goes to the linear magnetic field sensor (Hall sensor). When the Hall sensor is placed in a magnetic field, the magnetic induction vector generates a potential difference in the sensor proportional to the external magnetic field. The output voltage from the Hall sensor is then amplified and fed into the input channel of the digital block. In the digital recording block, the analog signal is converted into digital format and stored for further playback and analysis [11]. The calibration of the external magnetic field induction measurement device was conducted on a specially made test bench, which included an inductance coil, a power supply, and an ammeter. In order to determine the coefficient for converting the voltage at the output of the Hall sensor into units of magnetic field intensity, measurements were performed using the magnetic measuring ferroprobe device F-205.30A. The measurements were conducted using a ferroprobe converter (field meter) and the developed instrument [12][13].
At the same point in the center of the coil, eight measurements were taken with the field meter, in both the positive and negative regions of the Hall sensor, in order to determine the sensitivity of each of its sides. After processing the measurement results, conversion coefficients were obtained for the positive (K_pU+) and negative (K_pU−) regions of the sensor. The directional diagram, showing the dependence of the output voltage on the angle α of deviation of the sensor plane, was obtained using the calibration stand (Figure 3). Experimentally, it was determined that when the sensor plane deviates by 20° from perpendicular to the magnetic field vector, the output signal changes by 100 mV, equivalent to 11.1%.
During the experiments, it was found that the components of the field directed along the outer circumference of the electric motor stator (Figure 4, b) provide the most informative data.Therefore, the sensor plane was oriented perpendicular to these lines of the field [14].
The measurement results were processed using a custom-made program, "Fft_gui", in the MATLAB programming environment, allowing the analysis of temporal signals of magnetic field intensity and a fast Fourier transform to obtain spectral characteristics. The duration of the recorded signal, saved in txt format, was 5.12 seconds, which allowed obtaining a spectrum with a resolution of 0.195 Hz (reducing the recording duration would decrease the spectrum resolution and provide an incomplete picture of the harmonic composition). The main working window of the program is shown in Figure 1.
Measurement methodology. To conduct experiments on the motor, fixed points were selected for measuring the intensity of the external magnetic field (EMF). The minimum number of points around the motor's stator that adequately represents the EMF distribution is eight. The number of control points can be increased, but this would increase the complexity of measurement and data processing. Markers were placed at the selected points to eliminate the possibility of inaccuracies in sensor placement, and signal recording was performed sequentially at each point. The circular measurement was repeated five times to reduce errors caused by imprecise sensor placement. Circular diagrams of EMF intensity were then constructed based on the averaged values of EMF amplitude.
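A minimal sketch of this spectral step follows: a 5.12-second record yields a bin spacing of 1/5.12 ≈ 0.195 Hz, and the 50 Hz line of a defect-free motor lands exactly on a bin. The sampling rate and the synthetic signal are illustrative assumptions, not values from the paper.

import numpy as np

fs = 1000.0                              # assumed sampling rate, Hz
t = np.arange(0, 5.12, 1 / fs)           # 5.12 s record, as in the study
b = 1.0 * np.cos(2 * np.pi * 50 * t)     # defect-free field: 50 Hz only

spectrum = np.abs(np.fft.rfft(b)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)  # bin spacing = 1/5.12 s
print(freqs[1])                          # ~0.195 Hz resolution
print(freqs[np.argmax(spectrum)])        # peak at 50 Hz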
Results and discussions
Experiments to determine the EMF parameters were conducted on several types of electric motors, for which preliminary vibration measurements were carried out.Vibration measurements and analysis were conducted using the vibration collector SK-1100 and the software "Vibroanalysis 2.52" by the company "Tekhnekon."The points and directions of motor vibration measurements are shown in Figure 5.
Fig. 5. Points and directions of vibration measurements on induction motors
Results of the vibration measurements. The overall vibration level (RMS of the vibration velocity) and the preliminary diagnosis of the condition according to the time signal and spectrum are presented in Table 1. The vibration level of motor #1 is within the normal range and corresponds to the allowable vibration for new motors (1.4 mm/s). The main contributors to the RMS vibration velocity level are the residual rotor imbalance left from manufacturing (a peak at the rotational frequency of 24.9 Hz) and small high-frequency components at bearing frequencies. The vibration of motor #2 slightly exceeds the allowable level for a new motor but is still below the permissible operational level (2.8 mm/s). However, the vibration spectrum is dominated by peaks at the 2nd and 4th harmonics of the power frequency (100 Hz, 200 Hz), indicating asymmetry in the currents of the stator windings. As the main known vibration indicator of interturn short circuits is the presence of the fundamental harmonic at double the power frequency (2×fc), it is possible to assume the presence of this defect at an early stage. The vibration level of motor #3 is significantly high and exceeds the allowable value (4.5 mm/s). The vibration spectrum has an unusual shape and contains a component at the rotational frequency f1 = 25 Hz, along with a considerable number (20) of its harmonics. Among them, the 2nd, 3rd, and 4th harmonics of f1 have the highest amplitude. It is possible that these are indications of an electrical defect: a broken rotor bar. Indeed, one of the most common defects in asynchronous electric motors is the cracking and breaking of rotor bars, which are often designed in the form of a squirrel cage. Reliable diagnostics of this defect can prevent sudden failures and increase the drive's reliability. However, relying solely on vibration diagnostics may sometimes be insufficient to confidently identify this defect.
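A minimal sketch of the RMS vibration-velocity check against the thresholds quoted above (1.4, 2.8 and 4.5 mm/s); the band labels are our paraphrase of the text, and the input signal is an assumed array of vibration-velocity samples.

import numpy as np

def vibration_rms(velocity_mm_s):
    # RMS of the vibration velocity signal, in mm/s.
    return float(np.sqrt(np.mean(np.square(velocity_mm_s))))

def classify(rms):
    if rms <= 1.4:
        return "within the norm for new motors"
    if rms <= 2.8:
        return "above new-motor level but operationally permissible"
    if rms <= 4.5:
        return "elevated, approaching the allowable value"
    return "exceeds the allowable value"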
In the example provided earlier, the absence of a characteristic set of sidebands around the harmonics of the rotational frequency (fn) makes it challenging to confidently detect this defect at an early stage.This is because a significant number of harmonics of the rotational frequency can also be indicative of mechanical defects in electric motors, such as mechanical loosening of fixed connections and clearances in movable connections.
To ensure accurate and early detection of rotor bar defects, a comprehensive approach involving multiple diagnostic methods may be necessary.Combining vibration diagnostics with other techniques, such as motor current signature analysis (MCSA) or stator current analysis, can provide a more reliable assessment and facilitate the timely detection of rotor bar defects, thus improving the overall maintenance strategy and prolonging the motor's service life.
The developed comprehensive method, based on vibration diagnostics and analysis of the external magnetic field of the machine, has the potential to detect up to 90% of possible defects in asynchronous electric motors, thus enhancing the reliability of diagnostics. Table 2 presents the types of defects that can be detected using the integrated vibration and magnetic methods of monitoring and diagnostics. During the conducted research, a compact portable device was developed for measuring the intensity of the external magnetic field of asynchronous electric motors. Additionally, a test bench was designed to conduct tests under various operating conditions and different types of faults in electric motors. A combined study of vibration parameters and external magnetic field analysis was performed for faults such as "interturn short-circuits" and "broken rotor bars."
Conclusion
The development of defects in electric motors not only affects vibration parameters but also induces changes in their external magnetic fields, particularly in the circular amplitude diagrams and field intensity spectra. The development of control and diagnostic methods based on external magnetic field parameters will enable obtaining reliable information not only about the types of defects but also about their severity. To determine the diagnostic features of asynchronous electric motor defects, the field intensity along the external circumference of the stator was measured in the initial stage. Based on these parameters, a circular amplitude diagram was constructed, and the harmonic composition of the defect-free motor's magnetic field intensity was determined, serving as the reference. The circular amplitude diagram of the defect-free motor was nearly symmetrical and presented a circular shape, while the spectrum contained only the first harmonic (50 Hz) of the magnetic field intensity, without high-frequency components. Through modeling of "interturn short-circuit" and "broken rotor bar" defects in electric motors, it was revealed that the circular amplitude diagrams and harmonic spectra of the magnetic field intensity differed significantly from the reference. These defects caused the circular amplitude diagrams of the magnetic field intensity to change drastically, becoming asymmetric.
Fig. 1. The main working window of the "Fft_gui" program with the time signal of the external magnetic field induction of a defect-free electric motor
Table 1. Data of asynchronous electric motors and results of vibration measurements and diagnostics
Table 2. The capability of defect detection
| 3,215.8 | 2023-01-01T00:00:00.000 | ["Physics"] |
A Comparative Study on Different Pharmaceutical Industries and Proposing a Model for the Context of Iran.
Medication is known as the main and most effective factor in improving public health. On the other hand, having a strong and effective pharmaceutical industry will, to a very large extent, guarantee people's health. Therefore, this study set out to review different pharmaceutical industries around the world and propose a model for the context of Iran. This is a qualitative as well as comparative study which was carried out in 2015. At the first stage, using the World Bank website, countries were divided into four groups of low-income, lower-middle-income, upper-middle-income, and high-income economies. Then, four countries, Afghanistan, India, Brazil, and Canada, were chosen from these four groups, respectively. Secondly, data gathered from these countries were given to two 12-member expert panels. Finally, using the articles and the results of the expert panel groups, useful and effective policies were extracted for the growth and development of Iran's pharmaceutical industry. Findings of the study indicated that the following seven items are the essential policies for the context of Iran: establishment of high-level academic centers as well as research institutes, adopting a weak patent law, supporting research and development centers at universities and pharmaceutical companies, backing national pharmaceutical companies, implementing generic rules, gradual economic liberalization, and membership in the World Trade Organization. Since the pharmaceutical industry is an effective and inseparable part of every health system, proper and evidence-based policies should be taken into account in order to develop this industry and, ultimately, meet the public's needs.
Introduction
Health has turned into one of the most substantial issues in different communities, and this has increased the demand for health care services. Due to these high demands, most countries have faced increased health expenditures. The pharmaceutical industry is no exception in this regard, and it is one of the areas which has played a vital role in increasing health expenditures (1, 2). On the other hand, medication is known as the main and most effective factor in improving public health as well as in controlling some diseases among people (3).
Considering the importance of public health, governments have implemented some strict and fundamental rules for health-related industries, so they can improve public health through controlling the costs and, therefore, can provide the grounds for development of such industries including pharmaceutical industries. Development carries a qualitative meaning and is a process through which fundamental changes and reforms take place at economic, social, cultural, and political levels. This can result in creation of new production methods and a shift from traditional to modern ways of improving industries (4, 5).
Moreover, improving public health and developing the pharmaceutical industry are two of the most important challenges countries face around the world. The pharmaceutical industry, as one of the most vital parts of any health system, is strictly monitored and controlled by governments. Implementation of some strict rules as well as microscopic supervision of governments over pharmaceutical companies have considerably affected their development and growth. Any wrong selection of rules or policies may end the life of many pharmaceutical companies, which itself will put public health at risk. On the other hand, implementation of proper policies will not only help this industry grow and develop, but will also assure the availability of drugs at the right place, with the right price and quality, and, ultimately, will improve the community's health. Meanwhile, pharmaceutical companies should try to adapt themselves to these policies in order to, firstly, maintain their status quo, and, secondly, create development and growth through turning threats into opportunities or even using the currently available opportunities (6, 7).
Governments and policy makers of countries around the world are well aware that development and growth of national industries will result in economic growth along with creation of job opportunities, wealth production, poverty reduction, and increased trades with other countries, and may also bring about high technologies to the national pharmaceutical industry. If pharmaceutical industry-as one of the important industries-witnesses considerable and dramatic growth, then, essential as well as non-essential drugs will be provided for the community with high quality and at the right price. So, people will benefit from such growth, and this will improve the community′s health, and we will have healthier people at work (8,9).
Considering the great advancements in the medical area, the life expectancy of people in developing and developed countries is rising, which will result in a more aged population in the coming years. These people will definitely require some drugs; therefore, we will witness the enlargement of pharmaceutical markets in the near future (10, 11). With regard to the abovementioned points, this study set out to present a proper model for the development and growth of Iran's pharmaceutical industry.
Experimental
This is a qualitative and comparative study which was carried out in 2015. Since the researchers wanted to study pharmaceutical industries around the world, it was not feasible to choose all countries, so a criterion was required to classify countries and select some randomly from the resulting categories. Hence, after consulting with the research supervisors, it was decided to use the income level of countries, as a criterion affecting the growth and development of pharmaceutical industries, in order to classify the countries. Then, using the World Bank website, countries were divided into four groups of low-income, lower-middle-income, upper-middle-income, and high-income economies. Then, four countries, Afghanistan, India, Brazil, and Canada, were chosen from these four categories, respectively. Later on, Turkey, a country from the upper-middle-income category, was added to the study due to the progress of its pharmaceutical industry in recent years. Next, we went through the «Scopus», «PubMed», and «Google Scholar» databases using the key words «Pharmaceutical policy and growth» and «Pharmaceutical policy and development».
We tried to select the most related articles which met the following criteria:
- Inclusion of pharmaceutical rules and policies as well as the role of governments in development and growth of pharmaceutical industry
- Having a time period mentioned for the rules, policies, and interventions
- Outlining the effects of pharmaceutical rules, policies, and interventions
- Clarifying the reason(s) behind implementation of these rules and policies
The aim of this stage was to extract the rules and policies for the pharmaceutical development and growth of the chosen countries as well as the governments' roles in supporting their pharmaceutical industry. Then, these data were used to prepare the interview guide for the second stage.
In the second stage, twenty-four interviewees were invited to the expert panel discussions. They were experts in pharmaceutical areas and were selected with the help of the supervisor, who has been working in the pharmaceutical area since the Islamic Revolution. These experts were selected using the following criteria:
- Having management experience in the pharmaceutical industry or related companies
- Having work experience in the ministry of health
- Being familiar with international pharmaceutical markets
- Having work experience in research and development centers of pharmaceutical companies
These twenty-four interviewees were then divided into two 12-member groups. This was done in order to prevent any kind of crowdedness and to control the panels effectively. The pharmaceutical rules and policies gathered in the previous step were given to these experts, and they were asked to express their opinions on the feasibility and appropriateness of each single policy. They were also asked to add any necessary points to the list of policies. Finally, using the articles and the results of the expert panel groups, useful and effective policies were extracted for the growth and development of Iran's pharmaceutical industry.
Results
Here, the conditions of the pharmaceutical industry in different countries are assessed and then the experts′ opinions have been used to shed light on Iran′s pharmaceutical industry.
Afghanistan
The quantity of both donated and privately imported medicines entering Afghanistan has increased considerably since 2002. Most of the pharmaceutical market (70 to 80%) is owned by the private sector, and the market may be worth up to US$200m per year. Afghanistan has a National Essential Drug List which determines the medicines for use in public health facilities. There are some limitations imposed on privately imported medicines by the ministry of health. One of the weaknesses of the pharmaceutical industry in Afghanistan is the widespread smuggling of medicines into the country. Moreover, Afghanistan has a chaotic pharmaceutical market. Medicines are brought into the country from many diverse sources, and there is a puzzling array of products on sale. The number of actors is larger at every point in the supply chain than in other studied markets. There are more importers, more wholesalers, many more pharmacies, many grocery stores that sell medicines and street vendors of medicines, as well as purveyors of traditional medicine (12, 13). There is some evidence showing that doctors may overprescribe medicines without paying attention to the possible side effects. Patients often ask pharmacists to prescribe medicines, even though a large proportion of private pharmacies do not have a qualified pharmacist on staff. What is worse is the presence on the market of low-quality and fake medicines containing insufficient ingredients. Afghanistan also suffers from inadequate inspection, sampling, and testing facilities to ensure the basic standards of medicines on the market. Given the scale of smuggling, it would be much better if efforts were concentrated on inspection and testing at the point of wholesale and retail. However, the absence of testing facilities at border points, long delays in clearing imports, and pending sample results from Kabul are serious impediments for importers to bring their imports through official channels. Recently, the government of Afghanistan has been working on having testing facilities installed at borders or even having a mobile laboratory. In addition, there is not much control and regulation of the pricing of pharmaceuticals in this country (14, 15).
India
With the implementation of the patent law in 1911, all innovations, including products as well as their production methods, became patentable for 36 years. This law was a copy of the patent law of Britain launched in 1852. It resulted in the creation of a free market for multinational companies, in a way that made India import its pharmaceutical products from the mother countries; the multinationals had no tendency to produce their patented products within the country and did not even let Indian companies produce them (16). Another important point about India is the presence of joint committees between the ministries of health and of science and technology. These committees are responsible for making arrangements between the two ministries so that they can set up proper and harmonized regulations for the growth and development of the national pharmaceutical industry. Moreover, the Indian government announced a law requiring its industries to adapt themselves to trade-related aspects of intellectual property rights (TRIPS). According to this law, there was no patent right for new applications of old drugs, no patent for mixed drugs, and no patent for derivatives of a new molecule if it had not increased the effectiveness of the previous molecule (17, 18).
The Indian government not only tried to harmonize its pharmaceutical rules with TRIPS, but also intelligently used the flexibilities of TRIPS to create suitable conditions for the growth and development of its pharmaceutical industry. Through this, it also protected national companies from the harms of severe and unfair international policies and rules. Therefore, Indian companies could follow different paths such as producing generic drugs, investing in research and development departments so as to produce new drugs, partnering with multinational companies in research areas, and marketing patented drugs and producing them through signing contracts with their owners (19).
India has also implemented some rules for the research and development of its companies. These rules include tax exemptions of up to 150% for investments in R and D sections as well as for companies which use national technologies; government financial support of joint research projects between universities, research centers, and pharmaceutical companies since 2004; providing low-interest-rate loans for pharmaceutical companies; and giving a 5-year tax exemption to companies involved in research and scientific projects (17, 20).
Brazil
It was after the mid-20th century that both India and Brazil decided to bring growth and development to their national industries, especially the pharmaceutical industry. Both countries tried hard to implement weak patent laws for a certain time and to create a big national market along with training many scientists and experts. In 1930, Brazil's pharmaceutical market included some research institutes, national pharmaceutical companies, as well as some multinational companies which held 13.6% of Brazil's pharmaceutical market (21). In 1990, the federal government of Brazil passed a law according to which the establishment and development of companies required a formal license from the government. The aim of this law was to encourage investments in building or developing companies in strategic industries, and to reduce imports and any kind of dependence on foreign countries (22). In recent years, Brazil has tried to improve collaborations between pharmaceutical companies and universities so as to enhance research activities and use their results for the development of the pharmaceutical industry (23). In addition, companies which are active in research and development areas can benefit from income tax exemptions or can be financially supported in buying new and essential equipment (24). Brazil has also built some research centers for producing essential drugs for diseases such as AIDS, so that it could reduce its dependence on importing expensive and high-tech drugs for such diseases. Moreover, it has put some customs tolls and duties on the import of any drug which is being produced by national companies. Finally, it passed the generic law in 1999 in order to support national pharmaceutical companies and to make drugs affordable and accessible to the public. This law was also concerned with the packaging, marketing, and promotion of drugs (21).
Canada
Due to the high prices of drugs in 1960 and in order to help the pharmaceutical industry grow, the government of Canada authorized a patent law according to which the patent duration changed to 17 years. This law also gave patent owners permission to use compulsory licensing for both imported and nationally produced drugs, provided that a royalty of 9% of the net selling price of the drug was paid. This licensing reduced governmental costs by up to $212 million in 1983 (25). In 2004, a bill called C-9 or C-56 was passed according to which the government of Canada, with respect to the 2003 act of TRIPS, allowed companies to produce and export their patented drugs to under-developed or developing countries. This bill was authorized for two years in the beginning but was extended after that (26). Canada also passed a law to support national companies and their exports. According to this law, drugs which were produced only for export did not require any registration inside the country and needed only to be certified and registered in the target country. Another form of government support for national companies was the replacement of drugs by their generic forms, which was later called «generic substitution». Moreover, in 1983 the government offered some tax exemptions for actions aiming at the development of research and the production of new drugs (27, 28).
Turkey
The pharmaceutical industry of Turkey had, on average, 14% growth from 1995 to 2000. This was much higher than the 7.2% growth of European countries over the same period. One of the weaknesses of Turkey's pharmaceutical industry is that it takes 2.5 years for a company to get a license for producing a generic drug. On the other hand, its system of controlling drug prices has held this strategic industry back from developing. In Turkey, most research is done by universities, but its results go unused, since there are no strong, large private pharmaceutical companies to use the results of such research. This is considered one of the reasons behind low investment in R and D sections (29, 30).
From 1961, because essential drugs were expensive and Turkey's market had turned into a monopolistic market for multinational companies, Turkey called off the patenting of any product as well as of the production methods of drugs, and this led to dramatic growth among national companies (29).
In 1999, Turkey signed an agreement with the World Trade Organization (WTO) under which it was to incorporate patents on drugs and production methods into its rules. Turkey tried to control imports so as to support national producers, and provided subsidies for pharmaceutical companies as well. National companies, meanwhile, profited greatly by importing raw materials and working only on finished products. Furthermore, in order to support research within the country, Turkey prevented foreign companies from producing only finished products: such companies must start from the first stages of drug production if they want any production license (18).
Experts' opinions on Iran's pharmaceutical industry
Obstacles to the growth and development of the pharmaceutical industry in Iran
Infrastructural problems, such as the lack of a transportation system suitable for industrial development, the lack of high-speed and reliable information and communication networks, and a weak banking system for facilitating international exchanges, have created obstacles for the national pharmaceutical industry. Inconsistent monetary and production policies, along with the instability of top managers' positions, are among the other issues facing the pharmaceutical industry in Iran.
Government's role in the growth and development of the pharmaceutical industry
Since the government owns most of the pharmaceutical companies in Iran and is also the main drug buyer (through insurance companies), it can not only play an important role in implementing national and international policies for the development of the pharmaceutical industry, but can also establish exact, feasible, and comprehensive plans to meet those regulations and policies. The government's macro-policies determine pharmaceutical regulations and policies at the micro level. The Iranian government can pave the way for the growth and development of the pharmaceutical industry by limiting drug imports, exempting some companies from customs tolls and duties, and providing tax exemptions and low-interest loans for research-centered companies. Although the government plays the most fundamental role in the pharmaceutical industry, some experts believe that privatization is what the Iranian pharmaceutical industry needs.
In order to support national companies and prevent the waste of insurance companies' capital, the government can differentiate essential and strategic drugs from other products, modify insurance policies (such as removing OTC drugs from the insurance list and adding national products to it), adjust drug prices while offering incentive prices, and change doctors' prescribing behavior by encouraging them to prescribe generic drugs.
Doctors' role in the growth and development of the pharmaceutical industry
Physicians in Iran do not usually trust national pharmaceutical products, so they tend to prescribe foreign drugs without even being aware of their quality. Some nationally produced drugs are in fact of higher quality than imported ones. Doctors should be fully informed about these drugs through conferences and seminars.
As seen in Table 1, factors such as inconsistency in management, industrial, production, and monetary policies; the lack of targeted planning and support for university-level research, together with the weak relation between university and industry; and the absence of support for the R and D sections of national companies are among the main obstacles facing the pharmaceutical industry in Iran. Ultimately, using the above-mentioned information and considering the features of a successful pharmaceutical industry, a framework was developed indicating how Iran's pharmaceutical industry can use these steps in order to improve its conditions (Figure 1).
Discussion
The current study set out to present a proper model for the development and growth of Iran's pharmaceutical industry. After a meticulous review of the searched articles along with expert panel interviews, the seven most important and fundamental steps for the development and growth of the national pharmaceutical industry were derived. These steps are the following:
1-Establishment of high-profile academic centers as well as universities for training students and researchers in pharmacy, medicine, chemistry, and related majors in order to work in the pharmaceutical industry or at research and development (R and D) centers
This counts as one of the most fundamental policies for the growth and development of pharmaceutical industries. Universities and high-profile academic centers should make the best use of their facilities to improve the scientific and technical capabilities of pharmacy students and researchers and prepare them for the R and D laboratories at universities and pharmaceutical companies, as well as for work in the production and managerial sections of pharmaceutical companies.
Although Iran has many universities training students in pharmacy, medicine, chemistry, and other related fields of study, the results of their research are not properly used to produce new and innovative products. Moreover, there are weaknesses in the practical aspects of university curricula, which are not properly and usefully implemented in the country. On the other hand, the lack of growth and development of pharmaceutical companies, especially in R and D, which is mainly focused on drug formulation, has reduced the companies' need for professional researchers and driven them to leave the country to conduct their research elsewhere. It seems that greater attention to practical courses and student apprenticeships, together with equipping universities with R and D facilities and reforming the policies and rules that support research, can give our researchers the motivation and drive needed to produce new and innovative products. Furthermore, the relationship between government, industry, universities, and research centers should be enhanced. Financial and non-financial government support can pave the way for the presence of university professors and researchers in pharmaceutical companies and help the country use the full potential of such groups for national growth and development.
2-Using weak patent law to develop reverse engineering and increase the capabilities of pharmaceutical companies to conduct fundamental research and, ultimately, produce brand products
As mentioned earlier, using weak patent law to develop reverse engineering, increase the capabilities of national companies in producing pharmaceutical products, and learn the development path of a drug are basic steps adopted in many countries. In this regard, India from 1972 to 2005, Brazil from 1945 to 1969, and China from 1985 to 1992 patented only the pharmaceutical production method. Moreover, pharmaceutical products were not patentable before 1949 in England, before 1960 in France, before 1958 in Germany, and before 1978 in Switzerland, Sweden, and Italy. Using weak patent law, especially in the area of pharmaceutical production methods, increased the ability of these countries' pharmaceutical companies to produce new drugs. Of course, we cannot overlook the governmental support in these countries that provided a suitable environment for the growth and development of their pharmaceutical industries. The endorsement and implementation of weak patent law are among the macro-policies of the Iranian government. It seems that Iran has so far been able to use these rules properly, but there have been no long-term, targeted plans for achieving goals beyond its borders. Inter-ministry committees should be arranged in order to find proper and feasible solutions for the problems of Iranian industries, especially the pharmaceutical industry.
3-Supporting R and D centers at universities and pharmaceutical companies and building a strong and dependable relationship between industry, university, and government
Governments have mainly supported pharmaceutical research through the following:
- Financial support for research done at universities, research centers, and the R and D departments of pharmaceutical companies
- Low-interest loans for importing the supplies and consumables needed to conduct research
- Tax and duty exemptions for importing the supplies and consumables needed to conduct research
- Tax exemptions for research resulting in a new product or a new production method
- Price incentives for research resulting in a new product or a new production method
Due to the high costs of producing raw pharmaceutical materials, low marginal profits, and the lack of governmental support, private pharmaceutical companies cannot properly invest in their R and D sections. Despite the advancements in reverse engineering, most R and D activity is unfortunately focused on drug formulation. The government should provide more support for companies involved in research. On the other hand, most support is currently directed toward research resulting in article publication, whereas it should go to research that will lead to a new product.
4-Backing up local pharmaceutical companies so as to enhance their capabilities in producing pharmaceutical raw materials, meeting national pharmaceutical needs, and increasing their competitiveness against multinational companies
Not only do we need to enact policies supporting the R and D departments of pharmaceutical companies, but we also need to back up the companies themselves so that they can compete with multinational companies by updating their equipment and technologies. Most developed and developing countries apply the following procedures to support their pharmaceutical companies:
- Placing high customs tolls and duties on imported foreign goods
- Implementing tax exemptions
- Providing subsidies for companies
- Assigning monopoly rights for producing some goods to small-size companies
- Implementing policies that make foreign companies produce pharmaceutical raw materials, as well as some drugs, inside the country
- Providing low-interest loans so companies can update their equipment and comply with good manufacturing practice (GMP) procedures
- Forming joint ventures among small-size companies with the aim of creating bigger and stronger companies
- Applying the rules of TRIPS so as to develop and grow the national industry
Due to Iran's involvement in international political challenges, especially the sanctions imposed on Iran after the Islamic revolution, businesses within the country have faced obstacles; therefore, these businesses require more governmental support for their strategic growth and development. On the other hand, a high inflation rate along with a broken banking system also hinders the growth and development of national industries. Moreover, inconsistent rules and regulations, along with frequent changes in management positions, do not allow industries to form steady development programs. These changes have also broken the trust among universities, industries, and government. The government can play a vital role in the development and growth of the pharmaceutical industry by bringing stability to production and monetary policies.
5-Implementing generics rules in order to support national generic drug producers
A generics act is usually intended to support national companies that produce generic drugs, protect patient rights, and prevent the waste of insurance companies' capital. Although this act has been in force in Iran since the Islamic revolution, it has some weaknesses as well.
The government's purchase of generic drugs, the requirement that pharmaceutical companies print the generic name on their products, and the requirement that doctors working in governmental sectors prescribe generic drugs for patients are among the important actions taken under the generics act in Iran. Unfortunately, Iranian doctors are not well aware of the quality of national pharmaceutical products. For this reason, they tend not to prescribe generic drugs for their patients, and this has imposed dramatic costs on the health and treatment sectors.
6-Gradual economic liberalization to develop and enhance the pharmaceutical industry once it is equipped with competitive capabilities
Many developing and developed countries move gradually toward economic liberalization once their national industries reach a reasonable level of growth and development and are capable of competing with their international competitors. Influential actions toward this goal include reducing customs tolls and duties on imported foreign drugs, privatizing governmental companies, and reducing the number of drugs under price control (22). Iran has many insurance companies, which are the main drug buyers. On the other hand, some very strong and large pharmaceutical companies severely control drug pricing in Iran.
The pricing system for drugs in Iran does not reward the efforts of companies, so they are not willing to invest in producing innovative drugs or in their R and D departments. Incentive pricing could move companies toward innovation and research and development.
Moreover, the failure to privatize governmental companies, or even to develop them, is a major obstacle to the growth and development of the pharmaceutical industry in Iran.
7-Joining the World Trade Organization (WTO) and complying with all patent laws of TRIPS
Countries like Turkey, China, Canada, Brazil, and India have harmonized their industrial and monetary policies with the WTO through gradual economic liberalization; as a result, they supply more of their pharmaceutical products, especially generics, to the developed Western countries. They have also enlarged their market share and, through this, are investing more in producing innovative and high-tech drugs. On the other hand, patients in these countries enjoy access to new foreign drugs because of their countries' membership in the WTO. India is one of the countries that has used TRIPS to adapt itself to the rules and policies of the WTO, and its pharmaceutical companies have found their way into the world market (16). Although the Iranian government has shown no willingness to join the WTO, it should use the flexibilities of TRIPS for the development and growth of its pharmaceutical industry. Perhaps one of the most important issues in joining the WTO is the reduction, or even immediate elimination, of tariffs, since tariffs and tolls affect the trade indexes and trade volume of the Islamic Republic of Iran in its business ties with the outside world. Although joining the WTO would positively affect exports (through improving technical knowledge, gaining access to modern technologies...), imports (through collaboration with international companies...), quality (through following international standards...), and the survival of the national pharmaceutical industry (31), it should be taken into consideration that WTO membership requires proper infrastructure and appropriate management. The Iranian government should adjust some of its business, monetary, and economic rules and policies so as to meet the requirements of the WTO. Moreover, the government should provide facilities for pharmaceutical companies to help them use modern technologies and improve their GMP.
Conclusion
These are the main steps that can be taken to bring about change in Iran's pharmaceutical industry. Along with these steps, there is a dire need for accurate long-term planning, tax control, reform of the banking system, revision of insurance rules and policies, and a change in doctors' prescribing behavior.
"Medicine",
"Business",
"Economics"
] |
The Origin, Epidemiology, and Phylodynamics of Human Immunodeficiency Virus Type 1 CRF47_BF
CRF47_BF is a circulating recombinant form (CRF) of the human immunodeficiency virus type 1 (HIV-1), the etiological agent of AIDS. CRF47_BF represents one of 19 CRFx_BFs and has a geographic focus in Spain, where it was first identified in 2010. Since its discovery, CRF47_BF has expanded considerably in Spain, predominantly through heterosexual contact (∼56% of the infections). Little is known, however, about the origin and diversity of this CRF or its epidemiological correlates, as very few samples have been available so far. This study conducts a phylogenetic analysis with representatives of all CRFx_BF sequence types along with HIV-1 M Group subtypes to validate that the CRF47_BF sequences share a unique evolutionary history. The CRFx_BF sequences cluster into a single, not well supported, clade that includes their dominant parent subtypes (B and F). This clade also includes subtype D and excludes sub-subtype F2. However, the CRF47_BF sequences all share a most recent common ancestor. Further analysis of this clade couples CRF47_BF protease-reverse transcriptase sequences and epidemiological data from an additional 87 samples collected throughout Spain, as well as additional CRF47_BF database sequences from Brazil and Spain to investigate the origin and phylodynamics of CRF47_BF. The Spanish region with the highest proportion of CRF47_BF samples in the data set was the Basque Country (43.7%) with Navarre next highest at 19.5%. We include in our analysis epidemiological data on host sex, mode of transmission, time of collection, and geographic region. The phylodynamic analysis indicates that CRF47_BF originated in Brazil around 1999–2000 and spread to Spain from Brazil in 2002–2003. The virus spread rapidly throughout Spain with an increase in population size from 2011 to 2015 and leveling off more recently. Three strongly supported clusters associated with Spanish regions (Basque Country, Navarre, and Aragon), together comprising 60.8% of the Spanish samples, were identified, one of which was also associated with transmission among men who have sex with men. The expansion in Spain of CRF47_BF, together with that of other CRFs and subtype variants of South American origin, previously reported, reflects the increasing relationship between the South American and European HIV-1 epidemics.
INTRODUCTION
High genetic diversity is a defining feature of human immunodeficiency virus type 1 (HIV-1), the AIDS virus. This gain and loss of diversity is a hallmark of the evolution of HIV in the context of drug resistance and changing environments (Pennings et al., 2014). A contributing factor in the evolution of HIV is the process of recombination (Rambaut et al., 2004; Vuilleumier and Bonhoeffer, 2015). Genetic recombination is known to impact HIV allelic diversity and subsequent population dynamics at a rate equivalent to the high mutation rate of HIV (Shriner et al., 2004). Genetic diversity within HIV subtypes can be up to 17% sequence divergence across the genome, with 17-35% divergence between subtypes (Castro-Nallar et al., 2012a). Yet recombination can even occur between subtypes as HIV variants spread around the globe, leading to circulating recombinant forms (CRFs), as well as unique recombinant forms (URFs) (Castro-Nallar et al., 2012b). There are currently 118 known HIV-1 CRFs according to the Los Alamos HIV Sequence Database (Los Alamos National Laboratory, 2021), involving recombination events between nearly all known subtypes and even between other CRFs [e.g., CRF15_01B is a recombinant form between CRF01 and subtype B (Tovanabutra et al., 2003)]. CRFs often have their own unique population dynamics and molecular epidemiology compared to their parental strains and often lead to novel infection dynamics and spread. One such CRF is CRF47_BF, discovered in Spain and described in 2010 (Fernández-García et al., 2010) as an intersubtype recombinant form between HIV-1 subtypes B and F. Among the most abundant CRFs are those between the B and F subtypes, with 19 CRF_BFs (note that in the Los Alamos HIV Database these are sometimes designated "BF" and sometimes "BF1", even for the same CRF). All but two of the CRF_BFs are known from South America (mainly Brazil, but also Argentina, Uruguay, Paraguay, Chile, Peru, and Bolivia), with a few found in both South America and Europe (CRF66, 75, and 89). Only two CRF_BFs have been reported to circulate exclusively in Europe: CRF42_BF in Luxembourg (Struck et al., 2015) and CRF47_BF in Spain (Fernández-García et al., 2010). Since its description, CRF47_BF has expanded considerably in Spain, predominantly via heterosexual contact, and is now known from Brazil as well, as attested by a CRF47_BF virus collected in that country whose sequence is deposited in the Los Alamos database (Los Alamos National Laboratory, 2021).
The goal of this study is to estimate the temporal and geographic origin of CRF47_BF and the dynamics of its diffusion and growth throughout its evolutionary history. Toward this goal, we combine new CRF47_BF sequence data from strains isolated in Spain by our lab with data from other BF strains in the Los Alamos database to examine the origin and evolutionary dynamics of CRF47_BF and their epidemiological correlates.
Sample and Data Collection
Plasma and whole blood samples were collected from HIV-1-infected patients at public hospitals across eight regions in Spain for a molecular epidemiological study of all new HIV-1 diagnoses seen at the participating centers and for antiretroviral drug resistance testing. Epidemiological data from the CRF47_BF patients were collected to link to the HIV sequence data. The epidemiological data included patient gender, the transmission route, the patient's year of HIV diagnosis and date of sample collection, the region from which the sample was collected, the country of origin of the individual, and whether the patient was on antiretroviral (ARV) therapy.
The study was approved by the Committee of Research Ethics of Instituto de Salud Carlos III, Majadahonda, Madrid, Spain (report numbers CEI PI 38_2016-v3 and CEI PI 31_2019-v5). The study did not require written informed consent by the study participants, as it used samples and data collected as part of routine clinical practice and patients' data were anonymized without retaining data allowing individual identification.
Sequence Analyses
(RT-)PCR was used to amplify the protease-reverse transcriptase (PR-RT) gene region from plasma-extracted RNA or whole blood-extracted DNA using previously described primers (Delgado et al., 2015; Supplementary Figure 1). PCR products were sequenced using the Sanger method with an automated capillary sequencer. These data were combined with PR-RT sequences classified as CRF47_BF at the Los Alamos HIV Sequence Database and reference sequences for all subtypes and all CRFx_BFs for this same gene region from the Los Alamos HIV Database. Finally, we conducted a BLAST (Altschul et al., 1990) search against GenBank with the 5′-most 950 nt of PR-RT of all CRF47_BF viruses and included all sequences within 95% similarity. BLAST searches and further analyses (see below) yielded only two additional CRF47_BF sequences not identified as such at the Los Alamos database (with GenBank accessions JF929086, from Spain, and JQ238096, from Brazil).
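Such a GenBank similarity screen can be scripted; the sketch below, assuming a hypothetical query file name and using Biopython's online BLAST interface, keeps hits whose best alignment reaches the 95% identity threshold mentioned above. It is an illustration of the approach, not the exact procedure used in the study.

```python
# Minimal sketch of the GenBank BLAST screen described above, using Biopython.
# Assumes a query file "crf47_prrt_5prime.fasta" holding the 5'-most 950 nt of
# PR-RT for one CRF47_BF virus; file name and threshold handling are illustrative.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

query = next(SeqIO.parse("crf47_prrt_5prime.fasta", "fasta"))

# Submit the query to NCBI's online blastn service against the nt database.
handle = NCBIWWW.qblast("blastn", "nt", str(query.seq))
record = NCBIXML.read(handle)

# Keep hits whose best local alignment reaches >= 95% identity.
similar_accessions = []
for alignment in record.alignments:
    hsp = alignment.hsps[0]  # highest-scoring segment pair
    identity = hsp.identities / hsp.align_length
    if identity >= 0.95:
        similar_accessions.append(alignment.accession)

print(f"{len(similar_accessions)} database sequences within 95% identity")
```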
We conducted two analyses with these data. (1) We included all data to validate their quality and to place CRF47_BF within a broader phylogenetic context. Our initial phylogenetic analysis included subtypes from the HIV-1 M group (subtypes A1, A2, B, C, D, F1, F2, G, H, J, K, and L), as well as the CRF_BF recombinants (TotalCRF_BF.fasta; see Supplementary Material). Our final alignment (1,200 bp) included 14 sequences representing all the major subtypes within HIV-1 group M, 5 subtype B sequences, 11 subtype F (F1, F2) sequences, and 34 representatives of all known and distinct CRF_BFs. This alignment also subsumed (2) our more focused CRF47_BF dataset (CRF47_BF.fasta; see Supplementary Material). By including additional subtypes (including lab strains), we can both verify the monophyly of our target group of CRF47_BF sequences and validate that there are no contaminants or strange recombinants within this group, as would be indicated by novel phylogenetic placement. For this second dataset, we included all 99 CRF47_BF sequences: 87 obtained by us [7 from a previous study (Fernández-García et al., 2010) and 80 newly derived] from the patients summarized in Table 1, and 12 from databases (10 from Spain and 2 from Brazil). We then conducted a focused analysis on these targeted CRF47_BF strains (1,377 aligned bp).
In both analyses, we aligned the sequence data using MAFFT (Katoh and Standley, 2013) with the FFT-NS-2 progressive alignment approach, since these sequences are relatively similar. Prior to subsequent phylogenetic and phylodynamic analyses, we checked that all sequences showed mosaic structures coincident with CRF47_BF through two procedures: (1) bootscan analyses, and (2) separate phylogenetic trees of the B and F1 segments previously defined for CRF47_BF (Fernández-García et al., 2010), including B and F1 subtype references, to ensure that the subtype assignment of each segment was identical for all sequences. Phylogenetic analyses were conducted using maximum likelihood (Felsenstein, 1981; Posada and Crandall, 2021) as implemented in RAxML (Kozlov et al., 2019) via the CIPRES web service (Miller et al., 2012). The phylogenetic analyses utilized the best-fit model of evolution (Posada and Crandall, 1998) as determined by ModelTest-NG (Darriba et al., 2020). Phylogenetic analyses were also done using a Bayesian approach as implemented in MrBayes 3.2 (Ronquist et al., 2012) with integrated model selection, 10 million MCMC generations, and codon partitioning. Confidence in the resulting phylogenetic estimates was assessed using the bootstrap approach (Felsenstein, 1985) with 1,000 pseudoreplicates for the maximum-likelihood analyses, and with posterior probabilities (pP) in the Bayesian framework. Phylogenetic trees, as well as the mapping of epidemiological characters along the phylogeny, were visualized with iTOL (Letunic and Bork, 2019). We applied BEAST2 (Bouckaert et al., 2014) to the CRF47_BF dataset to estimate a chronogram and the phylodynamic history of CRF47_BF. First, we validated the existence of temporal signal in the dataset with TempEst v1.5.3 (Rambaut et al., 2016), which determines the correlation of genetic divergence among sequences (measured as root-to-tip distance) with time. For the BEAST2 analysis we ran 10 million generations with two codon partitions (1st + 2nd, and 3rd positions), used an uncorrelated log-normal relaxed molecular clock (initial ucld.mean = 1.0 and initial ucld.stdev = 0.333), estimated base frequencies, and used the HKY + G evolution model. The input file was created using BEAUti. Past population dynamics were estimated via Skygrid analysis (Hill and Baele, 2019) using a coalescent Bayesian Skygrid tree prior. We used Tracer (Rambaut et al., 2018) to verify convergence and to visualize the Skygrid plot. We compare the inferred effective population size of the CRF47_BF population in Spain to the proportion of CRF47_BF diagnoses over time across the same study regions and time period.
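For readers who wish to reproduce the maximum-likelihood side of this workflow, the following is a minimal sketch driving the same command-line tools (MAFFT, ModelTest-NG, RAxML-NG) from Python. The file names are hypothetical, and the GTR+G model string is only a placeholder standing in for whichever model ModelTest-NG actually selects.

```python
# A minimal sketch of the ML pipeline described above, wrapping the same
# command-line tools. File names and the final model string are assumptions.
import subprocess

def run(cmd, **kwargs):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

# 1) Progressive alignment; plain mafft defaults to the FFT-NS-2 strategy.
with open("crf47_aligned.fasta", "w") as out:
    run(["mafft", "CRF47_BF.fasta"], stdout=out)

# 2) Model selection on the alignment (nucleotide data).
run(["modeltest-ng", "-i", "crf47_aligned.fasta", "-d", "nt"])

# 3) ML tree search plus 1,000 bootstrap pseudoreplicates in one call,
#    substituting whichever model ModelTest-NG ranked best (GTR+G is a
#    placeholder here).
run(["raxml-ng", "--all",
     "--msa", "crf47_aligned.fasta",
     "--model", "GTR+G",
     "--bs-trees", "1000",
     "--seed", "42"])
```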
Finally, known drug resistance mutations were identified in the focused CRF47_BF data using the Stanford HIV Drug Resistance Database's HIVdb v9.0 program (Tang et al., 2012).
Statistical Analyses
Correlations between cluster membership and epidemiological data were analyzed with Fisher's exact test.
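As an illustration, each such test reduces to a 2×2 contingency table per cluster-region pair; the sketch below uses SciPy, with invented placeholder counts rather than the study's actual table.

```python
# Hedged illustration of the cluster-vs-region association test: a 2x2
# Fisher's exact test of membership in cluster I against sampling in the
# Basque Country. The counts below are invented placeholders.
from scipy.stats import fisher_exact

#                     Basque Country   other regions
table = [[24, 5],   # in cluster I
         [14, 44]]  # not in cluster I

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```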
Epidemiology and Sequences
We collected samples and epidemiological data from 87 patients throughout eight different regions of Spain (Basque Country, Navarre, Galicia, Aragon, Comunitat Valenciana, Madrid, Castilla-La Mancha, and Castilla y León) (Figure 1). Collections were made from 2007 to 2021. Males accounted for 78% of the individuals with CRF47_BF in our study, and 56% of individuals reported transmission via heterosexual contact (61% considering only individuals with available data on transmission route) (Table 1). The Spanish region with the highest proportion of the CRF47_BF variant in our data set was the Basque Country with 44% of the cases, while Navarre was the next highest (18% of the cases) (see Table 1 for the number of new CRF47_BF sequences and total CRF47_BF sequences per region, and Figure 1 for the total number of analyzed HIV-1 sequences and the prevalence of CRF47_BF among new HIV-1 diagnoses in each region in the sampling periods). Most samples were collected shortly after HIV diagnosis. Patients received ARV therapy after sample collection.
Phylogenetics
The first phylogenetic analysis was a maximum-likelihood estimate of the relationships amongst the CRFx_BFs, including HIV-1 M subtypes as outgroup taxa and subtypes B, F, and the CRFx_BFs as ingroup taxa. Our RAxML tree depicted a monophyletic cluster of the subtype B, F, and CRF_BF sequences relative to the other HIV-1 subtypes (Supplementary Figure 2), but one also including subtype D. The backbone structure of the CRF phylogenetic relationships was weakly supported (<70% bootstrap support, indicated by dashed lines), which is not particularly surprising given the difficulty of representing the evolutionary histories of recombinant HIV-1 forms as bifurcating trees (Posada and Crandall, 2001, 2002). Many of the CRFx_BF forms cluster in strongly supported monophyletic groups themselves (e.g., CRF40_BF, CRF72_BF, CRF75_BF, CRF90_BF, CRF89_BF, etc.), including our target group of CRF47_BF sequences. Many of the other CRFs form weakly supported monophyletic groups (e.g., CRF70_BF, CRF46_BF, CRF38_BF, etc.) and a few form non-monophyletic groupings (e.g., CRF66_BF and CRF71_BF). The subtype B sequences cluster together within the CRFx_BF clade, with both a cluster of subtype D and the CRF28_BF sequence nested within this subtype B cluster. Nevertheless, the target group for this study, the CRF47_BF sequences, clearly forms a monophyletic group, suggesting independent evolution, and is a sister group to the CRF44_BF clade.
The Bayesian estimated phylogeny for the CRF47_BF sequences shows a monophyletic grouping of the sequences from Spain (Figure 2) with the two sequences from Brazil (KJ849798 and JQ238096) branching basally. Within the Spanish cluster, there are three strongly supported clusters, comprising 29 (cluster I), 17 (cluster II), and 13 (cluster III) viruses, respectively, which are associated with the Basque Country (p = 0.0002), Navarre (p = 0.0001), and Aragon (p = 0.0002), respectively. This is indicative of a single introduction of CRF47_BF into Spain with subsequent spread throughout the country and point introductions with subsequent expansion in different regions. The mixing of patient gender throughout the resulting phylogeny supports the epidemiological data suggesting predominantly heterosexual transmission among patients. We also found that cluster II, associated with Navarre, was associated with men who have sex with men (MSM) (p = 0.0388). In this cluster, 14 of 15 individuals with known gender are men.
The bootscan analysis for recombination and separate trees of B and F segments suggest that all the target sequences within the CRF47_BF analyses presented here share the same recombination pattern (as outlined at the Los Alamos HIV Database) (Figure 3 and Supplementary Figure 3). Thus, while recombination can significantly impact phylogenetic interpretations (certainly, for the overall tree presented in Supplementary Figure 2), it does not seem to be differentially impacting analyses of our targeted CRF47_BF sequences.
Based on the sample dates, we grouped the samples into four temporal categories of recency (days between diagnosis date and the current date: >3,000, 2,000-3,000, 1,000-2,000, and <1,000 days) for ease of visualizing time over the phylogeny and testing for temporal clustering. Thus, the greater the value, the closer the sample is to the most recent common ancestor, i.e., the origin of CRF47_BF. Note that these categories correspond well to the observed branch lengths, with samples <1,000 days from diagnosis having longer branches from the root to the tips, and samples >3,000 days having shorter and more basal branches in the phylogram. No temporal clustering was observed, as the different time categories were distributed throughout the CRF47_BF phylogeny (Figure 2).
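The binning itself is straightforward; a minimal sketch follows, with hypothetical tip names and day counts (only P1942 is a sample name that appears in this study, and its value here is invented).

```python
# A small sketch of the recency binning used for visualization: days between
# diagnosis and the present are mapped onto the four categories named above.
def recency_category(days_from_current: int) -> str:
    if days_from_current > 3000:
        return ">3,000"
    elif days_from_current > 2000:
        return "2,000-3,000"
    elif days_from_current > 1000:
        return "1,000-2,000"
    else:
        return "<1,000"

# Hypothetical tip labels paired with invented days since diagnosis.
samples = {"P1942": 4350, "P2210": 2400, "P2388": 760}
for tip, days in samples.items():
    print(tip, "->", recency_category(days))
```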
Analysis of Drug Resistance Mutations
To identify drug resistance mutations in the CRF47_BF viruses, we analyzed the sequences with the Stanford HIV Drug Resistance Database's HIVdb program (Tang et al., 2012). We found ARV drug resistance mutations in five patients: M184V or M184I mutations of resistance to nucleoside reverse transcriptase inhibitors (NRTIs) in three samples; a K103N mutation of resistance to non-nucleoside reverse transcriptase inhibitors (NNRTIs) plus a K65N mutation of resistance to NRTIs in one sample; and an E138A mutation associated with low-level resistance to the NNRTI rilpivirine in one patient. Only one of these patients, with the M184I mutation, was ARV drug-experienced.
FIGURE 3 | Bootscan analyses of PR-RT sequences of CRF47_BF viruses. Simplot v3.5 (Lole et al., 1999) was used for the analyses. Twelve representative profiles are displayed. Names of viruses, with GenBank accessions, are shown above each bootscan plot. P1942 was included as the CRF47_BF reference. A reconstructed B-F1 ancestral sequence was used as the outgroup. The horizontal axis represents the position from nucleotide 1 of protease, and the vertical axis represents bootstrap values supporting clustering with references.
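As a minimal illustration of this kind of resistance screen, the sketch below matches per-patient mutation lists against a small lookup table restricted to the mutations reported above; the real analysis was done with the Stanford HIVdb program, and the patient data here are invented.

```python
# Minimal sketch of screening RT mutation lists against known resistance
# positions; the lookup table contains only the mutations reported in this
# study, and the per-patient lists are hypothetical.
DRM_CLASSES = {
    "M184V": "NRTI", "M184I": "NRTI", "K65N": "NRTI",
    "K103N": "NNRTI", "E138A": "NNRTI (rilpivirine, low-level)",
}

def screen_drms(observed_mutations):
    """Return the subset of observed RT mutations with known resistance."""
    return {m: DRM_CLASSES[m] for m in observed_mutations if m in DRM_CLASSES}

patients = {"patient_A": ["M184I", "V179D"], "patient_B": ["K103N", "K65N"]}
for pid, muts in patients.items():
    print(pid, screen_drms(muts))
```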
Phylodynamics
Our TempEst analysis determined that there was adequate temporal signal in the dataset (R² = 0.5051). With time-stamped sequence data, we performed a Bayesian Skygrid coalescent analysis to estimate the historical population dynamics (Hill and Baele, 2019) of the CRF47_BF variants throughout Spain. Time labels (tip dates) were determined by the date of sample collection (ranging from 2007 to 2021). Our analysis supports a fairly dynamic population history of CRF47_BF in Spain over the last 15 years, with an initial increase in population size, a subsequent increase from 2011 to 2015, and a leveling off more recently, but with seemingly increasing variance (Figure 4). This fluctuation in the effective population size of CRF47_BF in Spain is not as dynamic as the percentage of CRF47_BF infections among new HIV diagnoses, which fluctuates considerably over this same time period (Figure 4), but it shows similar overall trends. The average effective population size was estimated to be 155, with a mean substitution rate of 1.8128 × 10⁻³ [95% highest posterior density (HPD) interval: 1.3956 × 10⁻³ to 2.2548 × 10⁻³]. Using BEAST, we estimated a chronogram to determine the time of origin of the CRF47_BF clade as well as the timing of the introduction of CRF47_BF viruses to Spain (Figure 5). We estimated that the CRF47_BF clade originated in Brazil (pP = 1.0) around 1999-2000 (95% HPD interval between 1994 and 2003), and we timed the introduction of CRF47_BF to Spain (pP = 0.99) to 2002-2003 (95% HPD interval between 2000 and 2004) (Figure 5). Similarly, viral strains seem to have entered once and spread through the Spanish regions of the Basque Country (cluster I) (pP = 1.0), Navarre (cluster II) (pP = 1.0), and Aragon (cluster III) (pP = 1.0) between 2009 and 2012 (Figure 5). These analyses hence suggest that CRF47_BF was probably circulating in Spain for about 8 years before it was identified through DNA sequencing, but clearly at a relatively low frequency. Given the sampling of CRF47_BF sequences, it appears that the introduction of this recombinant form to Spain was from Brazil, supported by very high posterior probabilities (pP = 1.00).
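The TempEst-style temporal-signal check amounts to a linear regression of root-to-tip distance on sampling date; a hedged sketch follows, with invented placeholder values (in practice the distances would come from a rooted ML tree).

```python
# Hedged sketch of the temporal-signal check: regress root-to-tip distance on
# sampling date and report R^2. Dates and distances are invented placeholders.
import numpy as np

dates = np.array([2007.5, 2010.2, 2013.8, 2016.1, 2019.4, 2021.0])
root_to_tip = np.array([0.012, 0.017, 0.024, 0.027, 0.034, 0.037])

slope, intercept = np.polyfit(dates, root_to_tip, 1)
r_squared = np.corrcoef(dates, root_to_tip)[0, 1] ** 2

print(f"rate ~ {slope:.4e} subs/site/year, R^2 = {r_squared:.3f}")
print(f"x-intercept (rough root age) ~ {-intercept / slope:.1f}")
```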
DISCUSSION
The HIV-1 CRF47_BF was first reported in 2010, detected in nine samples collected in Spain in 2007-2009. Samples have subsequently been collected as this novel variant has spread throughout the country. Our phylogenetic analysis shows that isolates of CRF47_BF form a strongly supported monophyletic group (sharing a most recent common ancestor), distinct from other CRFx_BF sequences, subtype B, sub-subtype F2, and other Group M subtypes. A focused phylogenetic analysis of the CRF47_BF sequences shows a clear single origin in Brazil around 1999-2000, with subsequent transmission and rapid spread throughout Spain beginning around 2002-2003. Three strongly supported clusters, comprising a majority of viruses and associated with the regions of Basque Country, Navarre, and Aragon, were identified; this suggests that after a single introduction in Spain, CRF47_BF has spread mainly through localized point introductions and subsequent spread in different geographical areas. CRF47_BF is predominant in males (78%), with predominantly heterosexual transmission (56% of the total, 61% of those with data on transmission mode). The phylodynamic analysis and the percentage of CRF47_BF among new HIV diagnoses both support a fluctuating population size of CRF47_BF over the last 15 years, with periods of expansion and contraction, suggesting that continued monitoring of this novel variant will be important to track its spread.
It is interesting to note that one cluster of 17 individuals, associated with Navarre, in which 14 of 15 individuals with available data were male, was significantly associated (p = 0.0388) with transmission among MSM. Although three men were reported to be heterosexual, considering the great male preponderance in the cluster, it is probable that they are non-disclosed MSM (Hué et al., 2014; Ragonnet-Cronin et al., 2018). The identification of an MSM-associated cluster within the CRF47_BF clade may be indicative of the diffusion of CRF47_BF from a heterosexually driven network to an MSM-driven network. A similar phenomenon has been observed for the two other CRFs of South American origin identified by us in Spain: CRF66_BF (Bacqué et al., 2021) and CRF89_BF. Such a phenomenon may reflect the migration of these CRFs from countries where heterosexual transmission is predominant to Spain, where most currently expanding HIV-1 clusters are associated with MSM (Patiño-Galindo et al., 2017; Gil et al., 2022). It should be pointed out, however, that outside of the Navarre cluster, the male:female ratio was 3.2:1, which contrasts with the 2.4:1 ratio of self-declared heterosexual men to MSM (decreasing to 1.5:1 if all men with non-specified sexual transmission were MSM). This discrepancy could also be explained by the presence of non-disclosed MSM among self-declared heterosexual men outside of the Navarre cluster, and indicates that epidemiological data on transmission route based on self-reported sexual behaviors should be interpreted with caution.
The recent expansion in Spain of CRF47_BF, whose Brazilian origin is first reported here, is one more example of the increasing relationship between the South American and European HIV-1 epidemics, also reflected in the propagation in Europe of other CRFs (12_BF, 17_BF, 60_BC, 66_BF, and 89_BF) (Fabeni et al., 2015, 2020; Bacqué et al., 2021; Delgado et al., 2021) and of variants of subtypes F1 and C (Tovanabutra et al., 2003; de Oliveira et al., 2010; Thomson et al., 2012; Lai et al., 2014; Carvalho et al., 2015; Delgado et al., 2015; Vinken et al., 2019) of South American ancestry, which probably derives from increasing migratory flows from South America to Europe.
The repeated introduction and expansion in Spain of multiple CRFs and non-B subtypes (Delgado et al., 2015, 2019; Patiño-Galindo et al., 2017; González-Domenech et al., 2018; Kostaki et al., 2019) justifies the establishment of an HIV-1 molecular epidemiological surveillance system aimed at promptly detecting the propagation of such variants, as well as rapidly expanding clusters, which could provide real-time information on changes in the genetic composition and dynamics of the HIV-1 epidemic to guide the implementation of preventive public health interventions (Paraskevis et al., 2016; German et al., 2017; Oster et al., 2018). Nevertheless, phylogenetic analyses of HIV sequence data should be treated with caution, as recombination can impact phylogenetic inference (Schierup and Hein, 2000; Posada and Crandall, 2002), suggesting that network approaches might be better suited to representing such data (Clement et al., 2000). However, we focus on a specific set of CRF47_BF sequences with a shared mosaic structure, and therefore our results should be robust to the impacts of recombination (see Figure 3).
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/genbank/, OK148895-OK148974.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Committee of Research Ethics of Instituto de Salud Carlos III, Majadahonda, Madrid, Spain (report numbers CEI PI 38_2016-v3 and CEI PI 31_2019-v5). Written informed consent for participation was not required for this study, as it used samples and data collected as part of routine clinical practice and patients' data were anonymized without retaining data allowing individual identification.
AUTHOR CONTRIBUTIONS
MT, ED, and MP-L conceived of the project. ED collected sequence data from the samples. GH, KC, MP-L, MT, and ED conducted the data analyses. HG performed the data curation. SB, VM, MS, JC-G, and EG-B performed the experimental work. The members of the Spanish Group for the Study of New HIV Diagnoses collected the samples and clinical and epidemiological data for the study. KC, GH, and MP-L wrote the original draft of the manuscript. MT, ED, and HG edited the manuscript. All authors read and approved the manuscript.
"Medicine",
"Biology"
] |
A NEW MASTER PLAN FOR HARRAN UNIVERSITY BASED ON GEODESIGN
Harran hosted the historical Harran University, which is considered to be the oldest university in the world. The Department of Geomatics of the modern Harran University has been charged with the design of a new master plan using Geodesign technology. Carl Steinitz developed a complete framework for doing Geodesign as applied to regional landscape studies. In this project, "Geodesign Hub", an online software platform for collaborative Geodesign, was selected as the main tool. According to the Geodesign concept, data collection had to be limited to supporting the evaluation of the ten selected systems. The deployment of Unmanned Aerial Systems (UAS) has been necessary in order to collect data with the required accuracy for such a vast area. Currently, work on the third pass of the Geodesign framework is continuing.
INTRODUCTION
Harran, today a small district center in Sanliurfa Province, was founded about 5,000 years ago in the cradle of human civilization, the Fertile Crescent. It hosted the historical Harran University, which is considered to be the oldest university in the world. This city was not simply created somewhere by chance and then happened to prosper. On the contrary, the selection of this location gives proof that even at that time, man used the concept of Geodesign. During this selection process, the following criteria most probably were of importance: 1. Geopolitical location: It lay between the superpowers of the West (Greeks, Romans) and the East (Assyrians, Persia).
2. Transportation: It lay at the crossroads of two ancient main roads: the West-East axis from the Mediterranean to the plains of the Tigris, and the North-South axis running to inner Anatolia.
In 1995, Carl Steinitz, who had been working with his colleagues and students over a period of approximately 30 years at the Harvard Graduate School of Design, developed a complete framework for doing Geodesign as applied to regional landscape studies. This framework, originally called the Framework for Landscape Planning and later renamed the Framework for Geodesign (Steinitz, 2012), advocates the use of six models to describe the overall planning (Geodesign) process, as shown in figure 3. In "Framework for Geodesign" the author delineates the conceptual framework for doing Geodesign; it is considered the standard book for both practitioners and academics.
3.
Every organization, large or small, public or private, does three things: it gets and manages information (data), analyzes or assesses that information with respect to some purpose (analysis), and, based on that information and those assessments, creates or re-creates goods and/or services (design). It is, in fact, the creation or re-creation of goods and/or services that gives most organizations their reason for being. If GIS is used for this creation, then we can actually speak of Geodesign.
In his book "Geodesign: Case Studies in Regional and Urban Planning", McElvaney (McElvaney, 2012) lists seven key characteristics of Geodesign. However, during the ongoing discussions of our project, the following three characteristics were determined to be the most important ones: 1. Geodesign provides fast feedback on changes to a plan, making their impacts immediately visible. 2. Geodesign supports a participatory approach, giving all stakeholders a voice in the planning of their future. 3. Geodesign uses an intuitive GUI that allows the active participation of a multidisciplinary project team and decision-makers at the same time.
CURRENT STATUS
In many GIS-based projects, data collection is the main topic of the first project phase. Very often, big databases consisting of hundreds of layers with outstanding accuracy are created without a clear idea of who will ever use them, or for what. The early phases of our project were instead dominated by discussions about the reasons for this study and about which methodology would support the finding of reasonable solutions.
According to the Geodesign methodology developed by Steinitz, a consecutive pass through the six models should be undertaken three times. The first pass serves to clarify the reason for the study. It seemed obvious that, since a new master plan was underway anyway, it should be done properly using a scientific method such as Geodesign. However, during our discussions it became clear that we wanted much more. In particular, the following reasons could be identified: 1. In contrast to old master plans and the one already underway, our master plan would deal not only with the development of the center of Osmanbey Campus (ca. 150 ha) but with the whole area stretching over more than 2,800 ha.
2. We wanted a master plan that is worth more than the paper it is written on. That meant that only the active participation of the decision-makers would give a real chance that the master plan would ever be implemented.
3. Changes always occur. Therefore, we wanted to create not a static document but rather a dynamic system that could accommodate changes if required.
During the second pass, according to Steinitz, the exact methodology to be used for the project has to be set up.
There is nothing like "the" Geodesign methodology; it has to be worked out for each project separately. After testing different systems, we decided to go with "Geodesign Hub", founded by Ballal (Nyerges, 2016). Geodesign Hub is online software for collaborative Geodesign. It enables teams to create and share concepts, to design collaboratively, and to receive change assessments instantly, all in a highly synergetic, efficient, and easy-to-use environment.
The starting point in Geodesign Hub is the set-up of evaluation systems. These systems shall answer the question of whether the current study area is working well or not. Up to ten evaluation systems are allowed. This idea follows the logic that before you can start thinking about change, you have to find out how well your systems are currently working. We decided to focus on ten such systems. Accordingly, data collection had to be limited to supporting the evaluation of these systems. Among other things, the deployment of Unmanned Aerial Systems (UAS) seemed necessary in order to collect data with the required accuracy for such a vast area. For collecting data in the already developed area, a multi-copter-based system, and for the bigger undeveloped part of the campus, a fixed-wing-based system were acquired; training was conducted and testing carried out. A simple sketch of how such an evaluation map could be derived from raster layers is given below.
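The sketch below is a toy illustration, not the project's actual workflow: one evaluation system is expressed as a classified raster of the kind Geodesign Hub consumes, using two invented input layers and assumed thresholds.

```python
# Toy sketch of deriving one evaluation map from raster layers; the input
# layers are random placeholders and the class thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
slope_pct = rng.uniform(0, 30, size=(100, 100))         # placeholder slope layer
dist_to_road_m = rng.uniform(0, 2000, size=(100, 100))  # placeholder distance layer

# 2 = suitable, 1 = capable, 0 = not appropriate (thresholds are assumptions)
evaluation = np.zeros(slope_pct.shape, dtype=np.uint8)
evaluation[(slope_pct < 10) & (dist_to_road_m < 500)] = 2
evaluation[(slope_pct < 20) & (evaluation == 0)] = 1

for cls, name in [(2, "suitable"), (1, "capable"), (0, "not appropriate")]:
    share = (evaluation == cls).mean() * 100
    print(f"{name}: {share:.1f}% of cells")
```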
CONCLUSION
Harran University decided to create a new master plan for its main campus, Osmanbey, using the Geodesign methodology. Using this methodology, the master plan would satisfy three criteria: 1) creation based on a participatory approach, 2) development of a user-friendly GUI allowing decision-makers to be actively involved, and 3) building of a dynamic system that allows the easy integration of future changes. So far, passes 1 (definition of scope) and 2 (definition of the exact methodology) according to the Geodesign methodology of Steinitz have been finished. Work on pass 3 (implementation of the methodology) is continuing.
Figure 5. Point cloud of GAP YENEV derived from a multi-copter UAS
"Geography",
"Engineering"
] |
Understanding the selective realist defence against the PMI
One of the popular realist responses to the pessimistic meta-induction (PMI) is the ‘selective’ move, where a realist only commits to the ‘working posits’ of a successful theory, and withholds commitment to ‘idle posits’. Antirealists often criticise selective realists for not being able to articulate exactly what is meant by ‘working’ and/or not being able to identify the working posits except in hindsight. This paper aims to establish two results: (i) sometimes a proposition is, in an important sense, ‘doing work’, and yet does not warrant realist commitment, and (ii) the realist will be able to respond to PMI-style historical challenges if she can merely show that certain selected posits do not require realist commitment (ignoring the question of which posits do). These two results act to significantly adjust the dialectic vis-à-vis PMI-style challenges to selective realism.
Introduction
The so-called 'pessimistic meta-induction' (PMI) challenge to scientific realism has never really gone away since it was vividly articulated by Laudan (1981). Since 1981 the details have changed, but the overall spirit remains the same. Simply put, the history of science still does pose a problem for scientific realists who want to make some sort of inference from the explanatory and/or predictive success of scientific theories to the approximate truth¹ of scientific theories, or theory parts. And it even poses a problem for the 'selective' realists, who make a distinction between the 'working' parts of a theory, which are said to warrant realist commitment, and the 'idle' parts of a theory.² Selective realism was actually designed specifically to overcome the PMI. But there remain historical examples where what very much seem to be working posits (in the derivation of a successful novel prediction, say) are definitely not approximately true (whatever your theory of 'approximate truth'). In fact, some of the examples on Laudan's original list are still relevant here (caloric, phlogiston, the luminiferous ether). But a number of new examples have recently been introduced to the literature, put forward specifically as challenges to selective realism, including: (i) Kirchhoff's theory of diffraction (Saatsi and Vickers 2011), (ii) Sommerfeld's prediction of the hydrogen fine structure (Vickers 2012), (iii) Dirac's prediction of the positron (Pashby 2012), and (iv) Ptolemaic astronomy (Díez and Carman 2015); further cases are discussed by Vickers (2013) and Lyons (2016). In recent years much of the discussion has concerned how (and whether) the selective realist can define 'working', such that (a) the definition is properly motivated (not ad hoc), and (b) the realist can use the definition to rebut the historical challenges.³ The thought, usually, is that the realist has to be able to define 'working', otherwise the position is empty, since the realist can't tell us what we should be epistemically committed to. For example, Stanford (2006, pp. 173-180; 2009, pp. 385-387) has complained that if the realist can only identify the working posits in hindsight (that is, after we already have a successor theory in hand), then realism is bankrupt, since the whole point was to tell us which parts of current science we should/shouldn't put our epistemic trust in. However, in much of this literature two distinct realist projects have been conflated: (i) the project of responding to the historical challenge, and (ii) the project of explaining what realists should commit to. These projects are not entirely separate, of course, but one must bear in mind that they are definitely not one and the same project. Here, as elsewhere in philosophy, a defence against a challenge (in this case the historical challenge to realism) is not necessarily a positive argument for the view, nor does it have to be.
In this paper I explain what the realist really has to do to respond to contemporary PMI-style objections, emphasising what the realist does not have to do. This is articulated as a defensive strategy for the selective realist in Sect. 2. Section 3 then considers a possible 'disjunction problem' which arises in this context. In Sect. 4 I consider the possibility that, even if we can't prospectively identify what warrants realist commitment in our best contemporary theories, we can prospectively identify at least some elements which do not merit realist commitment. In this way it may be possible to make a prediction concerning what will not be retained in future science.
² The 'working/idle' terminology is now common, and I will adopt it here in a broad sense to refer to what is common to several different contemporary 'selective' realisms. These include Kitcher's distinction between 'working posits' and 'presuppositional posits' (1993), Psillos's 'divide et impera' distinction between 'idle' and 'essentially contributing constituents' (1999), Saatsi's focus on 'success-fuelling properties' (2005), and Chakravartty's 'semirealism' distinction between 'detection properties' and 'auxiliary properties' (2007). What these all share is the idea that only certain parts/aspects of a scientific theory are confirmed by the theory's successes, and thus merit realist commitment.
³ See Peters (2014) for discussion and references.
The selective defence against the PMI
This paper will consider one particularly powerful PMI-style objection to scientific realism, consisting in the claim that there are several examples in the history of science of theories which achieved very significant (predictive) success, but where the working posits are not (all) approximately true. In other words (so the claim goes) there are examples in the history of science of derivations of novel predictions, where at least some of the hypotheses which feature in/fuelled the derivation are definitely not approximately true (on any reasonable definition of 'approximate truth'). The targets of this challenge are most contemporary scientific realists, including those that advocate a selective realist commitment. Even one such example from the history of science can be a real thorn in the side for most realists, who typically think that novel predictive success is a very good indication that the theory's working posits are approximately true. But if there are several examples (and there seem to be) then this argument also speaks against more cautious realists, who claim that novel predictive success is quite a good indication, or probably means that the theory's working posits are approximately true.
Let's think about how the dialectic works here in a little more detail. Since the antirealist is putting forward a challenge to the realist, there is a significant burden on the antirealist to demonstrate the force of the objection. The antirealist needs to present a case from the history of science, identifying a success which is sufficiently impressive for realist commitment (let's assume a novel predictive success). Then the antirealist needs to reconstruct the derivation of that prediction, identifying assumptions which (at least apparently) merit realist commitment, given their role in the derivation. And then finally the antirealist needs to show that at least one of those assumptions is not approximately true when compared with current scientific thinking. Now this is a lot to achieve, and so the realist has plenty of options for responses. The main options for the realist are as follows: (i) question whether the success identified is really success enough for realist commitment; (ii) question whether the reconstruction of the derivation is fair to the history, or whether it is somehow biased, or just one possible reconstruction; (iii) question whether the specific 'working posits' identified by the antirealist as not approximately true (a) really do merit realist commitment, and (b) really are not even approximately true in light of current scientific thinking.
It should be noted, of course, that these aren't just options for the realist when responding to an antirealist objection. They are also criteria the realist herself must consider very carefully when she opts to make a realist commitment to certain scientific assumptions, and they can work against her just as much as they can work for her. For example, when considering these criteria it may turn out that some part of science the realist intuitively really wanted to believe, or originally took for granted as 'getting at the truth', in fact turns out to be something the realist should not believe given her own realist position. So the criteria are very important vis-à-vis responding to antirealist threats, but they can also force realists to withdraw realist commitment from parts of science they really want to commit to (e.g. because, intuitively, there seems to be 'lots of evidence' for the scientific claims in question).

The focus of this paper is going to be criterion (iiia), questioning whether the specific assumptions identified by the antirealist really do merit realist commitment. It is here that much of the literature on this topic (including Psillos 1999; Lyons 2006; Vickers 2012; and Harker 2013) has conflated the realist's defence against the PMI and the positive project of the realist to identify what the realist should be committed to in a given case. Now, one way one might respond to an antirealist challenge is to invoke some theory of 'working posits', identifying which assumptions really are 'working' in the case in question, and then conclude that the posits identified by the antirealist are not included within the working posits. One of the most influential figures in the selective realist community is Stathis Psillos (e.g. Psillos 1999) and he is very easily interpreted as doing just this. Lyons (2006) introduces Psillos's divide et impera realism as a response to 'the historical challenge', shows that there are problems with Psillos's definition of 'the posits which really fuel the derivation' (which merit realist commitment according to Psillos), and concludes (p. 537) that "this sophisticated form of realism remains threatened by the historical argument that prompted it." Harker (2013), partly drawing on Lyons (2006), similarly fuses together both the realist's defence against PMI-style objections and the positive project of the realist. This leads him to state the following of the selective realist strategy: For the strategy to answer Laudan-style concerns […] the criteria we invoke to isolate those constituents of theories that are to be recommended for realist endorsement must render such constituents epistemically accessible (Harker 2013, Sect. 2).
This entails that to answer the historical challenge we need to have a convincing theory concerning how it is possible to identify the posits which warrant realist commitment. But this is incorrect. To answer the historical challenge the realist can do this, and prominent realists such as Psillos have done this, but the realist need not do this. Indeed, to develop a theory of which posits warrant realist commitment is to make the realist's task much harder than it has to be if all she wishes to do (for now) is respond to 'Laudan-style concerns'.
To respond, all the realist needs to do is show that the specific assumptions identified by the antirealist do not merit realist commitment. And she can do this without saying anything about how to identify the posits which do merit realist commitment. How is that possible? There is more than one answer here. The simplest case is when an assumption can just be eliminated without affecting the derivation in question. In this case the assumption is clearly idle vis-à-vis the success, and thus doesn't merit realist commitment. And this can be established without any real theory of what does merit realist commitment, simply by eliminating the posit in question and displaying the resultant derivation. However, such a case is going to be vanishingly rare. If an assumption is so obviously idle, then (a) scientists would usually have left it out in the first place, and (b) antirealists looking for a serious threat to selective realism would surely recognise that the assumption in question is not doing any work, such that the selective realist has an easy response to the challenge.
Much more serious is a case where a scientist has indeed used a posit in the derivation of a novel prediction, and that posit cannot be simply eliminated from the derivation without also eliminating the success. This means that there is a straightforward sense in which the assumption is doing work: the derivation doesn't go through without it. But at the same time the posit in question does not merit realist commitment, because it is not confirmed by the success. The reason is this: the posit in question is doing work in the derivation solely in virtue of the fact that it entails some other proposition, which itself is sufficient (when combined with the other assumptions in play) for that specific derivational step. In such a case it is this other proposition, loosely speaking 'contained within' the original proposition, which is (seems to be) really fuelling that particular step in the derivation. The bracketed 'seems to be' signals the fact that, again, the realist doesn't need to identify what really deserves realist commitment here. If all the realist wants to do, for now, is respond to the antirealist challenge, then all that matters is that the original proposition does not merit realist commitment. And this is the case even though there is an important sense in which it is not completely idle.

We can bring this sort of case to life with some examples, first a couple of toy examples and then a real case from the history of science. Consider first a situation where a doctor supposes you have the adenovirus (e.g. because the adenovirus is known to be widespread in the neighbourhood). That doctor might well use this assumption, along with other assumptions about the human immune system, to quite accurately predict how your symptoms will develop. But the doctor might be wrong about your having the adenovirus, and in addition the doctor's assumption is doing work in the sense that she reaches her conclusion making use of that assumption. Is this a case of miraculous success, then, since the doctor predicted correctly whilst making use of a false assumption?
Of course not: it's easy to make sense of how the doctor predicted correctly despite her mistake. The truth is you do have one of the cold viruses. And the doctor's reasoning only depended on her (implicit) belief that you have one of the cold viruses, a belief she is committed to in virtue of the fact that she believes that you have the adenovirus. Thus her false belief is doing work in her reasoning solely in virtue of the fact that it entails some other proposition, which itself is sufficient (when combined with the other assumptions in play) for the success. Another way to put it is this: the doctor was committed to what mattered here; she was mistaken only about redundant details, details which go beyond what was needed to make her prediction.
This particular way of identifying a proposition as not meriting realist commitment has been touched on in the literature, but it is not widely appreciated. For example, Saatsi (2005, p. 532) discusses a case concerning a crammed elevator which refuses to move. Saatsi is interested in explanations which contain false content which is 'surplus' in the sense that it is non-explanatory. But the case is also useful when trying to understand realist commitments in a case of predictive success. Suppose somebody predicts that an elevator will refuse to move by reasoning with the false assumption that the elevator load is 50 kg too heavy. One predicts successfully, since the reasoning depends only on the assumption that the load is too heavy-the belief that it is specifically 50 kg too heavy is redundant detail. Or to put it another way, the '50 kg too heavy' assumption does work in the derivation only in virtue of the fact that it entails another 'too heavy' assumption which itself is sufficient, when combined with the other assumptions in play, to reach the true prediction.
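The pattern at work in these examples can be put schematically. The following formulation is mine rather than a quotation from the literature, with Γ standing for the auxiliary assumptions in play and S for the successful prediction:

```latex
\[
\begin{aligned}
&(1)\quad \{P\} \cup \Gamma \vdash S && \text{($P$, with auxiliaries $\Gamma$, yields the successful prediction $S$)}\\
&(2)\quad P \vdash P^{*} && \text{($P$ entails a weaker proposition $P^{*}$)}\\
&(3)\quad \{P^{*}\} \cup \Gamma \vdash S && \text{($P^{*}$ already suffices for the derivation)}
\end{aligned}
\]
\]
```

When (1)-(3) hold, and P does its derivational work solely via (2), the success S gives us no reason to believe the surplus content of P over P*: the doctor's adenovirus diagnosis and the '50 kg too heavy' estimate are both instances of P here.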
Many in the community still assume that if an assumption is used in a derivation then the realist must make a commitment to it (must believe it to be approximately true). Lyons (2006) even describes the selective realist strategy as 'deployment realism' (following Kitcher 1993), terminology which strongly suggests that the 'deployed' (used) assumptions are the assumptions the realist must commit to. But this is not the case: the given toy examples show very clearly that, at least sometimes, realists should not make a realist commitment to all the assumptions employed within a derivation of a successful prediction. After all, it would be madness for anyone to believe the lift to be exactly 50 kg too heavy just because that assumption was used to reach a correct prediction.
We can illuminate the strategy further with a real case from the history of science. Take Bohr's prediction of the frequencies of the spectral lines of ionised helium. Vickers (2012) puts this forward as a possible counterexample to selective realism. It seems to fit the bill, since: (a) The prediction of the frequencies of the spectral lines of ionised helium was a novel predictive success, and was seen as extremely significant at the time. (b) Bohr's prediction came about by making direct use of his theory of the atom (which includes some assumptions which are definitely not approximately true).
However, Vickers (2012) presents a 'way out' for the selective scientific realist, by noting that one of the assumptions Bohr made, and used to reach his successful predictions, does not merit realist commitment. The assumption in question is as follows: H: The electron orbits the nucleus at specific, quantised energies, corresponding to only certain 'allowed' orbital trajectories.
This doesn't merit realist commitment, since it can be seen to do work within the relevant derivation solely in virtue of the fact that it entails another proposition, which itself is sufficient for the relevant step of the derivation. This other proposition is the following: H*: The electron can only occupy certain, specific, quantised energy states within the atom.
Bohr was committed to H* in virtue of his being committed to H. But it turns out that H* is sufficient for Bohr's derivation to go through. And, crucially for the selective realist, whilst H is certainly false (there are no 'trajectories'), H* is approximately true (by the lights of current scientific thought). This is enough to answer the antirealist's concern that one of the working posits was not approximately true. The realist might well accept that H is not approximately true, given the reference to orbital trajectories. The realist might also accept that H is 'doing work' within Bohr's derivation, in the sense that it certainly cannot be simply eliminated from the derivation without destroying the derivation, and thus the success. However, H does not merit realist commitment, given its relation to the success in question. There is an important sense in which it is not directly fuelling that success; rather it is only indirectly fuelling that success, in the sense that it does so via its relation to H*. If anything merits realist commitment here, it is H*, and not H.
The crucial step now is to note that the realist does not need to claim that H* merits realist commitment. All that matters to answering the challenge is to show that H does not merit realist commitment (at least, not in virtue of its relation to Bohr's prediction of the ionised helium spectral lines). Vickers (2012) actually muddies the waters here: he makes a distinction between (i) the assumptions Bohr used to reach his predictions, and (ii) the assumptions which were 'truly necessary' to generate the predictions (p. 10). But the words 'truly necessary' are a mistake; they belong in a discussion of what the realist should be committed to, not in a discussion of what the realist should not be committed to. And to answer the challenge the realist just doesn't need to claim that H* is 'truly necessary' for the derivation. Perhaps the derivation can go through with a still weaker assumption, such that it turns out that H* too fails to merit realist commitment. Perhaps only very abstract 'structure' truly merits realist commitment, as structural realists like to claim. But that can be left for another day. We are not here in the business of identifying realist commitments; we are in the business of showing that some specific assumption does not merit realist commitment. Because that is enough to answer the historical challenge.
A disjunction problem?
The toy examples above serve an important role in this paper. They serve to show by example just how ridiculous it would be to suggest that an assumption merits doxastic commitment, on 'no miracles' grounds, just because it was used to reach a successful prediction. A full answer to the question 'Why exactly is it ridiculous?' is harder to provide. Nonetheless, I do have a partial explanation to give: sometimes it is clear that a used assumption is not confirmed by the success it leads to because it is so clear that it did work to generate the success solely in virtue of the fact that it entails some other proposition which itself is sufficient for the derivational step in question. This is just a partial explication, of course, but hopefully it is explication enough to have persuasive force. Another worry is that, far from requiring further explication, it already fails as it stands because of the use of 'entailment' as a key part of the explication.
Entailment seems to work well for the examples given in the previous section: 'Dave has one of the cold viruses' is entailed by 'Dave has the adenovirus', 'The load is too heavy' is entailed by 'The load is 50 kg too heavy', and 'The electron can only occupy certain energy states' is entailed by 'The electron can only occupy certain orbital trajectories'. So why not entailment? Indeed, Vickers (2013, p. 198) considers precisely the sort of example under consideration here, and talks in terms of one proposition (the original proposition used within the derivation) 'containing within it' another proposition which can take the place of the original proposition in the derivation. Vickers (ibid.) then writes "By 'contain within it', the realist means simply that some weaker proposition can be inferred from the original proposition: so 'The passengers are too heavy' is contained within 'The passengers are 50 kg too heavy'." However, Vickers' account leads to some rather awkward results. In particular, it leads to a disjunction problem.
The basic problem is the Principle of Addition: any given proposition P entails PvQ for any arbitrary proposition Q. Now, as explained above, to respond to an historical challenge a realist only needs to show that some individual proposition P does not merit realist commitment. And Vickers' analysis seems to suggest that if the realist can find any proposition entailed by P which can take the place of P in the derivation, then that shows that P does not merit realist commitment. The worry with this is that P entails PvQ for any Q whatsoever, and so Q can be selected to make sure that PvQ can take the place of P in the derivation without affecting the derivation of the prediction in question (call the original prediction 'A'). Since PvQ is a disjunction one might worry that any resultant prediction would also be disjunctive, and so couldn't be the same as the original prediction, A, achieved by using P. But this isn't necessarily the case: if we use PvQ in place of P the final prediction could take the form AvA, and thus collapse to the original prediction A. This might well be very difficult to achieve in practice, and also not representative of serious science, or philosophy. But the point is that Vickers' analysis allows for this absurd response to an antirealist challenge, and that is unacceptable. It should be ruled out from the start.
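In outline, the worry can be put formally (again in my own notation):

```latex
\[
\begin{aligned}
&P \vdash P \lor Q && \text{(Addition: for \emph{any} proposition } Q\text{)}\\
&\{P \lor Q\} \cup \Gamma \vdash A \lor A && \text{(with $Q$ gerrymandered so that each disjunct delivers $A$)}\\
&A \lor A \equiv A && \text{(idempotence: the prediction collapses back to $A$)}
\end{aligned}
\]
```

So on the 'mere entailment' reading, every used posit P would come out as not meriting realist commitment, which trivialises the defence.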
One option here is for Vickers (2013) to retain his analysis but reject the Principle of Addition, perhaps by adopting a relevance logic. Certainly one could find support for this move in the literature. Weingartner (1993) notes that "[T]his principle [of addition] is the culprit of a lot of difficulties in different areas," and argues that it is responsible for "most of the well-known paradoxes in the theory of explanation, confirmation, law statements, disposition predicates, etc." His conclusion is that we ought to put limitations on the application of classical logic, especially "if we think of logical consequences drawn in science from assumptions or hypotheses" (p. 95).

More recently Strevens (2008, Sect. 3.61) encountered his own 'disjunction problem' in the context of developing a causal theory of explanation. And his ingenious attempts to get around the problem, making reference to 'causal contiguity at the fundamental level', are controversial (see Strevens et al. 2012 for discussion). Perhaps Strevens too could consider adopting a relevance logic, or (following Weingartner) adopting classical logic but limiting its application. This will look like an ad hoc solution to many, but recent work on logical pluralism encourages us to think of logic as a tool, not a truth (cf. Beall and Restall 2006), in which case we need not be bound by every rule of classical logic in every conceivable context.

I am sympathetic to this line of argument, but in fact it isn't necessary to mess with the logic. There is a crucial difference between the (brief) analysis given by Vickers (2013) and the analysis given here, such that the noted disjunction problem affects only Vickers (2013). The key to avoiding the problem lies in the particular wording: P does not merit realist commitment whenever P is doing work in the derivation solely in virtue of the fact that it entails some other proposition which itself is sufficient, when combined with the other assumptions in play, for the relevant derivational step. To see this, suppose that a given P is swapped in the derivation for PvQ, for some random proposition Q. Certainly it is the case that P entails PvQ, and it might be the case (if Q is carefully selected) that PvQ itself is sufficient for the relevant derivational step. But what's missing here is the condition that P is doing work in the original derivation in virtue of the fact that it entails PvQ. Remember that Q has been carefully selected to ensure that the final prediction is unaffected. Most probably, it didn't feature at all in the original derivation or indeed the relevant history of science. How, then, could it make sense to say that the work done by P in the original derivation is work done in virtue of the fact that P entails PvQ? Entailment of PvQ from P is not enough to meet this condition: we need something more than mere entailment. One option might be to turn to entailment which is metaphysically necessary. But to pursue this further would open up a huge can of worms in metaphysics and logic, e.g. concerning the concept of a grounding relation (see e.g. Raven 2015). All that matters for present purposes is that it does not make sense to say that a given proposition P does work in a derivation in virtue of the fact that it entails PvQ, where Q is selected specifically to leave the final prediction unaffected and did not feature at all in the relevant history of science.
Prospectively identifying posits which do not merit realist commitment

Even if we can't prospectively identify what warrants realist commitment in our best contemporary theories, we may be able to prospectively identify at least some of the posits which don't merit realist commitment even though they are working posits, in the sense noted above. And this is much more feasible, I submit, simply because it is so much easier to identify something a realist should not commit to than it is to identify something a realist should commit to.
How would we go about prospectively identifying at least some posits which, despite being working posits, are not confirmed by the success, such that they do not merit realist commitment as a consequence of that success? Well, we can simply go through the assumptions in a derivation one at a time, and consider the following question: is this assumption doing work in the derivation solely in virtue of the fact that it entails some other proposition which is itself sufficient, when combined with the other assumptions in play, for the success?[9]

Consider how this could have worked out in the Bohr case. How might it have been possible to see that his hypotheses concerning electron orbits were not confirmed by the success? Well, Bohr's assumption H concerning quantised electron trajectories entails H*, which merely concerns quantised electron energies. Bohr was certainly in a position to notice this entailment relation. And he was also, of course, in a position to notice that H* is sufficient for the success: H* can take the place of H in the derivation without affecting the result. Was Bohr also in a position to note that H does work in his original derivation solely in virtue of the fact that it entails H*? It would seem so, since the fact that H* is sufficient for the result shows that the reference to trajectories is redundant vis-à-vis the success. If this is right we have the prospective identification of a posit which is not confirmed by the success it leads to, and this might even lead to a prediction concerning the future development of the relevant science, namely: reference to the orbital trajectories of electrons will not be retained in the successor theory. And this prediction of course would have turned out to be true!

We are perhaps getting ahead of ourselves. Putting ourselves in Bohr's shoes for a moment, it may have been inconceivable at the time to think that electrons could have quantised energies without having associated quantised orbital trajectories (cf. Stanford 2006, p. 171). But this wouldn't have stopped Bohr noticing that his references to electron orbits were redundant vis-à-vis his predictive successes. Or, to put it another way, that his references to electron orbits were not confirmed by the success. Naturally in such circumstances one might still want to believe in quantised electron orbits on the grounds that these are, apparently, metaphysically necessitated by the quantised electron energies and other relevant assumptions. But at least Bohr could have separated two importantly different motivations for his beliefs: his beliefs in the quantised energies were motivated directly by their role in generating the successful predictions, but his beliefs about quantised orbits were motivated by an inference from his beliefs concerning quantised energies. And, since his beliefs about energies do not properly (logically) entail his beliefs concerning orbital trajectories, he could have come to agree that the latter were not as secure as the former. This gives us a way to answer Stanford's question: "[W]hy did we (or the relevant scientific communities) ever believe more than those parts or aspects of past theories on which their empirical successes really depended?" (2009, p. 385). The answer in Bohr's case is that the assumptions apparently responsible for his empirical successes (concerning electron energies) appeared to conceptually entail other assumptions (concerning electron orbits). But it still stands that they weren't directly confirmed by the success, because they can be seen to be redundant vis-à-vis that success.

[9] There is an interesting problem lurking here, which I merely present for future investigation. Suppose a given derivational step consists of two assumptions A and B combining to deliver C. And suppose that A entails A*, and A* combined with B still delivers C. In that case A does not merit realist commitment. Suppose further that B entails B*, and B* combined with A still delivers C. In that case, B does not merit realist commitment. But now suppose that A* and B* together do not deliver C. In that case it seems that what the realist should not be committed to is underdetermined, since she has equal reason not to be committed to A and not to be committed to B. And yet withdrawing commitment from both A and B is not an option, given that A* and B* combined do not deliver C.
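A toy arithmetic instance, constructed here purely for illustration, makes the footnoted problem concrete:

```latex
\[
\begin{aligned}
&A:\ x \ge 2, \qquad B:\ y \ge 2, \qquad C:\ x + y \ge 3, \qquad A^{*}:\ x \ge 1, \qquad B^{*}:\ y \ge 1.\\
&A \wedge B \vdash C, \qquad A^{*} \wedge B \vdash C, \qquad A \wedge B^{*} \vdash C, \qquad \text{but} \qquad A^{*} \wedge B^{*} \nvdash C,
\end{aligned}
\]
```

since x = y = 1 satisfies both A* and B* while giving x + y = 2. Each assumption is individually weakenable without loss of the success, but not both at once.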
This all puts us in a position to be optimistic concerning the prospects for prospectively identifying at least some of the posits which do not merit realist commitment in our current best scientific theories. One can at least attempt to go through modern-day derivations of predictions of phenomena, and look for hypotheses which include details which are redundant for the purposes of the derivation. In this way it may be possible, at least sometimes, to separate the elements of theory which are confirmed by the phenomena they predict, and the elements which are not so-confirmed, but which we believe for other reasons (e.g. because they appear to be conceptually or metaphysically entailed by the elements which are confirmed). And when it comes to the bizarre world of fundamental physics we might come to agree that these other reasons have been shown time and time again through the history of science not to be good reasons. Thus the realist should, perhaps, restrict her commitments to what is directly confirmed by the predictive successes.
At the very least this seems to me to be a worthwhile heuristic to bear in mind when we (scientists in particular) are thinking about how scientific progress might be made, including how general relativity and quantum theory might ultimately be reconciled. Anything that can help with this extraordinary challenge can only be a good thing.
"Philosophy"
] |
Therapeutic Efficacy of Vitamin E δ-Tocotrienol in Collagen-Induced Rat Model of Arthritis
Rheumatoid arthritis (RA) is a chronic, systemic, inflammatory disease primarily involving inflammation of the joints. Although the management of the disease has advanced significantly in the past three decades, there is still no cure for RA. The aim of this study was to determine the therapeutic efficacy of δ-tocotrienol in the rat model of collagen-induced arthritis (CIA). Arthritis was induced by intradermal injection of collagen type II emulsified in complete Freund's adjuvant. CIA rats were orally treated with δ-tocotrienol (10 mg/kg) or glucosamine hydrochloride (300 mg/kg) from day 25 to 50. Efficacy was assessed based on the ability to reduce paw edema, reverse histopathological changes, suppress collagen-specific T-cells, and reduce C-reactive protein (CRP) levels. δ-Tocotrienol had a significantly greater impact in lowering paw edema than glucosamine treatment. Paw edema changes correlated well with histopathological analysis, where there was a significant reversal of changes in groups treated with δ-tocotrienol. The results suggest that δ-tocotrienol is effective in ameliorating collagen-induced arthritis. Vitamin E delta-tocotrienol may be of therapeutic value against rheumatoid arthritis.
Introduction
Rheumatoid arthritis is a chronic inflammatory and destructive arthropathy. The worldwide occurrence of RA is about 1-1.5% of the population, and it does not discriminate by age or ethnicity [1]. The precise aetiology of RA remains unknown. It has been established, however, that this disease is strongly linked with major histocompatibility complex (MHC) class II antigens, suggesting a genetic predisposition [2]. The pathogenesis of RA is associated with the activation of both cellular and humoral immune responses to an autoantigen [2,3]. The activation of such responses leads to a potentiation of various proinflammatory cytokines (TNF-α, IL-1β, etc.), cascading into a vicious cycle of antigenic stimulation, inflammation, and joint destruction [4]. Currently, only symptomatic treatment of RA is available. Treatment for RA is unsatisfactory, as it consists of drugs with serious side effects and does not correct the underlying causes of arthritis. Therefore, there is a dire need for a safer and equally, if not more, efficient treatment option which is able to attack the root of the disease itself.
Nutraceuticals are loosely defined as "functional foods" and food products that provide medicinal or health benefits [5]. With a reported market value of US$ 75 billion, this industry is becoming progressively popular, partly due to the public's perception that "natural is better" [6]. Joining a long list of nutraceuticals are the tocotrienols, a constituent of vitamin E found in palm oil as a phytonutrient [7]. Vitamin E is a collective name for a complex mixture of homologues. The two main homologues of vitamin E are the tocopherols (T) and tocotrienols (T3). Each homologue is in turn composed of four isomers: α, β, γ, and δ. Tocomonoenol, found in small quantities in palm oil and marine organisms, forms the third homologue of vitamin E. Two isomers of tocomonoenol have been described to date, with not much known about either [8]. Gaining substantial momentum in the past decade, tocotrienol research has led to the discovery of many of its important properties. Ranging from anticancer [9] to neuroprotective qualities [10], it is clear that tocotrienol use in various disease states is warranted. One such disease is rheumatoid arthritis (RA). The anti-inflammatory effects of tocotrienols have been less studied. Only recently has it come to light that these isomers of vitamin E might bear some consequences on eradicating chronic inflammatory diseases such as arthritis, atherosclerosis, and coeliac disease, to name a few. The aim of the present study is to assess the efficacy of δ-tocotrienol supplementation as a therapeutic agent in the collagen-induced rat model of arthritis.
Materials and Methods
Female Dark Agouti (DA) rats, 6-10 weeks old (150-200 g), were obtained from the Institute of Medical Research (IMR), Malaysia. Rats were maintained in individually ventilated cages (4 per cage) in the Animal Holding Facility (AHF) at the International Medical University (IMU) after their arrival. Food and water were available to the animals ad libitum. The AHF environment was climate-controlled with a 12-hour day and 12-hour night cycle. The International Medical University's joint committee for research and ethics approved all experimental procedures of this study. Delta-tocotrienol was a kind gift from Davos Life Sciences (Singapore). Collagen from chicken sternal cartilage type II, complete Freund's adjuvant (CFA), glucosamine hydrochloride, and acetic acid 99.8% were obtained from Sigma (Sigma Aldrich, USA).
Rats were divided randomly into four experimental groups (n = 6 in each group): control, arthritis alone, arthritis treated with δ-tocotrienol, and arthritis treated with glucosamine. Collagen from chicken sternal cartilage (5 mg) was reconstituted in 5 mL of 0.1 M cold acetic acid. The collagen was left to solubilise overnight at 4 °C. Once the solution had become clear, complete Freund's adjuvant (CFA) was added to the collagen preparation at a ratio of 1:1. The mixture was then transferred to a handheld homogeniser and emulsified for approximately 20 minutes. For collagen-induced arthritis, the rats were briefly anesthetized and approximately 0.2-0.4 mL of the collagen-CFA emulsion was injected intradermally at the base of the tail on day 0. The treatment group received 10 mg/kg body weight of δ-tocotrienol (dosed according to body weight at day 25) daily by oral gavage from day 25 to day 50. The arthritis with glucosamine group received 300 mg/kg body weight of glucosamine hydrochloride daily by oral gavage over the same period.
The body weight of the animals was measured at five-day intervals. Rats were monitored daily for general appearance and behavior. The severity of arthritis was quantified by measuring rat paw thickness using a digital vernier caliper at five-day intervals. Paw measurements were taken on each limb at four different joint positions. Twenty-four hours after the last day of the experiment, rats were anesthetized and blood samples were collected by cardiac puncture. Joint samples were also collected for histopathology. The spleen was removed and placed in a Petri dish containing approximately 5 mL RPMI media. Cells were immediately harvested and stored on ice. Plasma C-reactive protein (CRP) levels were quantified using the Millipore rat C-reactive protein ELISA kit.
Collagen Stimulation of Splenic Leukocytes.
For collagen stimulation of splenic leukocytes, the tubes containing the cell suspension were centrifuged at 800 rpm for 5 minutes, after which the supernatant was discarded. 1 mL of complete RPMI media and 3 mL of RBC lysis buffer were added to the pelleted cells. The tube was inverted gently several times for 30 seconds to ensure mixing of the buffer and the pellet. The tubes were then centrifuged again at 800 rpm for 5 minutes and the supernatant was discarded. The pellet was resuspended in 2 mL of RPMI media. For cell counting, 100 µL of the cell suspension was diluted with 900 µL of complete RPMI medium. Then, 20 µL of this suspension was transferred into a microcentrifuge tube and 20 µL of trypan-blue dye was added to the cells. Trypan blue stains dead cells, which excludes them from the count. The suspension was transferred onto a glass haemocytometer and viewed under the microscope, and viable leukocytes were counted. The cell number was adjusted to 5 × 10^6 cells/mL using the culture medium. About 200 µL of this cell suspension was added in triplicate to the wells of a sterile 96-well flat-bottomed plate in the presence of 5 µg/mL collagen. The plate was incubated for 72 hours in a humidified CO2 incubator at 37 °C. Cell proliferation was determined using the MTT assay.
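As an aside, the counting arithmetic implied by this protocol can be sketched as follows. The 10^4 chamber factor and the combined 20-fold dilution (10× into medium, then 1:1 with trypan blue) are standard hemocytometer assumptions on my part, not values stated in the text, and the counts are invented:

```python
# Sketch of the viable-cell arithmetic (assumed standard hemocytometer
# geometry: one large square holds 0.1 uL, hence the 1e4 factor).

def viable_cell_conc(viable_counts, dilution_factor=20):
    """Viable cells per mL from hemocytometer large-square counts.

    dilution_factor = 10x (100 uL into 900 uL medium) * 2x (1:1 trypan blue).
    """
    mean_count = sum(viable_counts) / len(viable_counts)
    return mean_count * dilution_factor * 1e4  # cells/mL

def volume_to_seed(conc_cells_per_ml, target=5e6):
    """mL of stock needed per mL of final suspension at the target density."""
    return target / conc_cells_per_ml

counts = [28, 31, 26, 30]  # hypothetical counts from four large squares
conc = viable_cell_conc(counts)
print(f"stock: {conc:.2e} cells/mL")
print(f"use {volume_to_seed(conc):.3f} mL stock per mL to reach 5e6 cells/mL")
```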
Histopathology.
After being fixed in 10% formalin for one week, the joint tissues were transferred to a tube containing a decalcifying agent. Following decalcification, the tissue was processed using an automated tissue processor and embedding station. Blocks were sectioned at 3-4 µm thickness and slides were prepared and stained with haematoxylin and eosin (H&E).
Statistical Analysis.
All data were analyzed using SPSS version 18 (SPSS Inc., Chicago, IL, USA). One-way analysis of variance (ANOVA) was used to detect differences among the experimental groups. For detecting differences between any two groups in a multiple group comparison, Tukey's test was used to evaluate paw oedema data and readings obtained from the ELISA. For all tests, a p value of less than 0.05 was considered significant.
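A minimal sketch of this pipeline in Python (SciPy's one-way ANOVA followed by statsmodels' Tukey HSD), with invented paw-thickness values standing in for the study's data:

```python
# Illustrative re-creation of the analysis: omnibus one-way ANOVA, then
# Tukey's HSD for pairwise comparisons. All numbers below are made up.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":     np.array([4.1, 4.0, 4.2, 4.1, 4.0, 4.2]),  # mm, n = 6
    "arthritis":   np.array([6.8, 7.1, 6.9, 7.3, 7.0, 6.7]),
    "tocotrienol": np.array([5.0, 5.2, 4.9, 5.1, 5.3, 4.8]),
    "glucosamine": np.array([5.6, 5.8, 5.5, 5.9, 5.7, 5.6]),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

if p_val < 0.05:  # only follow up if the omnibus test is significant
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```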
The Severity of Arthritis.
Arthritic animals began to show restricted movement at around day 20, when signs of arthritis began to develop. Some animals developed a limp and moved around dragging their paws. Eating patterns, however, remained normal, as denoted by a linear weight gain. As treatment progressed from day 25 to day 50, the mobility of the rats supplemented with either glucosamine or tocotrienol improved, and almost all the rats in these groups regained the ability to move around freely. Apart from the arthritis that affected the joints of these animals, there were no other gross changes observed (Figure 1).
Paw Edema.
Paw edema was quantified by measuring paw size using a digital caliper from day 25 to 50. Arthritic animals began to show signs of arthritis between days 15 and 20. A total of 16 joints were measured for changes in swelling.
Prior to induction, all rats showed no signs of arthritis or any paw deformities (Figure 1(a)). Signs of arthritis observed included swelling and redness over the joints, development of a limp, and tenderness to touch. In all rats, more than two joints were involved on each limb. As paw edema was most prominent on the hind-paws, only the joints in the hind-paws were assessed analytically using Tukey's post hoc test. Joints showed a significant (p < 0.05) decrease in paw edema for all treated groups when compared to the untreated group (Table 1 and Figures 1(c) and 1(d)). Comparison between tocotrienol and glucosamine revealed a significantly greater (p < 0.05) effect of δ-tocotrienol in reducing paw edema (Table 1). In the supplement-treated groups, swelling and redness over the joints reduced markedly and the rats regained their mobility. There was no recurrence of oedema in the joints and no additional joints became involved during this period (Table 1 and Figure 1).
Body Weight.
Body weight of each rat was measured and recorded every five days from day 25 to 50. Although the starting weight for each group differed slightly, there was a noticeable upward trend for all groups (p < 0.05). The body weight of the animals ranged between 123.1 g and 138.9 g on day 35 and between 132.4 g and 156.8 g on day 50. The average body weight of all the animals rose by between 10 and 12 g throughout the treatment period. The arthritis alone group had the highest ending weight. There was a significant increase in the body weight in this group after day 40 when compared with the other three groups (p < 0.05). The arthritic rats supplemented with δ-tocotrienol showed significantly lower weight compared to the arthritis alone and control groups (p < 0.05) (Figure 2).
Histopathology.
Histological sections were examined by light microscopy after H&E staining. All rats in the various arthritic groups showed significant changes in joint structure, with varying degrees of arthritic change. Changes observed included inflammation, cellular infiltration, joint space narrowing, synovial hyperplasia, erosion, and fibrosis. The joints of normal animals showed normal architecture with no swelling of the joint space. There was an adequate gap, and the articulating surfaces were lined by a healthy layer of cartilage, beneath which lay the bony trabeculae. The synovium of these rats appeared healthy and there was no evidence of oedema or inflammation (Figure 3(a)). Maximum degenerative changes were observed in the arthritis alone rats, where features of early and late-stage inflammation were observed, such as widening of the joint space during the early stages of the disease. Severe congestion surrounding the joint space had resulted in oedema, dilation of blood vessels, and, later on, narrowing of the joint space. The surface surrounding the joint space showed erosion and degeneration (Figure 3(b)). Extensive synovial hyperplasia was also noted, with increased cellular infiltration composed primarily of inflammatory cells (lymphocytes and plasma cells). Areas of granulomatous inflammation known as pannus formed in several areas with increasing fibrosis. The tocotrienol-supplemented rats showed less severe changes when compared to the arthritis alone group. Inflammation was scarce, with a marked reduction in edema and congestion. Only scattered inflammatory cells were observed, suggesting that only moderate inflammation was present. There were a few focal areas of fibrosis present, indicating healing of the joint. Vascular dilation was still present, accompanied by moderate synovial hyperplasia. Areas of active inflammation and healing were also noted (Figure 3(c)). The joints of the rats in the glucosamine-treated group also showed a significant reduction in swelling. Microscopically, the orientation of the joint space was predominantly healthy and morphological changes were minimal. There were focal areas of mild edema and scattered inflammatory cells amidst healthy synovial tissue. The subsynovial regions showed good vasculature and areas of fibrosis, signifying that healthy healing was taking place (Figure 3(d)).
Collagen-Induced Proliferation of Splenocytes.
Splenocytes from the arthritic rats showed maximum cell viability when these cells were cocultured with 5 µg/mL collagen for 24, 48, and 72 hours. At 72 hours, cell viability was at its peak at 85%, compared with 59% at 24 hours and 62% at 48 hours. As the concentration of collagen increased, however, the viability of cells decreased to as low as 9%.
Once the optimum concentration of collagen and incubation time were determined, these data (not shown) were used to determine the proliferation of splenocytes from the control and experimental rats. Proliferation of collagen-stimulated lymphocytes was quantified using the MTT assay. The results showed that the proliferation of collagen-stimulated splenocytes was reduced in animals that were supplemented with glucosamine or δ-tocotrienol (p < 0.05) (Figure 4).
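One common way to reduce triplicate MTT absorbances to a proliferation measure is a stimulation index (blank-corrected OD of stimulated wells over unstimulated wells). The sketch below assumes that convention; the paper does not state which summary statistic was used, and all numbers are illustrative:

```python
# Hedged sketch: triplicate MTT absorbances -> stimulation index (SI).
# SI > 1 indicates collagen-driven proliferation above baseline.
def stimulation_index(stimulated, unstimulated, blank):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(stimulated) - blank) / (mean(unstimulated) - blank)

si_arthritis   = stimulation_index([0.92, 0.95, 0.90], [0.41, 0.43, 0.40], 0.05)
si_tocotrienol = stimulation_index([0.55, 0.57, 0.53], [0.42, 0.40, 0.41], 0.05)
print(f"arthritis SI: {si_arthritis:.2f}, delta-T3 SI: {si_tocotrienol:.2f}")
```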
C-Reactive Protein (CRP).
Plasma levels of C-reactive protein (CRP) were determined using a commercial ELISA kit following the protocol recommended by the manufacturer (Millipore, USA). The arthritic group showed significantly (p < 0.05) elevated levels of CRP compared to the control group. There was a significant decrease in CRP in the δ-tocotrienol- and glucosamine-treated groups (p < 0.05) when compared to the arthritis alone group (Figure 5).
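ELISA kits of this kind are typically read off a four-parameter logistic (4PL) standard curve; the following sketch shows that interpolation step under that assumption. The standard concentrations and optical densities are invented for the demo, not taken from the kit insert:

```python
# 4PL standard-curve fit and back-calculation of sample concentration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: a = response at zero dose, d = response at infinite dose,
    c = inflection point, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([15.6, 31.25, 62.5, 125, 250, 500, 1000])  # ng/mL
std_od   = np.array([0.08, 0.15, 0.29, 0.55, 0.98, 1.55, 2.10])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 300.0, 2.3], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL to read a concentration off the curve."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample at OD 0.80 -> {od_to_conc(0.80, *params):.1f} ng/mL")
```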
Discussion
This study aimed to assess the therapeutic efficacy of δ-tocotrienol. Several parameters enabling the elucidation of its potential benefits were investigated. Six joints of the hind-paws were assessed for changes in paw oedema. Only the hind-paws were chosen, as they were the most significantly affected, and the literature has shown that this is a feature of collagen-induced arthritis [11]. In four out of the six joints assessed, tocotrienol produced the most significant reduction in paw oedema when compared to all other groups. It is clear that tocotrienol treatment had a considerable impact on reducing paw edema.
Body weight of rats in all groups showed a linear increase with no unusual changes, and no significant differences were noted between the experimental groups for most of the study. This was an important marker in showing that the animals were not in distress during the experimental period. There was, however, a significant increase in the body weight of the arthritis group after day 40 when compared with the other three groups. This increased weight could be due to water retention caused by edema in these animals. Most studies on CIA show rapid weight loss in rats during the period after induction of arthritis, with weight gradually recovering as the disease remits [12]. This was however not seen in this study.
Collagen-induced arthritis established in DA rats was a reliable model, with a 100% incidence of arthritis. There was consistent development of full-blown arthritis in at least one of the hind-paws with the distal interphalangeal joint being involved. No rats were severely disabled to warrant early sacrifice. Arthritis developed acutely in rats, with joint changes occurring much more rapidly than in human RA. This allowed for a more detailed observation of joint changes before and after treatment. The CIA is associated with unwanted features such as variable incidence, severity, and intergroup inconsistency [13] which were controlled by maintaining appropriate environmental conditions.
Macroscopically, all rats induced with arthritis showed similar signs, and arthritis had peaked by day 25. Classical signs of CIA were observed, including symmetrical joint involvement typically involving the hind-paws, swelling, and erythema over the joints [14-16]. By the end of the treatment period (day 50), treated groups showed an amelioration of signs and improved mobility of the joints. Histopathological changes correlated with macroscopic observations, including changes in paw oedema. Hallmarks of CIA were noted and were present in varying degrees amongst the different groups. Untreated (arthritis only) rats showed maximum degenerative changes. Suppression of disease activity was seen in treated groups, with the greatest changes in the δ-tocotrienol group.
Previous studies using animal models of arthritis showed that anti-inflammatory effects were attained through the inhibition of inflammatory mediators [17]. Tocotrienols have been described to inhibit these mediators, primarily TNF-α and IL-1β [17-20], which could possibly correlate with the attenuation of histopathological changes found in these groups. The antioxidant qualities of tocotrienols are also key in modulating joint injury by preventing free-radical-induced damage. Tocotrienols are known to possess a higher free radical quenching ability [21,22]. Suppression of disease activity with δ-tocotrienol could be due to the fact that δ-tocotrienol is known to lower TNF-α, IL-1β, and nitric oxide levels [18,19]. Glucosamine is a known potent antioxidant and has been used in osteoarthritis because of its ability to reduce joint damage [23]. This is consistent with our findings, in which glucosamine exhibited a disease-attenuating property in arthritic rats.
Splenocyte proliferation was performed to determine whether treatment with δ-tocotrienol is associated with protection against cell-mediated immunity. Proliferation of collagen-specific T-cells (CII-T) was assessed by the MTT assay. Firstly, conditions in which proliferation was maximal had to be established to allow accurate quantification of these cells. Therefore, optimisation of the concentration of collagen needed to stimulate appropriate amounts of CII-T cells was carried out in this study. Although the procedure for the T-cell assay is well established, there seemed to be conflicting information in the literature as to the optimum concentration of collagen [24-27]. The results showed that, at a concentration of 5 µg/mL with an incubation time of 72 hours, proliferation was at its maximum. Collagen toxicity towards cells increased at concentrations greater than this. This was the case for both the normal and the arthritic rats. It has been established that T-cell infiltration is directly correlated with the severity of arthritis [26]. The assumption that increased amounts of T-cells in the synovium occur as a result of clonal expansion to specific antigens has been explored previously [28]. In the case of CIA, the autoantigen is known to be collagen type II [26]. Therefore, it is safe to assume that high levels of CII-T cells indicate increased disease severity. It has been reported that, when introduced into the dermis, collagen type II is immediately captured by antigen-presenting cells (APCs). This results in the activation and expansion of CII-T cells, initiating joint damage [5]. Therefore, by quantifying levels of CII-T cells, the therapeutic benefits of δ-tocotrienol and glucosamine were assessed and compared. Using the set conditions, it was found that both δ-tocotrienol and glucosamine exhibited significant suppression of CII-T cells when compared to the untreated groups. Both exhibited values close to those of the normal nonarthritic rat. In comparison with each other, δ-tocotrienol exhibited a higher suppressive power than glucosamine. As such, our observation that δ-tocotrienol reduced CII-T-cell proliferation may indicate a mode of protection offered by this isomer of vitamin E against inflammatory arthritis.
It is unclear, however, by which exact mechanism δ-tocotrienol was able to suppress the clonal expansion of CII-T cells. We hypothesise that it could be through one of two mechanisms: (i) direct suppression of the CII-T cells or (ii) upregulation of T-regulatory (Treg) cells. Direct suppression could arise from blocking the interaction between T-cells and APCs or prevention of T-cell infiltration into the synovium [28]. Treg cells have been proposed over recent years to inhibit the initiation of, or downregulate, immune reactions in inflammation [29]. Studies have shown Treg cells to prevent proliferation and cytokine production of antigenic T-cells, thereby controlling inflammatory responses [28].
To determine which of these constitutes the underlying mechanism, further study needs to be done. A limitation of these findings is that it does not demonstrate a suppression of CII-T cells over time due to tocotrienol treatment. Thus, significant conclusions cannot be made that the tocotrienols offer protection against RA by reducing the number of T cells.
Biomarkers of inflammation have proven to be useful in the evaluation of disease progression and response to therapeutic intervention in a number of systemic inflammatory disorders, including RA. One such marker is C-reactive protein (CRP). An acute-phase protein, CRP is produced in the liver under conditions of systemic inflammation. It is reported to be a very useful marker of inflammation, as its half-life does not alter between health and disease states and it directly correlates with the intensity of pathological processes [12]. In clinically active human rheumatoid arthritis, levels of CRP are found to be increased [30]. This translates across to animal models of rheumatoid arthritis, where a similar process is observed [12]. One study demonstrated that high CRP levels are associated with the incidence of total joint replacement in patients with arthritis, and lower levels of CRP correspond to sustained suppression of the disease [12]. CRP levels have also been shown to be good markers for inflammation, bone degradation, and the clinical well-being of patients with rheumatoid arthritis [12]. Studies have also shown that plasma CRP does not tend to rise substantially in response to inflammation in rats [31]. Plasma CRP levels decreased significantly with δ-tocotrienol and glucosamine treatment in this study. The δ-tocotrienol and glucosamine groups still had significantly lower levels of CRP by the end of the experimental cycle compared to the arthritis alone group [12,19,30]. Production of CRP is driven mostly by a number of inflammatory cytokines, including TNF-α, IL-1, and especially IL-6, released by synovial macrophages and fibroblasts [32]. These cytokines are similarly produced in abundance in RA; thus it can be said that lowered levels of CRP with tocotrienol treatment signify decreased cytokine production and, consequently, decreased disease activity.
Conclusions
In conclusion, this study has demonstrated that oral supplementation with δ-tocotrienol potently attenuates the development of progressive joint destruction in rats with CIA. This effect is due, in part, to its ability to inhibit T-cell proliferation, reverse histopathological changes, and inhibit the production of proinflammatory cytokines. The properties exhibited by δ-tocotrienol showed promising outcomes against collagen-induced arthritis in this study. Therefore, there is clear evidence to suggest the potential benefit of this tocotrienol as a therapeutic agent in rheumatoid arthritis. Furthermore, insight into the possible mechanisms of this drug and disease should be uncovered to unleash a whole new realm of therapeutic possibilities.
"Biology",
"Medicine"
] |
Steroid Nanocrystals Prepared Using the Nano Spray Dryer B-90
The Nano Spray Dryer B-90 offers a new, simple, and alternative approach for the production of drug nanocrystals. In this study, the preparation of steroid nanocrystals using the Nano Spray Dryer B-90 was demonstrated. The particle size was controlled by selecting the mesh aperture size. Submicrometer steroid particles in powder form were successfully obtained. These nanoparticles were confirmed to have a crystal structure using powder X-ray diffraction pattern analysis. Since drug nanocrystals have recently been considered as a novel type of drug formulation for drug delivery systems, this study will be useful for nano-medical applications.
Introduction
The use of drug nanoparticles as a drug delivery system has attracted considerable attention in the field of nanomedicine. Nanoparticles in which the drug loading is 100% (including particles in the amorphous form) are called drug nanocrystals [1]. Since nanocrystals have a large surface area compared with microparticles, drug nanocrystals have several unique properties, including increased dissolution velocity, increased saturation solubility, and increased adhesion to cell membranes [2]. Additionally, drug nanocrystals enable larger amounts of drugs to be delivered into cells and tissues at a single-particle level, because of their densely packed crystal structure [2]. Because of their unique physicochemical properties, drug nanocrystals have recently been considered as a novel type of drug formulation for drug delivery systems [3].
Generally, the spray-drying technique offers a facile approach for the preparation of nanoparticles, involving spraying, evaporation of the ethanol-based drug solution, and collection of the drug particles, so numerous drugs are feasible candidates for the preparation of nanoparticles using this technique. However, it is difficult to prepare particles smaller than 2 µm using conventional spray dryer techniques, and it is also difficult to collect the finer particles [4]. In other words, submicrometer-sized particles, i.e., nanoparticles, cannot be produced using conventional spray dryers. Recently, an advanced spray dryer technology, the Nano Spray Dryer B-90, was developed by Büchi® [5]. The piezoelectrically driven vibrating mesh and the electrostatic particle collector allow the successful preparation and collection of nanoparticles. Different mesh aperture sizes can be used to create different sizes of nanoparticles. To date, drug-encapsulated polymeric nanoparticles [6], protein nanoparticles [7], and lithium carbonate (Li2CO3) hollow spheres used in lithium batteries [8] have been successfully prepared using the Nano Spray Dryer B-90.
Recently, our interest has focused on the development of a novel type of steroid nanocrystal-based eye drops, used to treat ophthalmic diseases. A number of steroid compounds are hydrophobic by nature, including fluorometholone and dexamethasone. They are therefore used in ophthalmic treatments in the form of an eye drop suspension formulation of large particles (i.e., more than several micrometers in size). These commercially available eye drops certainly show drug efficacy against the inflammation of eye diseases. However, the ocular penetration of these steroid drugs is considered to be comparatively low, because of the low dissolution velocity of the drug particles, which results from their large size (approximately 6 µm) [9]. These micron-sized particles are produced via a milling process, owing to industrial compromises made to achieve reductions in costs. It is reported that if the drug particle size is reduced to less than 2 µm, the total dissolution velocity of the drugs will be increased, resulting in increased ocular penetration [9]. High ocular penetration of drugs is useful in achieving high drug efficacy, as well as in reducing side effects by allowing doses to be minimized. The preparation of steroid particles smaller than ~2 µm is therefore an attractive approach for producing effective drug formulations with high drug efficacy. The production of nanocrystals, which have a size defined as between a few nanometers and 1000 nm (= 1 µm) [1], is especially attractive for these purposes.
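A back-of-envelope calculation illustrates the surface-area argument behind this size threshold. For monodisperse spheres the surface area per unit mass scales as 1/diameter, and by the Noyes-Whitney relation the dissolution rate scales with that area; the density used below is an assumed typical value for a steroid crystal, not a measured one:

```python
# Why smaller particles dissolve faster: specific surface area ~ 6/(d * rho)
# for dense spheres, and dissolution rate is proportional to that area.
import math

def specific_surface_area(diameter_m, density_kg_m3):
    """Surface area per kg of drug for monodisperse spheres (m^2/kg)."""
    r = diameter_m / 2.0
    area = 4.0 * math.pi * r**2
    volume = (4.0 / 3.0) * math.pi * r**3
    return area / (volume * density_kg_m3)

rho = 1300.0  # assumed steroid crystal density, kg/m^3 (typical order)
for d_um in (6.0, 2.0, 0.6):
    ssa = specific_surface_area(d_um * 1e-6, rho)
    print(f"{d_um:4.1f} um particles: {ssa:8.0f} m^2/kg")
# Reducing 6 um particles to 0.6 um gives a tenfold larger dissolving surface.
```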
In this research, we demonstrated the preparation of steroid nanocrystals in powder form using the Nano Spray Dryer B-90. The steroid drugs selected were fluorometholone (Figure 1a) and dexamethasone (Figure 1b). Since the mesh aperture size is known to be important in determining the resulting particle size [10], we investigated the relationship between mesh aperture size and drug particle size. To confirm that the nanocrystals did indeed have a crystal structure, powder X-ray diffraction analysis was carried out.
Materials and Methods
The fluorometholone, dexamethasone, and ethanol (99.5%, v/v) were purchased from Wako Pure Chemical Industries (Osaka, Japan). The fluorometholone and dexamethasone were dissolved in ethanol at concentrations of 1 mg/mL (final volume 10 mL) and 10 mg/mL (final volume 12.5 mL), respectively. Because fluorometholone is less soluble in ethanol than dexamethasone, the concentration of the fluorometholone solution was set lower than that of the dexamethasone solution.
Preparation of Nanocrystals Using the Spray Dryer B-90
The ethanol-dissolved drug solutions were then used to prepare the nanocrystals with the Nano Spray Dryer B-90 (Büchi®). A schematic image of the Nano Spray Dryer B-90 is shown in Figure 2. Briefly, the drying gas, heated to the set inlet temperature, flows into a drying chamber. The gas then exits the spray dryer through the clearing filter at the bottom. The inlet and outlet temperatures are denoted T_in and T_out, respectively. The operating conditions were kept constant at T_in = 50 °C, T_out = 35 °C, feed rate = 25 mL/h, and drying gas flow rate = 100 L/min. Spray mesh aperture sizes of 4.0, 5.5, and 7.0 µm were used in these experiments. Finally, the resulting nanocrystal powders were collected using a rubber spatula.
Scanning Electron Microscopy Observation of the Nanocrystals
The morphology and size of the collected particles were observed using scanning electron microscopy (SEM; JEOL-6510LA). The average size and particle size distribution were calculated by counting more than 300 particles in the obtained SEM images.
Powder X-ray Diffractometry Analysis
The crystal structure of the nanocrystals was confirmed using powder X-ray diffractometry (SmartLab, Rigaku). CuKα radiation (1.54 Å) was used as the X-ray source, with an output of 45 kV and 200 mA. Statistical analysis (a two-tailed t-test) was carried out to assess the differences between the particle diameters obtained with the different mesh aperture sizes (Figures 5 and 6).
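Since the paper reports only means, standard deviations, and particle counts, the t-test can be reproduced from summary statistics alone. The sketch below is not the authors' code: the exact particle counts are assumed here to be 300 ("more than 300 particles" were measured per sample), and Welch's variant, which does not assume equal variances, is used.

```python
from scipy.stats import ttest_ind_from_stats

# Fluorometholone means and standard deviations (nm) reported for the
# 4.0, 5.5, and 7.0 um mesh apertures; n = 300 is an assumed count.
samples = {4.0: (620, 268), 5.5: (795, 285), 7.0: (856, 344)}
n = 300

for a, b in [(4.0, 5.5), (5.5, 7.0), (4.0, 7.0)]:
    m1, s1 = samples[a]
    m2, s2 = samples[b]
    t, p = ttest_ind_from_stats(m1, s1, n, m2, s2, n, equal_var=False)
    print(f"{a} um vs {b} um mesh: t = {t:+.2f}, two-tailed p = {p:.2e}")
```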
Results and Discussion
The ethanol-dissolved drug solution was fed to the spray head by pumping. The solution was atomized by piezoelectrically driven mesh vibrations in the small spray cap, and millions of precisely sized droplets (with a narrow size distribution) were ejected each second by the vibrating actuator, which was driven at approximately 60 kHz. The extremely fine droplets dried to form solid particles during their passage through the chamber; these particles were then electrostatically charged in the dry N2 and CO2 gases and collected at the electrode. A schematic image of the Nano Spray Dryer B-90 is shown in Figure 2.
The particles were collected using a rubber spatula and observed by SEM. The morphology of each steroid nanocrystal was sphere-like, regardless of the mesh aperture size, which was varied between 4.0, 5.5, and 7.0 µm (Figures 3 and 4). However, the particle size changed depending on the mesh aperture size. For the fluorometholone nanocrystals, the average particle sizes with their size distributions were 620 ± 268, 795 ± 285, and 856 ± 344 nm for mesh aperture sizes of 4.0, 5.5, and 7.0 µm, respectively (Figure 5). For the dexamethasone nanocrystals, the average particle sizes with their size distributions were 833 ± 402, 1118 ± 573, and 1344 ± 857 nm for mesh aperture sizes of 4.0, 5.5, and 7.0 µm, respectively (Figure 6). For both the fluorometholone and dexamethasone particles, the size distribution became narrower with decreasing mesh aperture size. The size and size distribution of each sample were significantly different from those of every other sample (see p values in Figures 5 and 6). The validity of these results is supported by previous reports, which showed that different mesh aperture sizes result in different particle sizes, with the average particle size decreasing with decreasing mesh aperture size [10]. This is because smaller mesh apertures tend to generate smaller droplets of the ethanol-dissolved drug solution than larger mesh apertures. Figures 5 and 6 suggest that the increased size of the dexamethasone particles (compared with the fluorometholone particles) might have resulted from the difference in the concentrations of the ethanol-dissolved drug solutions. The concentrations of the ethanol solutions of fluorometholone and dexamethasone were 1 mg/mL and 10 mg/mL, respectively; i.e., the concentration of the dexamethasone solution was 10 times higher than that of the fluorometholone solution. The obtained results appear reasonable in light of previous reports [10,11], which found that particle size is significantly affected by the concentration of the drug solution: the particle size tends to increase with increasing drug concentration. The details of the effect of the drug concentration on the resulting steroid particle sizes will be discussed elsewhere.
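The concentration effect noted above follows from a simple mass balance: if each droplet dries into one dense spherical particle, the particle diameter is d_p = d_d (c/ρ_p)^(1/3), where d_d is the droplet diameter, c the feed concentration, and ρ_p the solid density. The sketch below is a first-order estimate, not from the original paper; the droplet diameters (taken equal to the mesh apertures) and the solid density are assumed values.

```python
def dried_particle_diameter(droplet_um: float, conc_mg_per_ml: float,
                            solid_density_mg_per_ml: float = 1300.0) -> float:
    """Mass balance for one droplet drying into one dense sphere:
    d_p = d_d * (c / rho_p)**(1/3).  Returns diameter in micrometers."""
    return droplet_um * (conc_mg_per_ml / solid_density_mg_per_ml) ** (1.0 / 3.0)

# Assume the droplet diameter tracks the mesh aperture (4.0, 5.5, 7.0 um).
for aperture in (4.0, 5.5, 7.0):
    d_fl = dried_particle_diameter(aperture, 1.0)    # fluorometholone, 1 mg/mL
    d_dx = dried_particle_diameter(aperture, 10.0)   # dexamethasone, 10 mg/mL
    print(f"mesh {aperture} um -> ~{d_fl*1000:.0f} nm (1 mg/mL), "
          f"~{d_dx*1000:.0f} nm (10 mg/mL)")

# A 10x concentration gives 10**(1/3) ~ 2.15x larger particles; the measured
# ratio is smaller, which may indicate finer droplets for the more viscous
# feed or particles that are not fully dense.
```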
Powder X-ray diffraction analysis revealed specific diffraction patterns for each sample, confirming that all of the fluorometholone and dexamethasone nanocrystals had a crystal structure (Figure 7). Although the particle sizes differed, the crystal structures were the same across the fluorometholone and dexamethasone nanocrystal samples, respectively (Figure 7a,b). We thus successfully prepared size-controlled fluorometholone and dexamethasone nanocrystals using the Nano Spray Dryer B-90 by selecting the mesh aperture size.
Conclusions
We succeeded in preparing size-controlled steroid nanocrystals using the Nano Spray Dryer B-90. The particle size was controlled by the mesh aperture size: when the mesh aperture size was decreased, the particle size decreased. Powder X-ray diffraction analysis confirmed that the nanocrystals had a crystal structure, showing specific diffraction patterns. The detailed experimental conditions that might affect particle formation, including the concentration of the ethanol-dissolved drug solution, the inlet temperature, and the drying gas flow rate, will be investigated in future work, as will the dissolution velocity of the drug nanocrystals in aqueous media. Additionally, we will investigate possible polymorphs and phase transitions of the nanocrystals by differential scanning calorimetry (DSC); these data will provide information on the relationship between polymorphs/phase transitions and the dissolution profiles of drug nanocrystals. We are also investigating the preparation of aqueous dispersions of steroid nanocrystals, for which the detailed particle size distribution in aqueous medium will be analyzed by dynamic light scattering. Such aqueous nanocrystal dispersions are attractive as nanocrystal-based eye drops with the potential to treat ophthalmic disorders in the near future. We expect these drug nanocrystals to find use in drug delivery systems as nanomedical applications. | 2,513 | 2013-01-25T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Biology"
] |
Laboratory rivers: Lacey’s law, threshold theory, and channel stability
More than a century of experiments has demonstrated that many features of natural rivers can be reproduced in the laboratory. Here, we revisit some of these experiments to cast their results into the framework of the threshold-channel theory developed by Glover and Florey (1951). In all the experiments we analyze, the typical size of the channel conforms to this theory, regardless of the river's planform (single-thread or braided). In that respect, laboratory rivers behave exactly like their natural counterparts. Using this finding, we reinterpret experiments by Stebbings (1963). We suggest that sediment transport widens the channel until it reaches a limit width, beyond which it destabilizes into a braided river. If confirmed, this observation would explain the remarkable scarcity of single-thread channels in laboratory experiments.
Introduction
At the turn of the 20th century, Jaggar (1908) developed a series of laboratory experiments to produce small-scale analogues of rivers (Fig. 1a). In the first one, a subsurface flow seeps out of a layer of sediment. Sapping then erodes the sediment, and this process generates wandering channels. Introducing rainfall in another experiment, he was able to generate a ramified network of small rivers, which drains water out of the sediment layer, much like a natural hydrographic network drains rainwater out of its catchment. The similarity between his experiments and natural systems led Jaggar to the following conclusion (Jaggar, 1908, p. 300): "The foregoing experiments suggest many questions and answer few. They are based on the assumption that the extraordinary similarity of the rill pattern to the mapped pattern of rivers is due to government in both cases by similar laws."
Jaggar was therefore convinced that we should use laboratory analogues to investigate, under well-controlled conditions, the mechanisms by which a river forms and how it selects its geometry.
Forty years later, Friedkin (1945) used a laboratory flume to investigate the stability of a river's course. In his experiment, he carved a straight channel in a layer of sand and sharply curved its course near the water inlet. This perturbation causes the channel to erode its banks and migrate laterally. As it does so, the channel becomes sinuous, and a well-defined wavelength emerges (Fig. 1b). Friedkin then explored systematically the influence of the control parameters (grain size, initial geometry, water and sediment discharge) on this response. His observations showed that water and sediment discharges are the main control on the channel's cross section and planform geometry. In particular, when the sediment discharge gets large, the channel turns into a braided river. Conversely, in the absence of sediment load, the channel relaxes towards an isolated steady thread.
Building on Friedkin's work, Leopold and Wolman (1957) located, in the parameter space, the braiding transition of a laboratory channel. To do so, they supplied water and sand to an initially straight channel. As this channel adapts to the input, mid-channel bars form which tend to separate the flow and eventually split the channel. Ultimately, the experiment generates a braided river. Leopold and Wolman then observed that braided threads have, on average, a larger longitudinal slope than their isolated counterparts. Inspired by this finding, they plotted field observations on a slope-discharge diagram and showed that braided channels are separated from single-thread ones by a critical value of the slope S_c, which decreases with discharge Q according to S_c = 0.06 Q^(-0.44) (discharge in ft^3 s^-1). To our knowledge, such an empirical boundary has never been drawn for laboratory experiments, partly because maintaining an active single-thread channel has proven to be an experimental challenge (Schumm et al., 1987; Murray and Paola, 1994; Federici and Paola, 2003; Paola et al., 2009). In non-cohesive sediment, most experimental channels turn into a braided river, unless they do not transport any sediment. This propensity for braiding persists when the water discharge varies during the experiment and seems unaffected by grain size (Foufoula-Georgiou, 1996, 1997; Métivier and Meunier, 2003; Leduc, 2013; Reitz et al., 2014).

(Figure 1: (a) rill patterns, Jaggar, 1908; (b) sinuous channel in a sandy bed, adapted from plate 3, Friedkin, 1945; (c) meandering channel forced by the oscillation of the inlet, Dijk et al., 2012; (d) metamorphosis of a braided river into a single-thread channel induced by vegetation, Tal and Paola, 2010, with permission from John Wiley & Sons; (e) active braided river in coarse sand, Leduc, 2013.)
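The Leopold-Wolman boundary is easy to apply in practice. The sketch below is illustrative only (the discharge and slope values are arbitrary); it evaluates S_c = 0.06 Q^(-0.44), converting from SI units to the cubic feet per second used in the original relation, and classifies a channel.

```python
def critical_slope(discharge_m3_s: float) -> float:
    """Leopold-Wolman (1957) braiding threshold, S_c = 0.06 * Q**-0.44,
    with Q in cubic feet per second (1 m^3/s = 35.3147 ft^3/s)."""
    q_cfs = discharge_m3_s * 35.3147
    return 0.06 * q_cfs ** -0.44

# Example: a 100 m^3/s river with a slope of 1e-3.
q, slope = 100.0, 1e-3
s_c = critical_slope(q)
planform = "braided" if slope > s_c else "single-thread"
print(f"S_c = {s_c:.2e}; observed S = {slope:.2e} -> predicted {planform}")
```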
By contrast, preventing bank erosion helps maintain a single-thread channel. One way to do so is to add some fine and cohesive sediment to the mixture injected into the experiment (Schumm et al., 1987;Smith, 1998;Peakall et al., 2007;Dijk et al., 2012). Another successful method is to grow riparian vegetation on the emerged areas of the flume. Tal and Paola (2007) and Brauderick et al. (2009) used alfalfa sprouts, the roots of which protect the sediment they grow upon from scouring. These observations show that bank cohesion, in addition to sediment discharge, controls the planform geometry of laboratory rivers. However, the relative importance of these parameters remains debatable, both for laboratory experiments and for natural rivers (Métivier and Barrier, 2012). To address this question, we need to formalize, in a suitable theoretical framework, the interplay between the dynamics of sediment transport and the mechanical stability of a channel's banks.
To design stable irrigation canals, Glover and Florey (1951) calculated the shape of a channel the bed of which is at the threshold of motion. Henderson (1963) referred to this work as the threshold theory and showed that it applies to natural rivers as well. This theory offers a physical interpretation for the empirical relationship proposed by Lacey (1930), according to which the width of an alluvial river increases in proportion to the square root of its water discharge (Henderson, 1963;Andrews, 1984;Devauchelle et al., 2011b;Gaurav et al., 2015;Métivier et al., 2016).
In a series of theoretical papers, Parker and coauthors extended the threshold theory to active alluvial rivers that either maintain their banks at the threshold of sediment motion or rebuild them constantly by depositing a fraction of their suspended load (Parker, 1978a, b, 1979; Kovacs and Parker, 1994). These mechanisms counteract the bank collapse induced by gravity, and the resulting balance controls the geometry of their bed. This theory provides a physical basis for comprehensive regime relations, which describe the geometry of alluvial rivers as a function of their water and sediment discharges (Parker et al., 2007). Does this theoretical framework apply equally to laboratory rivers?
Here, we investigate this question by reinterpreting experiments performed since the late 1960s in the light of the threshold theory. We begin with a brief presentation of the connection between Lacey's law and this theory and then evaluate its applicability to laboratory experiments (Sect. 2). Finally, using the experimental observations of Stebbings (1963), we propose an empirical criterion for the stability of an active channel in non-cohesive sediment and compare it to laboratory single-thread and braided channels (Ikeda et al., 1988; Ashmore, 2013).
Lacey's law and the threshold theory
In 1930, Lacey remarked that irrigation canals remain stable when their width scales as the square root of their discharge, even when they are cut into loose material (Lacey, 1930). Field observations later revealed that Lacey's law applies to natural rivers as well. For illustration, we use the compendium of Li et al. (2015) to plot the width of a broad range of alluvial rivers against their water discharge (Fig. 2a). Over 12 orders of magnitude in discharge, the data points gather around a 1/2 power law, in accordance with Lacey's law.
Lacey's relationship remained an empirical law until Glover and Florey (1951) calculated the cross-section shape of a channel the bed of which is at the threshold of motion. When the water flow is just strong enough to entrain the bed material, the balance between gravity and fluid friction sets the cross-section shape and the downstream slope of the channel. In particular, this balance relates the width W of a channel to its discharge Q (Glover and Florey, 1951; Henderson, 1963; Devauchelle et al., 2011b; Seizilles, 2013):

$$\frac{W}{d_s} = \left[ \frac{\pi^2 \sqrt{C_f}}{K[1/2]\,\mu\,\sqrt{\theta_t\,(\rho_s/\rho - 1)}} \right]^{1/2} \sqrt{Q_*}, \qquad (1)$$

where $Q_* = Q/\sqrt{g d_s^5}$ is the dimensionless discharge, d_s is the grain size of the sediment, ρ and ρ_s are the densities of water and of the sediment, C_f is the turbulent friction coefficient, θ_t is the threshold Shields parameter, μ is the friction angle, and finally K[1/2] ≈ 1.85 is a transcendental integral.
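Equation (1) is straightforward to evaluate numerically. The sketch below implements the form reconstructed above from the threshold-channel force balance, using the typical parameter values quoted later in the text; treat the numerical pre-factor as indicative rather than authoritative.

```python
import math

G = 9.81  # gravity, m/s^2

def threshold_width(q_m3_s: float, d_s: float, rho=1000.0, rho_s=2650.0,
                    theta_t=0.05, c_f=0.1, mu=0.7, k_half=1.85) -> float:
    """Threshold-channel width (m) from Eq. (1):
    W/d_s = [pi^2 sqrt(C_f) / (K[1/2] mu sqrt(theta_t (rho_s/rho - 1)))]**0.5
            * sqrt(Q_*),  with Q_* = Q / sqrt(g d_s^5)."""
    q_star = q_m3_s / math.sqrt(G * d_s**5)
    c_w = math.sqrt(math.pi**2 * math.sqrt(c_f)
                    / (k_half * mu * math.sqrt(theta_t * (rho_s / rho - 1.0))))
    return c_w * d_s * math.sqrt(q_star)

# Example: a gravel-bed river, Q = 100 m^3/s, d_s = 1 cm -> width of order 50 m.
print(f"predicted width: {threshold_width(100.0, 0.01):.0f} m")
```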
Glover's and Florey's theory explains the exponent of Lacey's law, but what about its pre-factor? Some of the parameters in the pre-factor of Eq. (1) are approximately constant in nature: the density of water (ρ ≈ 1000 kg m^-3), that of sediment (ρ_s ≈ 2650 kg m^-3), and the friction angle (μ ≈ 0.7). Others vary significantly. For instance, the median grain size d_50 extends over 3 orders of magnitude in the data set we use (0.1 mm-10 cm). In addition, the sediment is often broadly distributed in size within a river reach, which, strictly speaking, impairs the applicability of the threshold theory. We do not know how a broad grain-size distribution affects this estimate. Similarly, the value of the turbulent friction coefficient C_f typically extends over almost 2 orders of magnitude in nature (0.02-0.1), depending on the flow Reynolds number and the bed roughness (Buffington and Montgomery, 1997). The Shields parameter θ_t varies between about 0.03 and 0.3, depending on the Reynolds number at the grain's scale (Recking et al., 2008; Andreotti et al., 2012; Li et al., 2015). One can take these variations into account by supplementing Eq. (1) with empirical expressions that relate C_f and θ_t to the water depth and median grain size (Parker et al., 2007). However, the rough approximation we use for the grain size would make such exactitude superfluous. Accordingly, we simply evaluate Eq. (1) using typical values for its parameters (ρ = 1000 kg m^-3, ρ_s = 2650 kg m^-3, θ_t = 0.05, C_f = 0.1) and represent the impact of their variability as an uncertainty on the prediction (Fig. 2). Virtually all rivers from the compendium of Li et al. (2015) fall within this uncertainty. Equation (1) provides a reasonable first-order estimate of the size of a river, thus supporting Henderson's hypothesis: the force balance at the grain's scale explains Lacey's relationship (Henderson, 1963; Andrews, 1984; Savenije, 2003; Devauchelle et al., 2011a; Phillips and Jerolmack, 2016). Recent experiments involving a laminar flume have shown it possible to reproduce this balance in the laboratory. More generally, though, do laboratory rivers conform to the threshold theory, like their natural counterparts?
To answer this question, we compiled data from a variety of laboratory experiments (Table 1, Fig. 1). We selected a broad range of experimental conditions and included as many shapes of channel as possible (braided, straight, sinuous). Of course, our choice was limited to contributions that fully report experimental conditions and observations, either explicitly or in the form of figures. Among these experiments, many generated braided rivers. We treated the individual threads of these as independent channels, as has proved instructive for the interpretation of field data (Gaurav et al., 2015;Métivier et al., 2016). We find that the width of all the laboratory channels we selected conforms well to Lacey's law (Fig. 2). In fact, the laboratory experiments partly overlap the compendium of Li et al. (2015), and, where they do, experimental channels cannot be distinguished from natural rivers. In that sense, laboratory rivers do not just resemble natural ones but rather are small rivers in their own right.
Experimental observations, like natural rivers, gather around Lacey's law. Several factors may account for deviations: vegetation growth, cohesion, biofilms, or sediment transport. Tal and Paola (2010) grew alfalfa sprouts on a sandy braided river and observed that, in their experiment, vegetated threads are narrower and deeper than non-vegetated ones. Peakall et al. (2007) and Dijk et al. (2012) used fine cohesive particles to strengthen the bed and banks of an experimental channel; this cohesion induced narrower channels. Recently, Malarkey et al. (2015) showed that biofilms affect the threshold for sediment transport and therefore could change the morphology of a river.
In Fig. 2, these fluctuations disperse the data points around the trend by a factor of about 3. Yet, on average, laboratory channels conform well to Lacey's law. They therefore appear to select their own size according to the available water discharge, like natural rivers do. As a consequence, the threshold theory provides a reasonable estimate of their size, regardless of the specifics of each experiment. This robustness is again reminiscent of Lacey's law, which holds under a variety of natural conditions.
All this, of course, is excellent news for experimental geomorphology. If indeed experimental flumes are but small rivers, the understanding we gain in the laboratory is likely to apply in nature. This continuity, however, revives an old question: How can single-thread channels be so difficult to maintain experimentally, whereas they are ubiquitous in nature? In the next section, we investigate the stability of a single-thread channel by revisiting the laboratory observations of Stebbings (1963).
Channel stability
The elusiveness of the single-thread channel led some authors to the conclusion that laboratory experiments lack a vital ingredient, such as sediment cohesion or vegetation, to generate realistic rivers (Schumm et al., 1987;Smith, 1998;Peakall et al., 2007;Dijk et al., 2012;Tal and Paola, 2007;Brauderick et al., 2009). This view parallels a more conceptual criticism of the threshold theory: by definition, it cannot take sediment transport into account. Indeed, an arbitrarily small amount of mobile sediment can, in principle, destabilize the threshold channel (Parker, 1978b). What specific mechanism maintains the bed of single-thread rivers in nature remains a matter of debate. In this section, we propose a detailed comparison of laboratory channels with the threshold theory, hoping it will help us address this question.
We now return to the diagram of Fig. 2 and focus on laboratory experiments (Fig. 3). This closer view reveals that laboratory channels follow two distinct trends, depending on their planform geometry. The data points corresponding to single-thread channels align with the threshold theory (the parameters in Eq. 1 correspond to the experiment of Stebbings, 1963). Conversely, the threads of braided rivers tend to be wider than predicted, although they also follow a square-root relationship. These two distinct trends emerge from a large collection of disparate experiments. We thus interpret them as the signature of an underlying common parameter that determines the planform geometry of a channel and affects the pre-factor of Lacey's law.
To isolate this pre-factor in the laboratory, the ideal experiment would produce single-thread and braided rivers under similar conditions. The flume experiment of Stebbings (1963) approaches this ideal. Stebbings simply carved a straight channel in a flat bed of well-sorted sand. He then let a constant flow of water run into this channel, the morphology of which gradually adjusted to the water discharge (Fig. 4). Before reaching steady state, however, the river undergoes a reproducible transient. The flow first incises the channel near the inlet and entrains the detached sediment towards the outlet. As a result, bed load transport intensifies downstream. Stebbings noted that the river responds to this increase by widening its channel. In some cases, a bar emerges near the center of the widened channel, and the river turns into a braid. If, following Stebbings, we assume that the channel cross section adjusts to the local sediment discharge, then his transient channel materializes the transition of a river from a channel at threshold to a collection of braided threads. Although unconfirmed yet, the hypothesis that the sediment load triggers the metamorphosis of a river has been proposed previously to interpret field observations (Mackin, 1948; Smith and Smith, 1984; Métivier and Barrier, 2012).
Once the channel has reached steady state, it does not transport any more sediment, and we can expect it to be exactly at threshold. We indeed find that the size of Stebbings' steady-state channels accords well with the threshold theory (Fig. 3). This also holds, albeit less literally so, for their depth and downstream slope (Appendix A). A better way to evaluate this agreement is to correct the width for the influence of discharge. To do so, we introduce the detrended width W_* as the ratio of the channel width to the width predicted by the threshold theory (Gaurav et al., 2015):

$$W_* = \frac{W}{C_W\, d_s \sqrt{Q_*}}, \qquad (2)$$

where C_W is the pre-factor in brackets in Eq. (1). For a threshold channel, we expect W_* to be 1 regardless of water discharge. Unsurprisingly, W_* shows no dependency on discharge for the steady-state channels of Stebbings (1963) (Fig. 5). Its average is W_* = 1.07 ± 0.16, confirming the accord of Stebbings' measurements with the threshold theory. We now turn our attention to active channels (i.e. channels transporting sediment). In Stebbings' experiment, the channel is active during the transient, and we expect its width to deviate from that of the threshold channel. The downstream widening of the river indicates that sediment transport tends to induce a wider channel (Fig. 4). This hypothesis is further supported by Fig. 3, which shows that virtually all experimental threads in our data set, which are likely to transport sediment, are wider than or as wide as the threshold channel. This observation suggests that the theory of Glover and Florey corresponds to the narrowest possible channel, which forms in the absence of sediment transport (Henderson, 1963; Parker, 1978b). We hypothesize that, as the latter increases, the channel's width departs from this lower boundary. Unfortunately, Stebbings did not measure sediment discharge in his channels, and we cannot quantify the dependency of the channel's width on sediment discharge. What Stebbings did measure, though, is the channel's width at the onset of braiding, just upstream of the first bar (Fig. 4). We refer to this value as the "limit-channel width", implying it corresponds to the largest possible width of a stable channel. Once detrended according to Eq. (2), the limit-channel width W_{*,l} shows no remaining correlation with discharge (Fig. 5), indicating that it is proportional to the width of the threshold channel. The proportionality factor is about W_{*,l} = 1.7 ± 0.2, thus significantly larger than 1. The detrended limit-channel width is narrowly distributed around its own average, much like the threshold-channel width (Fig. 5). The two average values are clearly distinct, at the 95% level of confidence. In short, the channel destabilizes into a braid when it gets about 1.7 times as large as the threshold channel.

(Figure 5: data from Stebbings, 1963. Green: threshold channels, no sediment transport; blue: active channels about to split. Left: detrended width W_* as a function of dimensionless discharge; right: normalized histograms of the same data. Dashed lines indicate fitted Gaussian distributions.)
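In practice, Eq. (2) reduces each measured channel to a single dimensionless number. The sketch below is illustrative (the sample channel values are made up, and C_W ≈ 2.9 corresponds to the typical parameter values used for Eq. 1); it detrends a width measurement and compares it to the empirical limit width.

```python
import math

G = 9.81

def detrended_width(w_m: float, q_m3_s: float, d_s: float,
                    c_w: float = 2.9) -> float:
    """Eq. (2): W_* = W / (C_W * d_s * sqrt(Q_*)).  Adjust C_W to the
    friction coefficient and Shields parameter of the experiment."""
    q_star = q_m3_s / math.sqrt(G * d_s**5)
    return w_m / (c_w * d_s * math.sqrt(q_star))

# Hypothetical laboratory channel: W = 12 cm, Q = 0.1 L/s, d_s = 0.5 mm.
w_star = detrended_width(0.12, 1e-4, 5e-4)
if w_star < 1.7:
    verdict = "within the stability domain (single thread expected)"
else:
    verdict = "beyond the limit width (braiding expected)"
print(f"W_* = {w_star:.2f}: {verdict}")
```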
Based on this observation, we propose the following scenario for the transient in Stebbings' experiments. As its upstream end incises the sediment layer, the river loads itself with sediment. The continuous increase of bed load transport along its course causes it to widen, until it reaches the limit-channel width. At this point, bars develop and quickly split the river into multiple channels. Generalizing this interpretation, we suggest that a river can only accommodate so much sediment transport before it breaks into a braid. This fragility would confine single-thread channels to a precarious domain in the parameter space, thus explaining their rarity in laboratory experiments. To our knowledge, only Ikeda et al. (1988) produced active and stable, yet non-cohesive, single-thread channels in a laboratory experiment. To do so, they first carved an initially straight channel in non-cohesive sediment. To prevent the formation of bars and the lateral migration of the channel, Ikeda et al. cut the channel in half with a vertical wall aligned with the channel's axis. Water and sediment are then injected at a constant rate. Eventually, this experiment generates a stable half channel with a flat lower section where sediment is transported continuously. (Hereafter, we use twice the width of the half channel, for comparison with other experiments.) It is unclear whether the channels of Ikeda et al. have fully reached steady state, with as much sediment exiting the experiment as is injected into it. Nonetheless, the actual sediment discharge appears to be low enough to allow for stable channels, which we may treat as a collection of single-thread active channels. Their detrended width is distributed narrowly around a mean value of W_{*,s} = 1.16 ± 0.16 (Fig. 6). As expected, this value falls within the stability domain based on Stebbings' experiments, at the 95% level of confidence (Figs. 5 and 6). Based on the report by Ikeda et al. alone, we cannot be certain that no stable channel could survive outside the stability domain. Neither can we evaluate the influence of the central wall on the channel's stability. However, these observations are clearly consistent with our interpretation of Stebbings' experiment.
Stebbings' observations suggest that single-thread channels destabilized by sediment transport become braids. The mechanism by which this metamorphosis occurs is still a matter of debate, although the bar instability has been repeatedly highlighted (Parker, 1976;Repetto et al., 2002;Crosato and Mosselman, 2009;Devauchelle et al., 2010b, a). What is likely, though, is that once the river has turned into a braid, each of its channels transports only a fraction of the total sediment discharge. It is therefore reasonable to treat it as an active channel itself and compare its width to the threshold theory. This method was applied with some success to natural braided rivers and in Sect. 2 (Gaurav et al., 2015;Métivier et al., 2016).
In his review on braided rivers, Ashmore (2013) reports on laboratory experiments he performed in the 1980s. What makes his experiments unique is that he measured the size and the discharge of the individual threads that compose his braided rivers. Translating his measurements in terms of the detrended width W_{*,b}, we find that its distribution spreads around an average of W_{*,b} = 1.87 ± 0.68, close to the upper bound of the stability domain (Fig. 6). One way to interpret this observation, although speculative at this point, is to consider the upper bound of the stability domain as an attractor for the threads' dynamics. Accordingly, we conjecture that the threads of a braided river, constantly destabilized by an excessive sediment discharge, split into smaller channels. These channels, when numerous enough, are likely to meet one another and recombine their sediment load. This process could repeat itself until reaching the dynamical equilibrium which characterizes a braided river (Métivier and Meunier, 2003; Reitz et al., 2014). The thread population resulting from this equilibrium would include stable channels, the detrended width of which lies in the stability domain, and splitting channels, which we expect to be wider than the limit channel. The broad distribution of W_{*,b} in Ashmore's experiment is consistent with this interpretation (Fig. 6), as are the center bars often found in the threads of natural braided rivers (Gaurav et al., 2015; Métivier et al., 2016).

(Figure 6: green: single-thread channels, Ikeda et al., 1988; blue: threads of braided rivers, Ashmore, 2013.)
The threshold we propose to represent the braiding transition remains empirical. This transition is often attributed to the formation of bars (Parker, 1976; Repetto et al., 2002; Crosato and Mosselman, 2009; Devauchelle et al., 2010b, a). Parker (1976) investigated the linear stability of an initially flat, non-cohesive channel. His analysis predicts the transition from single-thread to multiple-thread channels. Using the experiments of Stebbings (1963), Ikeda et al. (1988), and Ashmore (2013), we compare Parker's prediction with our own analysis (Fig. B1 and Appendix B). We find that the experiments accord with both transition criteria. However, the criterion introduced here corresponds more accurately to the limit channels observed by Stebbings. At this point, we cannot base this empirical criterion on physical reasoning.
Conclusions
More than 100 years of laboratory investigations have improved our understanding of how rivers select their own morphology. Here, we have revisited some of these experiments to place them in the perspective of the threshold theory introduced by Glover and Florey (1951) and Henderson (1963). Although these experiments were designed to investigate a variety of phenomena, the channels they produced all conform to Lacey's law, exactly like natural rivers. This indicates that laboratory flumes and natural rivers are indeed controlled by the same primary mechanisms, in accordance with Jaggar's views. We take it as encouragement for experimental geomorphology.
Most laboratory channels are larger than predicted by the threshold theory. Based on the experiment of Stebbings (1963), we propose that, for the most part, sediment transport induces this departure from the threshold channel. According to this interpretation of Stebbings' observations, the channel widens to accommodate more bed load, until it reaches a width of about 1.7 times that of the threshold channel, at which point it destabilizes into a braided river. The writing of Stebbings' paper suggests that, had he been aware of the work of Glover and Florey (1951), he would have drawn similar conclusions from his experiment. To our knowledge, the influence of the sediment discharge on the width of a channel has never been measured directly (Stebbings did not measure the sediment discharge). The laboratory would certainly be a convenient place to do so.
Mentions of active single-thread channels are scarce in the literature on laboratory rivers, although some authors succeeded in maintaining such channels by various means, such as riparian vegetation or cohesive sediment.
More often, laboratory flumes generate braided rivers. Again, we suspect sediment discharge is the real culprit for this familiar destabilization. Accordingly, it should be possible to produce active and stable single-thread channels simply by lowering the sediment input enough. If this method works, not only will we be able to quantify the influence of sediment transport on a channel's width, but it will also gain us a laboratory rat for single-thread rivers. We believe it would shed light on the dynamics of such rivers, including meandering.
Data availability. The experimental data discussed in this paper have been compiled from various sources (see Table 1). They are provided as a supplement.
Appendix A: Threshold theory for depth and slope
In addition to the width, the threshold theory provides an estimate for the depth and the slope of a channel at threshold (Glover and Florey, 1951; Henderson, 1963; Devauchelle et al., 2011b; Seizilles, 2013):

$$\frac{D}{d_s} = \left[ \frac{\mu \sqrt{C_f}}{K[1/2]\,\sqrt{\theta_t\,(\rho_s/\rho - 1)}} \right]^{1/2} \sqrt{Q_*}, \qquad (A1)$$

$$S = \left[ \frac{K[1/2]\,\left(\theta_t\,(\rho_s/\rho - 1)\right)^{5/2}}{\mu\,\sqrt{C_f}} \right]^{1/2} \frac{1}{\sqrt{Q_*}}, \qquad (A2)$$

where D denotes the maximum depth of the channel. We now compare these regime equations to Stebbings' experimental channels (Fig. A1a). The depth of the channels accords with Eq. (A1), although with slightly more scatter around the prediction than for the width (Fig. 3). Measurement uncertainty probably explains this dispersion, since the depth of a channel is less accessible than its width. The downstream slope of Stebbings' channels appears more dispersed than the width (Fig. A1b). The corresponding data points nonetheless follow a clear power law, compatible with the inverse square root predicted by Eq. (A2). The pre-factor of this relationship, however, falls around the upper bound of the uncertainty range. We do not know the origin of this offset, for which we can only propose speculative explanations. First, as the slope of experimental channels is notoriously difficult to measure, a systematic error cannot be ruled out (Stebbings provides no indication about the accuracy of his slope measurements). Second, as readily seen by comparing Eqs. (1) and (A2), the slope of a threshold channel is sensitive to the value of the threshold Shields parameter. A value twice as large would account for Stebbings' slope measurements, without impacting significantly the width and depth of the threshold channel. Finally, to our knowledge, the regime equations of a channel at threshold have always been established using the shallow-water approximation. In real channels, the flow transfers momentum across the stream (Parker, 1978b). Taking this transfer into account could correct the threshold theory, without altering much the scalings it predicts.
Appendix B: Comparison with the stability analysis of Parker (1976)
Parker (1976) investigated the growth of bars in an initially flat channel perturbed sinusoidally. His stability analysis predicts the transition between single-thread channels and multiple-thread channels. This transition occurs when

$$\frac{S}{Fr} \simeq \frac{H}{W},$$

where S is the channel slope, Fr = U/√(gH) is the Froude number of the flow, g is the acceleration of gravity, and W, H and U are the width, depth, and velocity of the flow, respectively.
(Figure B1: (a) Detrended width as a function of dimensionless discharge. Green: threshold threads (points, Stebbings, 1963), stable single threads (three-pointed stars, Ikeda et al., 1988), and threshold theory (dashes); blue: limit threads (points, Stebbings, 1963), braided threads (crosses, Ashmore, 2013), and transition between stable and unstable threads (dashes). (b) Regime diagram of Parker (1976); here the blue dashed line corresponds to the theoretical transition proposed by Parker (1976).)

Figure B1 compares our empirical prediction (Fig. B1a) to that of Parker (Fig. B1b), using the same dataset. We selected, from the datasets presented in Sect. 2 and Table 1, the experiments that involved only non-cohesive sediments and no vegetation. Threshold channels and single-thread channels lie in the stable domain of both diagrams. Most multiple-thread channels lie in the unstable domain of both diagrams. Finally, limit channels gather around the transition line in both cases. Therefore, the data set we use is compatible with both predictions. However, the limit channels of Stebbings (1963) gather more tightly around the threshold proposed here (Fig. B1a) than around the threshold proposed by Parker (Fig. B1b).
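Both stability criteria can be checked mechanically for any measured thread. The sketch below is illustrative only: the channel values are made up, and Parker's transition is written in the approximate form S/Fr ≈ H/W used above.

```python
import math

G = 9.81

def classify_thread(w: float, h: float, u: float, s: float, w_star: float):
    """Compare a thread against the two braiding criteria discussed above:
    (i) the empirical limit width W_* ~ 1.7 (this paper);
    (ii) Parker's (1976) bar instability, with the transition near S/Fr ~ H/W."""
    froude = u / math.sqrt(G * h)
    parker_ratio = (s / froude) / (h / w)  # > 1 -> unstable (braiding)
    return {
        "limit-width criterion": "unstable" if w_star > 1.7 else "stable",
        "Parker (1976) criterion": "unstable" if parker_ratio > 1.0 else "stable",
    }

# Hypothetical laboratory thread: 10 cm wide, 1 cm deep, 0.3 m/s, slope 0.005.
print(classify_thread(w=0.10, h=0.01, u=0.3, s=0.005, w_star=1.2))
```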
"Physics"
] |
Assisted dynamical Schwinger effect: pair production in a pulsed bifrequent field
Electron-positron pair production by the superposition of two laser pulses with different frequencies and amplitudes is analyzed as a particular realization of the assisted dynamical Schwinger effect. It is demonstrated that, within a non-perturbative kinetic-equation framework, an amplification effect is conceivable for certain parameters. When both pulses have wavelengths longer than the Compton wavelength, the residual net density of produced pairs is determined by the resultant field strength. The number of pairs starts to grow rapidly when the wavelength of the high-frequency laser component approaches the Compton wavelength.
Introduction
The possibility of direct energy conversion from a strong electromagnetic field into e−e+ pairs is one of the curious features of quantum electrodynamics (QED) [1-3]. However, the required critical electric field strength has the so-called Sauter-Schwinger value E_c ≡ m²/|e| = 1.3 × 10^16 V/cm (where m and e are the mass and the charge of the electron, respectively, and we use ħ = c = k_B = 1 throughout this work), which makes it inaccessible to direct experimental observation at present. The hope for the observation of such processes was revived with the advent of ultra-high-intensity laser systems in the optical and X-ray regimes [4]. The rapidly evolving laser technologies [5] repeatedly triggered the theoretical search for suitable laser configurations with the potential to realize pair production by Schwinger-type tunneling processes (for different variants, see [6,7]). A new avenue was provided by the dynamically assisted Schwinger effect [8,9], meaning that the tunneling path is shortened by an assisting second field, thus enhancing the originally small tunneling probability. Given this scenario, a number of dedicated investigations aimed at further elaborating the prospects of finding appropriate signals of the Schwinger effect.
Because of the important implications for related effects in other fields of physics (see Refs. [10,11] for an overview including particle production in cosmology and astrophysics, Hawking-Unruh radiation, as well as conceptual issues of vacuum definition), many investigations address either the principles of strictly non-perturbative pair production [12] or employ special field models to elucidate the general features, often only by numerical evaluation.
The term "assisted Schwinger effect" stands for pair production from the vacuum under the influence of two fields -one assisting the other. Special field models are, for instance, particular pulses (such as the Sauter-or the Gauss-pulse) or oscillating fields with particular envelopes (such as Sauter-or Gauss-pulse with sub-cycle structures). Since in a spatially homogeneous electric field the threemomentum of a charged particle is a good quantum number which makes the mode expansion appropriate, one often restricts oneself to such homogeneous fields. The rationale for many models with a purely temporal dependence is that counter-propagating, suitably linearly polarized (laser) beams [13] in the homogeneity region of anti-nodes represent such spatially constant fields. The account for spatial gradients is quite challenging [14,15] and requires much more efforts.
In the latter case, the common envelope was taken to have a long flat-top period with short ramping and de-ramping stages.
Besides numerical examples, the underlying enhancement mechanism has also been clarified for that special field model: it is the shift of the relevant zero of the quasiparticle energy in the complex time domain toward the real axis (cf. [20,25] for other field configurations). Here we extend the considerations of references [23,24] and study, by numerical means, some systematics of the enhancement for a Gauss envelope. Besides the oscillation frequencies of both fields, the temporal width of the Gauss envelope enters as a relevant new parameter related to time scales. Our paper is organized as follows. In Section 2 we recall the formal framework of the quantum kinetic equation as the basis of our non-perturbative analysis. In Section 3 we introduce the parametrization of the field model we consider. Numerical results are presented in Section 4. In Section 5 we give a critical discussion of the explored parameter range with respect to applications, and in Section 6 we summarize this work.
Theoretical basis
The non-perturbative consequences of the equations of motion of QED determine the vacuum effects in a given external, spatially homogeneous electric field with an arbitrary time dependence [26]. For instance, one can employ the quantum kinetic equation [27] describing e−e+ creation by an electric field E(t) = −∂_t A(t) ≡ −Ȧ(t), with the four-vector potential in Hamilton gauge A^μ(t) = (0, 0, 0, A(t)):

$$\dot f(\mathbf p, t) = \frac{\lambda(\mathbf p, t)}{2} \int_{t_0}^{t} \mathrm{d}t'\, \lambda(\mathbf p, t')\, w(\mathbf p, t')\, \cos\theta(\mathbf p, t', t), \qquad (1)$$

where w(p, t) = 1 − 2f(p, t) is the depletion function containing the dimensionless phase-space distribution function per spin projection degree of freedom f(p, t),

$$\lambda(\mathbf p, t) = \frac{e E(t)\, \varepsilon_\perp}{\varepsilon^2(\mathbf p, t)}$$

is the amplitude of the vacuum transition, while

$$\theta(\mathbf p, t', t) = 2 \int_{t'}^{t} \mathrm{d}\tau\, \varepsilon(\mathbf p, \tau)$$

stands for the dynamical phase, describing the vacuum oscillations modulated by the external field. The quasiparticle energy ε, the transverse energy ε⊥ and the longitudinal quasiparticle momentum P are defined as

$$\varepsilon(\mathbf p, t) = \sqrt{\varepsilon_\perp^2 + P^2(t)}, \qquad \varepsilon_\perp = \sqrt{m^2 + p_\perp^2}, \qquad P(t) = p_\parallel - e A(t),$$

where p⊥ = |p⊥| is the modulus of the momentum component perpendicular to the electric field, and p∥ stands for the momentum component parallel to E. The integro-differential equation (1) is useful for the low-density approximation obtained by setting f(p, t') → 0. For complete numerical evaluations of (1), an equivalent system of ordinary differential equations is convenient (arguments are dropped for brevity):

$$\dot f = \frac{\lambda}{2} u, \qquad \dot u = \lambda w - 2\varepsilon v, \qquad \dot v = 2\varepsilon u,$$

with u and v as auxiliary functions related via w² + u² + v² = 1. Since the modes with momenta p decouple, we have suppressed these arguments here, as well as the time dependence of all quantities. Sometimes, the relation ḟ = λu/2 is useful for a field acting during a finite time only, telling that f stays constant once the field, and with it λ, has vanished. As emphasized, e.g., in [10], a sensible quantity is lim_{t→∞} f(p, t), since the adiabatic particle number per mode depends on the chosen basis. Accordingly, the residual pair number density is

$$n = 2 \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\, \lim_{t\to\infty} f(\mathbf p, t). \qquad (10)$$

The factor two refers to the two spin degrees of freedom, which are summed since in a purely electric field the spin degrees of freedom are degenerate. Other formulations of the basic equations are conceivable, e.g., by relating f to the reflection coefficient at (above) an effective potential, where the problem's heart is a Riccati equation [20,25]. In such a way the equivalence with a quantum-mechanical scattering problem is highlighted, where the potential is related to ε(p, t). This makes evident that the residual phase-space distribution can, in general, obey an intricate momentum dependence.
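A minimal numerical realization of the ODE system above is sketched below. This is not the authors' code: the field parameters are arbitrary illustrative values in natural units with m = 1, and the vector potential is integrated alongside the kinetic variables so that no quadrature is needed inside the right-hand side.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Natural units (hbar = c = 1), electron mass m = 1.
m = 1.0
E0 = 0.1          # peak field in units of the critical field E_c = m^2/|e|
omega = 0.5 * m   # carrier frequency
tau = 20.0 / m    # Gaussian envelope width

def eE(t):
    """e*E(t) for a Gaussian-envelope pulse, Eq. (11) with phi = 0."""
    return E0 * m**2 * np.exp(-t**2 / (2.0 * tau**2)) * np.cos(omega * t)

def rhs(t, y, p_par, p_perp):
    f, u, v, a = y                       # a = e*A(t), integrated alongside
    eps_perp = np.sqrt(m**2 + p_perp**2)
    P = p_par - a
    eps = np.sqrt(eps_perp**2 + P**2)    # quasiparticle energy
    lam = eE(t) * eps_perp / eps**2      # vacuum-transition amplitude
    w = 1.0 - 2.0 * f                    # depletion function
    return [0.5 * lam * u,
            lam * w - 2.0 * eps * v,
            2.0 * eps * u,
            -eE(t)]                      # dA/dt = -E(t); A(-inf) ~ 0

# Integrate one momentum mode (p_par = p_perp = 0) across the pulse.
sol = solve_ivp(rhs, (-8 * tau, 8 * tau), [0.0, 0.0, 0.0, 0.0],
                args=(0.0, 0.0), rtol=1e-10, atol=1e-12)
print(f"residual occupation f(p=0, t -> infinity) = {sol.y[0, -1]:.3e}")
```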
Asymptotic methods for the solution of the kinetic equation (1) were developed in [28,29], where some difficulties of applying such methods for field parameters corresponding to the tunneling regime are also discussed.
Field models
Only in a few cases do the equations of Section 2 allow for exact solutions. Most notable are the Schwinger field E_Schw = const and the Sauter pulse E_Saut ∝ 1/cosh²(t/τ) with a time scale τ. For a systematic approach relating features of the residual momentum distribution to the temporal field shape, see [25]. In most cases of interest, therefore, one has to resort to numerical solutions. Here one faces the problem that, for pulses with or without sub-cycle structure, a number of parameters determine the solution, which can depend sensitively (often non-linearly) on the location in parameter space. Suitable approximations and estimates are therefore very important. For instance, in a WKB-type analysis the locations of the zeroes of ε in the complex t plane are identified as important quantities determining the dominant exponential factor for pair production. This also explains why pulses which look similar on the real t axis can have strikingly different implications, since their analytic properties can be rather distinct. On a qualitative level, the enhanced pair production in the assisted dynamical Schwinger effect can be traced back to the relevant zeroes moving towards the real axis (cf. [20]), as mentioned above.
A subject of intense previous studies [30,31] was the Gauss pulse with sub-cycle structure or, equivalently, a periodic field with Gaussian envelope,

$$E(t) = E_0\, \exp\!\left(-\frac{t^2}{2\tau^2}\right) \cos(\omega t + \varphi), \qquad (11)$$

together with the associated potential A(t) = −∫_{−∞}^{t} E(t') dt' (12), where E_0 is the amplitude, ω denotes the oscillation frequency, and φ is the carrier envelope phase, which determines the symmetry properties with respect to time reversal.
Hereafter, we put φ = 0. The parameter σ = ωτ characterizes the number of oscillations within the pulse. For σ > 4, the known examples [31] exhibit f(t → ∞) at p⊥ = 0 as a strongly oscillating (in tune with τ) function of p∥ around a bell-shaped mean, the latter accessible via a WKB approximation. The occurrence of two time scales, 1/ω and τ, allows one to define two Keldysh parameters,

$$\gamma_\omega = \frac{m\,\omega}{|e| E_0}, \qquad \gamma_\tau = \frac{m}{|e| E_0\, \tau},$$

small values of which correspond [32] to the tunneling regime and can be termed the dynamical Schwinger effect.
Considering (11) and (12) as the strong pulse in the spirit of the assisted dynamical Schwinger effect, one adds a second, weak assisting pulse with the same envelope form but different parameters, yielding an eight-dimensional parameter space for the two-dimensional p⊥-p∥ distribution. Here, optimization theory [19,22] is certainly very useful to search for parameters suitable for maximum amplification. Upon restricting oneself to a narrow patch in the parameter space, one can constrain the ansatz for the superposition of a strong and a weak pulse, each with sub-cycles, to

$$E(t) = E_0\, \exp\!\left(-\frac{t^2}{2\tau^2}\right) \left\{ \cos(\omega t) + k_E \cos(k_\omega\, \omega t) \right\}, \qquad (13)$$

together with the related potential A(t) = −∫_{−∞}^{t} E(t') dt' (14). In these expressions, k_E ≤ 1 is the field-strength fraction of the amplitude of the weak pulse, and k_ω ≥ 1 is the frequency ratio. The envelopes of both pulses are synchronized and the carrier envelope phases are dropped, leading to a t → −t symmetric field E(t). Thus, we are going to quantify the assisted dynamical Schwinger effect for moderate values of k_E, k_ω and τ in the mildly subcritical regime with E_0 < E_c and ω ≤ m. Having more extreme conditions in mind, e.g. k_ω ≫ 1, another field model could be more suitable, such as

$$E(t) = E_0 \left\{ \exp\!\left(-\frac{t^2}{2\tau_1^2}\right) \cos(\omega_1 t) + k_E\, \exp\!\left(-\frac{t^2}{2\tau_2^2}\right) \cos(\omega_2 t) \right\}, \qquad (15)$$

in which each component carries its own envelope width and frequency, and the related function A(t). Besides the Gauss envelope, other pulse shapes and/or non-zero carrier envelope phases may be considered in separate work. Here we will just consider the example of the super-Gauss bifrequent field model

$$E(t) = E_0\, \exp\!\left(-\frac{t^\nu}{\nu\, \tau^\nu}\right) \left\{ \cos(\omega t) + k_E \cos(k_\omega\, \omega t) \right\}, \quad \nu \ \text{even}. \qquad (16)$$

The Gauss envelope (13) is contained in (16) for the value ν = 2. Figure 1 shows an example of the electric field (upper row) and the potential (lower row) of the strong, low-frequency pulse (left column, field "1" characterized by E_0, ω, τ in (11) and (12)), the weak, high-frequency pulse (middle column, field "2" characterized by k_E E_0, k_ω ω, τ, to be used in (11) and (12) instead of E_0, ω, τ), and the superposition of both (right column, field "1+2" according to (13) and (14)). We emphasize the much more pronounced "roughening" of the electric field "1+2" by "2", while the impact on the potential looks very modest (note the different scales of the left and middle panels in the bottom row). In Figure 3 we show the field (upper row) and the potential (lower row) of the super-Gauss model in the case ν = 8, which gives the high-frequency field "2" a flat-top shape (see, e.g., Ref. [23]); the wings of the combined field "1+2" then show a stronger modulation. In Figure 2 (Fig. 4) we show the residual phase-space distribution at p⊥ = 0 (upper row) and p∥ = 0 (lower row) for the fields displayed in Figure 1 (Fig. 3). (In Figs. 2 and 4, the left column refers to the strong, low-frequency component "1", i.e. only the first term in curly brackets in equations (13) and (14), corresponding to k_E = 0; the middle column to the weak, high-frequency component "2" with k_E = 0.25 and k_ω = 10, i.e. only the second term in curly brackets in equations (13) and (14); and the right column to the superposition "1+2", i.e. the complete expressions in equations (13) and (14).) It is obvious that a nonlinear parametric enhancement effect takes place here. The maximum values of the distribution function for the bifrequent pulse "1+2" are almost two orders of magnitude larger than the corresponding values for the low-frequency pulse "1", and almost three orders of magnitude larger than for the high-frequency pulse "2". In addition, the phase-space occupancy for "1+2" is strikingly larger.
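The bifrequent field (13) and its super-Gauss variant (16) are straightforward to tabulate. The sketch below uses illustrative parameter values only (natural units with m = 1) and builds the components "1", "2", and "1+2" discussed above.

```python
import numpy as np

def bifrequent_field(t, E0=0.25, omega=0.1, tau=100.0,
                     kE=0.25, komega=10.0, nu=2):
    """E(t) = E0 exp(-t**nu / (nu tau**nu)) (cos(w t) + kE cos(kw w t)).
    nu = 2 recovers the Gaussian envelope of Eq. (13); even nu > 2 gives
    the flat-top super-Gauss envelope of Eq. (16)."""
    envelope = np.exp(-t**nu / (nu * tau**nu))
    field_1 = np.cos(omega * t)                 # strong component "1"
    field_2 = kE * np.cos(komega * omega * t)   # weak component "2"
    return E0 * envelope * (field_1 + field_2)

t = np.linspace(-400.0, 400.0, 8001)
E_gauss = bifrequent_field(t)            # Eq. (13), nu = 2
E_flat = bifrequent_field(t, nu=8)       # Eq. (16), flat-top envelope
# The potential of Eq. (14) follows by cumulative quadrature, A = -int E dt'.
A_gauss = -np.cumsum(E_gauss) * (t[1] - t[0])
```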
Contrary to [23,24], one can hardly recognize a "lifting" of the p∥ distribution of field "1" by "2": the patterns are fairly different. In the present case, there is no flat top in the time dependence of the field envelope; the crucial aspect for the observed enhancement is the dominance of pair production in the multiphoton regime for the weak field, occurring for sufficiently large frequencies k_ω ω.
Numerical results
This situation does not change qualitatively for the super-Gauss field with ν = 8 (see Fig. 3): due to the flattening of the envelope shape, resonance-like structures appear in the distribution functions for the individual pulses, while the combined bifrequent pulse results in a pair distribution function (shown in the lower right panel of Fig. 4) very similar to that for the Gaussian pulse shown in the lower right panel of Figure 2.
Due to the rather structureless behavior of the distribution function for the bifrequent fields of Gauss or super-Gauss type, the density (10) is easily accessible. Instead of n, we show in the following the dimensionless combination N_{e−e+} = n/ω³, which characterizes the number of pairs generated in a volume determined by the transverse size of the minimum focal spot attainable at the diffraction limit of field "1". Figure 5 shows the increase of the number of pairs created with increasing field strength k_E E_0 of the high-frequency pulse, from small to large values of k_E. The left panel also shows a strong dependence of the effect on the frequency k_ω ω of the second field component: at k_ω = 10, the amplification becomes noticeable only for k_E > 0.01, whereas for k_ω = 40 an enhancement is seen already for k_E > 0.0001. Such behavior has been noted already in reference [24] for another special field model and in reference [20] more generally: keeping all other parameters fixed, a certain value of the field strength "2" is required to cause a noticeable amplification by the assisting field. The right panel of Figure 5 shows that the effect is universal for different frequencies ω of the strong field "1"; the effect depends weakly on ω at fixed high frequency k_ω ω. In the inset of that panel, we show the ratio r = N_{e−e+}(k_E)/N_{e−e+}(0) = n_{1+2}/n_1 as a function of k_E to quantify the amplification. In particular, for k_E → 1, the enhancement due to the assisting field becomes enormously large.
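For completeness, once the residual distribution f(p⊥, p∥) has been tabulated on a grid (for instance with the mode-by-mode integrator sketched in Sect. 2), the density (10) reduces to a cylindrical-coordinate quadrature. A minimal sketch, with arbitrary illustrative grid ranges and a placeholder distribution:

```python
import numpy as np
from scipy.integrate import trapezoid

def pair_density(f_grid, p_perp, p_par):
    """Eq. (10) in cylindrical momentum coordinates: the azimuthal integral
    gives 2*pi, so n = 2/(2*pi)**2 * Int dp_par Int dp_perp p_perp f."""
    inner = trapezoid(f_grid * p_perp[:, None], p_perp, axis=0)
    return 2.0 / (2.0 * np.pi) ** 2 * trapezoid(inner, p_par)

# Illustrative grid (natural units, m = 1); f_grid would come from the solver.
p_perp = np.linspace(0.0, 3.0, 120)
p_par = np.linspace(-3.0, 3.0, 240)
f_grid = 1e-8 * np.exp(-(p_perp[:, None]**2 + p_par[None, :]**2))  # placeholder
omega = 0.5
n = pair_density(f_grid, p_perp, p_par)
print(f"N = n / omega^3 = {n / omega**3:.3e}")
```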
The dependence of the amplification effect on the frequency k_ω ω of the weak, high-frequency field component is presented in Figure 6. In the left panel, the number of created pairs is presented for three values of the strong-field frequency ω. In each case, the frequency of the second field component runs over a range from the frequency ω itself, i.e. k_ω = 1, up to k_ω ω = 2m. The limiting case of equal first and second frequency components is equivalent to an increase of the field amplitude of the first component by the factor 1 + k_E and corresponds to the field defined by equations (11) and (12). The right panel of Figure 6 shows the dependence of the enhancement ratio r on the second field component. For all three pulse frequencies ω the ratio shows these unique features: (i) at k_ω = 1, the enhancement stems from a coherent superposition of the two fields, which is quantitatively described by a simple rescaling of the field strength E_0 → E_0(1 + k_E) in the law of pair production by the single field; (ii) for high values of the weak-field frequency k_ω ω ≳ 0.5 m, the results are almost identical and depend only weakly on the strong-field frequency ω; they are dominated by the multiphoton regime of the assisting weak field; (iii) in between these two cases, a dip in the ratio occurs due to the transition from coherent to incoherent superposition of the two fields.
It should be stressed that pair production in the multi-photon regime becomes very efficient at high frequencies and depends less on the field strength than in the tunneling regime. To illustrate that point, let us consider the pulse model (15) with a Gaussian envelope, E_0 = 0.2 E_c, and ω_2 = m. For k_E = 0, i.e. only the first term in (15), the phase-space distribution is smooth (see the left panel of Fig. 7), in contrast to the distribution shown in the left panel of Figure 2. For larger values of σ, the distribution approaches that of the Schwinger process, which is flat in the p∥ direction and Gaussian-shaped in the p⊥ direction. In the displayed momentum range, one pronounced multi-photon peak is visible when considering the second term in (15) alone (see the middle panel of Figure 7); it is accompanied by much lower side-ridges in the p⊥ direction (the cross section at p⊥ = 0 looks similar to the middle panel of Figure 2, of course). Its peak value is much higher than the maximum seen in the left panel, even though the field strength is lower. That is the efficiency of the multi-photon process. The complete pulse (15) gives rise to the phase-space distribution exhibited in the right panel.
The enhancement relative to the left panel is obvious, but the net effect falls short of the middle panel when comparing the maxima of $f$. In this example the action of field "2" looks more like a "lifting" of the distribution emerging from "1", albeit without the ripples. While the ratio $r = n_{1+2}/n_1$ rises strongly for $k_\omega\omega \to m$ (as seen in the right panels of Figs. 5 and 6 for another pulse), the net efficiency $n_{1+2}/(n_1 + n_2)$ acquires a maximum, which can be much larger than unity, but drops ultimately to unity upon further increasing $k_\omega\omega$, as emphasized in reference [21]. It is thus the distinct phase space distribution that becomes important for discriminating the impact of the individual field components.
Discussion
Our investigation was originally motivated by the availability of XFELs ($E_{\rm XFEL} \sim 10^{-5}\,E_c$, $\omega_{\rm XFEL} \sim 5$-$50$ keV, cf. fig. 1 in [33] and [24]) and PW laser systems ($E_{\rm PW} \sim 10^{-3}\,E_c$, $\omega_{\rm PW} \sim 1$-$3$ eV, cf. [34][35][36]). These installations, when combined with each other (as envisaged in the HIBEF project [37] for instance, or available already at LCLS [38]), would in principle be characterized by $k_\omega > 10^3$ and $k_E \sim 10^{-2}$. Moreover, pulse lengths of sub-attosecond duration would correspond to $m\tau \sim 10^2$. Clearly, these values are fairly distinct from those we have considered above. Thus, our present considerations do not directly apply to situations which can be expected to be exploited for experimental investigations of the assisted dynamical Schwinger effect. In this sense, our work is an exploratory supplement to studies searching for promising designs with discovery potential with respect to genuinely nonperturbative mechanisms of particle production. Without strikingly new ideas on avenues to the experimental verification of the Schwinger effect in freely propagating fields (in contrast to the nuclear Coulomb field), the many details understood by now call for significantly higher fields and/or larger photon frequencies. Nevertheless, the facets of the Schwinger effect remain challenging, in particular due to their relation to the many other fields quoted in the Introduction.
Summary
When two pulses with different frequencies and different field strengths are combined (the latter being high enough, not less than about an order of magnitude below $E_c$), one can distinguish two mechanisms for the increase of the pair production. If the frequencies of the two components are close (in the extreme case, they may even be equal) and are small compared to the energy required for multi-photon pair creation, the increment of residual pairs is directly related to the highly nonlinear dependence of the effect on the field strength in the vicinity of $E_c$. Alternatively, when one of the frequencies is low and the second one approaches the threshold of pair production by single photons, one can speak of changing the properties of the vacuum for the high-energy photons. In this case, we can expect to promote the process of pair production more effectively and consider it as pair production by the short-wavelength component, catalyzed by the low-frequency component.
In the present study we demonstrate that the increase of the rate of $e^-e^+$ production by combining a strong low-frequency field and a weak high-frequency field is a universal phenomenon that manifests itself in a certain range of parameters of the high-frequency field. Our results have been obtained within a non-perturbative framework. The shape of the electric field pulse is realistic and reproduces to some extent the characteristics of field pulses in experimental setups. The presented approach allows, on the one hand, optimization of the parameters for practical implementations of the dynamical Schwinger effect. On the other hand, by choosing parameters of the field model that characterize an actual experiment, it allows accurate estimates of the number of residual pairs and their characteristics. | 5,145 | 2016-03-01T00:00:00.000 | [
"Physics"
] |
A CLOUD-BASED ARCHITECTURE FOR SMART VIDEO SURVEILLANCE
Turning a city into a smart city has attracted considerable attention. A smart city can be seen as a city that uses digital technology not only to improve the quality of people's life, but also to have a positive impact on the environment while offering efficient and easy-to-use services. A fundamental aspect to be considered in a smart city is people's safety and welfare; therefore, a good security system becomes a necessity, because it allows us to detect and identify potential risk situations and then take appropriate decisions to help people or even prevent criminal acts. In this paper we present an architecture for automated video surveillance based on the cloud computing schema. It is capable of acquiring a video stream from a set of cameras connected to the network, processing that information, detecting, labeling and highlighting security-relevant events automatically, storing the information, and providing situational awareness in order to minimize the response time needed to take the appropriate action.
INTRODUCTION
Ensuring the safety of people should be a priority for every city. In order to address this issue, several approaches have been proposed. Monitoring systems are the simplest solutions, while architectures capable of analyzing human behavior and determining whether a potentially dangerous scenario exists, such as fighting or theft, are the most complex. Even though the development of complex surveillance schemes constitutes a great challenge, the importance and necessity of preserving the safety of society have been among the main incentives for researchers and developers to work on the integration of technologies such as data management and computer vision. The goal is to produce systems that are reliable and effective enough to serve as solutions for tasks like city surveillance, video analytics, and efficient video management, in order to support city officials and/or security employees in their duty.
Nowadays, video surveillance systems only act as large-scale video recorders, storing images from a large number of cameras onto mass storage devices. With these schemes, users have access to information that they must analyze themselves in order to detect and react to potential threats. These systems are also used to record evidence for investigative purposes. However, people are prone to mental fatigue as a consequence of performing the same task for a long period of time, resulting in a substantial increase in reaction time, misses, and false alarms (Boksem et al., 2005). This fact has been one of the main reasons for the development of smart video surveillance systems.
In order to solve the problem of people's lack of concentration over long periods of time, one feasible solution might be the integration of automatic video analysis techniques. These techniques are based on computer vision algorithms that are capable of performing tasks ranging from ones as simple as detecting movement in a scene to more complex ones such as classifying and tracking objects. The more advanced the algorithms are, the more sophisticated the system will be, and the greater its capability to aid the human operator in real-time threat detection.
In this paper we present an architecture for automated video surveillance based on the cloud computing schema. Besides acquiring video streams from a set of cameras, the approach we propose is also capable of extracting information related to certain objects within the scene. The extracted data is interpreted by the system as context information, from which we are able to detect security-relevant events automatically. For testing purposes, we have implemented a prototype of our proposed architecture.
The rest of the paper is organized as follows. Section 2 summarizes the main advances in smart surveillance systems. In Section 3 we describe the FIWARE platform that we are using to deploy our architecture. Section 4 describes the proposed architecture of the smart video surveillance system based on cloud computing. In Section 5 we describe the set of computer vision filters that we have implemented. Section 6 explains how information flows in the proposed architecture. In Section 7 we describe our implemented system prototype. Finally, in Section 8 the conclusions and future work are presented.
RELATED WORK
Video surveillance and video analysis constitute active areas of research. In general, a video surveillance system includes the following stages: modeling of the environment, detection of motion, classification of moving targets, tracking, behavior understanding and description, and fusion of information from multiple cameras (Brémond et al., 2006, Ko, 2008, Hu et al., 2004, Wang et al., 2003).
Figure 1 shows the way all the different stages described above are connected. This general representation can be seen as an active video surveillance system. Currently, there is a wide range of video surveillance systems that have been implemented to address problems such as intrusion detection or traffic surveillance. In works like (Mukherjee and Das, 2013) and (Connell et al., 2004), for example, the authors propose systems that perform human detection and tracking. Also, systems like the one proposed by (Calavia et al., 2012) are capable of detecting and identifying abnormal situations based on an analysis of object movement; they then use semantic reasoning and ontologies to fire alarms.
On the other hand, there is an emerging research topic related to the integration of cloud-based services into video surveillance systems. In the work presented in (Xiong et al., 2014), for instance, the authors propose a general approach for implementing cloud video surveillance systems. Another example of the integration of cloud-based services is presented in (Rodríguez-Silva et al., 2012), where the authors optimize video streaming transmission based on the network requirements, and process and store videos based on cloud computing.
However, most of the work developed so far is focused on solving specific tasks in the context of smart security, either integrating cloud-based services or developing computer vision algorithms; very few proposals offer a model for a complete surveillance system that takes care of both aspects.
For this reason, and due to the inability of classic surveillance systems to monitor and process the data generated by large-scale video surveillance applications, in this paper we propose a general architecture for a smart video surveillance system that integrates cloud-based services and image processing algorithms.
FIWARE PLATFORM
FIWARE (or FI-WARE) is a middleware platform driven by the European Union. FIWARE was created with the idea of facilitating the development and global deployment of applications for the Future Internet. According to its website, FIWARE provides a set of APIs that facilitate the design and implementation of smart applications at several levels of complexity. The API specification of FIWARE is open and royalty-free, and the involvement of users and developers is critical for this platform to become a standard and reusable solution.
FIWARE offers a catalogue that contains a rich library of components known as Generic Enablers (GEs), along with a set of reference implementations that allow developers to instantiate some functionalities such as the connection to the Internet of Things or Big Data analysis.
FIWARE is supported by the Future Internet Public-Private Partnership (FI-PPP) project of the European Union.
Access control:
The role of this module is to establish a secure connection between the user and the system, while preventing strangers from gaining access. In other words, this is where the system grants predefined permissions to each user according to their role. The implementation was made using the KeyRock GE, an identity manager developed by FIWARE that takes care of a variety of tasks related to cyber security, such as users' access to the network and services, private authentication from users to devices, user profile management, etc.
Context Broker: To implement this module we use the Orion Context Broker (OCB) from FIWARE. The OCB component is a context information manager: it enables the creation, update, and deletion of entities, and it also allows registering subscriptions so that other applications (context consumers) can retrieve the latest version of all the variables that constitute an entity when some event occurs. This component can be seen as the moderator that carries out the communication process between the other modules; once a module has defined the entities it will send and receive, the OCB takes care of the rest.
Event Storage: This module persists the data related to the context information; this information might be an alarm from the system or a simple notification. By saving this information, we can retrieve it for later analysis. In order to implement this block we have used two GEs, Cygnus and Cosmos. The first one is in charge of data persistence: it handles the transfer of information from a given source to third-party storage, serving as a connector, which is a great feature that increases the flexibility of the system and its scalability if required. The second one provides a means for Big Data analysis, so its users avoid deploying any kind of infrastructure.
Video Storage: This block is employed to store raw video data, so that users have access to the video related to an event detected and stored by the processing module.
Processing module: This block is composed of two sub-modules: Kurento and Computer Vision Filters. The Kurento sub-module provides video streaming from IP cameras through the Kurento Media Server (KMS). The KMS is based on Media Elements (MEs) and Media Pipelines. Media Elements are modules that perform a specific action on a media stream by sending or receiving media from other elements, while a Media Pipeline is an arrangement of connected MEs, which can be either a linear structure (the output of every ME is connected to a single ME) or a non-linear one (the output of a ME might be connected to several MEs). The MEs used in the implemented processing module were four: WebRtcEndpoint, PlayerEndpoint, Vision Filters, and RecorderEndpoint. Figure 3 shows the Media Pipeline we implemented for our architecture prototype, i.e., the logical arrangement in which we connected the four MEs. In this pipeline, we get the video stream with the PlayerEndpoint through an RTSP URL; its output goes to the computer vision filters, whose output is in turn sent to the WebRtcEndpoint, after which the processed video is ready to be visualized. Additionally, by using the RecorderEndpoint we are able to store video from the PlayerEndpoint and thus give the user the capability to play stored videos at any time in the future. Among all of our system's modules, the processing module could be considered the most relevant one, since it is here where we develop the set of filters required to detect all sorts of events. In the following section we specify the set of events our system is capable of detecting, by describing the computer vision filters we have implemented so far. A sketch of this pipeline wiring is given below.
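The following illustrative-only sketch makes the element arrangement of Figure 3 explicit. The real Kurento clients are Java/JavaScript; the tiny stand-in class here is not the Kurento API, and the camera URL is an assumption.

```python
# Illustrative-only sketch of the Media Pipeline of Figure 3. NOT the Kurento
# API (official clients are Java/JavaScript); the stand-in class below exists
# purely to make the element wiring explicit.
class MediaElement:
    def __init__(self, name):
        self.name, self.sinks = name, []

    def connect(self, other):
        # the output of this element feeds the input of `other`
        self.sinks.append(other)

player   = MediaElement("PlayerEndpoint")    # pulls e.g. rtsp://camera-01.local/stream (assumed URL)
filters  = MediaElement("VisionFilters")     # background subtraction, classification, tracking
webrtc   = MediaElement("WebRtcEndpoint")    # delivers processed video to the viewer
recorder = MediaElement("RecorderEndpoint")  # persists raw video for later playback

player.connect(filters)    # camera frames -> computer vision filters
filters.connect(webrtc)    # annotated frames -> live visualization
player.connect(recorder)   # raw stream -> video storage

print([s.name for s in player.sinks])   # -> ['VisionFilters', 'RecorderEndpoint']
```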
COMPUTER VISION FILTERS
In order to extract relevant information about monitored scenes from the incoming video streams, Kurento provides a set of tools that enable the integration of computer vision algorithms into a Media Pipeline. For the implementation of any computer vision procedure, Kurento uses the OpenCV (Bradski and Kaehler, 2008) libraries.
In our video surveillance application, we have implemented a set of sub-modules that are capable of detecting people and vehicles. To do so, three filters were designed: background subtraction, classification, and tracking, which are described below.
(i) Background subtraction: Background subtraction is a major preprocessing step in many vision-based applications. However, detecting motion based on background subtraction is neither as trivial nor as easy as it may appear at first glance. In order to cope with this challenge, we integrated into our system SuBSENSE (St-Charles et al., 2015), one of the state-of-the-art background subtraction algorithms. SuBSENSE can be considered a foreground/background segmentation algorithm, based on the adaptation and integration of features known as local binary similarity patterns into a background model that is adjusted over time using a pixel-level feedback loop. In other words, this filter takes the incoming video stream from a camera and yields another video stream of binary images, where white pixels belong to a foreground object (blob) and black pixels are considered part of the background scene. The sketch below illustrates this input/output contract.
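A minimal sketch of that contract follows. SuBSENSE itself is not bundled with stock OpenCV, so the MOG2 subtractor is used here as a stand-in technique; the stream URL and parameter values are assumptions.

```python
import cv2

# Stand-in for the background subtraction filter: frames in, binary foreground
# masks out. MOG2 replaces SuBSENSE here (SuBSENSE is not in stock OpenCV);
# the stream URL and parameter values are assumptions.
cap = cv2.VideoCapture("rtsp://camera-01.local/stream")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground blob, 0 = background
    mask = cv2.medianBlur(mask, 5)   # suppress salt-and-pepper noise
    # downstream filters (classification, tracking) consume `mask`
```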
(ii) Object classification: In the context of computer vision, a classification algorithm enables a system to take actions that require discrete information about a real-world scene, such as the identity or category of every entity inside it. In our case, to be able to detect vehicles and people, we have implemented a variation of the K-nearest neighbor (Peterson, 2009) algorithm. Once the foreground objects (blobs) have been segmented from the scene by the background subtraction filter, this filter starts by extracting a set of shape features (area, aspect ratio, and compactness) from every blob in the scene. The features of every object are then stacked to form vectors. These vectors are passed to the classification algorithm (KNN), which queries a database in order to assign a class label. Contrary to classical KNN, which assigns an object to the class of its best match, our variation adds an extra criterion that must be satisfied: the similarity value between an input vector and its best match must surpass a threshold value, otherwise the object is classified as unknown. This condition was integrated to reduce the number of false-positive detections. One of the main reasons we chose KNN for the classification task is that adding new object classes only requires modifying the database file, so no retraining is needed. Moreover, this classification algorithm has turned out to be efficient enough for our real-time processing requirements. A sketch of this thresholded variant is given after this paragraph.
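The sketch below shows one way the thresholded nearest-neighbor decision could look; the feature normalization, similarity measure, threshold, and database values are all assumptions.

```python
import numpy as np

# Sketch of the thresholded nearest-neighbor variant: reject weak matches as
# "unknown". Normalization, similarity measure, and all values are assumptions.
def classify_blob(features, database, labels, threshold=0.5):
    """features: [area, aspect_ratio, compactness] of one blob;
    database: (n_samples, 3) reference vectors; labels: n_samples strings."""
    scale = database.max(axis=0)          # crude per-feature normalization
    dists = np.linalg.norm((database - features) / scale, axis=1)
    best = int(np.argmin(dists))
    similarity = 1.0 / (1.0 + dists[best])
    return labels[best] if similarity >= threshold else "unknown"

# Adding a new class = appending rows; no retraining, as noted in the text.
db = np.array([[1200.0, 0.45, 1.3],    # "person" (illustrative feature values)
               [5200.0, 2.10, 1.8]])   # "vehicle" (illustrative feature values)
print(classify_blob(np.array([1250.0, 0.50, 1.25]), db, ["person", "vehicle"]))
```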
(iii) Object tracking: For a system to monitor what is going on among a set of entities in a scene over a period of time, it must be able to gather information and persist it, relating it to the data of subsequent iterations and thereby creating context information. In addition to the information extracted at each instant of time, context information makes it possible to query the system about actions, temporal events, and other high-level concepts. For this task, we implemented a multi-target tracking algorithm, which has no limit on the number of objects it can track simultaneously and takes as input the binary image provided by the background subtraction sub-module. The algorithm extracts a set of features from each blob (a different set from the one used in the classification filter) and establishes a relation between the blobs in the current frame and those in the previous one. The most important capability this filter adds to our surveillance system is the ability to generate information about objects over time, which can be seen as a description of behavior. A sketch of such frame-to-frame association follows.
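One simple instance of frame-to-frame association is nearest-centroid matching with a gating distance, sketched below; the actual filter uses a richer feature set, so the centroid-only matching and the gate value are assumptions.

```python
import numpy as np

# Sketch of frame-to-frame blob association by nearest centroid with a gating
# distance. The real filter uses richer features; values here are assumptions.
def associate(prev_tracks, blobs, max_dist=40.0):
    """prev_tracks: dict track_id -> (x, y); blobs: list of (x, y) centroids.
    Returns the updated dict; unmatched blobs start new tracks."""
    tracks = {}
    next_id = max(prev_tracks) + 1 if prev_tracks else 0
    unmatched = list(blobs)
    for tid, (px, py) in prev_tracks.items():
        if not unmatched:
            break
        d = [np.hypot(bx - px, by - py) for bx, by in unmatched]
        j = int(np.argmin(d))
        if d[j] <= max_dist:              # gate: same object across frames
            tracks[tid] = unmatched.pop(j)
    for blob in unmatched:                # leftovers become new objects
        tracks[next_id] = blob
        next_id += 1
    return tracks

print(associate({0: (100.0, 50.0)}, [(104.0, 53.0), (300.0, 220.0)]))
# -> {0: (104.0, 53.0), 1: (300.0, 220.0)}
```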
Once we have designed our set of filters, we have to connect them into logical arrangements so that a specific processing task is performed on the video stream. Figure 4 shows these logical arrangements. Any filter in the processing module is able to communicate with the OCB, and the communication between Kurento's filters and the OCB goes in both directions: the filters send information about the objects and events they detect, and they also query variables controlled by other modules or even by the user. Rebroadcasting data from one to many is one of the context broker's greatest features.
SYSTEM WORKFLOW
So far we have described each of the elements that constitute our architecture and their role within the system. However, the definition of an architecture includes, in addition to its components, the description of the relation between each element and the environment. In terms of our model, an internal interaction is one involving two components of the system, while an external interaction is one in which an element within the system interacts with an entity outside it, which in our case is the user. In this way, we can fully define the architecture.
As shown in Figure 2, the processing module interacts with three other elements: storage, context broker, and cameras. Although the context broker has an important role in managing the data coming in and out of the system, in the context of video surveillance the processing module can be considered the core module of the architecture.
Every time an entity in the OCB (which is how objects are called) is updated or modified, the context broker sends a notification to every client that has previously subscribed to this entity. In this way, every module keeps track of the information it needs. In Figure 5 we show an example of module communication: the processing module (through the Kurento GE) sends the event detection to the OCB, while the OCB sends commands to the processing module, such as start recording, add labels to video, etc. This communication is done through JSON messages.
Figure 5. Processing module and OCB communication through JSON messages.
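To make the JSON exchange of Figure 5 concrete, the sketch below uses the standard NGSIv2 REST endpoints exposed by the Orion Context Broker; the host names, entity identifiers, and attribute names are assumptions chosen for illustration.

```python
import requests

OCB = "http://orion.example.org:1026"   # assumed Orion Context Broker host

# Publish a detected event as an NGSIv2 entity. Entity/attribute names are
# assumptions; /v2/entities is the standard Orion endpoint.
event = {
    "id": "Event-cam01-000123",
    "type": "SecurityEvent",
    "category": {"value": "person_detected", "type": "Text"},
    "cameraId": {"value": "cam01", "type": "Text"},
}
requests.post(f"{OCB}/v2/entities", json=event)

# Subscribe a consumer (e.g. Cygnus for persistence) to all such events;
# the notification URL is an assumption.
subscription = {
    "subject": {"entities": [{"idPattern": ".*", "type": "SecurityEvent"}]},
    "notification": {"http": {"url": "http://cygnus.example.org:5050/notify"}},
}
requests.post(f"{OCB}/v2/subscriptions", json=subscription)
```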
In addition to being in charge of the management of real-time data, the OCB persists information by sending it to the notification storage module (Cygnus). Data persistence is a necessary step for the system to provide a means for the user to query data with respect to events that occurred at a given time. In this way, it becomes a very easy task to find a specific event in a video within a defined span of time.
As a final step, our system sends all the detected events, as well as the video stream of each camera, to the user. In other words, by using this approach we have a video surveillance system as a service.
PROTOTYPE
Figure 6. The implemented prototype consists of a set of cameras connected to the network, a couple of desktop computers for pre-processing, and a processing system implemented on FIWARE.
We have designed and implemented a prototype to test our system. For this prototype we are using four IP cameras and two desktop computers for video pre-processing tasks.
Functionalities of the prototype are complemented in the cloud through FIWARE GEs.
The proposed prototype can be separated into three stages.
The first stage includes the sensors, which in this case are cameras; however, we could include other kinds of input devices, such as fire and movement detectors. In this stage, we acquire the video stream, which is the information we send to the next phase. In the second stage, we compute the background subtraction, motion detection, and object tracking, from which we obtain information such as position, speed, number of objects, etc. In the last stage, we perform the analysis of the video, which, of all the video processing tasks, is the one that requires the greatest processing capacity. At the end of this process, we obtain an interpretation of what is happening in the scene; furthermore, in this stage we also store both the video stream and the relevant information for posterior analysis if necessary.
We have also implemented a graphical user interface (see Figure 7). The GUI was implemented in Web2Py. Four different tabs are at the user's disposal to interact with the set of functionalities our system offers.
The main tab is used for visualizing a single camera and a historical view of the events detected so far. In the multiple tab, the user may visualize all of the cameras integrated in the surveillance system. Within the video search tab, the system allows the user to query previously stored videos by defining search criteria based on different attributes such as date, type of event, camera id, etc. The management tab displays the options available for customizing the way detected events are highlighted; it also enables the user to register new cameras.
CONCLUSIONS AND FUTURE WORK
To achieve more intelligent video surveillance systems, large amounts of data need to be collected at each instant and then analyzed to extract useful information, make decisions, and create an intelligent response. In this work we have proposed a video surveillance architecture based on the idea of cloud computing. With this approach it is possible to provide video surveillance as a service, which gives us the possibility of having a portable and scalable system that can be used in different scenarios. Furthermore, as a result of the video analysis it is possible to obtain a description of what is happening in a monitored area and then take an appropriate action based on that interpretation. In addition, with this approach we are also able to add different kinds of sensors besides cameras, which gives us the possibility to manage digital devices as in an IoT framework. Our system is based on the FIWARE middleware and has been implemented in a real scenario.
As future work we want to implement more filters and incorporate them into the processing module. We also want to incorporate other types of sensors, such as motion detection sensors, temperature sensors, etc., which we believe would improve the system's understanding of a given situation. From a scalability point of view, as the number of cameras and algorithms increases, more computing resources are required. For this reason, we are also exploring a distributed approach.
Figure 2. System Architecture. In this work a system architecture for smart video surveillance based on the idea of cloud computing is proposed. The architecture is composed of five main functional blocks: Access control, Context Broker, Event Storage, Video Storage, and Processing module. In Figure 2 the overall architecture of the smart video surveillance system is presented; each block has a unique role in the process of synthesizing data from a real-time video stream into a human-understandable format.
Figure 4. Example of filter logic arrangements based on Kurento's pipeline.
Figure 7. GUI for the video surveillance system prototype. | 5,080.6 | 2017-09-25T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Probing and dressing magnetic impurities in a superconductor
We propose a method to probe and control the interactions between an ensemble of magnetic impurities in a superconductor via microwave radiation. Our method relies upon the presence of sub-gap Yu-Shiba-Rusinov (YSR) states associated with the impurities. Depending on the sign of the detuning, radiation generates either a ferro- or antiferromagnetic contribution to the exchange interaction. This contribution can bias the statistics of the random exchange constants stemming from the RKKY interaction. Moreover, by measuring the microwave response at the YSR resonance, one gains information about the magnetic order of the impurities. To this end, we estimate the absorption coefficient as well as the achievable strength of the microwave-induced YSR-interactions using off-resonant radiation. The ability to utilize microwave fields to both probe and control impurity spins in a superconducting host may open new paths to studying metallic spin glasses.
DOI: 10.1103/PhysRevResearch.1.033091
I. INTRODUCTION
The nature of interactions between magnetic impurities embedded in a metallic host gives rise to an intriguing state of matter: a spin glass [1]. The Ruderman-Kittel-Kasuya-Yosida (RKKY) exchange interaction between the impurities is carried by itinerant electrons and alternates in sign, depending on the inter-impurity separation [2][3][4]. The random position of the impurities with respect to each other results in a random-sign exchange interaction, frustrating the magnetic order in a system of localized spins. The efforts to understand the resulting low-temperature spin glass phase and the corresponding phase transition have led to the introduction of several important concepts in condensed-matter physics, including the Edwards-Anderson [5] and functional [6] order parameters. Moreover, these efforts have also motivated an ever-expanding tool set of quantum control techniques aimed at directly controlling the interactions between magnetic impurities.
Remarkably, even the simplest spin glass model introduced by Sherrington and Kirkpatrick [7] in direct analogy to the Curie-Weiss model of a ferromagnet turns out to be extremely rich and, unlike the Curie-Weiss model, not amenable to a straightforward mean-field theory treatment [8,9]. The frustration of the magnetic moments manifests itself in both the thermodynamic and electron transport properties of a normal metal with a magnetic element dissolved in it. Starting with magnetic susceptibility measurements on AuFe alloys [10], there are a substantial number of such studies performed on bulk samples [11,12]. With the development of mesoscopic systems, electron transport through mesoscale-sized alloys also received its fair share of attention; for example, the remanence of the resistance (i.e., its dependence on the cooling protocol) of a mesoscopic AgMn device was investigated in [13], while quantum interference effects in the conductance of CuMn and AgMn were studied in Refs. [14,15], respectively.
FIG. 1. (a) Schematic of an S-s-S junction consisting of two large superconducting banks connected by a narrow constriction, with the arrows representing magnetic impurities. (b) Energy levels of the YSR states associated with two magnetic impurities in a superconductor. The radiation matrix element $M_{\rm YSR}$ depends on the mutual spin orientation of the impurities (i.e., it vanishes for parallel magnetic moments), resulting in a spin-dependent AC Stark shift that translates into an effective, microwave-induced, spin-spin interaction.
In this paper, we explore a method for investigating mesoscopic spin glasses using techniques recently perfected in the development of superconducting qubit technologies [16]. In particular, we consider the possibility of utilizing microwave radiation to directly probe and possibly control the many-body state of an ensemble of magnetic moments embedded in a thin superconducting bridge (Fig. 1).
Much as in a normal metal, magnetic impurities in a superconductor also exhibit random-sign RKKY interactions. This RKKY interaction is hardly modified by superconductivity, so long as the typical distance $d$ between the impurities is shorter than the superconducting coherence length $\xi$. For impurities separated by larger distances, the interaction is instead antiferromagnetic and dominated by a virtual process involving Yu-Shiba-Rusinov (YSR) states [17]; however, at such distances, the interactions are typically weak since they decay exponentially with $d/\xi$. Herein lies the intuition behind our approach: to utilize microwave driving to enhance the virtual hybridization between the superconducting condensate and the YSR states.
With respect to such microwave excitation, there are two main differences between normal-metal and superconducting hosts. The first is that there exists a gap in the spectrum of excitations in a superconductor. In the absence of a magnetic field and impurities, a conventional s-wave superconductor, such as aluminum, possesses time-reversal symmetry. As a result, the gap is "hard": at low temperatures there is a frequency threshold, $\omega_{\rm th} = 2\Delta/\hbar$, for the absorption of electromagnetic radiation. The second difference is that a magnetic impurity in a superconductor creates a localized YSR state with energy $E_{\rm YSR}$ within the gap [18][19][20][21][22][23][24][25][26]. A single YSR state may host no more than one quasiparticle, and therefore cannot facilitate absorption of a photon by exciting a Cooper pair from the condensate [27]. However, a pair of YSR states separated by a distance $\lesssim \xi_{\rm YSR}$ creates a discrete-energy state for an electron pair, where $\xi_{\rm YSR} = \xi\sqrt{\Delta/(\Delta - E_{\rm YSR})}$ is the characteristic length scale of a YSR state. To this end, at low temperatures, the subgap absorption results from a process in which a microwave photon transfers a Cooper pair from the condensate onto the pair of YSR states, leading to an absorption line centered at $\omega = 2E_{\rm YSR}/\hbar$. Crucially, the magnitude of this absorption by a YSR pair depends on the mutual orientation of the magnetic moments. For moments oriented in parallel, the associated pair of YSR states cannot accept a singlet Cooper pair (for simplicity, we assume that there is no spin-orbit coupling). Thus, the absorption is maximized for antiparallel moments and varies as $F[\mathbf{S}(\mathbf{R}_1), \mathbf{S}(\mathbf{R}_2)] = 1 - \hat{\mathbf{S}}_1\cdot\hat{\mathbf{S}}_2$, where $\hat{\mathbf{S}}_{1,2} = \mathbf{S}_{1,2}/S$ are the unit vectors indicating the orientation of the magnetic moments. Therefore, the subgap absorption coefficient provides information regarding ferromagnetic order at the scale $|\mathbf{R}_1 - \mathbf{R}_2| \lesssim \xi_{\rm YSR}$. The absorption line width and its detailed shape depend on the inevitable spread of the contact exchange interaction [28][29][30][31] and the overlap between the YSR states [24,[32][33][34]].
For an ensemble of moments with density $n \lesssim \xi^{-3}$, the many-body ferromagnetic order can be deduced from the ensemble average $\bar F = \mathcal{N}^{-1}\sum_{i\neq j} F[\mathbf{S}(\mathbf{R}_i), \mathbf{S}(\mathbf{R}_j)]\,\theta(\xi_{\rm YSR} - |\mathbf{R}_i - \mathbf{R}_j|)$ (here $\theta$ is the Heaviside step function), where $\mathcal{N} \sim n^2\xi^3 V (w/\xi)^{3-D}$ is a normalization factor and $V$ is the volume of the sample, which we think of as either a wire ($D = 1$) or a film ($D = 2$) with transverse dimension $w \lesssim \xi$. This quantity is related to the dissipative part of the conductivity integrated over the absorption line, Eq. (1), where $n$ is the impurity concentration, $\sigma_n$ and $\nu$ are, respectively, the normal-state conductivity and the density of states at the Fermi level, and $l$ is the electron elastic mean free path; note that the last factor in Eq. (1) extrapolates between the regimes of long and short mean free paths. As aforementioned, in the absence of microwaves, there are two components of the inter-impurity interaction carried by virtual excitations of the itinerant electrons. The first one (dominant at $d \lesssim \xi$) comes from the continuum of Bogoliubov quasiparticles and is responsible for the conventional indirect exchange coupling, i.e., the RKKY interaction in normal metals and its counterpart in superconductors. The second one is due to the discrete YSR states and is specific to superconductors. Borrowing the idea of "off-resonant dressing" from quantum optics [35], we note that off-resonant microwave radiation creates an additional channel for virtual transitions of Cooper pairs onto a pair of YSR states. For sufficiently strong drives, the corresponding amplitude may successfully compete with the one existing in the absence of radiation [17], while the conventional RKKY component is only weakly affected, so long as the radiation remains far detuned from the gap edge. The sign of the effective interaction induced by the off-resonant drive depends on the sign of the detuning, $\omega - 2E_{\rm YSR}$, and is ferromagnetic for $\omega - 2E_{\rm YSR} > 0$.
One can estimate the strength of the microwave-induced interaction, $J^{\rm ind}_{\rm YSR}$, relative to the RKKY component $J_{\rm RKKY}$ as in Eq. (2), where $E$ is the electric field of the microwave. In order for superconductivity to remain intact, the last factor there must be small; to avoid resonant absorption, the denominator of the first factor must exceed the YSR absorption line width. Together, these two conditions set a limit on the strength of the "dressed" interaction. Intriguingly, in a dilute system, at $d \sim \xi$, the dressed interaction may compete with the conventional RKKY component, opening up the possibility of studying the de Almeida-Thouless line [9] in a spin glass.
II. MODEL
Our starting point is the BCS Hamiltonian of an s-wave superconductor with magnetic impurities. The Bogoliubov-de Gennes (BdG) Hamiltonian takes the form $\mathcal{H} = \xi_{\mathbf{p}}\tau_z + \Delta\tau_x + \sum_i J_i\,\mathbf{S}_i\cdot\boldsymbol{\sigma}\,\delta(\mathbf{r} - \mathbf{R}_i)$, where $\xi_{\mathbf{p}} = -\frac{1}{2m}\nabla^2 - \mu$ is the kinetic energy (we set $\hbar = 1$), $\mu$ the chemical potential, and $\Delta$ the superconducting order parameter. The Hamiltonian $H = \int d\mathbf{r}\,\Psi^\dagger\mathcal{H}\Psi/2$ is written in conventional Nambu spinor notation, where $\Psi = [\psi_\uparrow, \psi_\downarrow, \psi^\dagger_\downarrow, -\psi^\dagger_\uparrow]^T$ and $\tau$ ($\sigma$) are Pauli matrices acting on the particle-hole (spin) space. The last term in the Hamiltonian represents the contact interaction between electrons and the impurity spins, where $J_i$ characterizes the coupling strength and $\mathbf{S}_i$ is the spin of the $i$th impurity located at position $\mathbf{R}_i$.
The energy of the subgap YSR bound state localized around impurity $i$ is $E^i_{\rm YSR} = \Delta\,(1 - \alpha_i^2)/(1 + \alpha_i^2)$ [18][19][20], where $\alpha_i = \nu\pi J_i S/2$ is a dimensionless exchange coupling. We treat the impurity spins classically, assuming they have equal magnitudes $S$ but, in general, different orientations. The four-component eigenspinors of the BdG Hamiltonian $\mathcal{H}$ corresponding to the subgap YSR states are the particle (+) and hole (−) states $\Phi^\pm_{i,S_i}$ with energies $\pm E^i_{\rm YSR}$, related by $\Phi^-_{i,S_i} = CT\,\Phi^+_{i,S_i}$, where the antiunitary symmetry transformation is $CT = \tau_y\sigma_y K$ with $K$ denoting complex conjugation; $N_i^2 = 2\pi\nu\,\alpha_i/(1 + \alpha_i^2)$ is a normalization factor, $r_i = |\mathbf{r} - \mathbf{R}_i|$ the relative distance from the impurity, $\delta_i = \tan^{-1}(\alpha_i)$ the phase shift, $|\uparrow\rangle$ the $+1$ eigenstate of $\sigma_z$, and $U(\hat{\mathbf{S}}_i)$ a unitary rotation operator aligning the quantization axis of the Nambu spinor with the direction of the impurity spin. As we are interested in the physics resulting from low-energy microwave excitation, we project the electron field operator onto these subgap YSR states, Eq. (5), where $\gamma_i$ is the annihilation operator of the YSR state located at the $i$th impurity. In the projected Hamiltonian, we focus on densities $n \ll 1/(\xi\lambda_F^2)$, for which we can ignore the hopping term $\gamma^\dagger_i\gamma_j$. The calculation of the radiation matrix element then reduces to an effectively two-impurity problem. At higher densities, one enters the phase of gapless superconductivity; see Ref. [32] and the references therein.
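As a quick numerical illustration of the classical-spin YSR energy quoted above (parameter values below are illustrative only):

```python
import numpy as np

# YSR bound-state energy for a classical impurity spin:
# E_YSR = Delta * (1 - alpha**2) / (1 + alpha**2), with alpha = pi*nu*J*S/2.
def ysr_energy(alpha, delta=1.0):
    return delta * (1.0 - alpha**2) / (1.0 + alpha**2)

for alpha in (0.0, 0.5, 1.0, 1.5):
    print(f"alpha = {alpha:.1f}: E_YSR = {ysr_energy(alpha):+.3f} Delta")
# alpha -> 0: level at the gap edge (+Delta); alpha = 1: level crosses zero;
# alpha > 1: E_YSR < 0, i.e. the screened (quasiparticle-bound) ground state.
```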
III. RADIATION MATRIX ELEMENT
We now turn to calculating the matrix element $M_{\rm YSR}$ corresponding to radiation-assisted YSR-pair creation. When the system is coupled to a weak microwave field, the vector potential $\tilde{\mathbf{A}}$ enters as $\frac{\hbar}{i}\nabla \to \frac{\hbar}{i}\nabla + \frac{e}{c}\tilde{\mathbf{A}}$, and the superconducting order parameter is generally both complex and spatially dependent, i.e., $\Delta(\mathbf{r}) = |\Delta|e^{i\theta(\mathbf{r})}$. We choose to work in the London gauge, where the order parameter is real and the integral of the new vector potential, $\mathbf{A} = \tilde{\mathbf{A}} - \frac{\hbar c}{e}\nabla\theta(\mathbf{r})$, yields a gauge-invariant phase difference that gives rise to the supercurrent in a superconductor.
The electromagnetic perturbation to the BdG Hamiltonian is given by $H_{\rm EM} = \int d^3r\,(e/2)\,\Psi^\dagger(\mathbf{A}\cdot\mathbf{v} + \mathbf{v}\cdot\mathbf{A})\Psi$, where $\mathbf{v}$ is the velocity operator. Using Eq. (5), we obtain the perturbation Hamiltonian projected onto the YSR subspace. We assume a negligible thermal population of YSR states and thereby ignore hopping terms $\gamma^\dagger_1\gamma_2$; the associated current density $\mathbf{J}_{1,2}$ is given by Eq. (6). Since the relevant microwave frequencies ($\omega \lesssim 2\Delta$) correspond to wavelengths significantly longer than both the superconducting coherence length and the characteristic YSR length scale, $\mathbf{A}$ can be treated as position independent. Moreover, since the integration domain contains all of space, the integral of the antisymmetric portion of the integrand vanishes. The integrand can therefore be symmetrized, Eq. (7), where we have also used the fact that the current density, Eq. (6), is rotationally symmetric around the axis connecting the two impurities, $\mathbf{R} = \mathbf{R}_2 - \mathbf{R}_1$ and $\hat{\mathbf{R}} = \mathbf{R}/|\mathbf{R}|$, so that only the component of the vector potential parallel to this axis contributes to absorption.
Owing to the rotational symmetry, the integral is effectively two-dimensional and can be done in elliptical coordinates [27], $r_+ = (r_1 + r_2)/2$, $r_- = r_1 - r_2$, in which the integral measure $d^3r$ reduces to a two-dimensional one (up to a factor of $2\pi$ from the azimuthal integration). In these new coordinates, $r_+^2 - r_-^2/4 = r_1 r_2$, exactly canceling the power-law decay of the YSR wave function.
For two identical impurities, the current density respects an additional reflection symmetry about the plane perpendicular to and bisecting $\mathbf{R}$. This symmetry would imply that the integral in Eq. (7) vanishes. Thus, a nonzero radiation matrix element requires the breaking of this reflection symmetry. In practice [30], this is always the case, as one invariably observes fluctuations in the exchange coupling strengths, suggesting that the reflection symmetry is naturally broken by disorder effects. Therefore, we assume hereafter that the impurities are not identical, $\alpha_1 \neq \alpha_2$.
After a straightforward but tedious calculation, the general expression for the matrix element can be obtained. Assuming that $\big||\sin 2\delta_1| - |\sin 2\delta_2|\big| \ll \xi/R$, the matrix element can be expanded to first order in the coupling difference, $|\alpha_1 - \alpha_2|$, and in $1/(k_F R)$, yielding Eq. (8) [36], where $\mathbf{E} = -\partial\mathbf{A}/\partial t$ is the electric field of the applied microwaves and $\alpha = (\alpha_1 + \alpha_2)/2$ is the average coupling strength of the two impurities. One important and intriguing observation: due to the longer intrinsic YSR length scale $\xi_{\rm YSR} = \xi/|\sin 2\delta|$, as well as the absence of a power-law decay in the matrix element, the corresponding microwave-induced interaction has a significantly longer range than both the RKKY interaction and the bare YSR interaction [17] (in the absence of microwaves).
FIG. 2. Schematic cross-over diagram of the integrated sub-gap conductivity as a function of the elastic mean free path $l$ and the density of magnetic impurities $n$. In the clean and dense limit (upper right), the YSR states are strongly hybridized and the integrated conductivity scales $\propto n$ [32]. At lower densities ($n\xi^3 \ll 1$), the hybridization is negligible and the absorption requires two magnetic impurities within $\xi$ of each other, giving a scaling $\propto n^2$ (bottom right). In the dirty case, $l \ll \min(\xi, 1/n\xi^2)$, the dependence on $n$ is similarly quadratic (left). In the dense regime, $n\xi^3 \gg 1$ (top), the interactions between the magnetic impurities are dominated by the RKKY mechanism (favoring a spin glass phase), while at low density, $n\xi^3 \ll 1$ (bottom), the exchange interaction is exponentially weak and antiferromagnetic (AFM).
IV. EXPERIMENTAL IMPLEMENTATION
In this section, we propose an experimental implementation based upon superconducting circuits, which enables one to utilize microwave fields to both probe the spin state of the impurities as well as control their effective interactions. Since our proposed microwave-dressed interactions require both a superconducting background and a supercurrent, a natural setup is an S-s-S junction created from two large superconducting leads linked by a constriction (Fig. 1); such a setup has previously been used in experiments to probe and control Andreev bound states [37], see also [38][39][40].
An oscillating bias potential, $V\cos\omega t$, can then be applied to the leads to create a time-dependent supercurrent $\mathbf{j} \propto \frac{\hbar e}{m}\nabla\theta$ governed by the Josephson relation $\partial_t\theta = \frac{2eV}{\hbar}\sin\omega t$, where $\theta$ is the gauge-invariant phase difference between the leads and $\omega$ is the microwave frequency.
To probe the many-body state of the impurities, we propose to apply a resonant microwave drive so that the ordering can be inferred from the integrated dissipative part of the subgap conductivity, $\int_0^{2\Delta}\sigma(\omega)\,d\omega$. The dissipative conductivity $\sigma$ is related to the energy absorption rate via $\Gamma\,\omega = \sigma E^2/2$, where $\Gamma$ is the transition rate obtained by plugging Eq. (8) into Fermi's golden rule. At low temperatures, the initial state consists of unoccupied YSR states. The integrated subgap conductivity depends on the average distance between impurities and the elastic mean free path $l$. It can be analyzed in various limits, which are detailed below and summarized in Fig. 2.
In the low-density limit $n\xi_{\rm YSR}^3 \ll 1$, the YSR states are well localized, so that the hybridization-induced energy splitting of the YSR states is negligible compared to $\Delta$. The subgap conductivity can be written as a sum of pair-creating transitions, Eq. (9), where $M^{i,j}_{\rm YSR}$ is the radiation-assisted matrix element to create a pair of YSR quasiparticles on the $i$th and $j$th impurity. Assuming a uniform distribution of uncorrelated YSR levels in a narrow band, $W \ll 2E_{\rm YSR}$, and ensemble-averaging the conductivity, we find Eq. (10) [27], where $\bar\alpha$ is the mean value of $\alpha_i$ and $\bar E_{\rm YSR}$ the average YSR energy. The normalized distribution function $g(\omega, W) = 2(1 - |\omega|/W)\,\Theta(W - |\omega|)/W$ (specific to a uniform YSR band) characterizes the energy dependence of the absorption and has a peak of width $W$. We normalize the conductivity by its normal-state value $\sigma_n = 2e^2\nu\,\tfrac12 v_F l$, where $l$ is the electron mean free path from nonmagnetic impurities; Eq. (10) is valid in the clean limit $l \gg \xi$. The dirty limit $l \ll \xi$ is obtained by replacing $\xi \to l$ in the second line of Eq. (10) [32,41].
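A quick Monte Carlo sanity check of this lineshape is sketched below: drawing uncorrelated levels uniformly from a band of width $W$ and histogramming the pair energies $E_i + E_j$ reproduces a triangular peak centered at $2\bar E_{\rm YSR}$ (the band parameters are illustrative).

```python
import numpy as np

# Monte Carlo check of the triangular pair-absorption lineshape: uncorrelated
# YSR levels drawn uniformly from a band of width W around E_ysr. Values are
# illustrative (Delta = 1).
rng = np.random.default_rng(0)
E_ysr, W = 0.6, 0.1
levels = rng.uniform(E_ysr - W / 2, E_ysr + W / 2, size=100_000)
partners = rng.permutation(levels)       # pair each level with another one
pair_energies = levels + partners        # photon energies for pair creation
hist, edges = np.histogram(pair_energies, bins=60, density=True)
# The histogram peaks at 2*E_ysr = 1.2 and falls off linearly to zero at
# 2*E_ysr +/- W, i.e. a triangular line of half-maximum width W.
```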
In the high-density regime, $n\xi_{\rm YSR}^3 \gg 1$, the YSR states strongly hybridize. The conductivity in this case is derived in Ref. [32]. At a qualitative level, the high-density limit can be obtained from Eq. (10) by replacing $\xi$ (in the second line) by an effective mean free path arising from magnetic impurities, $\xi \to 1/(n\xi^2)$. When the above conductivity is integrated over the subgap states, one naturally recovers Eq. (1).
Finally, we estimate the achievable strength of the dressed interactions induced by an off-resonant microwave field. The relevant energy levels and matrix element are depicted in Fig. 1(b). At leading order, one finds that the radiation-assisted YSR pair creation results in an effective spin-spin interaction originating from a spin-dependent AC Stark shift of the ground-state energy, Eq. (11), where we have neglected a spin-independent overall shift. To compare the relative strength of this dressed interaction with the RKKY component, we use Eq. (8) and the expression for the RKKY interaction from Ref. [17], resulting [27] in the estimate presented in Eq. (2).
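Since Eq. (11) is not reproduced in this excerpt, the following schematic second-order (AC Stark) form is a hedged reconstruction consistent with the statements above; the prefactors are not those of the paper.

```latex
% Hedged, schematic reconstruction of the AC Stark shift behind Eq. (11):
% second-order perturbation theory in the drive, keeping the near-resonant
% term that dominates for small detuning.
\[
  \delta E_{\mathrm{gs}}
  \;\approx\; \frac{|M_{\mathrm{YSR}}|^{2}}{\omega - 2E_{\mathrm{YSR}}},
  \qquad
  |M_{\mathrm{YSR}}|^{2} \;\propto\; 1 - \hat{\mathbf{S}}_{1}\!\cdot\!\hat{\mathbf{S}}_{2}.
\]
% For blue detuning, \omega - 2E_{YSR} > 0, antiparallel moments (which
% maximize |M_{YSR}|^2) are pushed up in energy, so parallel alignment is
% favored: a ferromagnetic induced interaction, as stated in the text.
```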
V. CONCLUSION
In summary, we have shown that the electronic subgap states hosted by magnetic impurities in a superconductor provide a way to access magnetic order. This opens the possibility to probe and control metallic spin glass physics in a superconducting narrow-bridge junction by using microwave driving [ Fig. 1(a)]. Keeping the transverse size w of the bridge thinner than the London length ensures that the supercurrent is approximately uniform and couples to the magnetic moments in the full volume. Unlike conventional Andreev bound states, the YSR levels are insensitive to a static phase difference across the junction which provides a simple way to distinguish the respective contributions to the dissipative conductivity. Looking forward, our work also opens the door to an intriguing quantum information platform where magnetic impurities in a superconducting host play the role of quantum memories, while microwave driving can lead to on-demand long-range gates [42]. Here, the absence of a power-law decay in the radiation matrix element could enable all-to-all connectivity between qubits as well as multibody interactions, both of which are important for reducing the gate depth of certain quantum algorithms [43,44] and realizing frustrated long-range spin models [45,46]. | 4,970 | 2019-05-17T00:00:00.000 | [
"Physics"
] |
Flexible optofluidic waveguide platform with multi-dimensional reconfigurability
Dynamic reconfiguration of photonic function is one of the hallmarks of optofluidics. A number of approaches have been taken to implement optical tunability in microfluidic devices. However, a device architecture that allows for simultaneous high-performance microfluidic fluid handling as well as dynamic optical tuning has not been demonstrated. Here, we introduce such a platform based on a combination of solid- and liquid-core polydimethylsiloxane (PDMS) waveguides that also provides fully functioning microvalve-based sample handling. A combination of these waveguides forms a liquid-core multimode interference waveguide that allows for multi-modal tuning of waveguide properties through core liquids and pressure/deformation. We also introduce a novel lifting-gate lightvalve that simultaneously acts as a fluidic microvalve and optical waveguide, enabling mechanically reconfigurable light and fluid paths and seamless incorporation of controlled particle analysis. These new functionalities are demonstrated by an optical switch with >45 dB extinction ratio and an actuatable particle trap for analysis of biological micro- and nanoparticles.
for active tuning by varying both pressure and core fluid. A 5 μm wide and 7 μm tall solid-core waveguide (dark grey) is used as an input for the wide liquid-core MMI section (width w_0, length L). The MMI is surrounded laterally by 50 μm wide air channels, which enable both optical waveguiding and tuning of the MMI width through pneumatic and fluidic pressure, as illustrated on the right side of Fig. 1b.
The multimode interference leads to the formation of N images of the input mode for a given length, L, and pressure, P, according to the self-imaging condition N λ L = n_c w(P)² (eq. (1)). This pattern formation is visualized in Fig. 1c (top) for a static MMI (P = 0; w_0 = 50 μm) filled with fluorescent dye in ethylene glycol (n_c = 1.45) and excited with λ = 532 nm laser light. Clean spot patterns are observed over a distance of several millimeters, in excellent agreement with eq. (1) and with finite difference method simulations shown in Fig. 1c (bottom). Liquid-core MMIs with widths between 50 and 200 μm (25 μm increments) were fabricated and characterized, as presented in Fig. 1d. We were able to controllably vary the spot number from 1 to 34 images with device lengths of less than 1 cm, all in excellent agreement with theory (lines). Such MMIs, therefore, provide a wide parameter space for multi-spot particle detection with high signal-to-noise ratio 21,27.
Next, we turn to dynamic tuning of these optofluidic elements. The first mechanism is replacement of the guiding liquid, i.e., of the waveguide core refractive index, n_c. Fig. 1e shows MMI tuning using different mixtures of ethylene glycol and water. Specifically, a sampling of waveguides (with various widths, w_0, and spot numbers, N) was used to demonstrate the linear relationship between core refractive index, n_c, and image length, L. Tuning of the spot number over a very wide range from 2 to 33 was realized, and excellent agreement between theoretical and experimental results was found. Thin sidewalls made from a pliable material (PDMS) allow for controlling a microfluidic channel's width through both inward and outward pressure 28. Here, we use this principle for pressure-based dynamic tuning of the optofluidic MMI devices. Inward pneumatic pressure applied to the side channels causes a decrease in the MMI width (Fig. 1b, right) and thus a decreased spot number, N, at a given length, L. Conversely, positive fluidic pressure in the core increases both w and N, as seen in Fig. 1b,f. Note that all data points in Fig. 1f are at a given length L that yields an integer spot number at zero applied pressure. The data closely match theoretical expectations (lines in Fig. 1f). Furthermore, there is no notable decrease in fluorescence signal during sidewall deformation, indicating negligible effects on the optical loss of the waveguide.
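A minimal numerical sketch of this tuning behavior, based on the self-imaging condition quoted above; the linear pressure response of the sidewalls and its coefficient are assumptions made for illustration.

```python
# Spot number from the self-imaging condition N * lambda * L = n_c * w(P)**2,
# with an assumed linearized pressure response of the pliable PDMS sidewalls.
def spot_number(L_um, w0_um, n_c=1.45, wavelength_um=0.532,
                dw_dP_um_per_psi=0.5, P_psi=0.0):
    w = w0_um + dw_dP_um_per_psi * P_psi   # outward core pressure widens the MMI
    return n_c * w**2 / (wavelength_um * L_um)

print(spot_number(L_um=1700.0, w0_um=50.0))             # ~4.0 spots, static MMI
print(spot_number(L_um=1700.0, w0_um=50.0, P_psi=4.0))  # ~4.3: pressure adds spots
```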
We now turn to introducing a new approach for a fully optically and fluidically reconfigurable optofluidic platform. At its heart is an actuatable microvalve that simultaneously acts as an optical waveguide and actively moderates fluid flow, dubbed here a "lightvalve". Our implementation is based on lifting-gate microvalves that have been used in microfluidic devices for complex bioassays 29,30. Figure 2a shows the schematic design of the lightvalve, with the middle images showing its static architecture in cross-section and side view. It is composed of three PDMS layers: a control layer (I), a waveguide valve layer (II), and a substrate (III). The control layer, I, is designed to allow for both push-down (positive pressure) and lift-up (negative pressure) actuation (Fig. 2a). The obvious litmus test for photonic functionality of the lightvalve is operation as an on-off switch, which is reported in Fig. 2c for a 0.6 mm long valve. The top trace shows the temporal pressure sequence for the valve, and the two bottom traces show the optical transmission across the valve in push-down (middle, red) and lift-up (bottom, blue) modes. Successful and repeatable switching with excellent extinction is observed for both pressure modes. Cycle rates can reach ~100 Hz and are limited by the microfluidic control system. The switches operated without degradation for over 100,000 switching cycles in both modes.
Next, we analyzed the on-off optical switching efficiency for lightvalves of different lengths operated in lift-up mode. The results are displayed in Fig. 2d and show a steep increase in performance at around 500 μm length (with control height h_c = 100 μm). This is due to the fact that optical switching in lift-up operation relies on bending of the entire membrane formed by layer II; as such, when the effected membrane bend is small, optical rejection is low. Figure 2d shows that the lightvalve switches off for length/height ratios (L_v/h_c) above 5, and the on-off ratio continues to improve up to L_v/h_c ~ 10. At even longer lengths, on-off ratios become inconsistent due to membrane deformations during actuation.
Push-down operation, on the other hand, is relatively length-independent, as it relies only on deformation of the waveguide structure at the beginning of the lightvalve, which leads to poor mode coupling between the excitation and valve waveguides. Figure 2d shows that the on-off ratio depends on the applied pressure for a short valve length, L_v = 300 μm. After first reaching a maximum at 3 psi due to optimized optical mode coupling, the transmission drops dramatically, resulting in an on-off ratio of ~45 dB at 40 psi and indicating excellent light blocking capability over short valve lengths.
Finally, we demonstrate an implementation of the lightvalve as a functional element that unites both the fluid handling and photonic functions of a biodetection assay. To this end, the lightvalve is built as an annular structure, shown schematically in Fig. 3a. Fluidically, the lightvalves can be used to mechanically trap objects within the annulus when lowered into the channel. We fabricated annuli with 5-80 μm diameters, enclosing volumes between 140 fL and 35 pL. The lightvalves also act as peristaltic pumps for refreshing fluid within the traps, by connecting three or more valves in series and actuating them sequentially in lift-up mode. Optically, the annulus enables in-plane optical interrogation of trapped particles using light that traverses the valve ring along the straight waveguide path. The optical path shown in Fig. 3a defines the optical excitation and collection volume of the trap. The solid-core waveguides are narrow enough to create effectively single vertical and lateral optical modes, as shown in Fig. 3b. This allows for implementing advanced optical spectroscopy methods on small numbers of particles trapped inside the annulus. We illustrate this capability using fluorescence correlation spectroscopy (FCS). Figure 3c (left) shows top-down camera images of 3, 5, and 10 trapped fluorescent microbeads (note that only beads within the excitation volume are fluorescing in the image). The corresponding FCS traces, acquired by in-plane fluorescence detection along the solid-core PDMS waveguides, are shown on the right. When the ratio of physical trap volume and optical excitation volume V_exc is taken into account, the particle concentration c obtained from the FCS curves (c = 1/(G(0)·V_exc)) agrees well with the value obtained by camera observation.
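A minimal FCS sketch of that concentration estimate follows; the synthetic count trace, the excitation volume, and the normalization details are assumptions, and the standard relation ⟨N⟩ = 1/G(0) is used.

```python
import numpy as np

# Minimal FCS sketch: estimate particle number/concentration from the
# amplitude of the normalized autocorrelation, <N> = 1/G(0). The synthetic
# trace and the excitation volume are assumptions for illustration.
def autocorrelation(signal):
    f = signal - signal.mean()
    n = f.size
    F = np.fft.rfft(f, 2 * n)              # zero-padded FFT -> linear correlation
    corr = np.fft.irfft(F * np.conj(F))[:n]
    corr /= np.arange(n, 0, -1)            # unbiased lag normalization
    return corr / signal.mean() ** 2       # G(tau); G(0) ~ 1/<N>

rng = np.random.default_rng(1)
counts = rng.poisson(lam=5.0, size=100_000).astype(float)  # ~5 emitters on average
G = autocorrelation(counts)
V_exc_L = 20e-15                           # assumed 20 fL excitation volume, in liters
print("N =", 1.0 / G[0])                   # ~5 particles in the excitation volume
print("c =", 1.0 / (G[0] * V_exc_L), "particles per liter")
```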
Lastly, we demonstrate the lightvalve trap's ability to analyze single, trapped bioparticles, here fluorescently stained E. coli bacteria. Figure 3d shows the time-dependent fluorescence signal collected from the trap. An initially empty trap is closed at t ≈ 15 s and a single E. coli bacterium is trapped within the observation volume. The detected fluorescence decreases continuously over the bacterium's 40-second residence within the trap due to photobleaching. After 55 seconds, the bacterium is released by activating the lift-up function, followed by a series of actuations (i.e., fluid pumping) in search of another bacterium. The inset of Fig. 3d shows high signal when the trap is up and low signal when the trap is down. After 110 seconds, the trap is locked down again because a bacterium is detected above the background optical signal threshold. Subsequently, this bacterium diffuses in and out of the observation volume, yielding a fluctuating fluorescence signal. We note that FCS analysis of the two bacteria trapped here yields diffusion coefficients of ~0.5 μm²/s, as expected for a particle of ~1 μm diameter.
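The quoted diffusion coefficient can be sanity-checked against the Stokes-Einstein relation D = kT/(3πηd); the temperature and water viscosity below are assumed values, not experimental parameters from this work.

```python
# Stokes-Einstein estimate for a ~1 um particle in water; assumed values.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
T   = 298.0           # assumed room temperature, K
ETA = 0.89e-3         # viscosity of water at 25 C, Pa*s
D_P = 1.0e-6          # particle diameter, m

D = K_B * T / (3.0 * math.pi * ETA * D_P)    # diffusion coefficient, m^2/s
print(D * 1e12)   # ~0.49 um^2/s, consistent with the ~0.5 um^2/s above
```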
Discussion
In summary, we have introduced a new optofluidic platform that seamlessly marries optical and fluidic functions in a single chip. Based on combining solid- and liquid-core PDMS waveguides whose fabrication is compatible with purely microfluidic chips, we created devices that offer multi-modal photonic reconfigurability using core liquids, mechanical pressure, and motion. The potential of this approach was illustrated using widely tunable liquid-core MMI waveguides and by the introduction of novel lightvalves that regulate both liquid and light flow. Extremely efficient optical switching and the definition of physical particle traps for optical analysis were demonstrated. The fluidic valve shape and optical pathways created by the lightvalve can be designed independently and with great flexibility, making the lightvalve a powerful building block for future optofluidic devices.
Methods
Fabrication. The optofluidic chips were fabricated using soft lithography. As seen in Fig. 4, the workflow involves parallel fabrication of the waveguide valve and control layers. The solid-core optical waveguides are fabricated by spinning 5:1 (base:curing agent) PDMS (Sylgard) onto a 7 μm tall silanized31 SU-8 master (Microchem) at 6000 RPM for 30 minutes (spin speed and duration were optimized to minimize residual PDMS on top of the SU-8 features). A 2-hour cure at 60 °C ensures full polymerization of the waveguide core material. A subsequent 2-minute, 2000 RPM spin of 10:1 PDMS then creates a continuous membrane across the waveguide valve layer. This layer structure preserves the optical waveguide properties, as the polymer is transparent throughout the optical spectrum32 and 10:1 PDMS has a lower refractive index (n_10:1 ≈ 1.420) than 5:1 PDMS (n_5:1 ≈ 1.425)25. In parallel to waveguide fabrication, the control layer is fabricated by pouring and curing 10:1 PDMS on a silanized SU-8 master with 80 μm tall features. Once cured, the PDMS layer is peeled from the SU-8 master mold and ports (d = 1 mm) are punched to enable pneumatic access. After punching, the bottom of the control layer and the top of the waveguide/fluidic layer are activated via oxygen plasma (30 sec at 60 W power), aligned on a custom alignment stage, and brought into contact, whereupon the bond is enhanced via a 2-hour thermal activation in a 60 °C oven. Next, ports are punched into the stack to allow fluidic access, followed by another peeling and bonding process. This step occurs with negative pressure applied to the pneumatic ports to prevent bonding of the waveguide valve layer to the substrate layer. In the case of single-layer devices (i.e., the tunable optofluidic MMI), only the left-hand side of Fig. 4 is followed, replacing the 10:1 PDMS spin step with drop casting of 10:1 PDMS. Waveguide chips were diced using commercial razor blades to ensure good facet quality for low optical coupling losses33. Microscope images of three completed devices are shown in Fig. 5.
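The small index step between the two mixing ratios is what makes layer II guide light. A back-of-envelope check of the resulting waveguide parameters, using only the indices quoted above:

```python
# Index contrast, numerical aperture, and critical angle for a 5:1 PDMS
# core (n ~ 1.425) in 10:1 PDMS cladding (n ~ 1.420). Illustrative only.
import math

n_core, n_clad = 1.425, 1.420
na = math.sqrt(n_core**2 - n_clad**2)                 # numerical aperture
theta_c = math.degrees(math.asin(n_clad / n_core))    # TIR critical angle

print(f"index contrast: {n_core - n_clad:.3f}")       # 0.005
print(f"NA ~ {na:.3f}")                               # ~0.119
print(f"critical angle ~ {theta_c:.1f} deg")          # ~85.2 deg
```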
Experimental Setups. The optofluidic chips were stabilized by custom laser-cut acrylic manifolds, designed for simultaneous fluidic, pneumatic, and optical access. The chips were pneumatically operated using a custom control box (National Instruments and SMC), interfaced via LabVIEW. All optical experiments used fiber-coupled laser excitation sources that were butt-coupled to the PDMS optofluidic devices at the solid-core waveguide facets. Fiber vibrations were mitigated by touching the fiber facet to the waveguide facet. In-plane signal was collected via an objective (Newport) at the waveguide facet, spectrally filtered (filters varied depending on excitation/emission; Semrock), and focused into a connectorized multimode fiber attached to a single-photon avalanche photodiode (Excelitas). A time-correlated single-photon counting card, operated in time-tagged time-resolved mode, was used to accumulate and store the signal for downstream processing (PicoQuant). Out-of-plane chip monitoring and signal collection were simultaneously achieved using a custom compound microscope34.
Finite difference method (FDM) optical simulations of the liquid-core MMI waveguides were performed using Fimmwave, a commercial photonic design software (Photon Design). DH5α E. coli staining was accomplished using 50 μM Syto62 (Invitrogen). Once the nucleic acid was stained, the bacteria were pelleted, the excess dye was removed and replaced with T50 buffer, and the bacteria were injected into the fluidic inlet of the lightvalve trap device. | 3,109.4 | 2016-09-06T00:00:00.000 | [ "Physics" ] |
PERCOLATION CRITICAL PROBABILITIES OF MATCHING LATTICE-PAIRS
A necessary and sufficient condition is established for the strict inequality p_c(G*) < p_c(G) between the critical probabilities of site percolation on a one-ended, quasi-transitive, plane graph G and on its matching graph G*. When G is transitive, strict inequality holds if and only if G is not a triangulation. The basic approach is the standard method of enhancements, but its implementation has complexity arising from the non-Euclidean (hyperbolic) space, the study of site (rather than bond) percolation, and the generality of the assumption of quasi-transitivity. This result is complementary to the work of the authors ("Hyperbolic site percolation", arXiv:2203.00981) on the equality p_u(G) + p_c(G*) = 1, where p_u is the critical probability for the existence of a unique infinite open cluster. It implies for transitive, one-ended G that p_u(G) + p_c(G) ≥ 1, with equality if and only if G is a triangulation.
Strict inequalities for percolation probabilities
It is fundamental to the percolation model on a graph G that there exists a 'critical probability' p_c(G) marking the onset of infinite open clusters. Two questions arise immediately.
(a) What can be said about the value of p_c(G)? (b) For what values of the percolation density p is there a unique infinite cluster? These questions have attracted a great deal of attention since percolation was introduced by Broadbent and Hammersley [7] in 1957. They turn out to be more tractable when G is planar.
Amongst exact calculations of p_c(G), those for bond percolation on the square, triangular, and hexagonal lattices have been especially influential (see [16,23], and also the book [11]). Earlier discussion (falling short of rigorous proof) of these values was provided by Sykes and Essam [22] in 1964. The last paper also includes an account of site percolation on the triangular lattice, and a discussion of site percolation on a so-called 'matching pair' of planar lattices. This term is explained in the companion paper [13]; the current work is concerned with the matching pair (G, G*), where the so-called matching graph G* is defined as follows.
Let G = (V, E) be a planar graph, embedded in the plane R² in such a way that two edges may intersect only at their endpoints. A face of G is a connected component of R² \ E. The boundary of a bounded face F is comprised of edges of G. The matching graph of G, denoted G*, is obtained from G by adding all diagonals to all faces. See Figure 1.1. Evidently, G* = G when G is a triangulation. A graph with connectivity 1 or 2 may have a multiplicity of non-homeomorphic planar embeddings, and therefore there is potential ambiguity over the definition of its matching and dual graphs (see Theorem 2.1(c)).
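The construction is simple enough to phrase algorithmically. Below is an illustrative sketch for a finite plane graph, with each face supplied as its boundary cycle of vertices (computing the faces from an embedding is beyond this sketch); networkx is used only for graph bookkeeping.

```python
# Illustrative construction of the matching graph G* of a finite plane
# graph: add all diagonals within each face. Faces are supplied by hand as
# boundary cycles; computing them from an embedding is omitted here.
from itertools import combinations
import networkx as nx

def matching_graph(G: nx.Graph, faces: list) -> nx.Graph:
    G_star = G.copy()
    for face in faces:
        for u, v in combinations(face, 2):
            G_star.add_edge(u, v)        # diagonals; existing edges unaffected
    return G_star

# A single square face: G* gains the two diagonals (0, 2) and (1, 3).
G = nx.cycle_graph(4)
print(sorted(matching_graph(G, faces=[[0, 1, 2, 3]]).edges()))
```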
Remark 1.1. A face F of the above graph G may be unbounded, in which case its boundary comprises infinitely many edges and vertices. Such F generates an infinite complete subgraph of G*, on which a percolation process is trivial. We shall usually assume that all faces are bounded. Since our graphs are assumed quasi-transitive, this is equivalent to assuming that G is one-ended. (See [14], [2, Prop. 2.1].) For quasi-transitive graphs with two or infinitely many ends, see Remark 1.5.
Sykes and Essam presented motivation for the exact relationship

(1.1) p_c^site(G) + p_c^site(G*) = 1,

and this has been verified in a number of cases when G is amenable (see [6,16]). Note that, since G is a subgraph of G*, it is trivial that

(1.2) p_c^site(G*) ≤ p_c^site(G).

It is less trivial to prove strict inequality in (1.2) for non-triangulations, and indeed this sometimes fails to hold.
Suppose that G is planar, quasi-transitive, one-ended, and possibly non-amenable. If we are to embed G in a plane in an appropriate fashion, the plane in question may need to be hyperbolic rather than Euclidean. Site percolation in the hyperbolic plane is the subject of the recent paper [13], where it is proved, amongst other things, that

(1.3) p_u^site(G) + p_c^site(G*) = 1,

where p_u^site is the critical probability for the existence of a unique infinite open cluster. When G is amenable, we have p_c^site(G) = p_u^site(G), in agreement with (1.1) (see [18, Chap. 7] for a discussion of critical points of quasi-transitive, amenable graphs). By (1.2), we have p_u^site(G) + p_c^site(G) ≥ 1, and it becomes desirable to know when strict inequality holds. (When G is non-amenable, it is proved in [5] that p_c^site(G) < p_u^site(G).) Let T (respectively, Q) be the set of all infinite, connected, locally finite, plane, 2-connected, simple graphs that are in addition transitive (respectively, quasi-transitive). (It is explained in [13, Rem. 3.4] that the assumption of 2-connectedness is innocent in the context of site percolation.) A path (. . . , x_{−1}, x_0, x_1, . . . ) of G* is called non-self-touching if, for all i, j, two vertices x_i and x_j are adjacent if and only if |i − j| = 1.
Here is the main theorem of the current work, followed by a corollary.

Theorem 1.2. Let G ∈ Q be one-ended. Then p_c^site(G*) < p_c^site(G) if and only if G* contains some doubly-infinite, non-self-touching path that includes some diagonal of G.
Theorem 1.2 is proved in Section 5 using methods derived in Section 4. We turn to examples of Theorem 1.2 in action. Firstly, the condition of the theorem is satisfied by all transitive, one-ended non-triangulations G ∈ T, as in the next theorem.
Theorem 1.4. Let G ∈ T be one-ended but not a triangulation. Then G satisfies the condition of Theorem 1.2, and therefore p_c(G*) < p_c(G).

This is essentially the assertion of the forthcoming Theorem 3.1, which is proved in Section 6.2 by the so-called metric method. The inequality of Theorem 1.4 then holds by Theorem 1.2.
The situation for quasi-transitive graphs G is more complicated, and we have no useful necessary and sufficient condition for the inequality p_c(G*) < p_c(G). Instead, we include in Section 3 a sufficient (but not necessary) condition. (Note added before publication: the quasi-transitive case is treated in [12].)

Remark 1.5. The above results are subject to the assumption that G is one-ended. By [14] and [2, Prop. 2.1], the number η of ends of G ∈ Q lies in the set {1, 2, ∞}. As in Remark 1.1, we have that p_c(G*) = 0 if η ≠ 1. On the other hand, it is standard that p_c(G) ≥ 1/(∆ − 1), where ∆ is the maximum vertex-degree of G. The inequality p_c(G*) < p_c(G) is therefore trivial when η ≠ 1.

There follow some remarks about the proof of Theorem 1.2. The general approach of the proof is to use the method of enhancements, as introduced and developed in [1] (though there is earlier work of relevance, including [19]). While this approach is fairly standard, and the above result natural, the proof turns out to have substantial complexity arising from the generality of the assumptions on G, and the fact that we are studying site (rather than bond) percolation (see [3]); the proof is, in contrast, fairly immediate for the amenable, planar lattices mentioned above.
We remark that the version of (1.3) for bond percolation, namely p_u^bond(G) + p_c^bond(G^+) = 1, was proved by Benjamini and Schramm [5, Thm 3.8] for one-ended, non-amenable, plane, transitive graphs. Here, G^+ denotes the dual graph of G. (The amenable case is standard.) The basic difference between the bond and site problems is the following. In the study of bond percolation, one is interested in open self-avoiding paths, whereas for site percolation we study open, non-self-touching paths: given an infinite path (. . . , x_{−1}, x_0, x_1, . . . ) such that, for some i + 1 < j, x_i and x_j are adjacent, the states of vertices x_{i+1}, . . . , x_{j−1} are independent of the event that the path contains an infinite, open sub-path. That is, one can cut out the loop. The central idea of the proof of Theorem 1.2 is as follows. Suppose G satisfies the given assumptions, and write π for the given doubly-infinite path containing the diagonal d. In order to apply the enhancement method, one needs to show that, if z is a pivotal vertex for the existence of a long (but finite) open path of G* between given regions A, B of space, then after making local changes to the configuration one may find a pivotal diagonal near z. This is achieved by a surgery of paths. First, one cuts a finite subpath π′ from π containing the diagonal d. Then one inserts a translate of π′ into an open path ν from A to B in which z is pivotal. Such insertion requires 'adjustments' near the interfaces of these two paths, and it must be achieved without sacrifice of the non-self-touching property. It is an impediment to this surgery that G* is non-planar (unless G is a triangulation), and thus one works instead with a graph, denoted G̃, that is obtained from G by placing a new vertex within each non-triangular face of G and joining this new vertex to each vertex of the face.
Turning to the contents of the current article: after the introductory Section 2, we explain the relevance of Theorem 1.2 to transitive and quasi-transitive graphs in Section 3. The proofs begin with some preliminary observations in Section 4, and the main theorem is proved in Section 5. The claim of Section 3 for quasi-transitive graphs is proved in Section 6.
Notation and basic properties
2.1. Graph embeddings. We shall assume familiarity with basic graph theory and its notation, and refer the reader to [13] for relevant definitions. Let Q be given as prior to Theorem 1.2, and let T be the subset of Q comprising the transitive graphs.
An embedding of a graph G = (V, E) (with underlying 1-complex denoted |G|) in a surface S is a continuous map ϕ : |G| → S such that the induced map |G| → ϕ(|G|) is a homeomorphism. An embedding ϕ is called cellular if S \ ϕ(G) is a disjoint union of spaces homeomorphic to an open disc. (See [20] and [21, Sect. 3.2].) We are concerned here with embeddings of planar graphs in either the Euclidean or hyperbolic planes, and we shall use H to denote either of these as appropriate for the setting. A useful summary of hyperbolic geometry may be found in [8] (see also [15]). An embedding of a graph G in H is called proper if every compact subset of H contains only finitely many vertices of G and intersects only finitely many edges. Henceforth, all embeddings will be assumed to be proper.
An Archimedean tiling (or uniform tiling) of a two-dimensional Riemannian manifold is a tiling by regular polygons such that the isometry group of the tiling acts transitively on its vertex-set. The edges of the tiling are geodesics. A discussion of amenability may be found in [18, Sect. 6].

We give a formal definition of the matching graph of a planar graph G = (V, E). Firstly, one embeds G in the plane in such a way that two edges intersect only at their endpoints; such an embedded graph is called a plane graph. A face of a plane graph G is a connected component of H \ E. In this work we shall treat only one-ended graphs, for which all faces of G are bounded, with (topological) boundaries ∂F comprised of finitely many edges; the size of F is the number of edges in its boundary. A cycle C of a simple graph G = (V, E) is a sequence v_0, v_1, . . . , v_{n+1} = v_0 of vertices v_i such that n ≥ 3, e_i := ⟨v_i, v_{i+1}⟩ satisfies e_i ∈ E for i = 0, 1, . . . , n, and v_0, v_1, . . . , v_n are distinct. Let G be a plane graph, duly embedded in the Euclidean or hyperbolic plane; in this case we write C^• for the bounded component of H \ C. Let V(∂F) be the set of vertices lying along the boundary of the face F. For each face F and each non-adjacent pair x, y ∈ V(∂F), we add an edge inside F between x and y. We write G* = (V, E*) for the ensuing matching graph of G. Note that G* depends on the particular embedding of G. If G is 3-connected then, by Theorem 2.1(b), it has a unique embedding up to homeomorphism. If G is 2-connected but not 3-connected, we need to be definite about the choice of embedding, and we require it henceforth to be 'canonical' in the sense of Theorem 2.1(c).
Further notation.
A plane graph G is called a triangulation if every face is bounded by a 3-cycle. The automorphism group of the graph G = (V, E) is denoted Aut(G), and the orbit of v ∈ V is written Aut(G)v. We write d_G for graph-distance in G, and u ∼ v if u, v ∈ V are adjacent, which is to say that d_G(u, v) = 1. For any G, we fix some vertex denoted v_0.
We shall work with one-ended graphs G ∈ Q. Since G is assumed one-ended and 2-connected, all its faces are bounded, with boundaries which are cycles of G (see Remark 2.2(d)).
Non-self-touching paths and cycles arise naturally when studying site percolation (such paths were called stiff in [1], and self-repelling in [11, p. 66]).
We shall consider non-self-touching paths in two graphs derived from a given G ∈ Q, namely its matching graph G*, and the graph G̃ obtained by adding a site within each face F of size 4 or more, and connecting every vertex of F to this new site. The graph G* may possess parallel edges. The property of being non-self-touching is indifferent to the existence of parallel edges, since it is given in terms of the vertex-set of π and the adjacency relation of H.
Here is the fundamental property of graphs that implies strict inequality of critical points. This turns out to be equivalent to a more technical 'local' property, as described in Section 4.2; see Theorem 4.8. As a shorthand, henceforth we abbreviate 'doubly-infinite non-self-touching path' to '2∞-nst path'.

Let p ∈ [0, 1]. We endow Ω with the product measure P_p with density p. For v ∈ V, let θ_v(p) be the probability that v lies in an infinite open cluster. It is standard that there exists p_c(G) ∈ (0, 1] such that θ_v(p) = 0 when p < p_c(G) and θ_v(p) > 0 when p > p_c(G); p_c(G) is called the critical probability of G.
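To make the definition concrete, here is a hedged Monte Carlo sketch of θ_v(p) on a finite box of the square lattice (not one of the hyperbolic graphs studied in this paper): the central site is counted as lying in an 'infinite' cluster when its open cluster reaches the boundary of the box. All parameters are illustrative.

```python
# Monte Carlo sketch of theta_v(p) for site percolation on a finite box of
# the square lattice: the central site counts as "infinite" when its open
# cluster reaches the box boundary.
import numpy as np
from scipy.ndimage import label

def theta_estimate(p: float, n: int = 101, trials: int = 400, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        open_sites = rng.random((n, n)) < p     # each site open with prob. p
        labels, _ = label(open_sites)           # 4-connected open clusters
        c = labels[n // 2, n // 2]
        boundary = np.concatenate([labels[0], labels[-1],
                                   labels[:, 0], labels[:, -1]])
        hits += int(c != 0 and c in boundary)
    return hits / trials

for p in (0.50, 0.59, 0.70):                    # p_c(site, Z^2) ~ 0.5927
    print(p, theta_estimate(p))
```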
For background and notation concerning percolation theory, the reader is referred to the book [11], the article [13], and to Section 5.
Two criteria for property Π
In this section we present the 'metric criterion' for a one-ended graph G ∈ Q to have the property Π of Definition 2.4. This criterion is valid for one-ended non-triangulations G ∈ T, and thus we arrive in particular at the following.
Theorem 3.1. Let G ∈ T be one-ended but not a triangulation. Then G has property Π.
The criterion holds for a certain class of quasi-transitive graphs, and the outcome is a sufficient but not necessary condition for a quasi-transitive graph G ∈ Q to have property Π, namely Theorem 3.4.
The embedding results of Section 2 may be used in proofs of the existence of 2∞-nst paths in one-ended graphs G ∈ Q satisfying the forthcoming metric criterion. First, recall the relevant embedding property: by Theorem 2.1(a, c), every quasi-transitive, one-ended G ∈ Q has a canonical embedding in H.
Throughout this section we shall work with the Poincaré disk model of hyperbolic geometry (also denoted H), and we denote by ρ the corresponding hyperbolic metric.For definiteness, we consider only graphs G embedded in the hyperbolic plane; the Euclidean case is similar, subject to the simplification that the geometry of the space is Euclidean rather than hyperbolic.
Let G ∈ Q be one-ended and not a triangulation. By 2-connectedness and Remark 2.2(d), the faces of G are bounded by cycles. As before, we restrict ourselves to the case when G is non-amenable, and we embed G canonically in the Poincaré disk H. The edges of G are hyperbolic geodesics, but its diagonals are not generally so. The hyperbolic length of an edge e ∈ E* \ E does not generally equal the hyperbolic distance between its endvertices, denoted ρ(e).
For e ∈ E*, let Γ_e denote the doubly-infinite hyperbolic geodesic of H passing through the endvertices of e, and denote by π_e the orthogonal projection onto Γ_e. The edge e is called maximal if ρ(e) ≥ ρ(π_e(x), π_e(y)) for all f = ⟨x, y⟩ ∈ E.
Definition 3.2. The graph G is said to satisfy the metric criterion if G has a canonical embedding in H for which some diagonal d ∈ E* \ E is maximal.

There always exists some maximal edge of E*, but it is not generally unique, and it may not be a diagonal. The following lemma is proved in the same manner as the forthcoming Lemma 6.1.
Here is the main theorem for quasi-transitive graphs using the metric method.

Remark 3.5. The condition of Theorem 3.4 is sufficient but not necessary, as indicated by the following example. Let G be the canonical tiling of R² illustrated in Figure 3.1. By inspection, no diagonal is maximal, whereas G has property Π. The sufficient condition in question can be weakened as explained in Remark 6.4, and the above example satisfies the weaker condition.
Some observations
4.1. Oxbow-removal. We begin by describing a technique of loop-removal (henceforth referred to as 'oxbow-removal'). Let H be a simple graph embedded in the Euclidean/hyperbolic plane H (possibly with crossings). (a) Let C be a plane cycle of H that surrounds a point x ∉ H. There exists a non-empty subset C′ of the vertex-set of C that forms a plane, non-self-touching cycle of H and surrounds x. (b) Let π be a finite (respectively, infinite) path with endpoint v. There exists a non-empty subset π′ of the vertex-set of π that forms a finite (respectively, infinite) non-self-touching path of H starting at v. If π is finite, then π′ can be chosen with the same endpoints as π.
Proof. (a) Let C = (v_0, v_1, . . . , v_n, v_{n+1} = v_0) be a plane cycle of H that surrounds x ∉ H; we shall apply an iterative process of 'loop-removal' to C, and may assume n ≥ 4. We start at v_0 and move around C in increasing order of vertex-index. Let J be the least j ≤ n such that there exists i ∈ {1, 2, . . . , j − 2} with v_i ∼ v_J, and let I be the earliest such i. Consider the two cycles C′ and C″ into which the chord ⟨v_I, v_J⟩ divides C. (These cycles are called oxbows, since they arise through cutting across a bottleneck of the original cycle C.) Since C surrounds x, so does at least one of C′ and C″, and we suppose for concreteness that C″ surrounds x. We replace C by C″. This process is iterated until no such oxbows remain.
(b) This part is proved by a similar argument. When the endpoints v_0, v_n of π are not neighbours, we use oxbow-removal as above; otherwise, we set π′ = (v_0, v_n). □

Remark 4.2. Lemma 4.1 will be used in the following context. Firstly, one may apply oxbow-removal to certain paths of a planar graph in order to obtain a non-self-touching subpath (see the forthcoming Lemma 4.3); an illustrative sketch of the procedure appears below. Similarly, oxbow-removal may sometimes be used to generate a non-self-touching subpath of a concatenation of two non-self-touching paths.
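The loop-removal step is elementary enough to make concrete. The sketch below renders oxbow-removal on a finite path (part (b) of Lemma 4.1, endpoints preserved), with `adjacent` standing in for the adjacency relation of the ambient graph; it is an illustration, not the paper's own notation.

```python
# Illustrative rendering of oxbow-removal on a finite path (Lemma 4.1(b)):
# repeatedly find the earliest chord (non-consecutive vertices that are
# adjacent in the ambient graph) and cut out the intervening loop.

def remove_oxbows(path, adjacent):
    path = list(path)
    cut = True
    while cut:
        cut = False
        for j in range(2, len(path)):            # least such j first
            for i in range(j - 1):               # i, j non-consecutive
                if adjacent(path[i], path[j]):
                    path = path[:i + 1] + path[j:]   # remove the oxbow
                    cut = True
                    break
            if cut:
                break
    return path

# Square-lattice example with nearest-neighbour adjacency:
adj = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1
walk = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (1, 2)]
print(remove_oxbows(walk, adj))  # [(0, 0), (1, 0), (1, 1), (1, 2)]
```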
Path-surgery will be used in the forthcoming proofs: that is, the replacement of certain paths by others. Consider a one-ended G ∈ Q, embedded canonically in the hyperbolic plane H, which for concreteness we consider here in the Poincaré disk model (see [8]), also denoted H. By Theorem 2.1(c), every automorphism of G extends to an isometry of H. Let F be the set of faces of G. For F ∈ F and x, y ∈ V(∂F), let L_{x,y} be the set of rectifiable curves with endpoints x, y whose interiors are subsets of F^• \ E, and write l_{x,y} for the infimum of the hyperbolic lengths of all l ∈ L_{x,y}. Let Φ be the supremum of l_{x,y} over all faces F and all x, y ∈ V(∂F). A path π of G is said to cross a hyperbolic tube L_δ in the long direction if, for any arc γ that crosses L_δ laterally and intersects no vertex of G, the number of intersections between γ and π, if finite, is odd.
A more refined result may be found in Section 6.
Proof. (a) Since all faces of G are bounded, there exist vertices of G in both components of H \ L_δ. Now, L_δ fails to be crossed in the long direction if and only if it contains some arc γ that traverses it laterally and that intersects no edge of G. To see the 'only if' statement, let V^− and V^+ be the subsets of V ∩ L_δ that are joined in G ∩ L_δ to the two boundary points of L, respectively; if V^− ∩ V^+ = ∅, then there exists such γ separating V^+ and V^− in L_δ. For this γ, there exist a face F and points x, y ∈ V(∂F) such that γ ⊆ λ for some λ ∈ L_{x,y}. Let ϵ ∈ (0, 2δ − Φ), and find λ′ ∈ L_{x,y} with length not exceeding l_{x,y} + ϵ. We may replace γ by some subarc γ′ of λ′ ∩ L_δ. The length of γ′ is no greater than Φ + ϵ < 2δ, a contradiction since L_δ has width 2δ. Therefore, L_δ contains some path π of G that crosses L_δ in the long direction.
We apply oxbow-removal in G to π as described in the proof of Lemma 4.1. For any arc γ that crosses L_δ laterally and intersects no vertex of G, the number of intersections between γ and π, if finite, decreases by a non-negative, even number whenever an oxbow is removed. It follows that the non-self-touching path π′ (obtained after oxbow-removal) crosses L_δ in the long direction.

Proof of Lemma 4.4. Let F be a face. The path π cannot contain three or more vertices of F, since that contradicts the non-self-touching property. Similarly, if π contains two such vertices, it must contain also the corresponding edge. If π is non-plane, it contains two or more diagonals of some face, which, by the above, cannot occur. □

As a device in the proof of Theorem 1.2, we shall work with the graph G̃ obtained from G = (V, E) by adding a vertex at the centre of each face F, and adding an edge from every vertex in the boundary of F to this central vertex. These new vertices are called facial sites, or simply sites, in order to distinguish them from the vertices of G. The facial site in the face F is denoted ϕ(F). See [17, Sec. 2.3], and also Figure 4.2. If ⟨v, w⟩ is a diagonal of G*, it lies in some face F, and we write ϕ(v, w) = ϕ(F) for the corresponding facial site.
The main reason for working with G̃ is that it serves to interpolate between G and G* in the sense of (5.2) below: we shall assign a parameter s ∈ [0, 1] to the facial sites in such a way that s = 0 corresponds to G and s = 1 to G*. It will also be useful that G̃ is planar whereas G* is not.
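For concreteness, here is a hedged sketch of the construction of G̃ for a finite plane graph, with faces supplied explicitly as boundary cycles (computing faces from an embedding is omitted); the node labels are an illustrative convention, not the paper's.

```python
# Hedged sketch of the interpolating graph: a facial site is added inside
# each face of size >= 4 and joined to every vertex on that face's boundary.
import networkx as nx

def facial_site_graph(G: nx.Graph, faces: list) -> nx.Graph:
    G_tilde = G.copy()
    for k, face in enumerate(faces):
        if len(face) >= 4:                 # triangular faces need no site
            site = ("facial", k)           # labelled to distinguish from V
            G_tilde.add_node(site)
            for v in face:
                G_tilde.add_edge(site, v)
    return G_tilde

G = nx.cycle_graph(4)                       # one square face
G_tilde = facial_site_graph(G, [[0, 1, 2, 3]])
print(G_tilde.number_of_nodes(), G_tilde.number_of_edges())  # 5 nodes, 8 edges
```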
Next, we specify some desirable properties of the graphs G* and G̃. Recall the property Π of Definition 2.4.

Proof of Lemma 4.6. Let G have property Π and let π be a 2∞-nst path of G*. For any two consecutive vertices u, v of π such that δ(u, v) is a diagonal, we add between u and v the facial site ϕ(u, v). The result is a doubly-infinite path π′ of G̃. By Lemma 4.4, π′ is non-self-touching in G̃, whence G has property Π̃. □

The properties of Definition 4.5 are 'global' in that they concern the existence of infinite paths. It is sometimes preferable to work in the proofs with finite paths, and to that end we introduce corresponding 'local' properties.
Let ζ(G) be as in Lemma 4.3(b). We shall make reference to the non-self-touching cycles σ_r(v), σ*_r(v) given in that lemma. We write σ̃_r(v) for the non-self-touching cycle of G̃ obtained from σ*_r(v) by replacing any diagonal by a path of length 2 passing via the appropriate facial site of G̃. We write σ̄*_r (respectively, σ̄̃_r) for the closure of the region surrounded by σ*_r (respectively, σ̃_r). Let A(G) be a real number determined by G and its embedding, taken sufficiently large for the purposes of the following definitions.

Definition 4.7. (a) The graph G is said to have property Π_A if there exist a vertex v ∈ V and a non-self-touching path π = (x_0, x_1, . . . , x_n) of G* such that (i) every vertex of π lies in σ̄*_A(v), and x_0, x_n ∈ σ*_A(v), and (ii) there exists i, with x_i near the middle of π, such that ⟨x_i, x_{i+1}⟩ is a diagonal. (b) The graph G is said to have property Π̃_A if there exist vertices v, w ∈ V and a non-self-touching path π = (x_0, x_1, . . . , x_n) of G̃ such that (i) every vertex of π lies in σ̄̃_A(v), and
(ii) there exists i, with x_i near the middle of π, such that x_i is a facial site. That is to say, G has property Π_A (respectively, Π̃_A) if G* (respectively, G̃) contains a finite, non-self-touching path of sufficient length that contains some diagonal (respectively, facial site). This definition is illustrated in Figure 4.3. Note that Π_{A+1} (respectively, Π̃_{A+1}) implies Π_A (respectively, Π̃_A) for sufficiently large A. The proof of the theorem in question, Theorem 4.8 (the equivalence of the global and local properties referred to after Definition 2.4), utilises some methods of path-surgery that will be important later, and it is given next.

Figure 4.3. An illustration of the property Π_A: a non-self-touching path of G* containing a diagonal near its middle.
Proof of Theorem 4.8. (a) Let A > A(G). First, we prove that Π ⇔ Π_A. Evidently, Π ⇒ Π_A. Assume, conversely, that Π_A holds for some A > A(G). Let the non-self-touching path π = (x_0, x_1, . . . , x_n) of G*, the vertex v = x_i, and the diagonal d = ⟨v, x_{i+1}⟩ be as in Definition 4.7(a); think of π as a directed path from x_0 to x_n, and note by Lemma 4.4 that π is a plane graph. We abbreviate σ*_A := σ*_A(v). Let π_1 be the subpath of π from v to x_0, and π_2 that from x_{i+1} to x_n. Let a_i be the earliest vertex/site of π_i lying in ∂^−σ̄*_A. See the central circle of Figure 4.4. We claim the following.
(4.3) There exist two non-touching subpaths σ_1, σ_2 of σ*_A, each of length at least ½|σ*_A| − 4, such that: (i) for i = 1, 2, the subpath of π_i leading to a_i may be extended beyond a_i along σ_i to form a non-self-touching path ending at any prescribed y_i ∈ σ_i, and (ii) the composite path thus created (after oxbow-removal if necessary) is non-self-touching.
The proof of (4.3) follows. Let D be the quantity defined in (4.4), a measure of the separation of a_1 and a_2 around σ*_A. When D ≥ 2, as illustrated in the centre of Figure 4.4, we may find a non-touching pair of non-self-touching subpaths of σ*_A such that the conclusion of (4.3) holds. Some oxbow-removal may be needed at the junctions of paths (see Remark 4.2). Suppose D = 1. We may picture σ*_A as a (topological) circle with centre v, and for concreteness we assume that a_2 lies clockwise of a_1 around σ*_A (a similar argument holds if not). See Figure 4.5.
A. Suppose the path π_1, when continued beyond a_1, passes at the next step to some b_1 ∈ A_1, and add b_1 to obtain a path denoted π′_1. Since D = 1, the next step of π_2 beyond a_2 is not into A_2. On following π_2 further, it moves inside (σ*_A)^• until it arrives at some point a′_2 at distance 2 or more from a_2; we then include the subpath of π_2 between a_2 and b′_2 to obtain a path denoted π′_2. We declare σ_1 to be the subpath of σ*_A starting at b_1 and extending a total distance ½|σ*_A| − 4 around σ*_A anticlockwise, with σ_2 declared similarly, starting at distance 2 clockwise of b_1. Let θ ∈ (0, 2π) be the angle subtended at the centre v by the vector from a_2 to a′_2. If θ < ¾π, say, each π′_i may be extended along σ_i to end at any prescribed y_i ∈ σ_i. Therefore, claim (4.3) holds in this case.
The situation can be more delicate if θ ≥ ¾π, since then a′_2 may be near to σ_1. By the planarity of π, the region R between π′_2 and σ*_A contains no point of π′_1 (R is the shaded region in Figure 4.5). We position a hyperbolic tube of width greater than Φ in such a way that it is crossed laterally by both π′_2 and the path σ_2 (as illustrated in Figure 4.5). By Lemma 4.3(a), this tube is crossed in the long direction by some path τ of G. The union of π′_2 and τ contains a non-self-touching path π″_2 of G* from x_{i+1} to σ_2 (whose unique vertex in σ_2 is its second endpoint). Claim (4.3) follows in this situation. B. Suppose the hypothesis of part A does not hold, but instead π_2 passes from a_2 directly into σ*_A. In this case we follow A above with π_1 and π_2 interchanged. C. Suppose neither π_i passes from a_i in one step into σ*_A. We add b_2 to the subpath from x_{i+1} to a_2, and continue as in part A above.
Suppose D = 0. Statement (4.3) holds by a similar argument to that above.
Having located the σ_i of (4.3), we position a hyperbolic tube as in Figure 4.4, to deduce (after oxbow-removal, see Remark 4.2) the existence of a 2∞-nst path of G* that contains the diagonal d. Therefore, G has property Π, as required.
Hyperbolic tubes are superimposed on the graph at two steps of the argument above, and it is for this reason that we need A to be sufficiently large, say A > A ′ (G).
(b) It remains to show that Π ⇒ Π̃_A for large A. By Lemma 4.6, Π ⇒ Π̃, and it is immediate that Π̃ ⇒ Π̃_A for large A.
Proof of Theorem 1.2
Consider site percolation on G with product measure P_p, and fix some vertex v_0 of G. We write v ↔ w if there exists a path of G from v to w using only open sites (such a path is called open), and v ↔ ∞ if there exists an infinite, open path starting at v. The percolation probability is the function θ given by θ(p) = P_p(v_0 ↔ ∞).

Remark 5.1. It is an old problem dating back to [4] to decide which graphs G satisfy p_c(G) < 1, and there has been a series of related results since. It was proved in [9, Thm 1.3] that p_c(G) < 1 for all quasi-transitive graphs G with super-linear growth (see also [10]). This class includes all G ∈ Q with either one or infinitely many ends (see [2, Sect. 1.4] and Theorem 2.1).

Proposition 5.3. There exists A′(G) < ∞ such that the following holds. Suppose G ∈ Q is one-ended and has property Π̃_A where A > A′(G). Let s ∈ (0, 1). There exists ϵ = ϵ(s) > 0 such that θ(p, s) > 0 for p_c(G) − ϵ < p < p_c(G).
We do not investigate the details of how A′(G) depends on G. An explicit lower bound on A′(G) may be obtained in terms of local properties of the embedding of G, but it is doubtful whether this will be useful in practice.
The rest of this section is devoted to an outline of the proof of Proposition 5.3. Full details are not included, since they are very close to established arguments of [1], [11, Sect. 3.3], and elsewhere.
Let n be large; later we shall let n → ∞. Consider site percolation on G̃ with measure P_{p,s}. We call a vertex (respectively, facial site) z pivotal if it is pivotal for the existence of an open path of G̃ from v_0 to ∂Λ_n (which is to say that such a path exists if z is open, and not otherwise). Let Pi_n be the set of pivotal vertices, and Di_n the set of pivotal facial sites. Proposition 5.3 follows in the 'usual way' (see [11, Sect. 3.3]) from the following statement.
(5.7) By making changes to the configuration ω within the box Λ_{4M}(z), for some fixed M, we construct a configuration in which Λ_M(z) contains a pivotal facial site.
This implies (5.3) with f depending on the choice of z. Since Λ_{4M}(z) is finite and there are only finitely many types of vertex (by quasi-transitivity), f may be chosen to be independent of z. The above is achieved in five stages.
Assume for now that ω ∈ Ω and the pivotal vertex z satisfies (5.8): z ∈ Λ_{n−2M} \ Λ_{2M}. For clarity of exposition, our illustrations are drawn as if G were embedded properly in the Euclidean rather than the hyperbolic plane. The principal effect of this is that hyperbolic tubes are represented as Euclidean rectangles.
Let G have property Π̃_A. Let π = (x_j), v = x_i, be as in Definition 4.7(b), and write ϕ for the corresponding facial site of π. The outline of the proof is as follows.
I. If there exist one or more open facial sites in Λ_M(z), we declare them one-by-one to be closed. If at some point in this process some facial site is found to be pivotal, then we have achieved (5.7) by changing ω within a bounded region. We may therefore assume that this never occurs, or equivalently that (5.9) ω has no open facial site in Λ_M(z).
II. Find a non-self-touching open path ν in ω from v_0 to ∂Λ_n. This path passes necessarily through the pivotal vertex z.

III. By making changes within Λ_{2M}(z), we construct non-touching subpaths of ν from v_0 (respectively, ∂Λ_n) to ∂Λ_M(z), that can be extended inside Λ_M(z) in a manner to be specified at Stage V. This, and especially the following, stage resembles closely part of the proof in Section 4.3.

IV. We splice a copy (denoted π′ = απ) of π inside Λ_A(v′), and we make local changes to obtain paths π_1, π_2 from the two endpoints of αϕ, respectively, to ∂Λ_A(v′) that can be extended outside Λ_A(v′) in a manner to be specified at Stage V.

V. Between the contours ∂Λ_A(v′) and ∂Λ_M(z), we arrange the configuration in such a way that the retained parts of ν hook up with the endpoints of the π_i.
In the resulting configuration, the facial site ϕ′ := αϕ is pivotal.
Some work is needed to ensure that ϕ′ can be made pivotal in the final configuration. Lemma 4.3(b) will be used to traverse the annulus between the two contours at Stage V. In making connections at junctions of paths, we shall make use of the planarity of G̃. Rather than working with the boundaries of Λ_M(z) and Λ_A(v′), we shall work instead with the non-self-touching cycles σ̃_M := σ̃_M(z) and σ̃_A := σ̃_A(v′).

Proof of Lemma 5.4. Stage I is first followed as stated above.
Stage II. By (5.6), we may find an open, non-self-touching path ν of G̃ from v_0 to ∂Λ_n, and we consider ν as thus directed. By (5.9), ν includes no facial site of Λ_M(z). The path ν passes necessarily through z, and we let u (respectively, w) be the preceding (respectively, succeeding) vertex to z.
Figure 5.1. An illustration of the construction at Stages II/III. The non-self-touching path ν contains subpaths from v_0 to σ̃_M, and from the latter set to ∂Λ_n. The subpaths σ̃^i_M of σ̃_M are indicated in green.
For y ∈ V, and the given configuration ω (satisfying (5.9)), let C_y denote the set of vertices/sites joined to y by open paths not containing z, and write C_y also for the corresponding induced subgraph of G̃. By (5.6): A. C_u and C_w are disjoint (and also non-touching); B. the subpath of ν, denoted ν(u−), from v_0 to u contains no facial site of Λ_M(z); C. the subpath of ν, denoted ν(w+), from w to ∂Λ_n contains no facial site of Λ_M(z); D. the pair ν(z−), ν(z+) is non-touching.
Stage III. This is closely related to the proof of Theorem 4.8 given in Section 4.3. Note that the intersection of ν(u−) ∪ ν(w+) and Λ_{2M}(z) comprises a family of paths rather than two single paths. See Figure 5.1.
We follow ν(u−) towards u, and ν(w+) backwards towards w, until we reach the first vertices/sites, denoted a_1, a_2, respectively, lying in ∂^+σ̃_M. Let ν_1 be the subpath of ν(u−) between v_0 and a_1, and ν_2 that of ν(w+) between ∂Λ_n and a_2. We now change the states of certain vertices/sites x ∈ Λ_{2M}(z) as specified in (5.10).

Figure 5.2. An illustration of the case D = 1 in the Stage III construction. There are two subcases, depending on whether θ > 0 (solid line) or θ < 0 (dashed line). The green lines indicate the subpaths σ̃^i_M in the subcase θ > 0. The rectangle is added in illustration of the hyperbolic tube used in the case θ ≥ ¾π.
We investigate next the subsets of σ̃_M to which the a_i may be connected within σ̃_M. We shall show that:

(5.11) there exist two non-touching subpaths σ̃^1_M, σ̃^2_M of σ̃_M, each of length at least ½|σ̃_M| − 4, such that, for i = 1, 2: (i) a_i has a neighbour b_i ∈ σ̃^i_M, (ii) for y_i ∈ σ̃^i_M, the path ν_i may be extended from b_i to y_i along σ̃^i_M, thereby creating (after oxbow-removal if necessary) a non-self-touching path from the other endpoint of ν_i, (iii) the composite path ν′_i thus created is non-self-touching, and (iv) the pair ν′_1, ν′_2 is non-touching.

An explanation follows. Let D be the quantity defined in (5.12), a measure of the separation of a_1 and a_2 around σ̃_M. When D ≥ 2, (5.11) follows as illustrated in Figure 5.1. Suppose D = 1. We may picture σ̃_M as a circle with centre z, and for concreteness we assume that a_2 lies clockwise of a_1 around σ̃_M (a similar argument holds if not). See Figure 5.2.
A. Suppose the path ν_1, when continued along ν(z−) beyond a_1, passes at the next step to some b_1 ∈ A_1, and add b_1 to ν_1 (to obtain a path denoted ν′_1). Since D = 1, the next step of ν(w+) beyond a_2 is not to A_2. On following ν(w+) further, it moves inside H \ σ̃_M until it arrives at some point a′_2 at distance 2 or more from a_2; we then add to ν_2 the subpath of ν(w+) between a_2 and b′_2 (to obtain an extended path ν′_2). Let θ(a′_2) be the angle subtended at the centre z by the vector from a_2 to a′_2, counted positive if ν(w+) passes clockwise around z of σ̃_M, and negative if anticlockwise.
(i) There are two cases, depending on whether θ := θ(a′_2) is positive or negative. Assume first that θ > 0. If θ < ¾π, say, we declare σ̃^1_M to be the subpath of σ̃_M starting at b_1 and extending a total distance ½|σ̃_M| − 4 around σ̃_M anticlockwise. We declare σ̃^2_M similarly, to start at distance 2 clockwise of b_1 along σ̃_M and to have the same length as σ̃^1_M. Each ν′_i may be extended along σ̃^i_M to end at any prescribed y_i ∈ σ̃^i_M. Therefore, claim (5.11) holds in this case. The situation can be more delicate if θ ≥ ¾π, since then a′_2 may be near to σ̃^1_M. By the planarity of ν, the region R between ν′_2 and σ̃_M contains no point of ν′_1 (R is the shaded region in Figure 5.2). We position a hyperbolic tube of width greater than Φ in such a way that it is crossed laterally by both ν′_2 and the path σ̃^2_M given above. By Lemma 4.3(a), this tube is crossed in the long direction by some path τ of G. As illustrated in Figure 5.2, the union of ν′_2 and τ contains (after oxbow-removal) a non-self-touching path ν″_2 from ∂Λ_n to σ̃^2_M (whose unique vertex in σ̃^2_M is its second endpoint). We now declare each vertex/site of Λ_{2M}(z) \ (σ̃_M)^• to be open if and only if it lies in ν′_1 ∪ ν″_2. Claim (5.11) follows in this situation, with the σ̃^i_M given as above. (ii) Assume θ < 0, in which case there arises a complication in the above construction, as illustrated in Figure 5.3. In this case, there is a subpath L of ν′_2 from a_2 to a′_2 that passes anticlockwise around v_0, and a subpath of ν′_2 from ∂Λ_n to a′_2 that does not contain a_2 (see Figure 5.3). We declare every x ∈ ν″_2 open and every x ∈ ν′_2 \ ν″_2 closed. The subpaths σ̃^i_M of σ̃_M may now be defined as above. B. Suppose the hypothesis of part A does not hold, but instead ν_2 passes from a_2 into σ̃_M. In this case we follow A with ν(u−) and ν(w+) interchanged. This case is slightly shorter than A since the above complication cannot occur. C. Suppose neither ν_i passes from a_i directly into σ̃_M. We add b_2 to ν_2 and continue as in A above.
Suppose D = 0. Statement (5.11) holds by a similar argument to that of case (ii).

Stage IV. We next pursue a similar strategy within Λ_A(v′). The argument is essentially that of the proof of Theorem 4.8 given in Section 4.3, and the details are omitted here.

Stage V. Having located the subpaths σ̃^i_M of σ̃_M, and the subpaths σ̃^i_A of σ̃_A, we prove next that there exist j ∈ {1, 2} and non-self-touching paths µ_1, µ_2 such that: (i) µ_1, µ_2 is a non-touching pair, (ii) µ_1 has endpoints in σ̃^1_M and σ̃^j_A, and µ_2 has endpoints in σ̃^2_M and σ̃^{j′}_A, where j′ ∈ {1, 2}, j′ ≠ j, and (iii) apart from their endpoints, µ_1 and µ_2 lie in (σ̃_M)^• \ σ̃_A. This statement follows as in Figure 5.4 by positioning two hyperbolic tubes of width exceeding Φ, and appealing to Lemma 4.3(a). It may be necessary to remove some oxbows at the junctions of paths.
Hyperbolic tubes are superimposed on σ̃_A above, and it is for this reason that A is assumed to be sufficiently large.
Having satisfied (5.7) subject to (5.8), we next explain how to remove the assumption (5.8). Let the pivotal vertex v satisfy v ∈ Λ_{2M}; a similar argument applies if v ∈ Λ_n \ Λ_{n−2M}. Let π be an infinite, non-self-touching open path of G̃ starting at v_0, and declare closed every vertex of Λ_{4M} not lying in π. (Such a π exists by connectivity and oxbow-removal.) In the resulting configuration, every vertex/site in the subpath of π from ∂Λ_{2M} to ∂Λ_{4M} is pivotal. We pick one such vertex and apply the above arguments to obtain a pivotal facial site lying in Λ_{4M}. □

6. Strict inequality using the metric method

Lemma 6.1. For x, y ∈ H, we have ρ(π(x), π(y)) ≤ ρ(x, y).
Proof. We assume for simplicity that x and y are distinct and lie in the same connected component of H \ Γ; a similar proof holds if not. The points x, π(x), π(y), y form a quadrilateral with two consecutive right angles (see Figure 6.1). Let z be the orthogonal projection of x onto the geodesic containing y and π(y). The triple x, z, y forms a right-angled triangle, and the quadruple x, z, π(y), π(x) forms a Lambert quadrilateral. By the geometry of such shapes (see, for example, [15, Sect. III.5]), we have that ρ(x, y) ≥ ρ(x, z) ≥ ρ(π(x), π(y)). □

Let G = (V, E) ∈ T be one-ended but not a triangulation. We shall consider only the case when G is non-amenable, so that it is embedded as an Archimedean tiling in the Poincaré disk; the Euclidean case is similar. For an edge e of G* = (V, E*), let ρ(e) denote the hyperbolic distance between its endvertices; since every edge of G* (in its embedding) is a geodesic, ρ(e) equals the hyperbolic length of e. Since the embedding is Archimedean, every edge of G has the same hyperbolic length, and we may therefore assume for simplicity that (6.1) ρ(e) = 1, e ∈ E.
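Lemma 6.1 is easy to check numerically. The sketch below works in the Poincaré disk with Γ taken to be the real diameter, computing the orthogonal projection as the nearest point of Γ in the hyperbolic metric; the two sample points are arbitrary, and the numerics are an illustration rather than part of the proof.

```python
# Numerical check of Lemma 6.1 in the Poincare disk, taking Gamma to be
# the real diameter; the orthogonal projection onto a geodesic is its
# nearest point in the hyperbolic metric.
import numpy as np
from scipy.optimize import minimize_scalar

def rho(z, w):
    """Hyperbolic distance between points of the open unit disk."""
    return np.arccosh(1 + 2 * abs(z - w) ** 2 /
                      ((1 - abs(z) ** 2) * (1 - abs(w) ** 2)))

def project(z):
    """Nearest point to z on the real diameter, in the metric rho."""
    return minimize_scalar(lambda t: rho(z, t),
                           bounds=(-0.999, 0.999), method="bounded").x

x, y = 0.3 + 0.4j, -0.2 + 0.6j
print(rho(project(x), project(y)), "<=", rho(x, y))  # Lemma 6.1 inequality
```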
Each e ∈ E* is a sub-arc of a unique doubly-infinite geodesic, denoted Γ_e, of H. Let r be the maximal number of edges in a face of G, and let F be a face of size r. Since F is a regular r-gon, by (6.1), F has some diagonal d satisfying (6.2).

Proof of Lemma 6.2. The first case arises when e, viewed as a geodesic, is perpendicular to Γ_d^+, and the second when it is not. See Figure 6.2. □

In proceeding along Γ_d^+, we make an ordered list (w_i) of vertices as follows. (a) Set w_0 = b. (b) Every time Γ_d passes into the interior of a face F′, it exits either at a vertex v′ or across the interior of some edge e′. In the first case we add v′ to the list, and in the second, we add to the list an endvertex of e′ with maximal p-value. (c) If Γ_d^+ passes along an edge e ∈ E, we add both its endvertices to the list in the order in which they are encountered.
The following lemma is proved after the end of the current proof.

Lemma 6.3. The infinite ordered list w = (w_0, w_1, . . . ) is a path of G* with the property that p(w_i) is strictly increasing in i.
The composite path ν, obtained by following ν^− towards a, then d, then ν^+, fails to be non-self-touching in G* if and only if there exist s < 0 and t ≥ 0 with (s, t) ≠ (−1, 0) such that e″ := ⟨ν_s, ν_t⟩ ∈ E*. If the last were to occur, we would obtain by (6.4)-(6.5) a contradiction of (6.3). Thus ν is the required non-self-touching path. The above may be regarded as a more refined version of part of Lemma 4.3.
Proof of Lemma 6.3. That w is a path of G* follows from its construction, and we turn to the second claim. Let m ≥ 0, and consider w_0, w_1, . . . , w_m as having been identified. We claim that (6.6) p(w_m) < p(w_{m+1}).
(a) Suppose w_m ∈ Γ_d^+. (i) If Γ_d^+ includes next an entire edge of the form ⟨w_m, g⟩ ∈ E, then w_{m+1} = g and (6.6) holds. (ii) Suppose Γ_d^+ enters next the interior of some face F′. If it exits F′ at a vertex, then this vertex is w_{m+1} and (6.6) holds. Suppose it exits by crossing the interior of an edge e′. If w_m is an endvertex of e′, then w_{m+1} is its other endvertex and (6.6) holds; if not, then w_{m+1} is an endvertex of e′ with maximal p-value (recall Lemma 6.2). (b) Suppose w_m is the endvertex of an edge e that is crossed (but not traversed in its entirety) by Γ_d^+, and let F′ be the face thus entered. The next vertex w_{m+1} is given as in (a)(ii) above, and (6.6) holds.
The proof is complete. □

Finally in this section, we prove Lemma 3.3, where π denotes projection onto Γ. The last inequality holds by Lemma 6.1. Therefore, e is maximal. □

6.3. The case of quasi-transitive graphs. Certain complexities arise in applying the techniques of Section 6.2 to quasi-transitive graphs. In contrast to transitive graphs, the faces are not generally regular polygons, and the longest edge need not be a diagonal. Let G ∈ Q be one-ended and not a triangulation. As before, we restrict ourselves to the case when G is non-amenable, and we embed G canonically in the Poincaré disk H. The edges of G are hyperbolic geodesics, but its diagonals need not be so. The hyperbolic length of an edge e ∈ E* \ E does not generally equal the hyperbolic distance ρ(e) between its endvertices.
The proof is an adaptation of that of Section 6.2, and full details are omitted. In identifying a path corresponding to the path w of Lemma 6.3, we use the fact that edges of E are geodesics, and concentrate on the final departures of Γ_d^+ from the faces whose interiors it enters.

Remark 6.4. The condition of Theorem 3.4 may be weakened as follows. In the above proof of Theorem 3.1 is constructed a 2∞-nst path of G* (see the discussion following Lemma 6.3). It suffices that, in the sense of that discussion, there exist a diagonal d and s < 0, t ≥ 1 such that (i) the path (ν_s, ν_{s+1}, . . . , ν_t) is non-self-touching in G*, and (ii) for all e ∈ E we have p(ν_t) − p(ν_s) > p(π(e)). Cf. Theorem 4.8.
Note added before publication: the quasi-transitive case is treated in [12].
Corollary 1.3. Let G ∈ Q be one-ended. Then p_u^site(G) + p_c^site(G) ≥ 1, with strict inequality if and only if the condition of Theorem 1.2 holds.

Proof of Corollary 1.3. The given (weak) inequality is proved at [13, Thm 1.1(b)], and the strict inequality holds by (1.3) and Theorem 1.2.
Remark 2.2. (a) All one-ended, transitive, planar graphs are 3-connected, and all embeddings of a one-ended, quasi-transitive, planar graph have only finite faces. (b) By Theorem 2.1(b), any one-ended G ∈ Q that is in addition transitive has a unique cellular embedding in H up to homeomorphism. Hence, the matching and dual graphs of G are independent of the embedding. (c) The conclusion of part (b) holds for any one-ended, 3-connected G ∈ Q. (d) For a one-ended, 2-connected G ∈ Q, we fix a canonical embedding (in the sense of Theorem 2.1(c)). With this given, the dual graph G^+ and the matching graph G* are quasi-transitive, and furthermore the boundary of every face is a cycle of G.
An edge e ∈ E* \ E is called a diagonal of G or of G*, and it is denoted δ(a, b), where a, b are its endvertices. If δ(a, b) is a diagonal, a and b are called *-neighbours.
Let G be a locally finite graph with bounded vertex-degrees. A site percolation configuration on G is an assignment ω ∈ Ω := {0, 1}^V to each vertex of either state 0 or state 1. A vertex is called open if it has state 1, and closed otherwise. An open cluster in ω is a maximal connected set of open vertices.
Figure 3.1. The graph G is the tiling of the plane with copies of this square. Taking into account the symmetries of the square, this tiling is canonical after a suitable rescaling of the interior square. The diagonals are indicated by dashed lines.
Theorem 3.4. Let G ∈ Q be one-ended but not a triangulation. Assume that G satisfies the metric criterion of Definition 3.2. Then G has the property Π of Definition 2.4.

See Sections 6.2 and 6.3 for the proofs of Theorem 3.1, Lemma 3.3, and Theorem 3.4 by the metric method.
Lemma 4.1. Let H be a graph embedded in H.
Lemma 4.3. Let G = (V, E) ∈ Q be one-ended and embedded canonically in the Poincaré disk H, and let L_δ be a hyperbolic tube. (a) If 2δ > Φ, then L_δ contains a 2∞-nst path of G, and a 2∞-nst path of G*, that cross L_δ in the long direction. (b) There exists ζ = ζ(G) (depending on G and its embedding) such that, for r > ζ and v ∈ V, the annulus Λ_r
Figure 4.2. A square of the square lattice, its matching graph, and with its facial site added.
The same conclusion applies to G* on letting π be a path of G*. (b) Let ζ be such that ρ(u, v) ≥ 2Φ whenever d_G(u, v) ≥ ζ. The proof of part (b) follows that of part (a). □

4.2. Graph properties. The proofs of this article make heavy use of path-surgery which, in turn, relies in part on the property of planarity.

Lemma 4.4. Let G ∈ Q, and let π be a (finite or infinite) non-self-touching path of G*. (a) For every face F of G, π contains either zero or one or two vertices of F. If π contains two such vertices u, v, then it contains also the corresponding edge ⟨u, v⟩, which may be either an edge of G or a diagonal. (b) The path π is plane when viewed as a graph.
Definition 4.5. The graph G ∈ Q is said to have property Π̃ if G̃ has a 2∞-nst path including some facial site.

Lemma 4.6. Let G ∈ Q be one-ended. Then Π ⇒ Π̃.
Figure 4.4. In the easiest case when D ≥ 2, one finds (green) non-touching subarcs σ^i_A of σ*_A to which v may be connected by non-self-touching paths. These subarcs may be connected to the boundary of H using subpaths of a doubly-infinite path constructed using Lemma 4.3(a).
Figure 4.5. An illustration of the case D = 1. The green lines indicate the subpaths σ^i_A. The rectangle is added in illustration of the case θ ≥ ¾π.
The constant A′(G) in part (b) depends on the embedded graph G, viewed as a subset of H, rather than on the graph G alone. In advance of giving the proof of Theorem 5.2, we explain how it implies Theorem 1.2.

Proof of Theorem 1.2 (assuming Theorem 5.2). If G does not have property Π, by Theorem 4.8 for large A it does not have property Π_A, whence by Theorem 5.2(a), p_c(G*) = p_c(G). Conversely, if G has property Π, by Theorem 4.8 again it has property Π̃_A for large A, whence by Theorem 5.2(b), p_c(G̃) < p_c(G). The final claim follows by the elementary inequality p_c(G*) ≤ p_c(G̃); see (5.2). □

Proof of Theorem 5.2(a). Let A_0 ∈ Z. Assume G has property Π_A for no A ≥ A_0, and let p > p_c(G*). Let π be an infinite open path of G* with some endpoint x. By Lemma 4.1(b), there exists a subset π′ of π that forms a non-self-touching path of G* with endpoint x. Let A > A_0. Since Π_A does not hold, every edge of π′ at distance 2A or more from x is an edge of G, so that there exists an infinite open path in G. Therefore, p ≥ p_c(G), whence p_c(G*) = p_c(G). □

The rest of this section is devoted to the proof of Theorem 5.2(b). Let Ω = Ω_V × Ω_Φ, where Φ is the set of facial sites and Ω_Φ = {0, 1}^Φ. For ω = ω × ω′ ∈ Ω and ϕ ∈ Φ, we call ϕ open if ω′_ϕ = 1, and closed otherwise. Let P_{p,s} = P_p × P_s be the corresponding product measure on Ω_V × Ω_Φ, and θ(p, s) = lim_{n→∞} θ_n(p, s), where θ_n(p, s) = P_{p,s}(v_0 ↔ ∂Λ_n in G̃), so that

(5.2) θ(p, 0) = θ(p; G), θ(p, p) = θ(p; G̃), θ(p, 1) = θ(p; G*),

where θ(p; H) denotes the percolation probability of the graph H. Note that θ(p, s) is non-decreasing in p and s. The following proposition implies Theorem 5.2(b).
… contains no vertex/site outside the closed cycle comprising L followed by the subpath of σ̃_M from b′_2 to b_2. In order to overcome this problem, we alter the path ν′_2 as follows. Let α denote the annulus Λ_M(a_2) \ Λ_{M−ζ}(a_2), with ζ as in Lemma 4.3(b). (We may assume M ≥ 2ζ.) By that lemma, α contains a non-self-touching cycle β of G that surrounds a_2. The union of ν′_2 and β contains (after oxbow-removal) a non-self-touching path ν″_2
6.1. Embeddings in the Poincaré disk. Throughout this section we shall work with the Poincaré disk model of hyperbolic geometry (also denoted H), and we denote by ρ the corresponding hyperbolic metric.

6.2. Proof of Theorem 3.1. Let Γ be a doubly-infinite geodesic in the Poincaré disk. Pick a fixed but arbitrary total ordering < of Γ. Then Γ may be parametrized by any function p : Γ → R satisfying p(v) = p(u) + ρ(u, v) for u, v ∈ Γ, u < v, and we fix such p. Any x ∉ Γ has an orthogonal projection π(x) onto Γ (for x ∈ Γ, we set π(x) = x).
(6.2) ρ(d) ≥ ρ(e) ≥ 1, e ∈ E*,

and we choose d accordingly. By Lemma 6.1 applied to the geodesic Γ_d,

(6.3) ρ(π(e)) ≤ ρ(e) ≤ ρ(d), e ∈ E*,

where π denotes orthogonal projection onto Γ_d, and ρ(γ) is the hyperbolic distance between the endpoints of an arc γ. Let < and p be the ordering and parametrization of Γ_d given at the start of this subsection. We extend the domain of p by setting p(x) = p(π(x)), x ∈ H. We construct next a doubly-infinite path of G* containing d and lying 'close' to Γ_d. Write d = ⟨a, b⟩ where a < b. Let Γ_d^+ (respectively, Γ_d^−) be the sub-geodesic obtained by proceeding along Γ_d from b in the positive direction (respectively, from a in the negative direction). As we proceed along Γ_d^+, we encounter edges and faces of G. If e ∈ E is such that e ∩ Γ_d^+ ≠ ∅, then the intersection is either a point or the entire edge e (this holds since both e and Γ_d are geodesics).
Figure 6.2. The two cases that arise when $\Gamma_d^+$ meets an edge which is either perpendicular or not.
"Mathematics"
] |
Phosphor-converted LED modeling by bidirectional photometric data
For phosphor-converted light-emitting diodes (pcLEDs), the interaction of the illuminating energy with the phosphor does not behave as a simple wavelength-converting phenomenon alone, but is also a function of the various combinations of illumination and viewing geometries. This paper presents a methodology to characterize the converting and scattering mechanisms of the phosphor layer in pcLEDs by means of measured bidirectional scattering distribution functions (BSDFs). A commercially available pcLED with a conformal phosphor coating was used to examine the validity of the proposed model. The close agreement with measurement illustrates that the proposed characterization opens new perspectives for modeling phosphor-based conversion and scattering in white-lighting applications. ©2010 Optical Society of America

OCIS codes: (120.5820) Scattering measurements; (290.1483) BSDF, BRDF, and BTDF; (230.3670) Light-emitting diodes

References and links
1. S. Nakamura and G. Fasol, The Blue Laser Diode: GaN Based Light Emitters and Lasers (Springer-Verlag, New York, 1997).
2. R. Mueller-Mach, G. O. Mueller, M. R. Krames, and T. Trottier, "High-power phosphor-converted light-emitting diodes based on III-nitrides," IEEE J. Sel. Top. Quantum Electron. 8(2), 339–345 (2002).
3. http://www.philipslumileds.com/technology/whitelighting.cfm
4. N. Narendran, Y. Gu, J. P. Freyssinier-Nova, and Y. Zhu, "Extracting phosphor-scattered photons to improve white LED efficiency," Phys. Stat. Solidi A 202(6), R60–R62 (2005).
5. H. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, "Analysis of high-power packages for phosphor-based white-light-emitting diodes," Appl. Phys. Lett. 86(24), 243505 (2005).
6. Y. Ito, T. Tsukahara, S. Masuda, T. Yoshida, N. Nada, T. Igarashi, T. Kusunoki, and J. Ohsako, "Optical design of phosphor sheet structure in LED backlight system," SID Int. Symp. Digest Tech. Papers 39(1), 866–869 (2008).
7. C.-H. Tien, C.-H. Hung, B.-W. Xiao, H.-T. Huang, Y.-P. Huang, and C.-C. Tsai, "Planar lighting by blue LEDs array with remote phosphor," Proc. SPIE 7617, 761707 (2010).
8. S. C. Allen and A. J. Steckl, "ELiXIR—Solid-state luminaire with enhanced light extraction by internal reflection," J. Disp. Technol. 3(2), 155–159 (2007).
9. S. C. Allen and A. J. Steckl, "A nearly ideal phosphor-converted white light-emitting diode," Appl. Phys. Lett. 92(14), 143309 (2008).
10. K. Yamada, Y. Imai, and K. Ishi, "Optical simulation of light source devices composed of blue LEDs and YAG phosphor," J. Light Vis. Env. 27(2), 70–74 (2003).
11. Y. Zhu, N. Narendran, and Y. Gu, "Investigation of the optical properties of YAG:Ce phosphor," Proc. SPIE 6337 (2006).
12. Commission Internationale de l'Eclairage, Radiometric and Photometric Characteristics of Materials and their Measurement, 2nd Edition (CIE 38 (TC-2.3), Paris, 1977).
13. F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, Geometrical Considerations and Nomenclature for Reflectance (National Bureau of Standards (US), 1977), Monograph 160.
14. F. X. Sillion and C. Puech, Radiosity and Global Illumination (Morgan Kaufmann Publishers Inc., San Francisco, 1994).
15. J. de Boer, "Modelling indoor illumination by complex fenestration systems based on bidirectional photometric data," Energy Build. 38(7), 849–868 (2006).
16. C.-H. Tien and C.-H. Hung, "An iterative model of diffuse illumination from bidirectional photometric data," Opt. Express 17(2), 723–732 (2009).
17. M. E. Becker, "Evaluation and characterization of display reflectance," Displays 19(1), 35–54 (1998).
18. M. E. Becker, "Display Reflectance: Basics, Measurement, and Rating," J. SID 14(11), 1003–1017 (2006).
19. H.-T. Huang, C.-C. Tsai, Y.-P. Huang, J. Chen, J. Lin, and W.-C. Chang, "Phosphor conformal coating by a novel spray method for white light-emitting diodes as applied to liquid-crystal backlight module," in Proc. International Display Research Conference (Rome, Italy, 2009), 17.5.
20. M. Shaw and T. Goodman, "Array-based goniospectroradiometer for measurement of spectral radiant intensity and spectral total flux of light sources," Appl. Opt. 47(14), 2637–2647 (2008).
Introduction
Wavelength-converting schemes have been widely used in many illumination applications, including cold cathode fluorescent lamps (CCFLs), plasma emission devices, and phosphor-converted light-emitting diodes (pcLEDs). Across this variety of applications, the rapid progress of pcLEDs has attracted much attention due to the emerging LED lighting market. The most common pcLED scheme uses a broadband yellow phosphor (yttrium aluminum garnet: YAG) to absorb part of the blue-light flux from a GaN chip and generates white light by mixing the die-emitted blue light with the phosphor-converted yellow light [1]. In addition to phosphor improvements, the configuration of pcLEDs has also evolved from the conventional scheme, in which the YAG powder phosphor is coated directly on the GaN surface [2,3], to remote phosphor approaches. The separation of the phosphor from the LED surface was first proposed as the scattered photon extraction (SPE) structure by N. Narendran et al. to enhance the extraction efficiency [4]. Subsequently, a multi-functional design including a remote phosphor, a diffuse reflector cup, and a hemispherical dome was introduced to minimize the guided radiant flux inside the LED [5]. Remote phosphor concepts, with the phosphor separated from the blue LED by an air gap, have been pursued to achieve various targets, such as highly uniform planar sources [6,7] and enhanced light extraction by internal reflection [8,9].
A phosphor material absorbs energy in one region of wavelengths and re-emits it in a region of longer wavelengths. In pcLEDs, because the phosphor-scattered blue light and the phosphor-emitted yellow light have different radiant intensity distributions, the angular color distribution is non-uniform. Few studies have attempted to characterize the optical properties of the YAG phosphor in pcLEDs, owing to the massive quantity of measurements required and the complicated underlying physical processes. K. Yamada et al. simulated the YAG phosphor film in a pcLED by defining the transmitted and reflected flux of the blue and yellow light, respectively [10]. Although they analyzed the pcLED structure by optical simulation with measured phosphor properties, the accuracy of the simulated results was not verified. Zhu et al. used two integrating spheres to measure the transmitted and reflected power of a YAG phosphor slide illuminated by a fiber-guided source, thereby characterizing its optical throughput [11]. However, the interaction of the illuminating energy with the phosphor does not behave as a simple wavelength-converting phenomenon alone, but also depends on the illumination and viewing geometry. In other words, the description of the phosphor-light interaction must account for wavelength as well as geometry, and cannot be defined merely by the transmitted or reflected flux. Furthermore, the fiber-produced light sources used in those references are not representative of the field emitted by an LED chip. A general and complete description of the wavelength-converting properties should therefore be defined for the incident light fields of the various LED configurations.
In this study, we propose a simple but effective methodology to characterize the optical properties of the phosphor layer in pcLEDs by using measured bidirectional scattering distribution functions (BSDFs), which are regarded as the angular impulse responses of the phosphor layer. The BSDF characterization completely describes the energy relation in both direction and wavelength. The characterization methodology and its measurement are introduced, and a commercially available pcLED with a conformal phosphor coating is then examined to validate the proposed methodology.
Energy balance equation
To characterize the phosphor in a pcLED sample, the BSDFs were adopted as a general description of light propagation in terms of angular and wavelength variables [12,13]. However, the optical properties are prohibitively complicated and lead to massive quantities of data. In order to avoid an explosion of photometric data, the following conditions were assumed:

1. The geometrical configuration of the phosphor is treated in a thin-layer approximation, with a forward and a backward surface relative to the LED chip.
2. The optical features of the phosphor comprise wavelength conversion and scattering.
3. The relation between the incident and outgoing flux satisfies scalability and additivity, owing to the linear conversion between the LED-emitted and phosphor-emitted spectral power distributions.

In this paper, the forward mode is expressed in the equations. The associated photometric and geometric quantities in polar coordinates are illustrated in Fig. 1, and all scientific symbols and terminology used throughout this paper are listed in Table 1.
First of all, the energy balance equation [14] for a radiating surface is expressed as

(1) $L(x,y,\theta_t,\varphi_t) = L_e(x,y,\theta_t,\varphi_t) + L_s(x,y,\theta_t,\varphi_t)$,

where $L$, $L_e$, and $L_s$ are the total radiance, the emitted radiance, and the non-emitted (scattered) radiance leaving point $(x,y)$ in the transmitting-side direction $(\theta_t,\varphi_t)$. Because we assume the phosphor layer is homogeneous, the position $(x,y)$ is not included as a parameter below. Based on the wavelength-conversion mechanisms of pcLEDs, the energy balance equation of the phosphor is composed of three terms,

(2) $L(\theta_t,\varphi_t) = \int_{\Omega_i} \rho_{fs}^{B\text{-}B} L_i^{B} \cos\theta_i \, d\Omega_i + \int_{\Omega_i} \rho_{fe}^{B\text{-}Y} L_i^{B} \cos\theta_i \, d\Omega_i + \int_{\Omega_i} \rho_{fs}^{Y\text{-}Y} L_i^{Y} \cos\theta_i \, d\Omega_i$,

where $\rho_{fs}^{B\text{-}B}$, $\rho_{fe}^{B\text{-}Y}$, and $\rho_{fs}^{Y\text{-}Y}$ represent the bidirectional photometric data for the blue-to-blue, blue-to-yellow, and yellow-to-yellow pairs, respectively.
Here $\rho_{fs}^{B\text{-}B}$ is defined as the ratio of the transmitted radiance $dL_{fs}^{B}$ in the transmitted direction to the irradiance $dE_i^{B}$ from an incident direction. For pcLEDs, the GaN LED serves as the excitation source with a specific spectral power distribution $P_i^{B}(\lambda)$. In terms of blue-to-blue radiance, the phosphor acts as a typical scattering material; the phosphor-scattered radiance therefore has a spectral power distribution $P_{fs}^{B}(\lambda)$ identical to that of the incident light, peaking around 450 nm. Figure 2 shows the normally incident spectrum $P_i^{B}(\lambda)$ of the illumination and the transmitted spectrum $P_{fs}^{B}(\lambda)$ detected in the normal direction of the phosphor layer, as obtained by experimental measurement.
Thus, $L_i^{B}$ and $L_{fs}^{B}$ in Eq. (4) were derived by integrating the measured $P_i^{B}(\lambda)$ and $P_{fs}^{B}(\lambda)$ over the blue-light wavelength region (400–500 nm). Here the spectral BSDF $\rho_{fs}^{B\text{-}B}$ can be regarded as an angular spreading function with spectral dependence.
where $\rho_{fe}^{B\text{-}Y}$ is defined as the ratio of the transmitted radiance $dL_{fe}^{Y}$ in the transmitted direction to the irradiance $dE_i^{B}$ from an incident direction,

$\rho_{fe}^{B\text{-}Y}(\theta_i,\varphi_i;\theta_t,\varphi_t) = \dfrac{dL_{fe}^{Y}(\theta_t,\varphi_t)}{dE_i^{B}(\theta_i,\varphi_i)} = \dfrac{dL_{fe}^{Y}(\theta_t,\varphi_t)}{L_i^{B}(\theta_i,\varphi_i)\cos\theta_i\,d\Omega_i}$.

Different from the blue-to-blue radiance, the behavior of the forward-emitted spectral distribution $P_{fe}^{Y}(\lambda)$ depends on the phosphor conversion properties. Thus, $\rho_{fe}^{B\text{-}Y}$ must be obtained by wavelength integration over different spectral regions, separated into blue (400–500 nm) and yellow (500–750 nm), respectively.
Integrating the numerator over the yellow-light region and the denominator over the blue-light region gives

$\rho_{fe}^{B\text{-}Y}(\theta_i,\varphi_i;\theta_t,\varphi_t) = \dfrac{\int_{500\,\mathrm{nm}}^{750\,\mathrm{nm}} P_{fe}^{Y}(\theta_t,\varphi_t,\lambda)\,d\lambda}{\int_{400\,\mathrm{nm}}^{500\,\mathrm{nm}} P_i^{B}(\theta_i,\lambda)\cos\theta_i\,d\lambda\,d\Omega_i}$.

As shown by the measured spectral radiance distribution in Fig. 3, the emitted spectrum $P_{fe}^{Y}(\lambda)$ is broad over the yellow-light region. The down-conversion behavior of the phosphor can therefore be described by the dichromatic BSDFs.
2.4 Yellow-to-yellow radiance $L_{fs}^{Y}$
In addition to the incident radiance $L_i^{B}$ from the GaN LED, the backward phosphor-emitted yellow light can be recycled and induce additional forward scattering in the phosphor layer. To characterize this effect, the non-emitted (scattered) radiance in the energy balance equation, Eq. (1), must account for the recycled radiation, which appears as the third term in Eq. (2). Similar to the manipulation above, the recycling radiance $L_{fs}^{Y}$ is described by the integration of the incident light $L_i^{Y}$ weighted by the yellow-to-yellow bidirectional distribution function $\rho_{fs}^{Y\text{-}Y}$, defined as the ratio of the transmitted radiance $dL_{fs}^{Y}(\theta_t,\varphi_t)$ to the irradiance $dE_i^{Y}(\theta_i,\varphi_i)$. Beyond the dichromatic white-mixing scheme, the proposed methodology can be extended to a general form for multiple excitation radiances,

$L(\theta_t,\varphi_t) = \sum_m \int_{\Omega_i} \rho^{\lambda_0\text{-}\lambda_m}(\theta_i,\varphi_i;\theta_t,\varphi_t)\, L_i^{\lambda_0}(\theta_i,\varphi_i)\cos\theta_i\,d\Omega_i$,

where $\lambda_0$ is the wavelength of the light source, such as the blue light from the GaN LED, and the index $m$ denotes the $m$-th emitted spectrum, which depends on the light source and the converting properties of the phosphor. Through the characterization of the spectral BSDFs, the transmitting-side optical properties of a phosphor layer in a pcLED can be obtained numerically by the integration in Eq. (2). Several approaches have been proposed to tackle the radiance integration [15,16]. Among them, the Monte Carlo method is the most common and is available in commercial simulation tools [14]. Here we used a Monte Carlo based computational tool, LightTools, to implement the integration.
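To make the radiance integration concrete, the following is a minimal numerical sketch of one term of Eq. (2), assuming a rotationally symmetric BSDF tabulated at 5° steps and a simple Lambertian model for the chip-emitted blue radiance. The sample values and function names are illustrative placeholders, not the paper's measured data, and the commercial LightTools implementation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical measured blue-to-yellow BSDF at normal viewing (theta_t = 0),
# tabulated every 5 degrees in the incident polar angle; units of 1/sr.
theta_samples = np.radians(np.arange(0, 95, 5))
rho_samples = 0.3 * np.cos(theta_samples)          # stand-in for measured values

def rho_fe_BY(theta_i):
    """Interpolated blue-to-yellow BSDF for the normal viewing direction."""
    return np.interp(theta_i, theta_samples, rho_samples)

def L_i_B(theta_i):
    """Incident blue radiance; a simple Lambertian chip model (an assumption)."""
    return 1.0e4 * np.cos(theta_i)                 # W/(sr*m^2)

def mc_forward_emitted_radiance(n=200_000):
    """Monte Carlo estimate of L_fe_Y(0,0) = integral of rho * L_i_B * cos(theta_i)
    over the incident hemisphere. Sampling cos(theta_i) uniformly on [0, 1] with a
    uniform azimuth is uniform in solid angle, so the pdf is 1/(2*pi) per steradian."""
    mu = rng.uniform(0.0, 1.0, n)                  # cos(theta_i)
    theta_i = np.arccos(mu)
    integrand = rho_fe_BY(theta_i) * L_i_B(theta_i) * mu
    return 2.0 * np.pi * integrand.mean()

print(f"L_fe^Y(normal) ~ {mc_forward_emitted_radiance():.1f} W/(sr*m^2)")
```

The same estimator, run once per BSDF component and summed, evaluates the full three-term balance of Eq. (2); a production tool additionally traces rays through the package geometry.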
BSDFs measurement
The spectral BSDFs of a YAG phosphor layer were measured by the conoscopic approach in combination with an in-house light-source module [17,18]. Because of the rotationally symmetric properties of the YAG phosphor layer, the two-dimensional illumination sampling for the BSDF measurement can be reduced to one-dimensional illumination scanning. As illustrated schematically in Fig. 5(a), the specimen was illuminated by a collimated beam from a set of discrete incident directions $\theta_i$, and the corresponding angular spread function was collected by an objective lens of moderate numerical aperture for observation of the imaged plane. The definition of the spectral BSDF is a ratio of differentials, so the basic problem in practical measurement lies in the instrument's angular resolution. Here the in-house collimated source, whose divergence angle is limited to within ±1°, was used to illuminate the phosphor surface from each incident angle, with an illumination aperture of 2 mm diameter. Detailed discussion can be found in the literature [15].

Figure 5(b) shows one set of measured angular spread functions of a test specimen for different incident angles $\theta_i$ along a constant azimuthal direction $\varphi_i = 90°$. Every incident beam illuminating the specimen yields an angular spread function. As the measured BSDFs vary smoothly with incident angle $\theta_i$, the angular sampling is adequate to completely characterize the scattering behavior. The BSDFs are highly relevant to the geometric nature of the phosphor particulates.

In order to obtain the spectral BSDFs $\rho$ in Eq. (12), three components were measured individually. First, collimated incident beams with the designated spectral radiance distributions $P_i^{B}(\lambda)$ or $P_i^{Y}(\lambda)$ were set to illuminate the YAG phosphor film, where the phosphor was coated on a thin substrate with identical coating parameters. Then, the spectrophotometric measurement was executed by scanning over the imaging plane of the conoscopic system to obtain the corresponding spectral radiance distributions $P_{fs}^{B}(\theta_t,\varphi_t,\lambda)$, $P_{fe}^{Y}(\theta_t,\varphi_t,\lambda)$, and $P_{fs}^{Y}(\theta_t,\varphi_t,\lambda)$. Figure 6 shows the measured $P_{fs}^{B}(\theta_t,\lambda)$ and $P_{fe}^{Y}(\theta_t,\lambda)$, where the spectrometer scanned over the $\theta_t$ direction in 5° intervals under normal illumination by the blue LED source. Based on the definition of the dichromatic BSDF, the scattered and emitted angular spread functions can be separated by the integration boundary of the measured spectral distributions.

The spectral BSDFs also provide a figure of merit for qualitatively examining the optical properties of the phosphor film. They are highly dependent on the recipe and micro-features of the phosphor layer, so different manufacturers will have their own BSDFs. In this case, the transmitted angular distribution in Fig. 7(a) and (c) is concentrated in a relatively narrow angular range with shift-variant properties. On the other hand, the blue-to-yellow bidirectional distribution function in Fig. 7(b) implies that the wavelength-conversion mechanism effectively leaves the YAG phosphor layer as a near-Lambertian field.
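As a small worked example of the separation step just described, the sketch below integrates a synthetic measured spectrum over the 400–500 nm and 500–750 nm regions to recover the scattered-blue and emitted-yellow radiance at one viewing angle. The two-peak spectrum is a stand-in for the data of Fig. 6, not the actual measurement.

```python
import numpy as np

# Hypothetical spectrometer output at one viewing angle: wavelength (nm) and
# spectral radiance P in W/(sr*m^2*nm); the two-peak shape mimics Fig. 6.
lam = np.linspace(380.0, 780.0, 401)
P = 2.0 * np.exp(-((lam - 450.0) / 12.0) ** 2) \
    + 0.8 * np.exp(-((lam - 565.0) / 55.0) ** 2)

# Separate the scattered and converted parts at the 500 nm integration
# boundary, as done for the dichromatic BSDF in the text.
blue = (lam >= 400.0) & (lam <= 500.0)
yellow = (lam > 500.0) & (lam <= 750.0)
L_fs_B = np.trapz(P[blue], lam[blue])      # forward-scattered blue radiance
L_fe_Y = np.trapz(P[yellow], lam[yellow])  # forward-emitted yellow radiance

print(f"L_fs^B = {L_fs_B:.2f}, L_fe^Y = {L_fe_Y:.2f} W/(sr*m^2)")
```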
Verification
In order to validate the phosphor characterization, a commercially available pcLED with conformal phosphor coating was examined. First, the BSDF measurement of the YAG phosphor coated on a PET substrate was conducted, where the coating method and recipe of the phosphor were identical to those used at chip level. Then we imported the measured bidirectional photometric data (Section 3.1) into the simulation with a GaN blue LED chip (0.5 × 0.5 mm²) to predict both the blue and yellow radiant intensity distributions. The inset of Fig. 8 shows the measured pcLED, where the phosphor coated on the blue LED chip was fabricated by the pulsed spray coating method [19]. To account for the multi-wavelength emission, the simulation was performed in multiple steps. After summing the simulated results for each wavelength, the simulated far-field luminous intensity distribution and angular correlated color temperature (CCT) distribution were obtained, as shown in Fig. 8(a) and (b), respectively [20]. Compared with the experimental measurements, the numerical predictions show 98.9% and 97.9% correlation with the practical results. The CCT deviations result mainly from BSDF measurement errors attributable to distortion in the conoscopic system; after the energy superposition of Eq. (2), the BSDF errors accumulate. Despite these deviations, the simulated CCT distribution curve still provides useful information for evaluating the color uniformity of a pcLED configuration. The close agreement with measurement demonstrates the validity of the proposed model for phosphor description in pcLED applications.
Conclusions
A simple but effective phosphor model for pcLEDs has been proposed. The major advantage of this study is that there is no need to formulate the complex physical mechanism of phosphor scattering from a microscopic viewpoint. Instead, as long as the coated phosphor layer is available, the proposed methodology, assisted by the measured BSDFs, can characterize the phosphor properties in terms of direction and wavelength. By Monte Carlo simulation, the pcLED luminous intensity distribution and its angular CCT distribution can be predicted with high accuracy. Close agreement with a commercially available pcLED validates the proposed scheme, which has clear relevance for LED development in illumination applications.
$L_{fs}^{B}$, $L_{fe}^{Y}$, and $L_{fs}^{Y}$ indicate the forward-scattered blue-light, forward-emitted yellow-light, and forward-scattered yellow-light radiance, respectively. As the equation shows, the three transmitting radiance terms are obtained by the integration of the incident blue-light radiance $L_i^{B}$ and yellow-light radiance $L_i^{Y}$ over the full solid angle $\Omega_i$ of the incident hemisphere, with the weights $\rho_{fs}^{B\text{-}B}$, $\rho_{fe}^{B\text{-}Y}$, and $\rho_{fs}^{Y\text{-}Y}$.
Fig. 1. Photometric and geometric quantities in the polar coordinate system.
Fig. 2. The normally incident illumination $P_i^{B}(\lambda)$ and the illumination $P_{fs}^{B}(\lambda)$ detected in the normal direction.

2.3 Blue-to-yellow radiance $L_{fe}^{Y}$

The second term of Eq. (2), $L_{fe}^{Y}$, represents the emitted radiance subject to wavelength conversion by the phosphor layer. In general, the conversion efficiency of the phosphor is determined merely by energy-based quantities; once the spatial dependence is considered, a more complete description including the directional function is required. For pcLEDs, the phosphor conversion obeys scalability and additivity due to the unique relation between the chip-emitted and phosphor-emitted spectral distributions. The forward emitted radiance $L_{fe}^{Y}$ can thus be linearly related to the incident radiance $L_i^{B}$ through the blue-to-yellow bidirectional distribution function $\rho_{fe}^{B\text{-}Y}$,

$L_{fe}^{Y}(\theta_t,\varphi_t) = \int_{\Omega_i} \rho_{fe}^{B\text{-}Y}(\theta_i,\varphi_i;\theta_t,\varphi_t)\, L_i^{B}(\theta_i,\varphi_i)\cos\theta_i\,d\Omega_i$.
Fig. 3. The normally incident illumination $P_i^{B}(\lambda)$ and the illumination $P_{fe}^{Y}(\lambda)$ detected in the normal direction.
As with the blue-to-blue bidirectional distribution function $\rho_{fs}^{B\text{-}B}$ introduced in Section 2.2, $\rho_{fs}^{Y\text{-}Y}$ is obtained by integration of the spectral distributions $P_i^{Y}$ and $P_{fs}^{Y}$.
A sample of the measured spectral distributions $P_i^{Y}(\lambda)$ and $P_{fs}^{Y}(\lambda)$ is shown in Fig. 4. It is important to note that the spectral BSDFs are highly dependent on the phosphor recipe, wavelength, and illumination/viewing geometry; different specimens and illuminating sources will exhibit different BSDFs.
Fig. 4. The normally incident illumination $P_i^{Y}(\lambda)$ and the illumination $P_{fs}^{Y}(\lambda)$ detected in the normal direction.

2.5 Radiance integration

To summarize the aforementioned manipulation of the phosphor characterization, the spectral BSDFs comprise three independent components: $\rho_{fs}^{B\text{-}B}$, $\rho_{fe}^{B\text{-}Y}$, and $\rho_{fs}^{Y\text{-}Y}$.
Fig. 5. (a) Schematic measurement setup of the BSDFs; (b) the measured angular spread functions of an available specimen.
Figure 7(a)–(c) show the measured $\rho_{fs}^{B\text{-}B}$, $\rho_{fe}^{B\text{-}Y}$, and $\rho_{fs}^{Y\text{-}Y}$ of the considered phosphor film. In addition to the forward radiance in the transmitting hemisphere of the phosphor layer, the backward components can be obtained via the same procedure as well.
Table 1. Nomenclature

Abbreviations:
L — radiance (W/(sr·m²))
I — radiant intensity (W/sr)
Φ — light flux (W)
P — spectral radiance (W/(sr·m³))
ρ — bidirectional function (sr⁻¹)

Subscripts and superscripts:
i — incident
t — transmitted
B — blue light
Y — yellow light
"Physics"
] |
Design of an Integrated Sub-6 GHz and mmWave MIMO Antenna for 5G Handheld Devices
The reported work demonstrates the design and realization of an integrated mid-band (sub-6 GHz) and mmWave multiple-input, multiple-output (MIMO) antenna for 5G handheld devices. The proposed prototype consists of a two-port MIMO configuration of the mid-band antenna placed at the top and bottom of the substrate, while a 4-port mmWave MIMO antenna is placed sideways. The MIMO configuration at the top and bottom consists of a two-element array to achieve high gain in the mid-band spectrum, while the antennas placed sideways are optimized to cover the 5G mmWave spectrum. The overall dimensions of the board were selected to match those of a smartphone, i.e., 151 mm × 72 mm. The mid-band antenna has an operational bandwidth of 2.73 GHz, whereas the mmWave antenna has an impedance bandwidth of 3.85 GHz, with peak gains of 5.29 and 8.57 dBi, respectively. Furthermore, the design is analyzed for various MIMO performance parameters; the proposed antennas were found to offer high performance in terms of envelope correlation coefficient (ECC), diversity gain (DG), mean effective gain (MEG), and channel capacity loss (CCL) within the operational range. A fabricated prototype was tested, and the measured results show strong agreement with the predicted results. Moreover, the proposed work is compared with state-of-the-art work for the same applications to demonstrate its potential for the targeted application.
Introduction
In the present era, many wireless communication technologies and systems are available which provide data exchange with high data rates and low loss rates. Currently, the 4th generation of communication (4G), also known as Long Term Evolution (LTE), leads all other wireless technologies by providing high data rates and throughput [1]. However, the increased demands of the digital era require wide bandwidths with high throughputs, resulting in the limitation of using the sub-6 GHz band alone for future communication systems [2]. This pushes researchers towards the exploration of the mmWave spectrum. The proposed design operates in the lower 5G/4G LTE bands, i.e., 3.3–4.2, 4.4–5.0, and 4.8–5.0 GHz, and the IMT 2.5–2.69 GHz band. In addition, the design also operates effectively in the 5G mmWave band, i.e., 24.25–27.5 GHz. Aside from the wide bandwidth coverage, the proposed architecture was meticulously examined in terms of both single-antenna and MIMO performance. Owing to the high performance parameters of the antenna and proper MIMO characteristics, the proposed design may be considered a potential candidate for mid- and high-band handheld applications.
Proposed Integrated Design
The proposed integrated design was modeled and simulated using a standard antenna design and simulation tool, Computer Simulation Technology (CST) Studio. The prototype was designed for smartphone or handheld applications (standard board dimensions were assumed for the design). The proposed prototype uses Rogers RT/duroid 5880 material as the substrate, which has a relative permittivity εr of 2.2 and a loss tangent of 0.0004. The final optimized design is reported in Figure 1, in which there are two MIMO configurations of antennas, one along the length and the other along the width of the board. The MIMO configuration consisting of two antennas along the length (one at the top and the other at the bottom) is designed for the 4G LTE and lower 5G band applications. In this configuration, each antenna element is a novel array of two linear elements with a parallel microstrip feeding network. A partial ground structure is used with the array antenna, and the dimensions of the ground were optimized to acquire the required characteristics in the desired band. Similarly, the four-antenna MIMO configuration along the width of the board is designed for the 5G mmWave band. Each antenna element in this configuration is designed with a defected ground structure (DGS), optimized by empirical parametric analysis with the objective of radiation pattern and gain enhancement in the first place and MIMO performance improvement in the second place. The optimized dimensions annotated in Figure 1 are reported in Table 1.
The proposed antenna design progression is discussed in two parts. In the first subsection, the step-by-step design of the mid-band MIMO antenna is discussed. In the second subsection, the mmWave high-frequency MIMO antenna design process is explained comprehensively.
Mid-Band (4G LTE and Lower 5G) MIMO Configuration
A systematic procedure was followed for the design of the low-frequency MIMO configuration. The reported optimized antenna was designed through five evolution stages. In the first stage, standard mathematical Equations (1)–(3) were used for the design of the initial microstrip-fed rectangular patch antenna with a complete ground plane [24]. The S11 parameter for this stage is reported in Figure 2 as Step-1. It can be observed that the antenna operates, as per the −10 dB standard, from 3.4 to 6 GHz. However, in order to cover the mentioned application bands, horizontal slits were added to the design in Stage 2; their dimensions were optimized using empirical parametric analysis. The S11 parameter for this stage is reported in Figure 2 as Step-2.
Afterwards, an additional pair of slits was introduced in the upper half of the design, with the position and dimensions of the slits optimized by parametric analysis. The S11 parameter for the modified design is reported in Figure 2 as Step-3. Furthermore, in order to increase the performance of the design, different geometric transformations were applied. It was found that bending the upper and lower half portions provides good results in terms of VSWR and matching. Bending was explored at different angles, and a bend of 8° was found to generate the best results. The S11 parameters for the fourth and fifth stages are reported in Figure 2 as Step-4 and Step-5. Equations (1)–(3), used to obtain the initial patch dimensions, are illustrated in the sketch below.
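The paper does not reproduce Equations (1)–(3) in this extract; as a hedged sketch, we assume they are the standard transmission-line-model expressions for a rectangular microstrip patch. The substrate height of 0.787 mm used below is likewise an assumption (a common RT/duroid 5880 thickness), and the printed dimensions are only starting values before parametric optimization.

```python
import math

C0 = 3.0e8  # speed of light (m/s)

def patch_dimensions(f, eps_r, h):
    """Transmission-line-model starting dimensions of a rectangular patch
    at resonant frequency f (Hz), permittivity eps_r, substrate height h (m)."""
    W = C0 / (2.0 * f) * math.sqrt(2.0 / (eps_r + 1.0))           # patch width
    eps_eff = (eps_r + 1.0) / 2.0 + (eps_r - 1.0) / 2.0 \
              * (1.0 + 12.0 * h / W) ** -0.5                      # effective permittivity
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) \
         / ((eps_eff - 0.258) * (W / h + 0.8))                    # fringing-field extension
    L = C0 / (2.0 * f * math.sqrt(eps_eff)) - 2.0 * dL            # patch length
    return W, L

# Rogers 5880 (eps_r = 2.2); 0.787 mm substrate height is an assumption.
for f in (3.5e9, 28.0e9):
    W, L = patch_dimensions(f, 2.2, 0.787e-3)
    print(f"{f/1e9:>4.1f} GHz: W = {W*1e3:6.2f} mm, L = {L*1e3:6.2f} mm")
```

Run for both bands, this yields initial patch sizes on the order of a few tens of millimetres at 3.5 GHz and a few millimetres at 28 GHz, which the empirical parametric studies then refine.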
After the design of the novel single element, an array configuration was explored to further improve the gain and bandwidth. The S-parameter for this stage is reported in Figure 3a. Finally, a two-element MIMO configuration for the mid-band applications was designed by placing the final array at the top and bottom of the board. The locations were selected keeping in view the placement of other electronic components in standard devices.
The detailed annotations and dimension values for the single-element, array, and MIMO antennas designed for the mid-band are reported in Figure 3 and Table 1.
The S-parameters for the single element, array, and MIMO configuration are reported in Figure 4. It can be noted that the designed single element resonates from 2.83 to 4.03 GHz, covering a −10 dB bandwidth of 1.2 GHz (the lower 5G and 4G LTE bands). However, as a result of the gain-bandwidth trade-off, the gain across the entire band is not high; therefore, an array configuration was explored and optimized by empirical parametric analysis. The final optimized array antenna operates from 2.43 to 5.22 GHz, covering a −10 dB bandwidth of 2.79 GHz that spans the 4G LTE and 5G sub-6 GHz bands, with a maximum IEEE gain of 6.1 dBi and maximum radiation and total efficiencies of 96.3% and 95.4%, respectively. To meet data-rate requirements, a MIMO configuration is needed for 4G LTE and 5G designs; therefore, a two-element MIMO configuration was designed in the final stage. The S-parameters for both MIMO antennas are reported in Figure 4a. Both antennas have the same S-parameters as the array antenna, which is an indicator of high isolation. The transmission parameters (|S12| and |S21|) for the configuration are reported in Figure 4b. Even in the worst-case scenario, the isolation between the antenna elements is 28 dB across the entire band.
High-Band (mmWave 5G) MIMO Configuration
The initial design of the antenna is based on Equations (1)–(3), which were used to design the mmWave patch antenna operating at 28 GHz. The antenna was then optimized by an empirical multi-parametric study, and the final design was optimized for the 24.25–28 GHz mmWave band. Afterwards, the antenna was used to create a 4-port MIMO configuration. The final single-element design, along with the final MIMO design, is shown in Figure 5.
The S-parameters for the single element and the MIMO configuration are reported in Figure 6. The designed single element operates from 24.06 to 28 GHz, with an operational −10 dB bandwidth of 3.94 GHz, thereby covering the entire mmWave band-1 (24.25–27.5 GHz) of the 30 GHz group. The maximum IEEE gain of the antenna is 8.57 dBi.
The reflection parameters for the MIMO configuration are also reported in Figure 6a. The operational frequency range of the configuration is from 24.15 to 28 GHz, with an operational −10 dB bandwidth of 3.85 GHz. The MIMO configuration thus still operates in the required band, with a maximum IEEE gain of 8.57 dBi and a MIMO sum gain of 13.8 dBi. The transmission parameters for the proposed design are reported in Figure 6b. Even in the worst-case scenario, a minimum isolation of 30 dB was achieved, which is significantly high.
The distances between the antennas in the MIMO configuration are provided in Figure 7. It can be seen that the minimum distance between two antennas in the mentioned MIMO configuration is 45 mm and the maximum distance between any two antenna elements is 72.31 mm.
Integrated (4G LTE, Lower 5G and mmWave 5G) MIMO Configuration
The proposed antenna is designed for hybrid handheld devices. Both the lower and higher bands have their own significance and demand; therefore, an integrated design for future smartphones and handheld devices is proposed. The placement of the 4G LTE, lower 5G, and mmWave 5G antennas was selected by empirical parametric study. The integrated design is reported in Figure 1. The S-parameters for all elements remain the same in the integrated environment; therefore, the design may be considered an appropriate candidate for integrated mid-band and mmWave 5G applications.
Results and Discussion
The manufactured hardware prototype is shown in Figure 8. Here, the Sub-Miniature version A (SMA) type connectors used for the mmWave 5G MIMO configuration are K8400A connectors, which belong to the 2.92 mm connector series, have an operational frequency range of up to 40 GHz, and present an impedance of 50 Ω. The equipment used for the measurements also has an operational range of up to 40 GHz.
Surface Current
The surface currents for the various lower-band antenna configurations, i.e., the single element, the 2-element array, and the 2-port MIMO antenna, are depicted in Figure 9. In all cases, when either antenna-1 or antenna-2 is excited while the other elements are kept un-excited, there is no significant change in the surface current density of the other antenna. This indicates good isolation characteristics; in other words, the radiating characteristics of any single element remain free from the influence of the other element. In addition, the current density is distributed around the feed and patch of each element, which results in the generation of an omni-directional radiation pattern, as explained briefly in [25]. The surface current for the mmWave MIMO configuration is reported in Figure 10. In all of the cases in which antenna-1, antenna-2, antenna-3, or antenna-4 is excited, there is no significant change in the surface current density of the other antennas, showing low influence of the surrounding elements on the performance of the excited element. Moreover, the maximum current density is around the middle of the patch for both the single-element and MIMO antenna configurations, as depicted in Figure 10. It is worth noting that the surface current for the integrated antenna configuration remains identical to those of the individual MIMO configurations of the 4G and 5G antenna systems. Thus, the radiation characteristics of both the lower- and higher-frequency antenna systems remain unchanged after their integration on the same board.
Scattering Parameters
Scattering parameters (S-parameters) for each of the MIMO configurations mounted in the integrated design were simulated and measured in order to report both the practical and simulated characteristics of the design. The Vector Network Analyzer (VNA) used for the measurement of the S-parameters was an Anritsu ShockLine MS46122B. The measurement setup for the S-parameters is reported in Figure 11. The measured and simulated S-parameters for both MIMO antenna configurations, including reflection and transmission coefficients, are plotted in Figure 12.
It can be noted in Figure 12a that, for the low-band MIMO configuration, the measured reflection coefficients of both antennas correlate with the simulated results; therefore, the measured operational bandwidth of the design is the same as in simulation. The slight change in the characteristics is due to non-ideal fabrication and connector mounting. Similarly, the reflection coefficients for the mmWave antennas are reported in Figure 12b, where the measured results again correlate with the simulated results; the slight differences in the measured traces are due to non-ideal fabrication and other real-world impairments. The transmission characteristics for the mmWave configuration are reported in Figure 12c,d.
Here, it may be noted that the minimum isolation achieved is 29.03 dB, which is still high and indicates good practical isolation characteristics. From the measured results, the effective bandwidth of the antenna in the practical case is the same as the claimed simulated bandwidth.
Far-Field Results
For the verification of the far-field characteristics of the proposed integrated antenna, the fabricated antenna was tested in a shielded RF anechoic chamber. Images of the testing setups for the mid-band and high-band antennas are shown in Figure 13. The simulated and measured gain results for the integrated antenna design are reported in Table 2; the measured and simulated gains have low residual error. The 2D radiation patterns of the integrated design were measured in the same chamber. The simulated and measured radiation patterns in the θ = 90° and φ = 90° planes at 3.8 GHz are reported in Figure 14, and those at 27.5 GHz in Figure 15. At both frequencies, the simulated and measured radiation patterns correlate well with each other; however, at the lower frequency of 3.8 GHz, slight differences were observed due to manufacturing impairments, while at the higher frequency of 27.5 GHz some ripples appeared because of unavoidable mismatches that occur at higher frequencies. The remaining differences between the measured and simulated patterns are due to the non-ideal manufacturing process and the limitations of the testing and measurement equipment. In spite of such impairments, the radiation patterns are still good enough for practical use.
MIMO Performance Analysis
The MIMO performance of the proposed design was analyzed using the envelope correlation coefficient (ECC), diversity gain (DG), channel capacity loss (CCL), and mean effective gain (MEG). Detailed analyses of each parameter are reported in the relevant sections below.
Envelope Correlation Coefficient
The envelope correlation coefficient for a MIMO configuration can be measured directly using a 3D chamber, or can be calculated using Equation (4), provided in [26].
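Equation (4) itself is not recoverable from this extract; for reference, a widely used S-parameter-based form of the ECC between elements i and j — offered here as an assumption about the form intended, since [26] may instead use the far-field definition — is:

$$\rho_e(i,j) = \frac{\left|S_{ii}^{*}S_{ij} + S_{ji}^{*}S_{jj}\right|^{2}}{\left(1-|S_{ii}|^{2}-|S_{ji}|^{2}\right)\left(1-|S_{jj}|^{2}-|S_{ij}|^{2}\right)}$$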
Equation (4) is used to calculate the value of the ECC between antenna elements i and j. For 4G wireless devices, the allowed limit for the ECC is 0.3. The envelope correlation coefficient for the mid-band MIMO configuration is reported in Figure 16 and for the high-band in Figure 17. The values of the ECC were measured using the above-mentioned method. It can be noted that the values of the ECC (measured and simulated) for both configurations are within the allowed operational range for the designed bands.
Diversity Gain
Diversity gain is a MIMO performance parameter used to capture the effect of spatial diversity in a MIMO system: the less correlated the received signals, the higher the value of this parameter. The diversity gain for a MIMO system can be calculated using Equation (5).
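Equation (5) did not survive extraction; the standard relation between DG and the ECC, which we assume is what is meant here, is:

$$\mathrm{DG} = 10\sqrt{1-\left|\rho_e(i,j)\right|^{2}}$$

Under this relation, a low ECC drives the DG towards its ideal value of 10 dB.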
The plots of the simulation- and measurement-based DG for the mid-band and high-band MIMO configurations are reported in Figures 18 and 19. It can be noted that the values of the DG for both simulation and measurement fall within the typical range of DG reported in the literature.
Mean Effective Gain
For MIMO systems, the standalone antenna gain cannot be considered the final performance metric. In addition to the typical standalone antenna gain, the mean effective gain was introduced; it is indicative of the gain performance of an antenna in a MIMO configuration. The typical formula reported in the literature for the computation of the MEG is given below [27].
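The formula referenced above is missing from the extracted text; a commonly used S-parameter-based expression for the MEG of element i in an N-port MIMO system — presented here as an assumption about what [27] provides — is:

$$\mathrm{MEG}_i = 0.5\left(1-\sum_{j=1}^{N}\left|S_{ij}\right|^{2}\right)$$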
The typical range for the MEG is from −12 to −3 dB. The MEGs based on simulation and hardware measurements are reported in Figures 20 and 21 for the mid-band and high-band, respectively. It can be noted that the values of the MEG for all of the antennas in both MIMO configurations are within the allowed range; the antennas therefore perform well not only standalone but also in the MIMO environment.
Channel Capacity Loss (CCL)
The channel capacity loss is an indicator of the change in capacity due to the MIMO environment. The CCL for the MIMO antenna can be calculated by the equation reported as Equation (7), provided in [26,27], where

$\mathbf{a} = \begin{pmatrix}\sigma_{11} & \sigma_{12}\\ \sigma_{21} & \sigma_{22}\end{pmatrix}$.

The typical value considered as the upper limit for this performance parameter is 0.4. The CCLs based on the simulations and measurements are reported in Figures 22 and 23; the values of the CCL for both the simulated and the measured results are below 0.4.
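Equation (7) itself is missing from the extraction; the standard CCL expression built on the correlation matrix a above — offered as an assumption consistent with [26,27], with the usual two-port entries — is:

$$C_{\mathrm{loss}} = -\log_{2}\det(\mathbf{a}), \qquad \sigma_{ii} = 1-\left(|S_{ii}|^{2}+|S_{ij}|^{2}\right), \qquad \sigma_{ij} = -\left(S_{ii}^{*}S_{ij}+S_{ji}^{*}S_{jj}\right)$$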
Comparative Analysis
A summary of the comparative analysis of the proposed design is reported in Table 3. The design reported in [16] provides a mid-band bandwidth of 0.2 GHz (3.4–3.6 GHz), whereas the proposed design provides a mid-band bandwidth of 2.79 GHz; in addition, in all other figures of merit the proposed design provides even better performance. In [17], the reported design has an operational −6 dB bandwidth (in the best case) of 0.4 GHz (3.4–3.8 GHz) in the lower 5G band, and of 0.775 GHz (5.15–5.925 GHz) in another band. In comparison, the proposed design provides a continuous −10 dB bandwidth of 2.79 GHz, covering 4G LTE and the complete lower 5G band, and shows better performance in all of the other mentioned performance metrics. The design reported in [18] is a multi-layer design with antenna dimensions significantly larger than those of our proposed design; it does not cover the complete lower 5G band, and the gain of its low/mid-band antenna is lower than that of our proposed design. Its operational bandwidth and gain in both the low- and high-frequency bands are lower than ours; therefore, the proposed design is the better option. In [20], a design based on a conformal approach was proposed. Its mid-band characteristics are comparatively lower than those of our proposed design, and in the high-band its operational bandwidth is 2.01 GHz (26.99–29 GHz), whereas our proposed design has an operational bandwidth of 3.85 GHz (24.15–28 GHz), covering the 24.25–27.5 GHz mmWave band. Therefore, keeping the gain-bandwidth trade-off in view, our proposed design, which offers 8.57 dBi gain at 28 GHz with a bandwidth of 3.85 GHz, outperforms the design reported in [20]; in addition, the 9 dB gain in that design's reported performance characteristics is not constant over the entire band. In [21], the reported design has an array configuration in the mid-band with a footprint larger than our proposed antenna, offering bandwidth in chunks of 0.16 and 0.55 GHz, whereas our proposed design has a continuous bandwidth of 2.79 GHz; the gain-bandwidth trade-off, efficiency, and other figures of merit of our design in the mid-band are better. The mmWave antenna reported in [21] is also an array, with a footprint far bigger than the proposed design and a bandwidth of 1.3 GHz, while the proposed design has an operational bandwidth of 3.85 GHz in the high-band; our proposed design is therefore better in terms of gain-bandwidth trade-off and compactness. The design reported in [22] performs worse than our proposed design in almost every aspect. In [23], an integrated design for lower 5G and mmWave was reported; its lower/mid-band MIMO antenna has an operational bandwidth of 0.28 GHz, which is low compared to the 2.79 GHz of our proposed design, and the proposed design provides better performance in all of the other figures of merit. That work also reported a MIMO antenna configuration in the mmWave 5G band; the antenna is likewise an array with a footprint bigger than our proposed antenna.
Moreover, the isolation offered by our proposed design is 29 dB or higher, whereas the isolation of the design reported in [23] is 28 dB or higher; our proposed design therefore offers better isolation. In addition, the radiation efficiency of our proposed design is also higher. Therefore, our proposed design can be considered a good alternative to the design reported in [23].
"Engineering",
"Computer Science"
] |
Removal of arsenic from water using copper slag
The potential use of copper slag (CS) as an adsorbent for removing arsenic contamination from water was examined. The influence of solution pH, initial arsenate (As(V)) and arsenite (As(III)) concentrations, and adsorbent dosage was investigated by batch experiments to elucidate the mechanism of arsenic adsorption onto CS. The adsorption kinetics indicated that the second-order kinetic model best described the adsorption process. The adsorption data were analysed using isotherm models and were well fitted by the Langmuir model. The maximum removal of As(V) and As(III) achieved was 98.76% and 88.09%, respectively, at an adsorbent dose of 10 g/L with an initial As(V) and As(III) concentration of 300 μg/L. This study showed that copper slag is a suitable adsorbent for the removal of arsenic from water, with the capacity to reduce arsenic levels to below 10 μg/L, the limit set for drinking water by the World Health Organization.
Introduction
Arsenic is one of the most harmful and toxic elements found in nature. Arsenic is introduced into water through both natural and anthropogenic sources (Hossain, 2006). There have been reports of arsenic poisoning in Argentina, Bangladesh, India, Nepal, Taiwan, the USA, and Turkey (Jean, Bundschuh, and Bhattacharya, 2010). Prolonged exposure to arsenic can lead to serious health problems such as cancers of the liver, kidney, skin, and stomach (Meng, Bang, and Korfiatis, 2000). This suggests an urgent need for efficient and low-cost arsenic removal techniques. The World Health Organization (WHO) has set the drinking water standard at 10 ppb (0.01 mg/L) (WHO, 1993).
Surface water and groundwater contaminated with pollutants can be treated by various methods such as coagulation, precipitation, membrane processes, and adsorption (Mohan and Pittman, 2007). Many kinds of adsorbents, such as iron oxide-loaded slag (Lakshmipathiraj et al., 2006), activated alumina (Thirunavukkarasu, 2002), anionic clays (Türk and Alp, 2012), nano-magnetite (Türk, Alp, and Deveci, 2010), and Fe-hydrotalcite, have been developed to remove arsenic. Previous studies have reported that iron-based minerals are useful for the removal of arsenic from water. Although many adsorbents have been developed, few of them are implemented in small settlements to remove pollutants from groundwater (Makris, Sarkar, and Datta, 2006). For these reasons, low-cost waste materials suitable for small-scale application in rural areas are in high demand. An example is the use of a waste by-product from the flotation of copper slag (CS) as a low-cost sorbent. CS is obtained during the smelting and refining of copper and contains copper, zinc, cobalt, and nickel (Gorai, Jana, and Premchand, 2003). During smelting, a copper-rich matte phase (sulphides) and a slag phase (oxide-rich) are formed. The slag phase consists predominantly of FeO, Fe₂O₃, and SiO₂, with small amounts of Al₂O₃, CaO, and MgO, and Cu, Co, and Ni (as metal) (Davenport et al., 2002).
The annual CS waste production capacity at the Black Sea Copper Works is approximately 150 000 t (Alp et al., 2009); as a result, about 1.5–2.0 Mt of CS waste has been disposed of. Improper disposal can result in toxic metal ions leaching into the groundwater, with ensuing serious environmental problems (Davenport et al., 2002). Due to environmental restrictions and lack of storage space, various options for the treatment of such waste materials have been investigated. CS wastes are used in various applications to recycle waste products and prevent pollution (Alp et al., 2009); for example, the use of CS as an iron source in the production of Portland cement has been reported. Adsorption is an effective and inexpensive method of remediation if the adsorbent can be obtained cheaply and in large amounts (Zhang et al., 2004). Arsenic is strongly adsorbed on Fe oxides and hydroxides (Türk and Alp, 2012), which are abundant in CS. However, a literature review found no reports on the use of CS for arsenic removal from water. In this research, the arsenic removal characteristics of CS were investigated.
Materials
CS obtained from the Black Sea Copper Works (Samsun, Turkey) was used as the adsorbent. The chemical composition of the slag was determined using inductively coupled plasma mass spectrometry (ICP-MS, ACME Analytical Laboratories). The major components in the sample were Fe₂O₃ (59.08%) and SiO₂ (30.60%). The Brunauer-Emmett-Teller method was used to determine the surface area (4.81 m²/g for CS). X-ray diffraction (XRD) patterns of CS were obtained using a RIGAKU D/Max-IIIC diffractometer; the operating conditions were 35 kV and 15 mA, and the sample was scanned between 2° and 60°. A pinch of the CS sample was poured onto adhesive carbon tape, and its surface was then covered with a very thin (5–10 µm) gold layer by the physical vapour deposition (PVD) technique. The results of the X-ray analysis and a scanning electron microscopy (SEM, JEOL JSM 5600) secondary-electron image of CS are shown in Figure 1. In the sample, fayalite (Fe₂SiO₄), magnetite (Fe₃O₄), quartz (SiO₂), and cristobalite (SiO₂) are present.
The point of zero charge (pH_pzc) of CS was found to be 7; this was determined by equilibrating CS with 0.1 M NaCl solutions at different pH values (3–12). Türk, Alp, and Deveci (2010) give details of the procedure.
Adsorption of arsenic
In adsorption processes, pH is an important parameter: it affects the speciation of the metal in aqueous solution and the surface charge of the adsorbent (Zhang and Hideaki, 2005). Figure 2 shows the effect of pH on the adsorption of As(V) and As(III) onto CS. It can be seen that the amount of arsenic adsorbed is essentially independent of pH. After 5 hours of adsorption, the residual As(V) in solution was 6.2–2.0 µg/L, and the residual As(III) 55.2–40.0 µg/L, over pH 3–9. The pH_pzc value of the CS sample was determined to be pH 7, and at pH values greater than pH_pzc the CS surface is negatively charged. Although both the adsorbent surface and the sorbate species were negatively charged, adsorption did not decrease at pH values above pH_pzc. This may be due to a buffer effect: in a low-pH medium, the pH of the bulk solution increases in the presence of CS, while at high pH values, acid dissociation of CS reduces the pH of the solution (Tian and Shen, 2009). As a result, a pH value of 9.0 was selected, and all subsequent experiments in the study were performed at pH 9.

Figure 3 shows the effect of CS dosage. Arsenic removal efficiency increased with increasing CS dosage. At the end of 5 hours at a CS dosage of 1 g/L, the concentrations of As(V) and As(III) in solution were 104.3 µg/L and 142.3 µg/L respectively, compared to 6.31 and 19.3 µg/L at 10 g/L CS. All further experiments in the study were performed at a dosage of 10 g/L CS.
Kinetic modelling
The removal of As(V) and As(III) versus time is illustrated in Figure 4. CS removed more than 81% of the arsenic within the first hour, and the removal efficiency increased gradually to 99% within 3 hours. The adsorption capacity for arsenic was approximately 30 µg/g at sorption equilibrium. In these experiments, 0.5 g of CS was placed in a flask containing up to 100 mL of arsenate solution at a concentration of 300 µg/L. The adsorption kinetics depend strongly on the characteristics of the adsorbent material and the adsorbate species, which also influence the adsorption mechanism (Bektaş, Akman, and Kara, 2004). Pseudo-first-order and pseudo-second-order models were used to fit the experimental kinetic data and to understand the mechanism of the adsorption process. The pseudo-first-order equation (Ho and McKay, 1998) is

[1] $\log(q_e - q_t) = \log q_e - \dfrac{k_1}{2.303}\, t$

and the pseudo-second-order equation is

[2] $\dfrac{t}{q_t} = \dfrac{1}{k_2 q_e^2} + \dfrac{t}{q_e}$

where $q_e$ is the adsorption capacity (µg/g) at equilibrium, $q_t$ the adsorption capacity (µg/g) at time $t$, $k_1$ the pseudo-first-order rate constant (min⁻¹), and $k_2$ the pseudo-second-order rate constant (g mg⁻¹ s⁻¹).
The first-order rate constant $k_1$ and equilibrium adsorption density $q_e$ (Equation [1]) at the initial concentrations of As(V) and As(III) were calculated from the slope and intercept of plots of $\log(q_e - q_t)$ versus $t$. The coefficients of determination $R^2$ were found to be 0.947 and 0.854, respectively.
The pseudo-second-order adsorption parameters $q_e$ and $k_2$ in Equation [2] were determined by plotting $t/q_t$ versus $t$ (Figures 4a, 4b). The coefficient of determination ($R^2$) of the pseudo-second-order kinetic model is 0.999, higher than that of the pseudo-first-order model. The high coefficient of determination suggests that the pseudo-second-order kinetic model better represents the adsorption kinetics and that As(V) and As(III) ions are adsorbed on the CS surface through chemical interaction (Bulut, Özacar, and Şengil, 2008). The calculated $q_e$ values also agreed well with the experimental data.
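For illustration, the following is a minimal sketch of the linearized pseudo-second-order fit described above, run on synthetic uptake data shaped like the reported ~30 µg/g plateau; the numbers are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical batch-kinetics data: contact time (min) and uptake q_t (ug/g).
t = np.array([10, 30, 60, 90, 120, 180, 240, 300], dtype=float)
q_t = np.array([14.0, 22.5, 26.0, 27.8, 28.7, 29.5, 29.8, 30.0])

# Pseudo-second-order linearization, Eq. [2]: t/q_t = 1/(k2*qe^2) + t/qe,
# so a straight-line fit of t/q_t against t gives slope = 1/qe and
# intercept = 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / q_t, 1)
q_e = 1.0 / slope
k2 = 1.0 / (intercept * q_e ** 2)

# Coefficient of determination of the linear fit.
pred = intercept + slope * t
ss_res = np.sum((t / q_t - pred) ** 2)
ss_tot = np.sum((t / q_t - np.mean(t / q_t)) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"qe = {q_e:.1f} ug/g, k2 = {k2:.2e} g/(ug*min), R^2 = {r2:.4f}")
```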
Adsorption equilibrium
The equilibrium adsorption of arsenic as a function of the arsenic concentration was investigated to determine the adsorption capacity of CS. The Langmuir, Freundlich, Dubinin-Radushkevich (D-R), and Temkin models were tested. The adsorption isotherms are shown in Figure 5.
Freundlich isotherm
The Freundlich isotherm is an empirical equation based on a heterogeneous surface and multilayered sorption. The higher the K f value, the higher the adsorbate affinity (Mohamad, Amir, and Sudabeh, 2013).
The linear form of the equation (Veli, and Akyüz, 2007) is

\[ \ln q_e = \ln K_f + \frac{1}{n}\ln C_e \tag{3} \]

where q e is the amount of adsorbate at equilibrium (µg/g), C e is the equilibrium concentration of adsorbate (µg/L), n is the adsorption intensity, and K f is the Freundlich constant. The linear plot of ln q e versus ln C e gives a slope of 1/n and an intercept of ln K f . In theory, values of n from 1 to 10 indicate that the adsorbate is adsorbed favourably on the adsorbent (Konicki et al., 2012). In this study, n = 1.73. The Freundlich adsorption isotherm model is shown in Figures 6 and 7. The coefficient of determination of this model is lower than those for the other three isotherms (Table I).
Langmuir isotherm
The Langmuir isotherm is the most widely used isotherm for the removal of contaminants from solution (Crini et al., 2009; Özacar and Şengil, 2005). It assumes that adsorption occurs in a single layer on the surface of a homogeneous adsorbent (Foo and Hameed, 2010). The linear form of the conventional Langmuir model (Kundu and Gupta, 2007) is

\[ \frac{1}{q_e} = \frac{1}{Q} + \frac{1}{bQ}\cdot\frac{1}{C_e} \tag{4} \]

where C e is the equilibrium concentration (µg/L), b is the adsorption constant (L/µg), q e is the amount of arsenic adsorbed at equilibrium (µg/g), and Q is the maximum adsorption capacity (µg/g).
The Langmuir constants and coefficients of determination (R 2 ) were obtained from the slope and intercept of the linear plot of 1/q e versus 1/C e (Figures 6 and 7, Table I). The Langmuir model gave the best fit to the experimental data, with high R 2 values.
The Langmuir isotherm can be further characterized by the separation factor R L (Kundu, and Gupta, 2007), which is defined by Equation [5]:

\[ R_L = \frac{1}{1 + bC_0} \tag{5} \]

where C 0 (µg/L) is the initial concentration and b (L/µg) is the Langmuir constant. R L values indicate unfavourable (R L > 1), linear (R L = 1), favourable (0 < R L < 1), or irreversible (R L = 0) adsorption (Tekin, 2006). The R L values for our experiments were found to be < 0.04, indicating that the adsorption by CS is favourable.
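As a quick consistency check, not shown in the text, the reported bound on R L implies a lower bound on the Langmuir constant; for example, at C 0 = 300 µg/L:

\[
R_L = \frac{1}{1 + bC_0} < 0.04 \;\Rightarrow\; 1 + bC_0 > 25 \;\Rightarrow\; b > \frac{24}{300\ \mu\mathrm{g/L}} = 0.08\ \mathrm{L/\mu g}.
\]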
The Langmuir isotherm fits the experimental data well, which indicates single-layer adsorption of arsenic on CS (Bulut, Özacar, and Şengil, 2008). The Langmuir isotherm model is widely used in explaining chemical adsorption (Dubinin, 1960). The isotherm study was conducted using different arsenic concentrations (50-300 µg/L) at 25°C, pH 9, a CS dosage of 0.5 g, and an adsorption time of 5 hours. Using the Langmuir isotherm, the maximum adsorption capacities of CS in arsenate and arsenite solution were found to be 109 and 84.8 µg/g respectively. These values can be compared with the maximum adsorption capacities of some iron-based adsorbent materials. Thirunavukkarasu, Viraraghavan, and Subramanian (2003) studied the adsorption of As(V) on iron oxide-coated sand and found an adsorption capacity of 45 µg/g. Guo, Stüben, and Berner (2007) studied As(V) removal using haematite; its adsorption capacity was 204 µg/g (As(V) from a 1000 µg/L solution at an adsorbent dosage of 10 g/L). An adsorption capacity of 80 µg/g has been reported for As(III) on Fe-hydrotalcite.
Dubinin-Radushkevich (D-R) isotherm
The Dubinin-Radushkevich (D-R) isotherm model is used to distinguish between physical and chemical adsorption of arsenic ions on CS (Dubinin, 1960). This model is written as (Kundu, and Gupta, 2007)

\[ q_e = Q_D \exp(-B_D \varepsilon^2) \tag{6} \]

and linearized as

\[ \ln q_e = \ln Q_D - B_D \varepsilon^2 \tag{7} \]

where Q D is the maximum capacity (mol/g), B D is the D-R constant (mol 2 /kJ 2 ), and ε is the Polanyi potential, ε = RT ln(1 + 1/C e ). Figures 6 and 7 show the plot of ln q e against ε 2 ; the coefficient of determination was R 2 = 0.9166. B D was found to be 0.34 mol 2 /J 2 , and Q D was 35.63 mol/g. The mean free energy of adsorption (E) was determined using the relationship E = 1/√(2B D ) (Kundu, and Gupta, 2007). The value of E indicates the type of adsorption: if E is less than 8 kJ/mol, adsorption is physical, and if it is 8-16 kJ/mol, adsorption is chemical (Islam, Mishra, and Patel, 2011). In this investigation, E was found to be 0.84 kJ/mol (Table I); thus, in this case, the adsorption is physical. Similar results have been reported for the adsorption of arsenic by iron-oxide-coated granular activated charcoal (Ananta, Saumen, and Vijay, 2015).
Temkin isotherm
The Temkin isotherm is based on adsorbate-adsorbent interactions. This model neglects behaviour at very low and very high concentrations, and assumes that the free energy of adsorption is a function of the surface coverage. The Temkin isotherm is given as (Mall et al., 2005)

\[ q_e = \frac{RT}{b}\ln(AC_e) \]

and its linear form is

\[ q_e = B\ln A + B\ln C_e, \qquad B = \frac{RT}{b} \tag{10} \]

where A is the equilibrium binding constant (L/µg), b is the Temkin constant (J/mol), R is the gas constant (8.314 J/mol K), and T is the absolute temperature (K). The adsorption data were analysed and the Temkin constants A and B and the coefficient of determination established; these are listed in Table I, and the theoretical plot of this isotherm is shown in Figures 6 and 7. According to the coefficients of determination (R 2 ), the order of suitability of the models was Langmuir, Freundlich, D-R, and Temkin (Figures 6, 7 and Table I). Thus, the Langmuir model is the most suitable for describing single-layer adsorption of arsenic on the CS surface. The adsorption capacities (Q) for As(V) and As(III) in the Langmuir model were 109.460 and 84.83 µg/g, respectively. As a result, the isotherm order is Langmuir > Temkin > Freundlich > D-R.
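The two best-known linearizations can be fitted and ranked by R 2 in a few lines; a minimal sketch with hypothetical equilibrium data (the paper's raw isotherm points are not tabulated):

```python
import numpy as np

# Hypothetical equilibrium data (C_e in µg/L, q_e in µg/g); illustrative only.
C_e = np.array([2.0, 8.0, 20.0, 45.0, 90.0, 150.0])
q_e = np.array([15.0, 35.0, 55.0, 75.0, 92.0, 102.0])

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Langmuir (linear): 1/q_e = 1/Q + (1/(b*Q)) * (1/C_e)
s, i = np.polyfit(1 / C_e, 1 / q_e, 1)
Q, b = 1 / i, i / s
r2_langmuir = r_squared(1 / q_e, s / C_e + i)

# Freundlich (linear): ln q_e = ln K_f + (1/n) * ln C_e
s_f, i_f = np.polyfit(np.log(C_e), np.log(q_e), 1)
K_f, n = np.exp(i_f), 1 / s_f
r2_freundlich = r_squared(np.log(q_e), s_f * np.log(C_e) + i_f)

print(f"Langmuir:   Q={Q:.1f} µg/g, b={b:.3f} L/µg, R2={r2_langmuir:.4f}")
print(f"Freundlich: K_f={K_f:.2f}, n={n:.2f}, R2={r2_freundlich:.4f}")
```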
Conclusion
In this study, the removal of As(V) and As(III) from aqueous solution by copper slag (CS) was demonstrated. The pH study showed that the amount of arsenic adsorbed by CS was independent of the initial pH value. Kinetic studies showed that the rate of adsorption accords with the pseudo-second-order kinetic model and that arsenic ions are adsorbed on the CS surface through chemical interaction. The adsorption data were analysed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R), and Temkin isotherm models, and were best fitted by the Langmuir isotherm model, which is widely used in explaining chemical adsorption. The maximum adsorption capacities of the sorbent in As(V) and As(III) solution were found to be 109.29 and 84.83 µg/g, respectively. The R L value (< 0.04) showed that CS is favourable for the removal of arsenic from aqueous solutions. Under optimal conditions (adsorbent dosage 10 g/L, solution pH 9, temperature 25°C, and arsenic concentration 300 µg/L), the adsorbent removed 98.76% of the arsenate and 85.43% of the arsenite. CS can be used as a practical, easy-to-use, and inexpensive adsorbent for arsenate and arsenite removal from water.
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Integrated Resource Management for Fog Networks
In this paper, we consider integrated resource management for fog networks, comprising intelligent energy perception, service level agreement (SLA) planning and replication-based hotspot offload (RHO). First, we propose an intelligent energy perception scheme which dynamically classifies the fog nodes into a hot set, a warm set or a cold set, based on their load conditions. The fog nodes in the hot set are responsible for the quality of service (QoS) guarantee, while the fog nodes in the cold set are maintained at a low-energy state to reduce energy consumption. The fog nodes in the warm set are used to balance the QoS guarantee against energy consumption. Second, we propose an SLA mapping scheme which effectively identifies SLA elements with the same semantics. Third, we propose a replication-based load-balancing scheme, namely RHO, which leverages the skewed access pattern caused by hotspot services. In addition, it greatly reduces communication overheads because load conditions are updated only when load variations exceed a specific threshold. Finally, we use computer simulations to compare the performance of the RHO with other schemes under a variety of load conditions. Overall, we propose a comprehensive and feasible solution for the integrated resource management of fog networks.
Introduction
In this paper, we propose integrated resource management for fog networks, including load balancing, energy perception and SLA planning. Load balancing is the most important issue in integrated resource management because it plays a key role in the overall performance of the fog network. The main functionality of load-balancing schemes is to deal with bursty and unbalanced load conditions in fog networks. Unfortunately, load-balancing schemes that achieve excellent fairness are generally too complicated to implement. The fog nodes that contain hotspot services carry relatively heavier loads. Many replication-based load-balancing schemes have been proposed, and they can be roughly divided into four categories. In the first category, all stored data are replicated to all fog nodes in a fog network regardless of their load conditions [1][2][3]. This strategy is easy to implement; however, it consumes considerable resources, including storage space and network bandwidth, whenever data replication happens. The second category stores only a single replica of each data item and consequently has the highest storage utilization. However, it works well only under very limited load conditions because there are no additional replicas. The third category keeps a pre-defined number of replicas for stored data, so the number of replicas cannot be adjusted to cope with load variations. Furthermore, it needs specific request-scheduling algorithms, such as round robin, to distribute the arriving requests to adequate fog nodes efficiently [4]. The fourth category dynamically constructs an adequate number of replicas according to load conditions [5][6][7][8]. In order to balance fairness, simplicity and resource efficiency, we propose a novel replication-based load-balancing scheme that works well under bursty and unbalanced load conditions. More specifically, our contributions are as follows. (1) We propose an intelligent energy perception scheme capable of efficient energy management. The fog nodes are dynamically classified into a hot set, warm set or cold set in terms of load conditions. The fog nodes in the hot set are responsible for the QoS guarantee and the fog nodes in the cold set are used to reduce energy consumption. Additionally, the fog nodes that belong to the warm set are used to absorb sudden load surges in time, while balancing energy consumption. (2) We propose an SLA mapping scheme for SLA planning, which systematically describes the relationship between resource metrics and SLA parameters. Moreover, this scheme can identify the SLA elements with the same semantics. (3) We propose a replication-based hotspot offload (RHO) scheme which achieves approximately perfect fairness under different load conditions. In addition, it is suitable for deployment in fog networks whose resources are limited.
The remainder of the paper is organized as follows: Section 2 presents the related work; Section 3 explains the details of the integrated resource management, comprising intelligent energy perception, SLA planning and replication-based hotspot offload; Section 4 compares the fairness of the RHO scheme with other load-balancing schemes under various load conditions; and Section 5 concludes the paper and outlines future work.
Related Work
In this section, we review previous studies associated with integrated resource management. First, we introduce load-balancing schemes. A block placement strategy that determines the best data nodes achieves real-time responses in the Hadoop environment [11]. This strategy requires preliminary replicas, which improves data synchronization; however, it has poor fairness. One study investigated the effects of job scheduling and data replication [12]: in order to efficiently manage the number of replicas, an auction protocol combined with an economic model was proposed, although this proposal was evaluated only under European data grid environments. Replication-based load-balancing schemes generally handle arriving requests in one of two ways, classified as centralized control and distributed control [13][14][15]. In centralized control, the fog controller must select the best fog nodes for processing the arriving requests. In addition, centralized control needs to collect real-time fog node information, which results in substantial additional communication overheads and larger latency. Moreover, all arriving requests are handled by the fog controller, which might become a performance bottleneck. In distributed control, arriving requests are mainly handled by the corresponding master fog nodes. Consequently, distributed control greatly reduces communication overheads and removes the performance bottleneck; we therefore adopt distributed control. Several load-balancing schemes require prior knowledge to deal with arriving requests efficiently [16]. Comparatively speaking, the other load-balancing schemes do not require any prior knowledge, but they rely on other specific information, for instance, the system status or the number of awaiting requests in each server. A solution that adopts a low-level load balancer works at the network level of the open systems interconnection model [17]. When a request arrives, it is forwarded to a back-end server, and the header of the request is modified according to a mapping table. However, this scheme does not achieve linear speedup in the number of servers.
A load-balancing algorithm was proposed for conditions where the back-end servers must be familiar with the front-end servers [18]. As a result, it cannot be applied to conditions where the end-users can directly access back-end servers. Similarly, a dynamic load-balancing scheme that utilizes dynamic DNS updates and a round-robin mechanism was proposed [19]. The scheme can add or remove a server from the DNS list according to load variations, and accordingly decreases response time because overloaded servers can be dynamically removed from the DNS list. A scheduling algorithm that depends on resource usage, such as CPU, memory and bandwidth, was also proposed. One scheme reduces scheduling overheads by creating several replicas of each job, so that requests can be effectively dispatched to adequate replicas [20]. When a replica reaches the head of the queue at one of the servers, the other servers that hold a replica of the same job remove it from their queues. Although this is useful in reducing queuing overheads, inter-server communication can still lead to performance degradation such as high propagation delay. MapReduce is a distributed programming model and is therefore well suited to the development of large-scale data-parallel applications [21]. A load-balancing algorithm that considers node performance has been applied to the problem of data assignment, so that after the map phase the execution time of the reduce tasks can be balanced [22].
In order to enhance system resource utilization and constrain the frequency of message exchange, a load-balancing strategy that incorporates an information exchange policy based on random walks and a decentralized design was proposed [23]. This strategy exchanges information using random packets so that each node keeps up-to-date knowledge of the other nodes. Furthermore, two message replication strategies have been used to improve the efficiency and scalability of unstructured P2P networks while maintaining query performance [24]. The distance-based message replication strategy replicates query messages among different topological regions, while the landmark-based strategy optimizes query processing by considering both the topology and the physical proximity of peers. A dynamic replica technique utilizes acceleration information to improve reliability and latency in cloud storage [25]. This technique can identify hotspot data in the next period based on acceleration information; accordingly, it selects the best node and creates a new replica there. One strategy uses session time to explore time-related replication that avoids bursty failures in P2P-based storage systems [26]; furthermore, it provides sufficient time to replace a lost replica using the primary replica. A dynamic replication strategy was proposed to reduce delay by taking access costs into account [27]. The strategy not only enhances load balancing but also improves data availability; however, it is too complicated to implement.
Several energy perception schemes enable servers to work at low energy states; with virtualization, the servers decrease the energy consumption of high-performance applications [28]. One system alleviates the impact of time-varying loads in fog networks because it can dynamically control the on/off states of the fog nodes [29]. The system modifies reconfiguration decisions by considering the overall impact of the load on the system, energy and performance, and several adjustment policies were proposed to modify current configurations. With various combinations of dynamic voltage scaling and node states, these policies decrease the aggregate energy consumption within a server cluster when the overall workload is low [30]. A runtime system utilizes energy management that supports system-wide, application-independent, dynamic voltage and frequency scaling in a generic energy-aware cluster [31]. The system achieves 20% energy saving on the NAS parallel benchmark; in addition, it can constrain the performance degradation of most applications under user-specified limits. A run-time scheduling algorithm was proposed to improve energy consumption in cluster systems [32]. An energy-aware algorithm achieves energy reduction with minimal impact on performance by adjusting voltage and frequency settings [33]. In the past, load balancing and energy perception were studied separately; the joint consideration of both in our design is therefore novel and useful.
Finally, we review the work related to SLA planning. Transfer functions have been used to represent the relationships between resource metrics and QoS attributes in web services [34]; the states of the QoS attributes are continuously monitored and evaluated so as to guarantee QoS requirements. On the other hand, this approach lacks monitoring of the resource metrics, which limits its feasibility. The LoM2HiS architecture manages the relationships between resource metrics and SLA parameters [35]. The architecture monitors the resource metrics and SLA parameters so that SLA violations can be detected, and a prevention policy is used to deal with SLA violations. However, data inconsistencies may occur because many replicas are needed; more importantly, a single point of failure is an unavoidable problem here. The web service level agreement (WSLA) is an XML-based agreement [36]. The WSLA can be extended to establish new metrics based on current metrics; in other words, it can be used to implement multiple SLA parameters. The WS-Agreement was developed for grid computing [37]. It is an XML-based agreement, generally used to represent the non-functional attributes of an SLA contract in web services. We found that the WSLA is relatively well suited to fog networks.
SLA management has also been proposed for multilayer software architectures, including the operation layer, application layer, middle layer and basic layer [38]. In addition, a QoS model associated with the different SLA layers was proposed. However, it lacks feasibility because it only demonstrates a multilayer abstract software architecture. The SLA guarantee and penalty are two common components in SLA planning: maximum and minimum thresholds should be set up in order to guarantee the SLA requirements, and once the SLA is violated, penalty policies should be applied to the service providers. In general, users would like to satisfy their service level objectives at minimum cost; conversely, service providers expect to obtain maximum revenue with minimum resources. An agent system reaches SLA negotiation based on an auction policy [39]. SLA negotiation consists of three phases: negotiation media, auction selection and SLA configuration. Once the SLA negotiation is completed, an SLA contract has been established. However, the auction procedures are too complicated to complete SLA negotiation rapidly, so additional time is necessary. SLA negotiation can use ontologies to realize automatic SLA matching and then select adequate service providers [40][41][42]. SLA mapping is a key component because it is mainly responsible for bridging the differences between SLAs [43,44]. In this study, we propose a novel SLA mapping mechanism which is the core of the SLA planning. Furthermore, the proposed mechanism is capable of rapidly identifying the SLA elements with the same semantics.
Integrated Resource Management
A system architecture consisting of M fog networks is depicted in Figure 1. Each fog network consists of a different number of fog nodes; for instance, fog network 1 consists of N fog nodes. Moreover, there is at least one fog controller in each fog network, and the number of fog controllers is proportional to the scale of the fog network. The fog controllers are responsible for managing the fog nodes in their corresponding fog networks; besides, they communicate with other fog controllers regarding resource sharing among different fog networks. There are two ways to deal with arriving requests: centralized control and distributed control. In centralized control, all arriving requests are completely handled by the fog controller, which means that the fog controller has to determine the best fog node to process each arriving request. In addition, the fog controller uses a scheduling algorithm whose functionality is to schedule the arriving requests to fog nodes with the required services according to specific performance metrics, such as load conditions, resource usage and latency. Undoubtedly, centralized control is easy to implement, but it imposes a heavy burden on fog controllers; in other words, fog controllers may become the performance bottleneck. Besides, it may lead to high communication overheads between the fog controllers and fog nodes. In distributed control, the fog controller looks up the master fog nodes with the required service and then directly forwards the arriving requests to the selected master fog node; after that, the fog controller does not need to handle any further processing. The master fog node maintains the original service and dispatches arriving requests to other fog nodes (slave fog nodes) holding a replica of the required service, based on load conditions. A replica is a duplicate of the same service. In other words, the master fog node takes over part of the fog controller's tasks and thus relieves the load burden of the fog controller. Moreover, a master fog node may also play the role of a slave fog node, depending on the originality of the services. The access patterns of hotspot services usually have short-term and unpredictable characteristics, so load imbalance frequently happens. With the replicas, the master fog node can effectively distribute arriving requests to other slave fog nodes, thus improving overall load balancing. Accordingly, we adopt distributed control in this study.
To achieve efficient resource management among fog networks, the fog controllers are responsible for configuring an overlay control network that enhances resource sharing. The consistency of the overlay control network depends on the physical proximity of the fog networks. In the overlay control network, the fog controllers dynamically update their system information to the others whenever the variations of the monitored resources exceed pre-defined thresholds. For instance, when the overall load of a specific fog network becomes high, the corresponding fog controller communicates with the other fog controllers via the overlay control network. The other fog controllers then look up the fog node with the lowest load and send the relevant information about the selected fog node back to the corresponding fog controller, which selects the best fog node according to the received information if new replicas are needed. Once the hotspot effect is eliminated, the replica can be deleted on demand. As a result, load variations can be distributed across different fog networks, which improves overall load balancing. In order to simplify the simulations and analysis, we consider one fog network in this study; however, the approach is applicable to multiple fog networks.
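A minimal sketch of this update-on-threshold behaviour of the overlay control network is given below. The class names, threshold value and report format are assumptions for illustration, not the authors' implementation.

```python
LOAD_VARIATION_THRESHOLD = 0.2  # assumed pre-defined variation threshold

class PeerController:
    """Minimal stand-in for a remote fog controller in the overlay."""
    def __init__(self):
        self.known_loads = {}

    def receive_load_report(self, network_id, load):
        self.known_loads[network_id] = load

class FogController:
    def __init__(self, network_id):
        self.network_id = network_id
        self.last_reported_load = 0.0

    def maybe_update_peers(self, current_load, peers):
        """Report to the overlay only when the monitored load variation
        exceeds the pre-defined threshold, saving control traffic."""
        if abs(current_load - self.last_reported_load) < LOAD_VARIATION_THRESHOLD:
            return False  # variation too small: stay silent
        for peer in peers:
            peer.receive_load_report(self.network_id, current_load)
        self.last_reported_load = current_load
        return True

peers = [PeerController(), PeerController()]
fc = FogController(network_id=1)
fc.maybe_update_peers(0.9, peers)   # large jump -> broadcast
fc.maybe_update_peers(0.95, peers)  # small drift -> silent
```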
The framework of the integrated resource management is described in Figure 2. It is composed of three components: replication-based hotspot offload (RHO), intelligent energy perception and SLA planning. The fog controllers host two components, intelligent energy perception and SLA planning, while the fog nodes host one component, namely RHO.

Figure 3 shows the architecture of intelligent energy perception, in which the fog controllers are responsible for power management and the fog nodes are responsible for physical power handling. In the overlay control network, all fog nodes in the different fog networks are dynamically classified into a hot set, warm set or cold set, based either on load conditions or on a pre-defined number of fog nodes in each set. Hotspot services generally maintain multiple replicas, which are distributed to the fog nodes belonging to the hot set, while seldom-accessed replicas are stored in the fog nodes belonging to the cold set. According to the 80/20 rule, hotspot services occupy a small proportion of the total services in the overlay control network; consequently, the number of fog nodes in the hot set is normally smaller than that in the cold set and warm set. In order to recover request processing rapidly, some fog nodes are selected to belong to the warm set. Fog nodes can be automatically added to or removed from the overlay control network. The fog nodes in the hot set support high performance at a high energy state; the fog nodes in the warm set are maintained at a medium energy state so that they can recover in time; and the fog nodes in the cold set are maintained at a low energy state so that energy consumption is reduced. Accordingly, intelligent energy perception is achieved. It is easy to extend the RHO to achieve intelligent energy perception while keeping excellent load balancing: when the RHO needs to create a replica of a required service, the new replica is allocated sequentially to one of the fog nodes in the hot set, warm set and cold set. In short, the RHO integrates simply with intelligent energy perception to achieve efficient energy control under various load conditions. Fog nodes have multiple energy states which consume different levels of energy: S0, S3 and S4 represent the active state, sleep state and hibernate state, respectively. The energy consumption of the sleep state is close to that of the hibernate state; however, it takes longer to switch a fog node from the hibernate state to the active state, so the energy efficiency of the sleep state is better than that of the hibernate state. The three energy states correspond to the hot set, warm set and cold set, respectively. First, the fog nodes in the hot set work in the active state with the goal of guaranteeing QoS requirements. Second, the fog nodes in the warm set work in the sleep state with the goal of balancing QoS requirements and energy consumption. Third, the fog nodes in the cold set work in the hibernate state with the goal of reducing energy consumption. By dynamically switching the fog nodes between the different sets, we can guarantee QoS requirements and achieve intelligent energy perception.
Apart from intelligent energy perception, the fog controllers are responsible for the SLA planning. WSLA is generally used to describe the relationships between the parties, the service definition and the obligations in the SLA. There are three main factors, namely metrics, SLA parameters and functions, because they are transparent to different service providers. The other factors can be set at default settings, such as the supporting party and the service level objectives. On this basis, we complete the design of an SLA template. There are no unified syntax definitions for the SLA template, so how to bridge SLA templates with different syntax but the same semantics becomes the main issue. In the past, many studies only considered the mapping between low-level metrics and high-level SLAs. We therefore propose a complete and practical SLA mapping mechanism, as depicted in Figure 4. To efficiently bridge the differences between service providers and clients, there are two SLA mapping rules and two conditions in the SLA mapping mechanism. The first rule is used to resolve the response time of the service level objectives. The second rule is more complicated because it has to establish formulas that map different units of metrics. After completing the SLA mapping rules, we further construct the required conditions related to the service level objectives and metrics, respectively, and establish criteria to judge whether the conditions of the SLA contract are satisfied. We utilize a character-based similarity method because SLA templates usually consist of short strings with little syntactic variation. However, the character-based similarity method cannot deal with strings that have large syntactic differences but represent the same semantics. Therefore, we extract the successful mapping information from the mapping repository and further combine it with a case-based reasoning method. As a result, we can efficiently identify SLA templates with the same semantics. As mentioned previously, we use intelligent energy perception to adjust the fog nodes into a hot set, warm set and cold set, which contributes to efficient energy saving. Finally, the RHO is in charge of dynamic replica management. Once an SLA contract is at risk of violation, for example regarding data availability or delay, more fog nodes are transferred into the hot set by intelligent energy perception. Next, we introduce the RHO scheme.
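The character-based similarity step, with the mapping repository consulted first as the case-based fallback, might look as follows. The element names, threshold and repository contents are illustrative assumptions; difflib's SequenceMatcher stands in for whichever string-similarity measure the authors used.

```python
from difflib import SequenceMatcher

# Previously successful mappings (the "mapping repository"); consulted
# first, as in case-based reasoning. Contents are hypothetical.
MAPPING_REPOSITORY = {"resp_time_ms": "ResponseTime", "avail_pct": "Availability"}

def match_sla_element(client_term, provider_terms, threshold=0.7):
    """Return the provider SLA element judged to have the same semantics."""
    # 1) Case-based reasoning: reuse a known successful mapping first.
    if client_term in MAPPING_REPOSITORY:
        return MAPPING_REPOSITORY[client_term]
    # 2) Character-based similarity for short, mildly varying strings.
    best, best_score = None, 0.0
    for term in provider_terms:
        score = SequenceMatcher(None, client_term.lower(), term.lower()).ratio()
        if score > best_score:
            best, best_score = term, score
    return best if best_score >= threshold else None

print(match_sla_element("ResponseTime(ms)", ["ResponseTime", "Throughput"]))
```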
Replication-Based Hotspot Offload
First, we use Equation (1) to estimate the average load of fog node i at the beginning of the kth time interval, denoted by load i,k . Similarly, load i,k−1 denotes the average load of fog node i at the beginning of the (k − 1)th time interval, and load current i,k−1 denotes the current load of fog node i during the (k − 1)th to kth time interval. Finally, k a is a parameter whose function is to prevent the estimate of load i,k from being affected by short-term or long-term load variations, and T d denotes the duration of each time interval.
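Equation (1) itself did not survive extraction. Given that k a is described as damping short- and long-term load variations, a plausible reconstruction is an exponentially weighted moving average over successive intervals of length T d; this is an assumption, not the paper's verbatim formula:

\[
load_{i,k} = k_a \cdot load_{i,k-1} + (1-k_a)\cdot load^{current}_{i,k-1} \tag{1}
\]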
The load current i,k−1 is the sum of hotspot_load current i,k−1 and nonhotspot_load current i,k−1 , which represent the current loads of the hotspot services and non-hotspot services in fog node i during the (k−1)th to kth time interval, respectively. We adopt the same formula to estimate hotspot_load i,k and nonhotspot_load i,k , which denote the average loads of the hotspot services and non-hotspot services in fog node i at the beginning of the kth time interval, respectively. Next, we define hotspot and non-hotspot services. A j i,k denotes the number of arriving requests for fog node i with service j during the (k−1)th to kth time interval. If A j i,k exceeds a hotspot threshold hotspot th , then service j is classified as a hotspot service; otherwise, service j is classified as a non-hotspot service. If hotspot_load i,k / (nonhotspot_load i,k + hotspot_load i,k ) ≥ hotspot th , then fog node i is identified as a hotspot node; otherwise, fog node i is identified as a non-hotspot node. To reduce frequent updates of load variations, we classify the loads into different levels using Equation (2): weight p denotes the weight of load level p, N is the number of levels, k b and k c determine the level structure, N must be a multiple of k c , and C denotes the maximum capacity of request processing.
Next, we use Equation (3) to transfer the loads into corresponding load levels.
To balance real-time updating against communication overheads, we propose a two-layer updating mechanism, expressed by Equation (4). update i,k indicates how fog node i updates its load information. level th denotes a threshold of load level and load th denotes a threshold of load variation. If level i,k − level i,k−1 ≥ level th , then fog node i broadcasts its load level, accompanied by the residual number of replicas, to the other nodes. Otherwise, we further check the fine-grained load variations: if load i,k − load i,k−1 ≥ load th , then the mechanism updates the current average load. If level i,k − level i,k−1 < level th and load i,k − load i,k−1 < load th , then no update is needed. Accordingly, the two-layer updating mechanism reduces communication overheads while keeping updates timely, because it dynamically updates either coarse load levels or fine average loads based on load conditions.
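Equation (4) survives only as prose, but the decision logic is fully specified; a direct sketch follows (absolute variations assumed, default thresholds taken from the simulation settings given later):

```python
def two_layer_update(level_k, level_k_prev, load_k, load_k_prev,
                     level_th=1, load_th=250):
    """Decide what fog node i reports at interval k (Equation (4) in prose).
    Default thresholds are the values used in the simulations."""
    if abs(level_k - level_k_prev) >= level_th:
        return "broadcast_level"      # coarse change: broadcast load level
    if abs(load_k - load_k_prev) >= load_th:
        return "update_average_load"  # fine change: update average load only
    return "no_update"                # variations too small to report
```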
Next, the fog nodes holding the fewest replicas of hotspot services in total are sequentially selected to host new replicas, based on Equation (5). r default denotes the number of replicas of a hotspot service; to simplify the replica estimation, we adopt a static number of replicas. node set i denotes the set of non-hotspot fog nodes that already have a replica of the same hotspot service as fog node i, and node add i denotes the set of new fog nodes added to node set i . min(x) is the smallest element in x. In brief, the RHO distributes replicas based on the number of replicas of hotspot services residing in each fog node. Each new replica allows a share of the requests to be dispatched to other fog nodes; thus, the RHO greatly reduces load imbalance.
In the RHO, the master fog nodes are responsible for scheduling the arriving requests to the related slave fog nodes with the required services. The master fog node selects the fog node with the minimum load level, based on Equation (6).
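Equations (5) and (6) likewise survive only as prose. The following sketch implements the two rules as described, with an assumed data model (dictionaries mapping nodes to their replica sets and load levels):

```python
def place_replicas(nodes, hotspot_replicas, service, r_default=10):
    """Allocate replicas of a hotspot service to the fog nodes currently
    holding the fewest hotspot replicas in total (Equation (5) in prose).
    `hotspot_replicas` maps node -> set of hotspot services it replicates."""
    candidates = [n for n in nodes if service not in hotspot_replicas[n]]
    # Sequentially pick the nodes with the minimum total replica count.
    candidates.sort(key=lambda n: len(hotspot_replicas[n]))
    chosen = candidates[:r_default]
    for n in chosen:
        hotspot_replicas[n].add(service)
    return chosen

def dispatch(replica_nodes, load_level):
    """Master fog node forwards a request to the replica-holding node
    with the minimum load level (Equation (6) in prose)."""
    return min(replica_nodes, key=lambda n: load_level[n])
```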
Simulation Results
We developed a software simulator to perform all the simulations; it was designed to generate and process requests. We considered application scenarios of an augmented reality (AR) service deployed in the fog nodes using container technology. Unlike a virtual machine, a container is an efficient form of virtualization because it uses the host operating system, shared libraries and related resources; in other words, a container only requires application-specific binaries and container libraries, which considerably speeds up service deployment. As a result, launching a container takes a fraction of a second, so our study is feasible for such application scenarios. We used computer simulations to compare the fairness of the RHO scheme with the ripple load-balancing (RLB) and optimal load-balancing (OLB) schemes. In the simulations, we assumed that the system architecture consisted of one fog network containing 64 fog nodes and one fog controller. The users accessed the services deployed in the fog nodes, thereby generating various requests. We used ON-OFF models to simulate the request behaviour of the hotspot service, and likewise to simulate the load variations of the non-hotspot services. Both the RHO and RLB schemes work with distributed control, whereas the OLB scheme works with centralized control. In the OLB, we assumed that the fog controller always obtains the real-time load conditions of the fog nodes, and that each fog node maintains a replica of each resident service; as a result, the fog controller can forward requests to the fog node that offers the required service while having the lowest load. The OLB is the least practical algorithm. First, it has the highest communication overheads because it requires real-time fog node information, without which its fairness performance degrades greatly. Second, the fog controller might become the performance bottleneck because it has to handle all arriving requests. Third, providing real-time load conditions of fog nodes is too difficult to implement. Fourth, it needs to duplicate the same service to all fog nodes, which consumes a lot of resources such as storage, computing and bandwidth. In all experiments, the OLB served as a benchmark for fairness performance. The RLB maintained a fixed number of replicas for resident services; in other words, the RLB is a static strategy.
Unless otherwise specified, we used the following parameter settings in all experiments. The RHO's parameters were set as follows: k a = 0.3, k b = 2, k c = 2, N = 16, level th = 1, load th = 250 and T d = 15 min. All fog nodes possessed the same request-processing capacity, that is, C = 25,000, and there were 64 fog nodes in the simulations. Finally, there were 10 replicas for a hotspot service. The RLB's parameters were set as follows: k a = 0.8, k b = 0.01. The default number of requests residing in each fog node was between 7500 and 12,500, randomly assigned in each experiment. The total duration of each experiment was 20 h. The parameter definitions of the request-generating models are depicted in Figure 5. The OLB adopted the same scenarios as the RLB and RHO; in addition, the OLB had to deploy the same services to all fog nodes.
In the first experiment, we considered 16 fog nodes having high loads of hotspot services. The experimental results are illustrated in Figure 6. The parameter settings were as follows: on_off_pb = 160, off_on_pb = 40, on_off_factor = 2, incr_load_variation = 0.1, decr_load_variation = 0.58 and burstiness_factor = 200. Fog node 1 had the largest number of arriving requests, while fog node 16 had the lowest; in other words, fog nodes 1 to 16 were heavily congested. The OLB achieved the best fairness because all fog nodes had almost the same, stable number of average requests. The OLB was capable of redirecting requests in time to whichever fog node had the lowest number of residing requests, because each fog node keeps replicas of the hotspot services. For the OLB to work well, each fog node has to maintain a replica of all resident services, so real-time duplication is needed; therefore, the OLB had the lowest storage utilization. In addition, each fog node had to update its load information to the fog controller promptly and frequently, so the OLB had extremely high communication overheads. RLB-6 denotes that the RLB maintains six replicas for resident services, RLB-8 denotes eight replicas, and so on. When the number of replicas increased from 6 to 10, the RLB achieved better fairness because fog nodes with hotspot services were able to redirect more requests to fog nodes that only had non-hotspot services. The RHO's fairness approached that of the OLB and exceeded that of RLB-10, mainly because the RHO dynamically dispatched arriving requests to the fog nodes with lower loads. Despite requiring fewer replicas, the RHO tended to approach the best-case fairness of the OLB and clearly outperformed the RLB scheme.
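The paper does not name the fairness measure plotted in Figures 6-8; Jain's index is the conventional metric for this kind of per-node load comparison and is shown here purely as an illustration:

```python
def jain_fairness(xs):
    """Jain's fairness index: 1/n <= J <= 1, with J = 1 for a perfectly
    even allocation. `xs` are per-node loads (e.g., residing requests)."""
    n = len(xs)
    total = sum(xs)
    return total * total / (n * sum(x * x for x in xs))

print(jain_fairness([10000] * 64))                # 1.0 -- perfectly balanced
print(jain_fairness([25000] * 16 + [1000] * 48))  # ~0.31 -- skewed hotspot load
```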
In the second experiment, we considered the effect of increasing the number of fog nodes with hotspot services from 16 to 32, so the overall load variations increased compared with the first experiment. The simulation results are illustrated in Figure 7; the other parameter settings took the same values as in the first experiment. Again, the OLB demonstrated the best fairness. In evaluating the fairness of RLB-6, RLB-8 and RLB-10, we found that increasing the number of replicas yielded less improvement in fairness. The overall throughput of RLB-10 decreased appreciably because the RLB was incapable of allocating replicas effectively; consequently, many arriving requests were discarded due to the limited request-processing capacity. The fairness of the RHO was close to that of the OLB because more fog nodes with non-hotspot services held replicas of services from the fog nodes with hotspot services, which greatly enhanced request dispatching. As a result, the performance of the RHO still approximated that of the OLB and remained much better than that of the RLB.
In the third experiment, we studied the effect on fairness when 16 fog nodes had different load conditions. The parameter settings included a default number of requests residing in each node of between 5000 and 20,000, with incr_load_variation set at 0.2. The simulation results are illustrated in Figure 8. Due to the increased load variations, the RLB demonstrated relatively poor fairness compared with the first experiment, whereas the RHO still demonstrated excellent fairness and approached the performance of the OLB. As Figures 6-8 show, the RHO clearly outperformed the RLB and came close to the OLB under various scenarios, such as different numbers of fog nodes with hotspot services and different load variations. In short, the simplicity of the RHO was similar to that of the RLB, but the RHO achieved excellent fairness close to that of the OLB.
Conclusions
In this paper, we proposed integrated resource management for fog networks, comprising intelligent energy perception, service level agreement planning and replication-based hotspot offload. First, fog nodes are dynamically classified into a hot set, warm set or cold set in terms of load conditions or a pre-defined number of fog nodes in each set. The fog nodes in the hot set are responsible for guaranteeing QoS, the fog nodes in the cold set are maintained at a low-energy state to save energy, and the fog nodes in the warm set are used to balance QoS requirements against energy consumption. Second, we described the relationships between resource metrics and SLA parameters and proposed an SLA mapping mechanism which efficiently identifies SLA elements with the same semantics. Finally, we proposed the replication-based hotspot offload scheme, which can be easily integrated with intelligent energy perception and service level agreement planning. The RHO scheme is easy to implement and achieves excellent fairness; in addition, it has limited communication overheads and requires a limited number of replicas. The simulation results demonstrated that the fairness of the RHO outperformed that of the RLB and approached that of the OLB. In the future, we will further study ways to improve the practicality and efficiency of integrated resource management. First of all, we will design a transmission control protocol (TCP) latency model so that the study approaches practical situations. Furthermore, we will design an energy model for an in-depth evaluation of the energy consumption of the intelligent energy perception. Finally, we will consider mobile environments so that the study can be applied to wider and more complicated fog networks.
"Computer Science",
"Engineering",
"Environmental Science"
] |
The Influence of Different Sera on the Anti-Infective Properties of Silver Nitrate in Biopolymer Coatings
The widespread prevalence of periprosthetic joint infections (PJIs) poses significant challenges in orthopedic surgeries, with pathogens such as Staphylococcus epidermidis being particularly problematic due to their capability to form biofilms on implants. This study investigates the efficacy of an innovative silver nitrate-embedded poly-L-lactide biopolymer coating designed to prevent such infections. The methods involved applying varying concentrations of silver nitrate to in vitro setups and recording the resultant bacterial growth inhibition across different serum environments, including human serum and various animal sera. Results highlighted a consistent and significant inhibition of S. epidermidis growth at all tested concentrations in each type of serum without adverse interactions with serum proteins, which commonly compromise antimicrobial efficacy. This study concludes that the silver nitrate-embedded biopolymer coating exhibits potent antibacterial properties and has potential for use in clinical settings to reduce the incidence of PJIs. Furthermore, the findings underscore the importance of considering serum interactions in the design and testing of antimicrobial implants to ensure their effectiveness in actual use scenarios. These promising results pave the way for further research to validate and refine this technology for clinical application, focusing on optimizing silver ion release and assessing biocompatibility in vivo.
Introduction
The field of biopolymer implant coatings has emerged as a promising solution to combat implant-related infections, which are particularly prevalent in orthopedic surgery [1]. These infections pose a significant challenge, with mega-endoprostheses carrying a substantial risk of periprosthetic joint infection (PJI), reaching approximately 20% and escalating to over 50% after multiple revisions [2][3][4][5]. The primary pathogens responsible for PJIs are Staphylococcus aureus and coagulase-negative staphylococci, notably Staphylococcus epidermidis [6].
To combat this, an implant coating comprising silver nitrate embedded within the biopolymer poly-L-lactide has been developed. The silver nitrate is contained within a reservoir in the polymer, facilitating the controlled release of silver ions exclusively at the material surface. This reservoir serves a dual purpose: safeguarding surrounding cells from cytotoxicity and enabling selective activation through non-invasive shock waves upon the occurrence of bacterial infection. A comprehensive assessment of the coating's efficacy has been conducted through a mechanical and microbiological testing concept [7]. The results demonstrate a significant inhibition of biofilm formation and the antimicrobial properties of the shock wave-induced silver release mechanism [8,9]. Focused high-energy shock wave therapy is a non-invasive clinical therapy that is employed in orthopedics for the treatment of conditions such as pseudoarthrosis, tendonitis, enthesopathies, and slow fracture healing [10,11]. The shock waves are acoustic waves that traverse the soft tissue and release their energy due to an impedance change at hard surfaces, such as bone or titanium implants. With regard to the coating, the shock waves locally detach the biopolymer in small areas, thereby facilitating a burst release of silver from these areas.
In the past, a discrepancy between in vitro and in vivo studies has been observed in numerous investigations utilizing silver as an antibacterial agent [12]. This discrepancy is also evident within different in vitro assays due to the use of varying culture media for bacterial growth and their interactions with silver ions [12,13]. Consequently, the conventional methodology of the zone of inhibition test for silver is rendered inapplicable due to these interactions, necessitating the adoption of alternative assays to measure antibacterial efficacy [13]. Hidalgo et al. (1998) observed diminished efficacy of silver nitrate in the presence of fetal bovine serum (FBS), and moreover, bovine serum albumin was found to attenuate the impact of silver nanoparticles on various bacterial strains, including S. aureus, Streptococcus salivarius, Escherichia coli, and Pseudomonas aeruginosa, within an agar matrix [14,15]. Similarly, Liau et al. (1997) noted the neutralization of silver nitrate by glutathione (GSH) due to its cysteine component [16]. Cysteines are characterized by thiol functional groups, which serve as sites for the binding of silver ions. They are also present in albumin, the predominant protein component in blood serum. The interaction of GSH and albumin with silver ions leads to an elevation in the minimum inhibitory concentration (MIC) of silver nitrate when these proteins are present [17]. The binding of silver ions to thiol groups additionally represents a mechanism contributing to the antibacterial properties of these ions. The ions inactivate coenzyme A, a pivotal coenzyme involved in the tricarboxylic acid (TCA) cycle, which possesses thiol groups. Through the inactivation of this enzyme, the normal cellular respiration of bacteria is disrupted, ultimately resulting in bactericidal effects [18].
The differences between the physiological fluids of various species have been documented for several decades. In 1945, Moore published a study on the differing electrophoretic patterns observed in sera from different species and their associated proteins [19]. Warren et al. (2010) investigated macrophage stimulation in the blood of different species and found that, in particular, mice and humans exhibited a considerable difference in the induction of cytokines by serum proteins [20]. The compounds formed by nanoparticles and serum proteins also vary greatly depending on the species, as evidenced by a comparison between human serum, FBS, and mouse serum with gold and silica nanoparticles [21]. The differences between the serum albumins of different species are particularly well known. There are differences in enzymatic, transport, redox, and binding activity, as well as structure, that influence the behavior of the albumin used in diagnostics and other applications [22][23][24]. For example, bovine serum albumin and human serum albumin share only 76% sequence identity [23,25].
The evaluation of a medical device and/or drug comprises several phases, including in vitro testing, preclinical in vivo studies, and clinical studies. In the in vitro tests, FBS is utilized as a standard for cell culture. For the preclinical studies, an in vivo model must be selected. Small animals, most commonly mice, are often employed for this purpose. The choice of the animal model is often based on the cost and size of the animal. However, if there are potential limitations, such as differences in serum composition and thus differences in efficacy, these should be considered when selecting the model. In the case of clinical studies and, of course, later clinical application, the influence of the human body and thus of the human serum must also be contemplated.
The primary aim of this investigation is to evaluate the degree to which the antibacterial effectiveness of released silver diminishes when silver ions associate with thiol groups, especially within GSH and albumin. The findings from this study provide an evaluation of the antibacterial performance of silver released from a biopolymer coating in different evaluation phases and across multiple biological systems.
Materials and Methods
This study was designed to investigate the impact of sera from four distinct sources on silver nitrate. To achieve this, a series of growth curves were recorded, focusing on the potential variations in protein compositions and their consequent impacts on ion binding.
Four different sera were intended to map the complete evaluative trajectory of the coating across several experimental contexts, encompassing in situ clinical scenarios, in vivo, and in vitro, and to reveal any possible influences during every step of the validation process of the coating. For the simulation of clinical applications, human serum was utilized; FBS was employed for the in vitro experiments; and both mouse and rabbit sera were used for the in vivo assessments. Leukocyte-depleted frozen fresh plasma was anonymized and distributed by the blood bank of the University Hospital Muenster for research purposes. The plasma was thawed and left to coagulate before being centrifuged at 3000 rpm for 10 min to retrieve the serum. FBS, rabbit serum, and mouse serum were purchased from PAN-Biotech GmbH (Aidenbach, Germany). Test solutions were created using a silver nitrate (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) in aqua dest. stock solution and either serum or tryptic soy broth (TSB; Becton Dickinson GmbH, Heidelberg, Germany) samples. This step was performed first to maximize the binding effect of the silver ions to any components of the serum and to ensure that bacteria could only be inhibited subsequently. In a previous study, it was determined that the average shock wave-induced release of silver from the 6% silver coating was 57.8 mg/L [8]. From this study, it is known that this concentration effectively inhibits bacterial growth (Figure 1) [8]. Using this value, the concentrations of 50, 100, 150, and 200 mg/L of silver nitrate were selected for these solutions. Only the two lower concentrations were used for the TSB controls.
S. epidermidis RP62A (ATCC-35984; American Type Culture Collection, Manassas, VA, USA) was cultured in TSB overnight at 37 °C with orbital shaking. This strain was chosen for its capacity to form high-quality biofilms and has been utilized in the majority of prior assessments of the coating's efficacy. The overnight culture was then adjusted to an optical density of 0.010 at 578 nm and then diluted 1:10 with TSB, resulting in a bacterial count of approximately 5 × 10⁵ colony-forming units/mL (CFU/mL).
The test solutions were added to the wells of a 96-well plate as technical duplicates and biological triplicates, each consisting of 50 µL. Additionally, 50 µL of the adjusted inoculum was added to each well, and the same amount was pipetted as a growth control in technical duplicates and biological triplicates. Each test solution was accompanied by a separate blank control consisting of 50 µL of the test solution and 50 µL of TSB in technical duplicates to allow for later blank correction after the measurement.
The bacterial growth in the solutions was monitored using optical density. The measurement was performed with the Synergy HTX Multi-Mode Reader (BioTek Instruments GmbH, Bad Friedrichshall, Germany) at 578 nm and 37 °C, with orbital shaking at 282 cpm, every 30 min for 16 h. After the measurement, the data were blank-corrected in Microsoft Excel (Microsoft Corporation, Redmond, WA, USA) and graphically analyzed in GraphPad Prism 5 (GraphPad Software Inc., Boston, MA, USA).
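The blank correction step can be sketched as follows: at each time point, the mean OD of the cell-free blank wells is subtracted from each test well. This is a minimal Python illustration of the operation described above (performed in Excel by the authors); the OD values and data layout are made up.

# Blank correction: per time point, subtract the mean blank OD from each well.
import statistics

def blank_correct(test_wells, blank_wells):
    # Each well is a list of OD readings, one per time point.
    blank_mean = [statistics.mean(t) for t in zip(*blank_wells)]
    return [[od - b for od, b in zip(series, blank_mean)] for series in test_wells]

blanks = [[0.040, 0.041, 0.042], [0.039, 0.040, 0.041]]  # technical duplicates
tests = [[0.050, 0.060, 0.080], [0.052, 0.061, 0.079]]
print(blank_correct(tests, blanks))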
Results
To assess the effect of silver released from a polymer coating in situ, bacterial growth curves were utilized. A minimum concentration of 57.8 mg/L was assumed based on a previous investigation of the active shock wave release of silver from the coating [8]. This concentration could potentially be increased by adjusting the handling and/or increasing the number of areas being activated. Four concentrations of silver nitrate were selected for testing to reflect this. Thus, the two lower concentrations were also utilized in TSB as a control. The optical density measurement, after being corrected for the blank, indicates the concentration of bacteria in the solution.
The silver samples demonstrated no growth in either human serum or TSB compared to the control curve with S. epidermidis RP62A (Figure 2). The control grew to a mean optical density of 0.459. In contrast, the curves from the TSB samples remained largely unchanged, with a maximum at 0.016. This indicates that silver nitrate at concentrations of 50 and 100 mg/L in TSB inhibits the growth of the bacteria. At 16 h, the maximum optical density of all the silver nitrate in human serum samples was 0.034, which demonstrated growth inhibition. However, there were fluctuations in the optical density in the first eight to nine hours. During this time, the variance between samples (i.e., the different wells) of a single test solution was considerable. To examine whether there are any differences between sera of different origins, this methodology was also tested on three distinct animal sera. These sera were selected based on their involvement in different stages of the typical validation process of a medical device or pharmaceutical: in vitro and in vivo tests. FBS was selected because of its typical use in cell culture. To reflect diverse small animal models and their potential variations, both mouse and rabbit sera were chosen.
In general, no observable growth was identified in any of the animal sera (Figure 3). The maximum optical densities at 16 h observed in FBS, mouse serum, and rabbit serum were 0.025, 0.104, and 0.019, respectively.
Discussion
This study has demonstrated that silver nitrate effectively inhibits the growth of S. epidermidis across various serum environments, underscoring its potential utility in preventing implant-related infections in orthopedic surgeries. The consistent inhibition of bacterial growth in human serum, FBS, mouse serum, and rabbit serum, as demonstrated through controlled optical density measurements, points to the robust antibacterial properties of silver nitrate, reinforcing its value in clinical applications.
The results indicate that silver is not inhibited by the proteins in any of the sera mentioned at a concentration of 50 mg/L silver nitrate or higher. However, it is still possible that at a lower silver nitrate concentration, the binding of silver to thiol groups in 50% serum may lead to an increase in the minimum inhibitory concentration (MIC) in this medium. With 50 mg/L silver nitrate, the free binding sites are all occupied, resulting in saturation. It may be assumed that a significant proportion of the silver nitrate remains in its free form at this particular concentration, thus allowing the antimicrobial ions to exert their effect. It is assumed that a minimum release of approximately 50 mg/L of silver ions is present as a result of the shock waves produced by the biopolymer coating.
Silver ions exhibit dual antimicrobial mechanisms against bacteria, which can be categorized into bacteriostatic and bactericidal effects. Initially, silver ions target the murein wall of bacterial cells, binding to it and altering its permeability. As a bacteriostatic strategy, this initial interaction prevents the proliferation of bacteria by restricting the passage of substances essential for their growth and survival. Subsequently, the bactericidal action of silver ions manifests as they bind to thiol groups present in bacterial enzymes, leading to enzyme inactivation. This binding critically impairs metabolic processes, including the TCA cycle and the respiratory chain. The disruption results in the accumulation of hydroxyl radicals, which are detrimental to bacterial DNA and further contribute to the bactericidal outcome (Figure 4) [18,26].
In a similar manner, the silver ions can bind to the thiol groups of the cysteine component of GSH and albumin. GSH and albumin act as antioxidants, protecting against free radicals and reactive oxygen species (ROS) [17]. Mulley et al. (2014) tested silver nitrate concentrations in human serum, human serum albumin, and GSH. The MIC of silver nitrate for S. aureus increased from 33 µmol/dm³ to 1121 µmol/dm³ when 1 mmol/dm³ GSH was present. The MIC was found to increase to a lesser extent in human serum and human serum albumin. The MIC in 50% human serum was determined to be 174 µmol/dm³, which corresponds to 29.56 mg/L silver nitrate. It should be noted that this MIC is not directly comparable to the results presented here, as different media and a different bacterial strain were used. However, the results of this study do not contradict those of Mulley et al. (2014), as the present study began at a significantly higher concentration of silver nitrate [17].
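The correspondence between the molar and mass concentrations quoted above can be checked directly, using the molar mass of silver nitrate (approximately 169.87 g/mol):

$$174\ \mu\mathrm{mol/dm^3} \times 169.87\ \mathrm{g/mol} \approx 0.0296\ \mathrm{g/L} \approx 29.56\ \mathrm{mg/L}.$$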
The findings of this study, while promising, also highlight several challenges and opportunities for advancement, particularly regarding reproducibility. Fluctuations in the first eight to nine hours of the human serum incubation period suggest the hypothesis that there are initial stages of growth observed in select wells. However, it is more probable that these fluctuations are a consequence of alterations in serum coloration resulting from incubation at body temperature or the presence of residual coagulation factors. The protein compositions of the various wells may exhibit slight variations, which could potentially influence the interactions between the proteins or their folding dynamics due to the change in temperature. This could result in subtle differences in coloration. This conclusion is based on the observed differences between the various wells and the negative values observed after blank correction. Given that the serum was produced manually from fresh plasma of uncontrolled origin, this explanation seems like a plausible hypothesis.
The clinical translation of these results could significantly impact the management of PJI, potentially lowering infection rates associated with orthopedic implants. The translation from the laboratory bench to the bedside involves not only confirming these findings in clinically mimetic models but also considering physiological factors, such as the investigated interactions between serum proteins and the active agent, that might influence efficacy in human patients.
Overall, while these in vitro assessments provide valuable insights into the interactions between silver ions and different sera, they also prompt a reevaluation of the typical experimental designs and models used in preclinical testing. Enhancing the predictiveness of these models could accelerate the development of antimicrobial coatings that are both effective and clinically viable, ultimately reducing PJI rates and improving patient outcomes. Possible alterations could be to use physiological fluids as well as a co-cultivation of cell lines and bacteria, mimicking the race for the surface of the implant [27].
The promising results from this study pave the way for several key future research directions that are essential for advancing the clinical application of silver-based antimicrobial coatings. While determining the MIC in sera could be informative, it may not be directly relevant to the current focus, which instead lies in optimizing the therapeutic window of silver release. Enhancing this aspect might involve testing different intensities or frequencies of the shock wave therapy used to trigger silver ion release, thereby ensuring consistent antimicrobial activity while avoiding adverse effects on surrounding tissues. Furthermore, in vivo studies are needed to monitor the overall safety of the coatings, followed by clinical trials to test their practical viability. The insights gained from such investigations will not only validate the effectiveness of silver-based coatings in actual medical settings but will also refine their application protocols to maximize patient benefits and minimize risks.
Conclusions
In summary, the research presented in this paper demonstrates the efficacy of a biopolymer implant coating embedded with silver nitrate in inhibiting bacterial growth, particularly that of S. epidermidis, across a variety of serum environments. This study effectively highlights the potential of this approach for mitigating implant-related infections in orthopedic settings. The results also demonstrate that the effect of silver nitrate in the new polylactide coating remains consistent across different environments, as the anti-infective efficacy does not significantly diminish in various sera. This suggests that the observed in vitro effect should not be attenuated by these factors in an in vivo situation. Importantly, the findings also underscore the complex interactions between silver ions and serum proteins, which could influence the clinical translation of this technology, thereby guiding future research towards optimizing the conditions for clinical application to improve patient outcomes in the management of PJIs.
Patents
A patent application has been filed for the coating (international publication number: WO2023025944).
Funding: This research was funded by the Else-Kröner-Fresenius Stiftung, grant number 2021_EKEA.129. We acknowledge support from the Open Access Publishing Fund of the University of Muenster.
Institutional Review Board Statement: Ethical review and approval were waived for this study due to the use of purchased animal material and the use of anonymous human residual material. No live animals were used in this study, which is therefore exempt from ethical approval under the German Animal Protection Act. The use of anonymized residual material is exempt from ethical approval according to §24 of the German Medical Devices Act.
Figure 1. (Left) Silver release by shock waves on poly-L-lactic acid (PLLA) and 6% silver coating compared to uncoated samples. The red line indicates the minimal inhibitory concentration of Staphylococcus epidermidis RP62A. (Right) Inhibition of S. epidermidis growth through eluate from shock wave on poly-L-lactic acid and 6% silver coating. All data are from Puetzler et al. (2023) [8].
Figure 2. Optical density (OD) at 578 nm of solutions containing different concentrations (50, 100, 150, and 200 mg/L) of silver nitrate in human serum and tryptic soy broth over a 16 h period. The optical density corresponds to the growth of Staphylococcus epidermidis RP62A.
Figure 3. Optical density (OD) at 578 nm of solutions containing different concentrations (50, 100, 150, and 200 mg/L) of silver nitrate in various animal sera (fetal bovine, mouse, and rabbit) and tryptic soy broth over a 16 h period. The optical density corresponds to the growth of Staphylococcus epidermidis RP62A.
Figure 4. Schematic illustration of the antibacterial effect of silver ions.
"Materials Science",
"Medicine"
] |
ARMOR: An Automated Reproducible MOdular Workflow for Preprocessing and Differential Analysis of RNA-seq Data
The extensive generation of RNA sequencing (RNA-seq) data in the last decade has resulted in a myriad of specialized software for its analysis. Each software module typically targets a specific step within the analysis pipeline, making it necessary to join several of them to get a single cohesive workflow. Multiple software programs automating this procedure have been proposed, but often lack modularity, transparency or flexibility. We present ARMOR, which performs an end-to-end RNA-seq data analysis, from raw read files, via quality checks, alignment and quantification, to differential expression testing, geneset analysis and browser-based exploration of the data. ARMOR is implemented using the Snakemake workflow management system and leverages conda environments; Bioconductor objects are generated to facilitate downstream analysis, ensuring seamless integration with many R packages. The workflow is easily implemented by cloning the GitHub repository, replacing the supplied input and reference files and editing a configuration file. Although we have selected the tools currently included in ARMOR, the setup is modular and alternative tools can be easily integrated.
genes (Marini 2018; Monier et al. 2019; Powell 2018), or do not provide a single framework for the preprocessing and downstream analysis (Steinbaugh et al. 2018). Some workflows are based on predefined reference files and can only quantify abundances for human or mouse (Torre et al. 2018; Cornwell et al. 2018; Wang 2018). Additionally, workflows that conduct differential gene expression analysis often do not allow comparisons between more than two groups, or more complex experimental designs (Girke 2018; Cornwell et al. 2018). Some existing pipelines only provide a graphical user interface to design and execute fully automated analyses (Hung et al. 2018; Afgan et al. 2018). In addition to reference-based tools, there are also pipelines that perform de novo transcriptome assembly before downstream analysis (e.g., https://github.com/dib-lab/elvers). ARMOR performs both preprocessing and downstream statistical analysis of the RNA-seq data, building on standard statistical analysis methods and commonly used data containers. It distinguishes itself from existing workflows in several ways: (i) Its modularity, reflected in its fully and easily customizable framework. (ii) The transparency of the output and analysis, meaning that all code is accessible and can be modified by the user. (iii) The seamless integration with downstream analysis and visualization packages, especially those within Bioconductor (Huber et al. 2015; Amezquita et al. 2019). (iv) The ability to specify any fixed-effect experimental design and any number of contrasts, in a standardized format. (v) The inclusion of a test for differential transcript usage in addition to differential gene expression analysis. While high-performance computing environments and cloud computing are not specifically targeted, Snakemake enables the usage of a cluster without the need to modify the workflow itself.
In general, we do not advocate fully automated analysis. All rigorous data analyses need exploratory steps and spot checks at various steps throughout the process, to ensure that data are of sufficient quality and to spot potential errors (e.g., sample mislabelings). ARMOR handles the automation of "bookkeeping" tasks, such as running the correct sequence of software for all samples, and compiling the data and reports in standardized formats. If errors are identified, the workflow can re-run only the parts that need to be updated.
Overview
The ARMOR workflow is designed to perform an end-to-end analysis of bulk RNA-seq data, starting from FASTQ files with raw sequencing reads (Figure 1). Reads first undergo quality control with FastQC (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) and (optionally) adapter trimming using TrimGalore! (https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/), before being mapped to a transcriptome index using Salmon (Patro et al. 2017) and (optionally) aligned to the genome using STAR (Dobin et al. 2013). Estimated transcript abundances from Salmon are imported into R using the tximeta package (Soneson et al. 2015; Love et al. 2019) and analyzed for differential gene expression and (optionally) differential transcript usage with edgeR (Robinson et al. 2010) and DRIMSeq (Nowicka and Robinson 2016). The quantifications, provided metadata, and results from the statistical analyses are exported as SingleCellExperiment objects (Lun and Risso 2019), ensuring interoperability with a large part of the Bioconductor ecosystem (Huber et al. 2015; Amezquita et al. 2019). Quantification and quality control results are summarized in a MultiQC report (Ewels et al. 2016).
Other tools can be easily exchanged for those listed above by modifying the Snakefile and/or the template analysis code.
Input file specification
ARMOR can be used to analyze RNA-seq data from any organism for which a reference transcriptome and (optionally) an annotated reference genome are available from either Ensembl (Zerbino et al. 2018) or GENCODE (Frankish et al. 2019). Paths to the reference files, as well as the FASTQ files with the sequencing reads, are specified by the user in a configuration file. In addition, the user prepares a metadata file: a tab-delimited text file listing the name of the samples, the library type (single- or paired-end) and any other covariates that will be used for the statistical analysis. The checkinputs rule in the Snakefile can be executed to make sure all the input files and the parameters in the configuration file have been correctly specified.
Workflow execution
ARMOR is implemented as a modular Snakemake (Köster and Rahmann 2012) workflow, and the execution of the individual steps is controlled by the provided Snakefile. Snakemake will automatically keep track of the dependencies between the different parts of the workflow; rerunning the workflow will thus only regenerate results that are out of date or missing given these dependencies. Via a set of variables specified in the configuration file, the user can easily decide to include or exclude the optional parts of the workflow (shaded ellipses in Figure 1). By adding or modifying targets in the Snakefile, users can include any additional or specialized types of analyses that are not covered by the original workflow.
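For readers unfamiliar with Snakemake, a rule couples input files, output files and a command, and Snakemake chains rules by matching filename patterns. The sketch below is a hypothetical rule in Snakemake's Python-based syntax; the rule name, paths and wildcard are illustrative and are not copied from the ARMOR Snakefile.

# Hypothetical Snakemake rule: run FastQC on one FASTQ file per sample.
# Snakemake fills in the {sample} wildcard by matching requested output files.
rule fastqc:
    input:
        "FASTQ/{sample}.fastq.gz"
    output:
        "output/FastQC/{sample}_fastqc.zip"
    log:
        "output/logs/fastqc_{sample}.log"
    shell:
        "fastqc -o output/FastQC {input} > {log} 2>&1"

Requesting, e.g., output/FastQC/sampleA_fastqc.zip makes Snakemake run this rule with sample=sampleA, and only if the output is missing or older than its input.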
By default, all software packages that are needed for the analysis will be installed in an auto-generated conda environment, which will be automatically activated by Snakemake before the execution of each rule. The desired software versions can be specified in the provided environment file. If the user prefers, local installations of (all or a subset of) the required software can also be used (as described in Software management).
Software management
First, the user needs to have a recent version of Snakemake and conda installed. There is a range of possibilities to manage the software for the ARMOR workflow. The recommended option is to allow conda and the workflow itself to manage everything, including the installation of the needed R packages. The workflow is executed this way with the command

snakemake --use-conda

The first time the workflow is run, the conda environments will be generated and all necessary software will be installed. Any subsequent invocations of the workflow from this directory will use these generated environments. An alternative option is to use ARMOR's envs/environment.yaml file to create a conda environment that can be manually activated, by running the commands

conda env create --name ARMOR \
    --file envs/environment.yaml
conda activate ARMOR

The second command activates the environment. Once the environment is activated, ARMOR can be run by simply typing

snakemake

Additionally, the user can circumvent the use of conda, and make sure that all software is already available and in the user's PATH. For this, Snakemake and the tools listed in envs/environment.yaml need to be manually installed, in addition to a recent version of R and the R packages listed in scripts/install_pkgs.R.

Figure 2 The files and directory structure that make up the ARMOR workflow.
For either of the options mentioned above, the useCondaR flag in the configuration file controls whether a local R installation, or a conda-installed R, will be used. If useCondaR is set to False, the path to a local R installation (e.g., Rbin: <path>) must be specified in the configuration file, along with the path to the R package library (e.g., R_LIBS_USER="<path>") in the .Renviron file. If the specified R library does not contain the required packages, Snakemake will try to install them (i.e., write permissions would be needed). ARMOR has been tested on macOS and Linux systems.
Statistical analysis
ARMOR uses the quasi-likelihood framework of edgeR (Robinson et al. 2010; Lun et al. 2016) to perform tests for differential gene expression, camera (Wu and Smyth 2012) to perform associated geneset analysis, and DRIMSeq (Nowicka and Robinson 2016) to test for differential transcript usage between conditions. All code to perform the statistical analyses is provided in Rmarkdown templates (Allaire et al. 2018; Xie et al. 2018), which are executed at runtime. This setup gives the user flexibility to use any experimental design supported by these tools, and to test any contrast(s) of interest, by specifying these in the configuration file using standard R syntax, e.g.,

design: "~0 + group"
contrast: groupA-groupB

Arbitrarily complex designs and multiple contrasts are supported. In addition, by editing the template code, users can easily configure the analysis, add additional plots, or even replace the statistical test if desired. After compilation, all code used for the statistical analysis, together with the results and version information for all packages used, is retained in a standalone html report, ensuring transparency and reproducibility and facilitating communication of the results.
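To make the design and contrast specification concrete: for two groups, A and B, with two samples each, the design "~0 + group" corresponds to a cell-means model matrix, and the contrast groupA-groupB to a contrast vector (a minimal illustration, not tied to any particular dataset):

$$X = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}, \qquad c = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad H_0: c^{\top}\beta = \beta_A - \beta_B = 0,$$

where the two columns of X correspond to groupA and groupB, and rejecting the null hypothesis indicates differential expression between the groups.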
Output files
The output files from all steps in the ARMOR workflow are stored in a user-specified output directory, together with log files for each step, including relevant software version information. A detailed summary of the output files generated by the workflow, including the shell command that was used to generate each of them, the time of creation, and information about whether the associated inputs, code or parameters have since been updated, can be obtained at any time by invoking Snakemake with the flag -D (or --detailed-summary). Using the benchmark directive of Snakemake, ARMOR also automatically generates additional text files summarizing the run time and peak memory usage of each executed rule.
The results from the statistical analyses are combined with the transcript-and gene-level quantifications and saved as SingleCellExperiment objects (Lun and Risso 2019), ensuring easy integration with a large number of Bioconductor packages for downstream analysis and visualization. For example, the results can be interactively explored using the iSEE package (Rue-Albrecht et al. 2018) and a template is provided for this.
Multiple project management
When managing multiple projects, the user might run ARMOR in multiple physical locations (i.e., clone the repository in separate places). snakemake --use-conda will create a separate conda environment in each subdirectory, which means that the installed software may be duplicated. If disk space is a concern, building a single conda environment (using the conda env create command as shown in the Software management section) and activating it before invoking each workflow may be beneficial. It is also possible to explicitly specify the path to the desired config.yaml configuration file when snakemake is called:

snakemake --configfile config.yaml

Thus, the same ARMOR installation can be used for multiple projects, by invoking it with a separate config.yaml file for each project.
By taking advantage of the Snakemake framework, ARMOR makes file and software organization relatively autonomous. Although we recommend using a file structure similar to the one used for the example data provided in the repository (Figure 2), and managing all the software for a project in a conda environment, the user is free to use the same environment for different datasets, even if the files are located in several folders. This alternative is more of a "software-based" structure than the "project-based" structure we present with the pipeline. Either structure has its advantages and depending on the use case and level of expertise, both can be easily implemented using ARMOR.
Figure 3 The suggested structure for the set of files that need to be organized to run ARMOR on a new dataset. The structure can deviate from this somewhat, since the location of the files can be specified in the corresponding config.yaml file.
Figure 4 The set of output files from the workflow. This includes log files for every step and all the standard outputs of all the tools, such as R objects and scripts, BAM files, bigWig files and quantification tables. Note that the outputs for only one RNA-seq sample are shown; ... represents the set of output files for the remaining samples or contrasts. Directories ending in / contain extraneous files and are collapsed here.
Code availability
ARMOR is available (under MIT license) from https://github.com/csoneson/ARMOR, together with a detailed walk-through of an example analysis. The repository also contains a wiki (https://github.com/csoneson/ARMOR/wiki), which is the main source of documentation for ARMOR and contains extensive information about the usage of the workflow.
Data Availability
Supplemental file DataS1.html contains the MultiQC report for the data used in the Real data walk-through section (ArrayExpress accession number E-MTAB-7029). Supplemental material available at FigShare: https://doi.org/10.25387/g3.8053280.
RESULTS AND DISCUSSION
The ARMOR skeleton
Figure 2 shows the set of files contained within the ARMOR workflow, and what is downloaded to the user's computer when the repository is cloned. The example_data directory represents a (runnable) template of a very small dataset, which is useful for testing the software setup and the system as well as for having a structure to copy for a real project. The provided config.yaml file is pre-configured for this example dataset. We recommend that users prepare their own config.yaml and a similar directory structure to example_data, with the raw FASTQ files and reference sequence and annotation information in subfolders, perhaps using symbolic links if such files are already available in another location. We present an independent example below in the Real data walk-through section.
Once everything is set up, running snakemake, which operates on the rules in the Snakefile, will construct the hierarchy of instructions to execute, given the specifications in the config.yaml file. Snakemake automatically determines the dependencies between the rules and will invoke the instructions in a logical order. The scripts and envs directories, and the Snakefile itself, should not need to be modified, unless the user wants to customize certain aspects of the pipeline.
Real data walk-through
Here, we illustrate the practical usage of ARMOR on a bulk RNA-seq dataset from a study on Wnt signaling (Doumpas et al. 2019). For each of three genetic backgrounds (HEK 293T, dBcat and d4TCF) and two experimental conditions (untreated and stimulated using the GSK3 inhibitor CHIRON99021), three biological replicates were measured (18 samples in total). The number of sequenced reads for each individual sample ranges from 12.5 to 41 million. A more detailed overview of the dataset is provided in the MultiQC report generated by the ARMOR run (Supplemental File DataS1.html). An R script (download_files.R, which can be found at https://github.com/csoneson/ARMOR/blob/chiron_realdataworkflow/E-MTAB-7029/download_files.R) was written to download the FASTQ files with raw reads from ArrayExpress (https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-7029/), and create a metadata table detailing the type of library and experimental condition for each sample (Table 1). This table was saved as a tab-delimited text file named metadata.txt.
The raw data and reference files were organized into a directory, E-MTAB-7029, with the structure according to Figure 3. The default config.yaml downloaded with the workflow was copied into a new file called config_E-MTAB-7029.yaml and edited to reflect the location of these files. In addition, the read length was set and the experimental design was specified as "~0 + condition", where the condition information will be taken from metadata.txt. Then, a set of contrasts of interest (e.g., conditiond4Tcf__chir-conditiond4Tcf__unstim) were specified, as well as the set of genesets to use. The final configuration file can be viewed at https://github.com/csoneson/ARMOR/blob/chiron_realdataworkflow/config_E-MTAB-7029.yaml.
The set of files (not including the large data and reference files, which would be downloaded using download_files.R) used in this setup can be found on the chiron_realdataworkflow branch of the ARMOR repository: https://github.com/csoneson/ARMOR/tree/chiron_realdataworkflow.
After downloading the data, generating the metadata.txt file and editing the config.yaml file, the full workflow was run with the command:

snakemake --use-conda --cores 20 \
    --configfile config_E-MTAB-7029.yaml

Upon completion of the workflow run, the specified output directory was populated as shown in Figure 4. The MultiQC directory contains a summary report of the quality assessment and alignment steps. In the outputR directory, reports of the statistical analyses (DRIMSeq_dtu.html and edgeR_dge.html), as well as a list of SingleCellExperiment objects (in shiny_sce.rds), are saved. The latter can be imported into R and used for further downstream analysis. Using the template run_iSEE.R (available from https://github.com/csoneson/ARMOR/blob/chiron_realdataworkflow/E-MTAB-7029/run_iSEE.R) and shiny_sce.rds (available from https://doi.org/10.6084/m9.figshare.8040239.v1), an R/shiny web application can be initiated, with various panels to allow the user to interactively explore the data and results (Figure 5). Figure 6 shows the run time and maximal memory usage for generating each output file. Note that the ncores parameter in the configuration file was kept at 1, and thus each rule was run using a single thread. The most memory-intensive parts of the workflow, due to the large size of the reference genome, were the generation of the STAR index and the alignment of reads to the genome. The most time-consuming parts were the generation of the STAR index and the DTU analysis with DRIMSeq. However, both of these can be executed using multiple cores, by increasing the value of the ncores parameter. | 3,970.4 | 2019-03-12T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Methodological Approaches to Development Strategies for the Tourism and Hospitality Industry Enterprises
The article is concerned with methodological approaches to development strategies for tourism and hospitality industry enterprises. It has been found that most regions, local tourist destinations, and tourism and hospitality businesses do not have any clear, formalized strategy. It was determined that some tourism and hospitality businesses apply only individual elements of strategic management, i.e. they do not have an integrated system of strategic management. Methodological tools have been proposed for elaborating development strategies for tourism and hospitality industry enterprises; these show that strategic management, being conceptual and panoramic by nature, cannot give an exact and clear view of the tourism industry's future. It was proved that strategic management cannot be limited to a set of universal and routine rules, procedures or schemes. It was also shown that strategic management requires great effort, time and resources if the activity of tourism and hospitality businesses is to stand out.
Introduction
The contemporary theory of studying the scientific basis for the elaboration of development strategies in the tourism and hospitality industry is based on the paradigm of the theory of services since tourism services are the basis for the development of a complex tourism product and are connected with all non-material sectors of the economy. According to the theory of the three sectors of a national economy, the primary sector is agriculture and the extractive industry, the secondary sector -manufacturing, and the tertiary sector -services. In the framework of this approach, it can be pointed out that the sign of a developed economy is the rapid expansion of the services sector and its transformation into an important sector of the economy. More than half of the gross domestic product is forecast to be generated just in the services industry, which the tourism and hospitality industry belongs to. Over the past decade, the role of the services sector in Russia's gross domestic product has slightly changed. However, this was mainly driven by a sharp decline in production in the manufacturing sector which, in turn, caused shifts in the state's economic policy priorities. For this reason, a bigger role and wider scope of the services sector, to which the tourism and hospitality industry belongs, in the country's GDP confirms an increase in the circulation of capital among various industries and the possibility of a synergetic effect for the entire national economy. The study of problems related to the elaboration of development strategies for tourism and hospitality businesses was presented in research works by N.M. Gromova [1], E.A. Dedusenko [2], S.A. Kazakova [3], M.A. Los [4], L.S. Morozova [5], M.G. Musaev [6], etc. While scientists pay much attention to the issues of development of tourism and hospitality enterprises, the present socioeconomic and political realities have substantially complicated and changed the economic practice in the industry. New development patterns have emerged, requiring constant revision, deepening and renovation of the current theories, concepts and models.
Methods
The methodological basis of the research is the system approach, the methods of scientific abstraction, analysis and synthesis, the dialectic method of studying economic phenomena, and provisions for development strategies for tourist destinations in the unstable economic environment. To solve specific tasks the following methods were used in the article: theoretic generalization, logical reasoning, scientific abstraction, association and analogy (in order to study and generalize the methodological basis for strategic development of enterprises of the tourism and hospitality industry); the methods of system analysis, generalization, and comparison (in order to study the methodological approaches and methods of assessing efficiency of management in businesses of the tourism and hospitality industry). The information base of the research included legislative and statutory acts, statistical data from public authorities and self-government bodies, and scientific publications of Russian and foreign scientists about problems related to the elaboration of development strategies for enterprises of the tourism and hospitality industry [7,8,9]. In the course of the study, it is planned to develop approaches to management of enterprises of the tourism and hospitality industry and substantiate directions for strategic management in the tourism sector. In addition, a task is set to substantiate the development system for the tourism industry and determine the main areas of development for enterprises of the tourism and hospitality industry in the unstable economic environment.
Results
Under the current economic conditions, in order to increase activity in the tourism and hospitality industry, it became necessary to think strategically; such thinking is mainly characteristic of economic entities for which an entrepreneurial format of economic activity, oriented towards future development, is typical. When studying the strategic development of the tourism and hospitality industry in general, it is essential to define the stages of strategy development.
To make clear division of the rights, duties and authority to take and execute managerial decisions, it is proposed to elaborate development strategies for enterprises of the tourism and hospitality industry based on the principle of hierarchy by defining four levels of development (national, regional, specific tourist destinations and specific tourism and hospitality industry subjects) and relevant documents aimed at development at each level (national strategy, regional strategy, strategy of a specific destination (city, settlement, village, etc.) and strategy of a tourism and hospitality company). When developing the tourism and hospitality industry in Russia, assessment is carried out on the basis of the following macroeconomic indicators that characterize the national development of the industry as a whole, i.e. changes in tourist flow, the scope of tourism and hotel services provided, budget payments, employment in the tourism and hospitality industry, etc. Another indicator for assessing the development level of the tourism and hospitality industry in Russia is data from the Travel and Tourism Competitiveness Index which is published by the World Economic Forum. The Index shows results of competitiveness studies in the global tourism and hospitality industry every two years. This report in the form of ratings is often used as a tool of strategic management for businesses and governments to elaborate development strategies for the tourism and hospitality industry. The methods of formation of the competitiveness index for the tourism and hospitality industry are based on 79 indicators, grouped into 14 components. The data that make up the index form, in turn, 3 main sub-indices: 1) Sub-Index "Regulatory Environment in the Tourism and Hospitality Industry", which comprises 5 components, such as legislation and state regulation; environment, environmental protection; security; healthcare; the industry's priority for a country; 2) Sub-Index "Environment and Infrastructure for Businesses", which comprises 5 elements, such as air transport infrastructure; land transport infrastructure; tourism industry infrastructure; IT infrastructure; price competitiveness; 3) Sub-Index "Human, Cultural and Natural Resources in the Tourism and Hospitality Industry", which comprises 5 components, such as the availability of qualified staff; the desire to develop tourism; natural resources; cultural heritage; climate change. In regard to the development of the tourism and hospitality industry in the regions, the authors propose to elaborate and approve development programs or development strategies for a specific region every three or five years, with funds to be mainly provided from local budgets. A region's development programs for the tourism and hospitality industry should include efforts to develop and improve tourism and recreation infrastructure, to create favorable conditions for efficient management of the region's tourism and recreation industry, to raise professionalism of tourism and hospitality employees, to create safe conditions for tourists, to provide international cooperation and exchange of related experience. Nowadays, there is the practice of local governments' developing and implementing such programs and strategies for enterprises of the tourism and hospitality industry, but their mechanisms have yet to be finally adjusted to control, monitor and report to the local public about results achieved. 
Essentially, such strategies should stipulate not only development measures and sources of funds but also the corresponding mechanisms of control, monitoring and reporting. Primary tools for implementing a development strategy in the tourism and hospitality industry include: (1) a logically connected system of regional statutory legal acts, together with consistently applied methods and tools of state regulation; (2) informational, methodological and instrumental support for the preparation and adoption of managerial decisions by regional authorities; (3) wide use of elements of strategic management, reasonably connected with methods and forms of operational management.
At the same time, the theory and practice of entrepreneurship cover a wide range of general directions of development for standard economic conditions; such development strategies are called standard or basic. In line with this approach, several forms of development can be highlighted for enterprises of the tourism and hospitality industry to make reasonable use of.
1. Development through integration and diversification of activities, considered in several modifications. Integration is a strategy of intra-sector growth. It takes the form of horizontal integration when control is gained over competitors or when enterprises of the tourism and hospitality industry merge in the course of business; when control is established over links of the service chain, it takes the form of vertical integration. Related areas of tourism and hospitality activity are thus united in the course of integration.
2. Development via global expansion by forging strategic alliances or setting up joint ventures. Under these circumstances, enterprises of the tourism and hospitality industry can be expected to streamline operations within their current market positions.
3. Development via organizational flexibility, i.e. the ability to anticipate the development of competitors' economic processes, which also reduces uncertainty. Unlike the above types of growth, this form aims above all to foresee competitors' development.
For this reason, to gain competitive advantages, the following strategies can be offered to enterprises of the tourism and hospitality industry: -The Strategy of Cost Leadership, which stipulates striving for the lowest cost when creating and distributing tourism and hotel services. This strategy aims to set lower prices and increase market share.
-The Strategy of Differentiation, which stipulates efforts to gain leadership in terms of the level, quality and technology of service, etc. This strategy aims to provide consumers with exclusive services constituting a peculiar modification of standard services (discounts, bonuses, certificates, etc.).
-The Strategy of Concentration (Focus Strategy), which aims to improve the specialization and concentration of a tourism and hospitality company on a relatively narrow target group of consumers or on certain services. This strategy is based on selecting a narrow competitive area within the industry (a market niche) rather than addressing the whole market. -The Early Entry Strategy, whereby a tourism enterprise brings an original tourist service to the market via innovations. Its specific features are substantial risks and the complexity of planning, since past experience cannot be extrapolated.
-The Strategy of Synergy, which aims to enhance operating efficiency through the common use of resources. In this case, this can mean the generation of competitive advantages by merging several enterprises of the tourism and hospitality industry for the joint use of resources, general managerial experience, marketing tools, etc. This strategy is the basis for the formation of various unions, alliances and other associations (synergy of costs, sales, planning and management). Moreover, the business strategy of a specific business constitutes a sub-system of the corporate strategy of the tourism and hospitality enterprises of which a destination is composed. Provided that a company carries out only one specific type of business, the business strategy coincides with the corporate strategy. Thus, the conceptual approach to building corporate strategies that the authors have considered and proposed can be shown in the following scheme (Figure 1). Research shows that modern enterprises in the tourism and hospitality industry are at the stage of managing strategic changes. For this reason, the formation of a modern paradigm of strategic management is urgent: the theory of strategy management merges with the theory of change management, and it is the theory of change management that is built into the theory of strategic management (Table 1). Practice has shown that any enterprise of the tourism and hospitality industry constitutes a complicated socio-economic system; it is therefore necessary to speak of its comprehensive development. Comprehensive development can mean purposeful and regulated changes of technical, economic, social, organizational and other parameters.
Discussion
The reliability of the suggested approaches to the elaboration of development strategies for enterprises of the tourism and hospitality industry is confirmed by the fact that, in practice, it is hard to single out and clearly classify the strategies implemented to develop a given enterprise [10,11,12]. However, taking the realities into account, the authors believe that in the course of business companies are reasonably guided by strategies such as "consumer proximity", "demand management" and "leadership of goods/services". Enterprises of the tourism and hospitality industry that strive to master the Consumer Proximity strategy, i.e. to find a niche, are marked by five main distinctive features: a high degree of technological manageability; a pricing workshop; the best system of measuring demand by consumer group; a focus on solving customer problems; and readiness to bear costs to customize services. The development of the Demand Management strategy, in turn, is based on the study and analysis of the factors behind fluctuations in demand: determining the nature of the fluctuations; identifying their cycles (day, week, month, year); and establishing the reasons that cause changes in demand (natural and climatic, cultural and public, socio-economic, etc.). Factors that increase or decrease demand can also be highlighted: prices, changes in the location and time of service, and priorities set for customer service (urgency of services, much higher costs of services). The Leadership of Services strategy envisages the following directions: understanding the target market and consumer needs; a specific policy to satisfy these needs, making it possible to attain high consumer loyalty; executives paying attention to the level of service at all times; high standards of service; systems for monitoring service results based on comprehensive performance assessment; and systems for handling and satisfying consumer complaints.
Conclusions
Summing up the results, the authors can point out that most regions, local tourist destinations and enterprises of the tourism and hospitality industry in Russia do not have any formalized strategies. Some of them make use only of individual elements of strategic management, i.e. they do not have a comprehensive system of strategic management. The proposed methodological tools for the elaboration of development strategies for enterprises of the tourism and hospitality industry led the authors to the conclusion that strategic management, given its methodological essence, i.e. its conceptual approach and panoramic vision, cannot give an exact and clear view of the industry's future. At the same time, strategic management cannot be limited to a set of universal and routine rules, procedures and schemes. In addition, strategic management requires substantial effort, time and resources if the achievements of enterprises of the tourism and hospitality industry are to be remarkable.
"Business",
"Economics"
] |
Development and Application of Hybrid Method to Inhomogeneous Geology for Curtain Grouting -Case Study in Sedimentary Rock with Fold Movement for Nam Ngiep 1 Hydropower Project in Lao PDR-Yoichi
Curtain grouting for dam foundation treatment is one of the most crucial work items in dam construction to secure the impermeability of the foundation rock. Some decades ago, the Grouting Intensity Number (GIN) Method, developed in Europe, came to be frequently applied to relatively simple geotechnical structures. On the other hand, the Conventional Method, which requires phased mix proportions and water pressure tests through a sequence of the works, remains reliable for inhomogeneous geology. This paper presents the development of a modified curtain grouting method and its application to the Nam Ngiep 1 Hydropower Project in Lao PDR, which has an inhomogeneous geology of sedimentary rock with weak layers affected by fold movement. The method has been dubbed "hybrid" because it garners both the economical superiority of the GIN Method, in that it enables the use of a single mix proportion, and the technical superiority of the Conventional Method, in that the individual design pressure in each stage is based on water pressure tests.
Introduction
The Nam Ngiep 1 Construction Project in Lao PDR (Tsutsui et al. 2021) includes a 167 m-high RCC dam with an alternation of sandstone and mudstone in the dam foundation rock (hereinafter 'the Site'). The curtain grouting work for the dam foundation treatment is one of the most fundamental and critical work items in the dam's construction to secure the necessary impermeability (Bruce 1991, 2007). A few grouting methods have been developed around the world. Although the Conventional Method is still applied for grouting work, it requires phased mix proportions and water pressure tests through a sequence of work, and thus has the disadvantage of low construction speed due to its elaborate procedure (Kudo et al. 1984). Some decades ago, in Europe, another grouting concept was introduced by Lombardi and Deere. Known as the Grouting Intensity Number (hereinafter referred to as 'GIN') Method (Lombardi 2003), it uses a single mix proportion during the entire grouting process until grouting reaches the GIN Curve, represented by the combination of grouting pressure and injected grout volume. This has the advantages of high construction speed and low construction cost thanks to its simple procedure.
Several studies (Li et al. 2017; Shahzad et al. 2017) have also reported that the GIN Method is more competitive than the Conventional Method in terms of cost and construction speed.
However, the GIN Method, which was applied at Nam Ngiep 1 in its early stage, did not prove obviously superior to the Conventional Method in terms of grouting time, volume and workability. This was because of the weak layers in the riverbed, the fold zone in the middle of the right bank, and the vertical cracks at high elevation on both banks due to toppling. It is supposed that, under the high-pressure injection of the GIN Method, cracking would be induced in the inhomogeneous geology, where crustal forces have caused misalignment and folding of the geological layers. The authors therefore set out to develop an innovative, competitive and economically feasible method that would fit the inhomogeneous geology on Site.
This paper introduces the new method, named the "Hybrid Method", which has the advantages of both the Conventional and the GIN Methods, and proposes the geological conditions under which it is applicable.
Site Geology
The geology on Site is composed of an alternation of sandstone and mudstone of the Jurassic to Cretaceous ages, as shown in Fig. 1. Conglomerate is mainly distributed on the higher elevation of the left bank, with thin mudstone layers bedded at nearly horizontal intervals of a few meters.
Each bed in the alternated strata tends to behave independently, resulting in geological separation at the contact planes (bedding planes). At the same time, the separation might have been considerably damaged by orogenic fold movement. Later, a river course cut across the fold axis, and continued river down-cutting created a gorge. On both riverbanks, except for the fold zone shown in Fig. 1, a toppling phenomenon occurred due to gravitational movement along the two sets of vertical geological cracks and the horizontal bedding planes. The vertical cracks developed with a spacing of a few meters, parallel and at right angles to the river course.
The surface rock zone, down to 50 m in depth from the excavated surface, tends to be loose due to a decrease in overburden and to weathering by infiltration of water through these vertical cracks and horizontal bedding planes. On the other hand, the strata deeper than 50 m from the excavated surface are generally sound, fresh and tight. Sandstone is mainly distributed, with intercalation of thin mudstone layers, above the weak layer F7 at the higher elevation of the right bank. The fold zone, where cracks might have developed because of extreme bending of the strata by orogenic compressive movement, is located at the middle elevation of the right bank. In particular, a high-permeability zone with a Lugeon value (hereinafter 'Lu') of 10 to 20 is widely distributed around the anticlinal axis, as shown in Figs. 2 and 3.
In order to confirm the distribution of cracks and their direction in the fold zone, a borehole camera was lowered into the drilled holes P18 (Block 17), located at the edge of the fold zone, and P20 (Block 19), in the center of the anticline. Figure 4 shows geological characteristics such as the number of cracks, crack width and Lu in each stage in P18 and P20.
As for P18, four (4) high-permeability zones were confirmed at 20-25 m, 75-90 m, 95-105 m, and 110-115 m in depth, respectively, through the geological investigation and Lugeon tests. The high-permeability zone at 20-25 m in depth would be related to the weak layer FL-D. Although the cracks around the weak layer are both fewer in number and smaller in width, some cracks in the mudstone layer just below the weak layer might show high permeability. The high-permeability zone at 75-90 m is also considered to be related to the weak layer FR-A. The high-permeability zones at 95-105 m and 110-115 m are located around the boundary between sandstone of 10 m in thickness and thin mudstone. Although the sandstone is impermeable and categorized as sound and hard rock (CH-class according to the standard of the Central Research Institute of Electric Power Industry, Japan (Central Research Institute of Electric Power Industry 1992)), the cracks observed in the drill core samples are attributed to stress concentration from the fold movement.
As for P20, relatively high-permeability zones are seen generally throughout the hole. These zones might have originated from crack development in the alternation of sandstone and mudstone, which segregated at the stratum boundaries during fold movement.
All cracks observed in holes P18 and P20 are plotted separately on a Schmidt net, as shown in Fig. 5. The data for P18 are concentrated in the center of the net, indicating that many cracks are of low dip, while the data for P20 are scattered variably over the net, indicating that the cracks in P20 are of variable dip and strike.
Conventional Method
The Conventional Method has been most commonly used for foundation treatment in many hydropower projects in Japan. In the Conventional Method, the water-cement ratio of the grout material is varied from a large value at first, for better injection into small cracks, to a smaller value for better injection into larger and wider open cracks. The latest standard for the Conventional Method stipulates a water-cement ratio within a range of 10:1 to 0.5:1, and that check holes be drilled in order to evaluate the performance of the grouting works (Kudo et al. 1984). This method is popular and common because it is suitable especially for inhomogeneous geological structures, which are characterized by a variety of geological types and numerous faults formed by ancient geotechnical activities such as folding and fracturing. On the other hand, the method has some drawbacks due to its very deliberate and elaborate process: 1) low construction speed, and 2) strict requirements.
Since Japan, which is located on the boundaries of several tectonic plates, has an inhomogeneous geological structure compared with Northern Europe, which sits on the Eurasian Plate, the Conventional Method has been applied since the early stages of hydropower development and dam construction in Japan. Records show that the method was applied for the first time to the foundation, with fault treatment, of the Komaki Dam in 1929. Thereafter, the method gradually became more sophisticated and standardized based on cumulative experience. Accordingly, it has matured into a technical standard that fits the inhomogeneous geology of Japan.
Regarding the grouting process, the split spacing method is generally applied, whereby an interpolated layout of holes along the grouting line is used regardless of the grouting method, as shown in Fig. 6. The intervals of grouting holes in each step on Site are 24 m for the pilot holes and 12 m for the primary holes, respectively.
GIN Method
The GIN Method has a set of simple termination criteria expressed through the total injected volume and grouting pressure, avoiding the elaborate work procedure of the Conventional Method. In the GIN Method, a single water-cement ratio and a stable slurry are used (Kudo et al. 1984; Japan Commission on Large Dams 1957), based on test results for viscosity and bleeding, so as to inject even into thin cracks, and a single GIN Value is set to plot the GIN Curve mentioned below, even when applied to poor geotechnical characteristics. Therefore, the GIN Method helps reduce project costs compared with the Conventional Method because it is not necessary to repeatedly change the grout mix proportion during a grouting procedure. The GIN Method has been adopted for some recent projects in Lao PDR, as shown in Table 1. The termination criteria of the grouting treatment are defined by a conceptual formula proposed by Dr. Giovanni Lombardi, which bounds the range of grouting pressure and total injected volume (Lombardi 2003). The GIN Value is determined based on experimental observation and engineering considerations. The upper limits of grouting pressure and total injected volume are proposed as five (5) limit curves, represented by separate GIN Values, as shown in Fig. 7 (Lombardi 2003). Known as "GIN Curves", the most suitable curve is selected by trial grouting depending on the characteristics of the geotechnical structure. A sample of the termination criteria of curtain grouting in each step is shown in Fig. 8 (Lombardi 2003); a schematic encoding of this termination rule is sketched below.
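As a concrete reading of the GIN termination criterion, the following minimal sketch checks an operating point against a hyperbolic GIN curve of the commonly cited form p x V = GIN, together with pressure and volume caps. All parameter names, units and the p x V form itself are illustrative assumptions here, not values taken from the paper.

```python
def gin_termination(pressure_bar, volume_l_per_m, gin_value,
                    p_max_bar, v_max_l_per_m):
    """Return True when a grouting stage should be terminated under the
    GIN criterion: stop when the operating point (p, V) reaches the
    hyperbolic GIN curve p * V = GIN, or when either the pressure or the
    volume cap is hit. Names and units are illustrative assumptions."""
    if pressure_bar >= p_max_bar:            # pressure limit reached
        return True
    if volume_l_per_m >= v_max_l_per_m:      # volume limit reached
        return True
    return pressure_bar * volume_l_per_m >= gin_value  # GIN curve reached
```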
In the design stage of Nam Ngiep 1, the Conventional Method was planned to be applied to the Site. However, the dam excavation work was so prolonged that the remaining works including the curtain grouting work had to be accelerated. Thus, the GIN Method was applied to recover the time and cost.
Modified GIN Method
Yoshizu proposed the "Modified GIN Method" as a new grouting method (Yoshizu et al. 2019), which uses a single mix proportion and an injected grouting pressure (IGP) principally based on the GIN Curve, but which also uses water pressure tests. The Modified GIN Method is a practical variation of the GIN Method in that the grouting process may continue even after it reaches the GIN Curve, to enhance the efficiency of the grouting work in inhomogeneous geology.
The new rule is summarized as shown in Fig. 9 and below.
The grouting process is completed when the Modified GIN Curve is reached with a grouting flow velocity equal to or lower than 0.4 liter/min/m. The grouting pressure is maintained until the grouting flow velocity falls below 0.4 liter/min/m, even after reaching the GIN Curve.
In the event that the grouting flow velocity is over 0.4 liter/min/m when the cumulative grouting volume reaches 600 liter/m, grouting is paused and resumed 3 hours later.
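A schematic encoding of these two quoted rules, useful for seeing how they interact in a single per-stage decision, might look as follows (a sketch, not the authors' control software):

```python
FLOW_LIMIT = 0.4    # liter/min/m, termination flow velocity from the text
PAUSE_VOLUME = 600  # liter/m, cumulative volume triggering a 3-hour pause

def modified_gin_step(on_gin_curve, flow_velocity, cumulative_volume):
    """Decide the next action for one stage under the Modified GIN rules.
    Returns one of 'terminate', 'pause_3h', 'hold_pressure', 'continue'."""
    if cumulative_volume >= PAUSE_VOLUME and flow_velocity > FLOW_LIMIT:
        return "pause_3h"            # pause grouting, resume 3 hours later
    if on_gin_curve:
        # keep the pressure until the flow velocity drops below the limit
        return "terminate" if flow_velocity <= FLOW_LIMIT else "hold_pressure"
    return "continue"
```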
Yoshizu raised the following issues to be solved for better application of the method (Yoshizu et al. 2019). Figure 10 shows the average Lu and cement take per meter (hereinafter "unit cement volume" or UCV) in each step on Site. Theoretically, Lu and UCV should decrease as the steps progress. The results of Block 1 (see Fig. 1) show this theoretical behavior (see Fig. 10(a)). However, the UCV increases in the final step of Block 2 (see Fig. 10(b)). As shown in Fig. 11, the geology in Block 1 is generally simpler, harder and more impermeable than that in Block 2. The main causes are considered to be that 1) large-scale fractures are newly formed or developed by over-pressurized grouting just before the grouting in the final step, and/or 2) Block 2 has extremely steep surface topography, supposedly derived from the toppling phenomenon. As a result, the vertical cracks perpendicular to the dam axis in Block 2 might be difficult to grout from grouting holes drilled vertically, parallel to the vertical cracks, and hence were not improved in the earlier steps. According to Figs. 10(c) and 10(d), the UCV of Blocks 13 and 14 is similarly larger in the final step than in the early steps, as in Block 2.
In order to pursue the reason further, parameters such as the Critical Grouting Pressure (CGP) and UCV are analyzed in each block. The average CGP and UCV of the sandstone, mudstone and weak layers in Blocks 12, 13, 14 and 15 are shown in Fig. 12. It is assumed that there are few cracks in Blocks 12 to 15 because they are located in the riverbed, where there are few of the large-scale fractures seen in Block 2. However, comparing the respective values shown in Fig. 12, the UCV in the weak layers is larger than that in the sandstone and mudstone, even though the CGP in the weak layers is smaller. The assumed reason is that the weak layers are so sensitively affected by geotechnical actions that fractures might have been easily induced.
To pursue the possible reason for the fracture inducement in the weak layers, the step-by-step transition of the Lu and the UCV is examined as follows. The degree of grout improvement in the weak layers by the Modified GIN Method is investigated by the Progress Management Method ("PMM") for grouting. As shown in Fig. 13, the horizontal axis represents the difference in Lu (⊿Lu) and the vertical axis represents the difference in UCV (⊿UCV) at the transition of each step. For example, when Lu decreases from 40 to 30 and UCV decreases from 300 to 200 through a step, the point (⊿Lu = -10, ⊿UCV = -100) is plotted. That is, the PMM chart indicates that the grouting is proceeding appropriately when the data concentrate in the third quadrant and move towards the center as the steps progress. Figure 14 shows the transition of Lu and UCV in each step determined by PMM for Blocks 12 to 15, which had weak layers grouted by the Modified GIN Method. Figure 14(a) indicates that Lu and UCV did not improve well with tertiary-hole grouting because the data are scattered across all of the quadrants. Figure 14(b) indicates that both Lu and UCV show some improvement with quaternary-hole grouting because the data are concentrated in the 3rd quadrant. Figure 14(c) indicates that Lu and UCV get larger in the final step because the data move to the 1st quadrant again. This means that cracks might develop due to over-pressurized grouting.
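The PMM bookkeeping described above is simple enough to state directly in code; the short sketch below computes one PMM point and its quadrant, reproducing the worked example from the text:

```python
def pmm_point(lu_prev, lu_curr, ucv_prev, ucv_curr):
    """Progress Management Method: (dLu, dUCV) for one step transition."""
    return lu_curr - lu_prev, ucv_curr - ucv_prev

def pmm_quadrant(d_lu, d_ucv):
    """Quadrant of a PMM point; grouting proceeds well when points
    concentrate in the 3rd quadrant (both Lu and UCV decreasing)."""
    if d_lu >= 0 and d_ucv >= 0:
        return 1   # both increasing: possible crack development
    if d_lu < 0 and d_ucv >= 0:
        return 2
    if d_lu < 0 and d_ucv < 0:
        return 3   # desired behavior: improvement in both
    return 4

# Example from the text: Lu drops 40 -> 30, UCV drops 300 -> 200
print(pmm_quadrant(*pmm_point(40, 30, 300, 200)))  # -> 3
```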
Next, the CGP is analyzed in detail. An IGP that is too high may cause harmful damage to the dam foundation; therefore the IGP should not exceed the CGP by too much. However, it is common practice to set the IGP slightly higher than the CGP, as shown by the green line in Fig. 15, so as not to leave any voids in the cracks of the dam foundation. It should be noted that the Modified GIN Method has a weak point in that the IGP might in certain cases not reach the CGP, as occurred in Blocks 1, 2, 13 and 14, shown in the red area in Fig. 15.
Originally, the CGP was simply assumed in accordance with the depth of the grouting holes. However, as shown in Fig. 16, its variation is too large compared with the highest envelope of the CGP against depth (H). Thus, it was concluded that the CGP should be correctly determined by conducting water pressure tests stage by stage. It is considered that the injection may have been paused or terminated before the voids were sufficiently filled in the intermediate steps, due to an IGP relatively lower than the CGP in some stages.
It is also considered that the injection volume may have increased unproductively in the final step due to cracks developing as a result of the IGP becoming much higher than the CGP, as shown in the blue area in Fig. 15. The IGP was significantly higher than the CGP in 110 stages, equivalent to 65%. To solve this issue, a new method needed to be developed whereby a proper IGP is set based on the CGP obtained from the water pressure test in each stage; applicable conditions for this new method and the GIN Method are also proposed depending on the geological situation. By carefully regulating the IGP according to geotechnical conditions, it is possible to improve the permeability of the dam foundation effectively.
Planning and Verification
The new method the authors have developed is fundamentally different from the Modified GIN Method: the grouting work is managed only by grouting pressure, not by the correlation between grouting pressure and grouting volume, and the design pressure is set based on the CGP obtained from the water pressure test in each stage. Since the new method shown in Fig. 17 has characteristics of both the Conventional and the GIN Methods, it is named the "Hybrid Method". In the Hybrid Method, permeability and CGP are confirmed in advance through water pressure tests at each stage, in the same manner as the Conventional Method, while a single mix proportion is adopted in the same manner as the GIN Method. The grouting specifications of the Hybrid Method are shown in Fig. 17. A water-cement ratio (W/C) of 1.5 is applied as the single mix proportion, which was optimized by the preliminary grouting test (Yoshizu et al. 2019).
In the case of a zero IGP, open cracks are considered to be distributed, and thus a surplus cement volume is required. Therefore, the mix proportion is immediately changed to a W/C of 0.8, which has a higher viscosity. A schematic encoding of this per-stage logic is sketched below.
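The sketch below condenses the Hybrid Method's per-stage planning as just described: the design pressure follows the stage's measured CGP, a single W/C = 1.5 mix is the default, and the mix switches to W/C = 0.8 when open cracks leave the pressure at zero. The 10% pressure margin is an assumed illustrative value; the paper states only that the IGP is set slightly above the CGP.

```python
def hybrid_stage_plan(cgp_mpa, margin=1.1):
    """Hybrid Method per-stage planning (sketch): design injection pressure
    (IGP) set slightly above the critical grouting pressure (CGP) from the
    stage's water pressure test; single W/C = 1.5 mix, switching to the
    higher-viscosity W/C = 0.8 when open cracks make the pressure zero.
    The 'margin' factor is an assumption, not a value from the paper."""
    if cgp_mpa <= 0.0:
        # open cracks: higher-viscosity mix, pressure set by other means
        return {"w_c_ratio": 0.8, "igp_mpa": None}
    return {"w_c_ratio": 1.5, "igp_mpa": margin * cgp_mpa}
```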
Technical Applicability
A ratio of incompatibility with the target Lu (PLu) is adopted to assess the effect of the curtain grouting by the Hybrid Method applied to Blocks 18 to 22, located in the fold zone (see Fig. 1). It is defined as the ratio of the number of stages that do not meet the target Lu to the total number of stages: PLu = (number of stages failing the target Lu / total number of stages) x 100%. It is difficult to achieve a sufficient improvement effect of grouting on an inhomogeneous geotechnical structure using the GIN Method; to illustrate this, the transition of grout volume and Lu over the steps of the Modified GIN Method is shown in Fig. 10.
The Hybrid Method was applied to the fold zone (Blocks 18-22) which is the most inhomogeneous geotechnical structure on Site, as this is where alternate sandstone and mudstone with the weak layer FR-A intercalated are distributed, as shown in Fig. 1. In addition, the fold zone bends extremely under fold movement resulting in the development of cracks according to the Schmidt net plots at P20 as shown in Fig. 5.
On the other hand, the Hybrid Method's effectiveness is verified as follows. Figure 18 shows the PLu, as explained above, for Blocks 18-22. All of Blocks 18-22 achieved a PLu lower than the target value of 15% for the ratio of stages failing to satisfy the target Lu of 2.0 in the final step. Figure 19 shows the average Lu and UCV in each step of Blocks 18-22. As the grouting sequences progress using the Hybrid Method, the average Lu and UCV decrease so consistently that the results appear almost theoretical when compared with the Modified GIN Method. Also, Fig. 20 shows that the correlation between the IGP on the vertical axis and the CGP on the horizontal axis is quite appropriate. If the IGP is much higher than the CGP, the foundation rock is likely to fracture due to surplus grouting pressure; on the contrary, if the IGP is much lower than the CGP, the foundation is not likely to be improved sufficiently by grouting. The graph indicates that the IGP is slightly higher than the CGP. The average UCV of 533.5 kg/m for all of Blocks 18-22 using the Hybrid Method is smaller than the 627.6 kg/m at Block 14 using the Modified GIN Method. Based on the above, it is concluded that the Hybrid Method is both technically and cost-wise superior to the Modified GIN Method for the inhomogeneous geotechnical structure under the same criterion of a target Lu of less than 2.
Economical Applicability
The previous section demonstrated the technical superiority of the Hybrid Method over the Modified GIN Method in the context of the inhomogeneous geotechnical structure. This section demonstrates the economical superiority of the Hybrid Method vis-a-vis the Conventional and GIN Methods (Dou et al. 2020). Blocks 1, 2, 12, 13, 14 and 15 were selected for the comparative study because all of the methods were applied there for the preliminary grouting tests and, as a result, all the data necessary for the comparison are available. As mentioned in the "Site Geology" section, conglomerate is mainly distributed in Blocks 1 and 2 on the higher elevation of the left bank, and the weak layers FL-D and FR-A are located in Blocks 12, 13, 14 and 15 in the riverbed. Both simple and inhomogeneous geological conditions, with conglomerate or weak layers, were assumed with the same dimensions (50 m deep) below. The target Lu is the same among the GIN, Conventional and Hybrid Methods, as shown in Table 2. In the simple geological structure with pure sandstone and/or mudstone, grouting could be terminated by the tertiary hole in all methods, based on Site records. An image of a grouting layout in the simple geological condition is shown in Fig. 21.
-The grouting can be terminated by the tertiary hole in all stages and methods based on Site records.
-Unit prices in each work scope are set referring to the construction contract of the Nam Ngiep 1 Project.
-The unit price of admixture per kg is taken as 1 and the other unit prices are estimated proportionally. Table 3 shows the result of the study for the simple geological condition. Taking the cost of the GIN Method as 1.0, the cost of the Hybrid Method is 1.16 and the cost of the Conventional Method is 1.53; the GIN Method is the cheapest because the target Lu can be achieved by high-pressure grouting in a short time.
Next, the inhomogeneous geological conditions for the comparative study are mentioned below. In the inhomogeneous geological structure with conglomerate and weak layers on Site, additional grouting more than a quaternary hole is required for the GIN Method based on Site records. An image of grouting under the inhomogeneous geological conditions is shown in Fig. 22.
-In case of using the GIN Method, additional drilling is required for inhomogeneous geology based on Site records because the target Lu cannot be achieved by the tertiary hole.
-Unit prices in each work scope are set referring to the construction contract of the Nam Ngiep 1 Project.
-The unit price of admixture per kg is taken as 1 and the other unit prices are estimated proportionally. Table 4 shows the study results for the inhomogeneous geological conditions with conglomerate or weak layers. Taking the cost of the GIN Method as 1.0, the cost of the Hybrid Method is 0.85 because no additional drilling cost is needed.
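For quick reference, the relative costs quoted above can be tabulated as follows (each geological case is normalized to the GIN Method = 1.0; the Conventional Method figure for the inhomogeneous case is not quoted in this excerpt):

Method         Simple geology   Inhomogeneous geology
GIN            1.00             1.00
Hybrid         1.16             0.85
Conventional   1.53             (not stated)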
The results of the economical comparative study are shown in Fig. 23. The estimated cost of the GIN Method for the simple geological condition is regarded as 1.0 (100%). Generally, the GIN Method is considered preferable in cases of simple geological conditions without significant cracks, while the Hybrid Method is considered preferable in cases of inhomogeneous geological conditions with weak layers and a fold zone.
Conclusions
The Hybrid Method has been newly developed for the inhomogeneous geology of weak layers, toppling and the fold zone at the Nam Ngiep 1 Hydropower Project. The method is literally "hybrid" as it garners the economical superiority of the GIN Method, in that it enables the use of a single mix proportion, and the technical superiority of the Conventional Method, in that the individual design pressure in each stage is based on the CGP obtained from water pressure tests. The following are the conclusions of this study.
-The GIN Method is the economically preferred method for simple geological site conditions.
-However, in situations with inhomogeneous geological conditions with weak layers, the Hybrid Method is economically superior to the GIN/Modified GIN Method under the same technical criteria.
-CGP should be correctly determined by conducting water pressure tests for the inhomogeneous geotechnical structures.
-It is believed that further simplification of the water pressure test procedure will make the Hybrid Method more economically competitive.
-An application boundary between the Modified GIN and the Hybrid Method could be specified by the geological conditions with and without weak layers. Further studies are recommended to specify the boundaries between the GIN Method and the Modified GIN Method, and between the Hybrid Method and the Conventional Method.
The proposed categories of grouting methods are summarized in Fig. 24. As mentioned in this paper, as the geological features vary from simple to inhomogeneous, the applicable grouting method generally transitions from the GIN Method to the Conventional Method (left to right in the figure). In this study, the boundary between the Modified GIN and the Hybrid Method could be specified, as shown by the red line in the figure. Unfortunately, no data could be obtained for the use of the Modified GIN Method in the stages with conglomerate. Further studies are recommended to specify the boundaries between the GIN Method and the Modified GIN Method, and between the Hybrid Method and the Conventional Method, and to introduce a rigorous system for selecting the most appropriate grouting method for a clearly defined category of geological condition. For that purpose, preliminary grouting for a comparative study of the grouting methods at other dam construction sites is recommended, so as to accumulate such data.
It is expected that the number of dam constructions with inhomogeneous geological conditions in Southeast Asia will increase going forward, and it is anticipated that further sophistication of the Hybrid Method will improve its economic attractiveness. This, in turn, will have a positive impact on the economic feasibility and the construction period of the projects to which it is applied. It is desirable that a simpler and quicker method than the elaborate water pressure tests be developed soon.
"Geology"
] |
Molecule-photon interactions in phononic environments
Molecules constitute compact hybrid quantum optical systems that can interface photons, electronic degrees of freedom, localized mechanical vibrations and phonons. In particular, the strong vibronic interaction between electrons and nuclear motion in a molecule resembles the optomechanical radiation pressure Hamiltonian. While molecular vibrations are often in the ground state even at elevated temperatures, one still needs to get a handle on decoherence channels associated with phonons before an efficient quantum optical network based on opto-vibrational interactions in solid-state molecular systems could be realized. As a step towards a better understanding of decoherence in phononic environments, we take here an open quantum system approach to the non-equilibrium dynamics of guest molecules embedded in a crystal, identifying regimes of Markovian versus non-Markovian vibrational relaxation. A stochastic treatment based on quantum Langevin equations predicts collective vibron-vibron dynamics that resembles processes of sub- and superradiance for radiative transitions. This in turn leads to the possibility of decoupling intramolecular vibrations from the phononic bath, allowing for enhanced coherence times of collective vibrations. For molecular polaritonics in strongly confined geometries, we also show that the imprint of opto-vibrational couplings onto the emerging output field results in effective polariton cross-talk rates for finite bath occupancies.
I. INTRODUCTION
Molecules are natural quantum mechanical platforms where several atoms are interlinked via electronic bonds. The inherent coupling between the electronic transitions at optical frequencies and the mechanical nuclear motions (vibrons) at terahertz frequencies renders molecular systems ideal for the realization of quantum optomechanical effects. This is however different from the radiation pressure coupling mechanism in macroscopic systems, as optomechanical interactions in molecules intrinsically occur in a hybrid fashion involving a two-step process of photon-electron and vibron-electron (vibronic) interactions [1][2][3][4]. The vibronic coupling resembles the radiation pressure Hamiltonian (via a boson-spin replacement) which can be in the strong coupling regime since the strength of the coherent coupling can be comparable to the vibrational frequency. At cryogenic temperatures (e.g., at T ∼ 4 K), molecular vibrations are in their quantum ground state thus circumventing usual complications arising from additional optical cooling requirements [5]. Moreover, naturally occurring or engineered differences in the curvatures of the ground and excited state potential surfaces of the molecular electronic orbitals can lead to the direct generation of non-classical squeezed vibrational wavepackets [6]. These aspects suggest that molecular systems offer natural platforms, where one can exploit the inherent opto-vibrational coupling as a quantum resource.
When molecules couple to their condensed-matter environment, e.g. in the solid state, the mechanical modes of localized intramolecular vibrations (vibrons) are augmented by collective delocalized vibrational excitations of the host material (phonons), which allow for electron-phonon (polaron) couplings. In practice, coupling to a large number of phonon modes makes the study of molecular vibrations in the solid state notoriously challenging. Some of the challenges can be tamed under cryogenic conditions where experiments manage to reduce phonon coupling on the so-called zero-phonon line (ZPL) of the transition between |g, n_ν = 0⟩ and |e, n_ν = 0⟩ sufficiently to reach its natural linewidth limit. This can be verified in ensemble measurements, e.g. via hole burning, or in single-molecule spectroscopy [7]. A good example of an experimental platform is provided by dibenzoterrylene (DBT) molecules embedded in anthracene crystals [see Fig. 1(a)], exhibiting a lifetime-limited linewidth and near-unity radiative yields at cryogenic temperatures [8][9][10][11][12]. However, even if vibrational spectroscopy at the single-molecule level is readily accessible in the laboratory [13,14], a quantitative understanding of the couplings between the molecular vibrational modes and their internal and external degrees of freedom is still largely missing. In particular, a detailed study of decoherence sources is necessary. An open quantum system approach, such as employed in our treatment, can shed light onto a few aspects of coherent and incoherent vibrational dynamics and onto the light-matter interactions in the presence of vibrons, phonons and cavity-localized photon modes. Our formalism makes use of quantum Langevin equations which allows us to follow the evolution of system operators such as the electronic coherence and vibrational quadratures and to derive analytical results for the time dynamics of both expectation values and two-time correlations (needed for the computation of emission and absorption spectra). We find that closely spaced molecules can experience collective vibrational relaxation, an effect similar to the sub- and superradiance of quantum emitters in the electromagnetic vacuum. This can be exploited to decouple collective two-molecule vibrational states from the decohering phononic environment leading to the possibility of coherently mapping motion onto light and vice versa. In addition, at the level of the pure light-matter interface, coupling to confined optical cavity modes can increase the oscillator strength of the molecule by effectively reducing vibronic couplings [12].
Our formalism also allows us to treat problems relevant to experiments in cavity quantum electrodynamics with molecules, where standard concepts such as strong coupling or the Purcell effect can suffer important modifications once couplings between electronic transitions and vibrations are taken into account. To this end, we make use of analytical tools based on quantum Langevin equations [15] to account for an arbitrary number of vibron and phonon modes. Earlier theoretical works have either traced out the typically fast vibrational degrees of freedom [16,17], used limited numerical simulations, or focused mostly on aspects such as vibrational relaxation in solids [18][19][20][21], electron-phonon and electron-vibron couplings [22][23][24], temperature dependence of the zerophonon linewidth [25,26] and anharmonic effects [27,28].
However, it should also be borne in mind that the relevance of our treatment is not restricted to the physical system considered here as very similar effects also occur in related solid-state emitters such as quantum dots or vacancy centers in diamond. The coupling of such systems to photonic nanostructures has been studied quite extensively over the last years [29][30][31][32][33][34][35][36]. There is, furthermore, a general current interest in impurities interacting with a quantum many-body environment, such as molecular rotors immersed in liquid solvents [37,38], Rydberg impurities in quantum gases [39] or magnetic polarons in the Fermi-Hubbard model [40]. Our treatment can then be understood as a general model for the coupled dynamics of spin systems to many, possibly interconnected, bosonic degrees of freedom as illustrated in Fig. 1(c).
A. General considerations
We develop here a complex model where all interactions between light, electronic transitions, vibrons and phonons are taken into account for finite temperatures. We derive general expressions for the light scattered by a molecular system (of one or more molecules) embedded in a solidstate environment outside or inside an optical cavity [see Fig. 1(a)]. As schematically illustrated in Fig. 1(c) the light (mode a) couples to electronic transitions (Pauli operator σ) via a Tavis-Cummings Hamiltonian. These are in turn affected by the vibronic coupling to one or more molecular vibrations which leads to the red-shifted Stokes lines in emission [cf. Fig. 1(b)]. We focus here on a single mode with relative motion coordinate Q for the sake of simplicity. The solid-state matrix supports a multitude of bosonic phonon modes with displacements q k (k from 1 to N ) which directly modify the electronic transition leading to the occurrence of phonon wings in the emission and absorption spectra. In addition, molecular vibrons can deposit energy into phonons as a displacement-displacement interaction, leading to an irreversible process of vibrational relaxation. We will start with the description of the vibrational relaxation process in Sec. III since all subsequent effects will depend on this mechanism. We show that linear phonon-vibron couplings can already result in irreversible vibrational relaxation involving both single-and multi-phonon processes. Moreover, such dynamics can be either Markovian or non-Markovian, depending on the relation between the vibrational frequency and the maximum phonon frequency. For closely spaced molecules, the same formalism allows for the derivation of collective relaxation dynamics exhibiting effects similar to super/subradiance in dense quantum emitter systems. Classical light driving is included in Sec. IV by calculating absorption spectra for coherently driven molecules under the influence of vibronic and phononic couplings as well as thermal effects. We show that interestingly, the vibronic and electron-phonon couplings do not cause any dephasing dynamics even at high temperatures, i.e. the zero-phonon line is mainly lifetime-limited in the linear coupling model. Following a quantum Langevin equations approach, we derive absorption spectra for coherently driven molecules under the influence of vibronic and phononic couplings as well as thermal and finite-size effects. Finally, for molecular polaritonic systems in a cavity setting, we derive transmission functions of the cavity field (see Sec. V), showing the reduction of the vacuum Rabi splitting with increasing vibronic and phononic coupling, as well as phononic signatures in the Purcell regime. The effect of temperature on the asymmetry of cavity polaritons is quantified by deriving effective rate equations for the polariton cross-talk dynamics.
B. Hamiltonian formulation
We consider one molecule (later we extend to more than one) embedded in a bulk medium comprised of N unit cells. Our perturbative assumption is that, since the bulk is large, the guest molecule does not significantly change the overall modes of the bulk. The electronic degrees of freedom of the molecule are denoted by states |g⟩ and |e⟩, with the former at zero energy and the latter at ω₀ (we set ℏ to unity), corresponding to a lowering operator σ = |g⟩⟨e|. We assume only a pair of ground and excited potential landscapes with identical curvature along the nuclear coordinate and make the harmonic approximation, where the motion of the nuclei can be described by a harmonic vibration at frequency ν and bosonic operators b and b†, satisfying the usual bosonic commutation relation [b, b†] = 1.
From the displacement between the minima of the two potential landscapes one obtains a vibronic coupling quantified by a dimensionless factor λ (the square root of the Huang-Rhys parameter) and described by a standard Holstein Hamiltonian [41] of the form √2λν Q σ†σ, where Q = (b + b†)/√2 is the dimensionless position operator of the vibronic degree of freedom (the momentum quadrature is given by P = i(b† − b)/√2). The Holstein coupling also leads to a shift of the electronic excited-state energy to ω₀ + λ²ν, which is removed by the diagonalizing polaron transformation U_el-vib. The polaron transformation U_el-vib = e^{i√2λP σ†σ} = |g⟩⟨g| + B†|e⟩⟨e| can be seen as a conditional displacement affecting only the excited state, where B† = e^{i√2λP} is the inverse displacement operator for the molecular vibration, creating a coherent state when applied to the vacuum: B†|0_ν⟩ = |−λ⟩. The Hamiltonian in Eq. (1) does not include nonadiabatic vibronic coupling, which would lead to off-diagonal coupling terms (proportional to σ_x and σ_y) that could drive electronic transitions. Such nonadiabatic terms become relevant if two potential surfaces come close to each other [42]. In Appendix H we briefly discuss how one could treat such terms in the Langevin equations of motion. One could also consider a difference in curvatures between the ground (frequency ν) and excited-state (frequency ν̄) potential surfaces, which would result in a quadratic coupling term H^quad_el-vib = βQ²σ†σ with a squeezing parameter of the vibrational wavepacket β = (ν̄² − ν²)/(2ν). We will assume that the vibron quickly thermalizes with the environment (via the fast mechanism of vibron-phonon coupling described below) at temperature T and achieves a steady-state thermal occupancy n̄ = [exp(ν/(k_B T)) − 1]⁻¹. The electronic transition is coupled to the quantum electromagnetic vacuum, which opens a radiative decay channel with collapse operator σ via spontaneous emission at rate γ. For a general collapse operator O with rate γ_O we model the dissipative dynamics via a Lindblad term, L[O]ρ = γ_O(OρO† − O†Oρ/2 − ρO†O/2), applied as a superoperator to the density operator ρ of the system. The vibronic coupling leads to the presence of Stokes lines in emission and to a mismatch between the molecular emission and absorption profiles. Following the stochastic quantum evolution of a polaron operator σ̃ = B†σ (a vibrationally dressed Pauli operator for the electronic transition), analytical solutions for the absorption and emission spectra of the molecule can be derived in the presence of vibrons [15].
In addition to the coupling to the internal vibrations of its nuclei, the electronic transition is also modified through the coupling to the delocalized phonon modes of the crystal. We describe the bulk modes as a bath of independent harmonic oscillators with bosonic operators c_k and c_k† and frequencies ω_k. The electron-phonon coupling (see Appendix A for derivations) can then be cast in the same Holstein form as for the vibron, where the displacement operators refer to each individual collective phonon mode, q_k = (c_k + c_k†)/√2 (the momentum operator is given by p_k = i(c_k† − c_k)/√2). The coupling factors λ_k depend on the specifics of the molecule and the bulk crystal. Similarly to the vibronic case, the electron-phonon interaction can be diagonalized by means of a polaron transformation U_el-phon = |g⟩⟨g| + D†|e⟩⟨e|, where D† = Π_k e^{i√2λ_k p_k} is the product of all phonon-mode displacements, signifying a collective transformation for all phonon modes. We will assume that the bulk is kept at a constant temperature and is always in thermal equilibrium, with the individual mode thermal average occupancies amounting to n̄_k = [exp(ω_k/(k_B T)) − 1]⁻¹. The coupling to the phonons gives rise to a multitude of sidebands in the absorption and emission spectra, which coalesce into a phonon wing that becomes especially important at elevated temperatures. We will then follow the temporal dynamics of a collective polaron operator σ̃ = D†B†σ, which includes both vibronic and electron-phonon couplings.
Phonons also affect the dynamics of the vibrational mode. Modification of the bond length associated with the molecular vibration leads to a force on the surrounding crystal (and vice versa), giving rise to a displacement-displacement coupling [Eq. (3)]. The coupling coefficients α_k are explicitly derived in Appendix A. In the limit of large bulk media, this Hamiltonian can lead to effectively irreversible dynamics, i.e. a vibrational relaxation effect. This is the Caldeira-Leggett model, widely treated in the literature, as it leads to a non-trivial master-equation evolution which cannot be expressed in Lindblad form and is cumbersome to solve analytically [43][44][45]. To circumvent this difficulty, we follow the formalism of Langevin equations under the concrete conditions imposed by the one-dimensional situation considered here. We are then in a position to identify the Markovian versus non-Markovian regimes of vibrational relaxation conditioned on the phonon spectrum, namely on the maximum phonon frequency ω_max of the system. We can additionally account for a finite phonon lifetime by including a decay rate γ_k^ph for each phonon mode. To perform spectroscopy, we add a laser drive modeled as H_ℓ = iη_ℓ(σ† e^{−iω_ℓ t} − σ e^{iω_ℓ t}) with amplitude η_ℓ. We will assume weak driving such that the assumption of thermal equilibrium remains valid. Furthermore, to treat various aspects of molecular polaritonics, we describe the dynamics of a hybrid light-matter platform by adding the coupling of a confined optical mode at frequency ω_c to the electronic transition via a Jaynes-Cummings interaction, H_JC = g(a†σ + aσ†). The bosonic operator a satisfies the commutation relation [a, a†] = 1 and the coupling is given by g = [d_eg² ω_c/(2ϵ₀V)]^{1/2}, where d_eg is the electronic transition dipole moment, V is the quantization volume and ϵ₀ is the vacuum permittivity. Spectroscopy of the cavity-molecule system can be done by adding a cavity pump H_ℓ = iη_c(a† e^{−iω_ℓ t} − a e^{iω_ℓ t}) with amplitude η_c. The cavity loss is modeled as a Lindblad process with collapse operator a and rate κ. In standard cavity QED, depending on the magnitude of the coherent exchange rate g relative to the loss rates κ and γ, one progressively advances from a strong-cooperativity Purcell regime to a strong-coupling regime where polaritons emerge. We will mainly focus on analytical derivations of the effects of electron-vibron and electron-phonon couplings at finite temperatures on the emergence of a spectral splitting in the strong-coupling regime, as well as on the transmission in the Purcell regime.
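As a concrete illustration of how the pieces named above fit together, the following minimal sketch assembles the electronic transition, a single vibron with the Holstein term, and the cavity with its Jaynes-Cummings coupling using the QuTiP library. All truncation sizes, parameter values and dissipation rates are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import qutip as qt

N_vib, N_cav = 6, 4                    # Fock-space truncations (assumed)
w0, nu, lam = 1.0, 0.05, 0.7           # transition, vibron freq., Huang-Rhys sqrt
wc, g = 1.0, 0.01                      # cavity frequency and JC coupling

sm = qt.tensor(qt.destroy(2), qt.qeye(N_vib), qt.qeye(N_cav))   # sigma
b  = qt.tensor(qt.qeye(2), qt.destroy(N_vib), qt.qeye(N_cav))   # vibron
a  = qt.tensor(qt.qeye(2), qt.qeye(N_vib), qt.destroy(N_cav))   # cavity

Q = (b + b.dag()) / np.sqrt(2)         # dimensionless vibron position
H = (w0 * sm.dag() * sm + nu * b.dag() * b
     + np.sqrt(2) * lam * nu * Q * sm.dag() * sm    # Holstein vibronic term
     + wc * a.dag() * a + g * (a.dag() * sm + a * sm.dag()))  # JC term

# Dissipation as Lindblad collapse operators: radiative decay and cavity loss
gamma, kappa = 1e-4, 5e-3              # assumed rates
c_ops = [np.sqrt(gamma) * sm, np.sqrt(kappa) * a]
```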
III. VIBRATIONAL RELAXATION
A decay path for the molecular vibration stems from its coupling to the bath of phonon modes supported by the bulk. While it is generally agreed that nonlinear vibron-phonon couplings contribute to the vibrational relaxation process, especially in the higher-temperature regime [46], we restrict our treatment to a coupling of the bilinear form of Eq. (3). To understand the physical picture, we first show in perturbation theory that the bilinear Hamiltonian leads to a competition between fundamental processes involving the decay of a vibrational quantum into superpositions of either single-phonon states or many phonons adding up in energy to the initial vibrational energy. Afterwards we write a set of coupled deterministic equations of motion for the vibrational quadratures of the molecule, {Q, P}, and the collective normal modes of the crystal vibrations, {q_k, p_k}. This allows for the elimination of the phonon degrees of freedom and the derivation of an effective Brownian-noise stochastic evolution model for the molecular vibrations. We illustrate regimes of Markovian and non-Markovian dynamics and show that an equivalent approach tailored to two molecules can lead to collective vibrational relaxation strongly dependent on the molecule-molecule separation.
A. Fundamental vibron-phonon processes
Let us consider an initial state containing a single vibrational quantum, |1_ν, vac_ph⟩, that evolves according to the vibron-phonon bilinear Hamiltonian of Eq. (3). We aim to follow the fundamental processes by which the energy of the vibration is deposited into superpositions of single- or multi-phonon states. We move to the interaction picture by removing the free energy with U = e^{iH₀t}, where the free Hamiltonian is H₀ = νb†b + Σ_{k=1}^N ω_k c_k†c_k. The formal solution of the Schrödinger equation can then be written as a Dyson series, |ϕ(t)⟩ = T exp[−i∫₀^t dτ H̃(τ)] |1_ν, vac_ph⟩, where H̃(τ) is the time-dependent interaction-picture Hamiltonian. Evaluating the first term in the series (see Appendix D for details) leads to resonant scattering (ω_k = ν) into single-phonon states |0_ν, 1_k⟩ at the perturbative rate α_k t, as well as off-resonant scattering (ω_k ≠ ν) into states |0_ν, 1_k⟩ with probability inversely proportional to the detuning ω_k − ν. We note that for ν > ω_max, only off-resonant transitions are possible. The next order of perturbation theory, however, leads to multi-phonon processes where resonant transitions to states containing three phonons, |0_ν, 1_{j1}, 1_{j2}, 1_{j3}⟩, become possible. The resonance condition reads ω_{j1} + ω_{j2} + ω_{j3} = ν for j1 ≠ j2 ≠ j3, and its amplitude is a sum over terms α_{j1}α_{j2}α_{j3} t/[(ω_{j2} + ω_{j3})(ω_{j3} − ν)]. These terms are small with respect to the rates of the resonances arising at first order for ν ≤ ω_max, but taken together they are comparable to the off-resonant single-phonon scattering terms.
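The first-order scattering probabilities quoted above follow the standard form of time-dependent perturbation theory; the short function below evaluates them, exhibiting the resonant growth as (α_k t)² and the off-resonant suppression by the detuning. Overall prefactor conventions may differ from the paper's Appendix D, and the couplings α_k are assumed inputs.

```python
import numpy as np

def first_order_prob(alpha_k, omega_k, nu, t):
    """First-order perturbation-theory estimate of the probability of
    scattering the single vibrational quantum into phonon mode k:
    P_k(t) = alpha_k^2 sin^2((omega_k - nu) t / 2) / ((omega_k - nu)/2)^2.
    On resonance this grows as (alpha_k * t)^2; off resonance it is
    suppressed by the detuning omega_k - nu."""
    delta = omega_k - nu
    if np.isclose(delta, 0.0):
        return (alpha_k * t) ** 2
    return alpha_k**2 * np.sin(delta * t / 2) ** 2 / (delta / 2) ** 2
```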
B. Effective Brownian noise model
Formal elimination of the phonon modes (see Appendix B for details) leads to an effective Brownian-motion equation for the momentum of the vibrational mode, Ṗ = −ν̃Q − (Γ ∗ P) + ξ, while the displacement follows the unmodified equation Q̇ = νP. The effect of the phonon bath is twofold: (i) it can shift the vibrational frequency to ν̃ = ν − ν_s, and (ii) it leads to a generally non-Markovian decay kernel expressed as a convolution, (Γ ∗ P)(t) = ∫₀^∞ dt′ Γ(t − t′)P(t′). For the particular case considered in the Appendix, the crystal-induced frequency shift ν_s is set by k₀, the spring constant of the host crystal, k_M, the spring constant of the vibron, and ∆k, a measure of the coupling of the molecule's relative motion to the bulk. For a discrete system, the expression for the damping kernel, Γ(t) = Σ_k α_k² (ν/ω_k) cos(ω_k t) Θ(t), involves a sum over all phonon modes, which can be turned into a closed expression in terms of Bessel functions in the continuum limit (N → ∞). Here J_n(x) denotes the n-th order Bessel function of the first kind, Θ(t) stands for the Heaviside function, and Γ_m = 2νν_s/ω_max is the decay rate in the Markovian limit. A similar expression is known from the Rubin model [47], where one considers the damping of a single mass defect in a 1D harmonic crystal. The zero-average Langevin noise term ξ is determined by the initial conditions of the phonon bath and can be expressed in discrete form in terms of the initial phonon quadratures, ξ(t) = −Σ_k α_k [q_k(0) cos(ω_k t) + p_k(0) sin(ω_k t)]. We can treat Eq. (6) more easily in Fourier space, where the convolution becomes a product and the Fourier transform of the non-Markovian decay kernel generally contains a real and an imaginary part, Γ(ω) = Γ_r(ω) + iΓ_i(ω). Figure 2(a) shows a plot of Γ_r(ω) and Γ_i(ω), where we can interpret the imaginary part as a frequency shift which is largest around ω = ±ω_max. Together with the transformed equation for the position quadrature, −iωQ(ω) = νP(ω), we then obtain an algebraic set of equations which allows us to calculate any kind of correlations for the molecular vibration, both in the time and frequency domains. This will be needed later on for computing the optical response of the molecule.
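The damping kernel and the resulting response can be explored numerically. The sketch below builds the discrete kernel Γ(t) = Σ_k α_k²(ν/ω_k)cos(ω_k t) from an assumed 1D-chain dispersion and coupling scaling (illustrative choices, not the derivations of the paper's Appendix A), Fourier-transforms it, and evaluates the susceptibility reconstructed from the two quadrature equations above.

```python
import numpy as np

N, w_max = 400, 1.0
q = np.pi * (np.arange(N) + 0.5) / N
w_k = w_max * np.sin(q / 2)              # assumed 1D-chain dispersion
alpha = 0.05 * np.sqrt(w_k / N)          # assumed coupling scaling

nu = 0.9 * w_max                         # vibron near the band edge
nu_s = np.sum(alpha**2 / w_k)            # induced frequency shift (discrete form)

t = np.linspace(0.0, 200.0, 4000)
# discrete damping kernel Gamma(t) = sum_k alpha_k^2 (nu/w_k) cos(w_k t)
Gamma_t = np.sum((alpha**2 * nu / w_k)[:, None] * np.cos(np.outer(w_k, t)), axis=0)

# one-sided Fourier transform Gamma(w) = int_0^inf dt e^{i w t} Gamma(t)
omega = np.linspace(-1.5 * w_max, 1.5 * w_max, 601)
dt = t[1] - t[0]
Gamma_w = (np.exp(1j * np.outer(omega, t)) @ Gamma_t) * dt

# susceptibility Q(w) = chi(w) xi(w), obtained by combining
# -i w Q = nu P with -i w P = -(nu - nu_s) Q - Gamma(w) P + xi
chi = nu / ((nu - nu_s) * nu - omega**2 - 1j * omega * Gamma_w)
```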
C. Markovian versus non-Markovian regimes
The Markovian limit is achieved when the vibrational frequency lies well within the phonon spectrum, ω_max ≫ ν, such that Γ(ω) becomes flat in frequency space: Γ(ω) = Γ_m. In this case the memory kernel tends to a δ-function, Γ(t) → 2Γ_m δ(t), with the convention Θ(0) = 1/2, so that (Γ ∗ P)(t) = Γ_m P(t). In the continuum limit, the noise correlations at different times are ⟨ξ(t)ξ(t′)⟩ = ∫ dω/(2π) e^{−iω(t−t′)} S_th(ω), where the noise spectrum S_th(ω) = Γ_r(ω)(ω/ν)[coth(βω/2) + 1] is expressed similarly to the standard thermal spectrum of a harmonic oscillator in thermal equilibrium, in terms of the inverse temperature β = (k_B T)^{−1}. The difference lies in the frequency dependence of the real part of the decay rate function, where the Heaviside functions provide a natural cutoff of the spectrum at ±ω_max, while in the time domain the noise is only δ-correlated at high temperatures. This property is helpful for analytical estimation of the molecular absorption and emission in the presence of non-Markovian vibrational relaxation. In frequency space, the response of the vibron to the input noise of the phonon bath is characterized by the susceptibility χ(ω), defined via P(ω) = χ(ω)ξ(ω) with χ(ω) = ω/[ωΓ(ω) + i(ν̃ν − ω²)]. In Fig. 2(b), we plot |χ(ω)|² for the two cases ω_max ≫ ν (Markovian limit) and ω_max ≈ ν (non-Markovian regime) for identical Γ_m. While in the Markovian regime the susceptibility has two approximately Lorentzian sidebands with linewidth Γ_m centered around ±ν, the finite frequency cutoff in the non-Markovian case leads to an unconventional lineshape with reduced linewidth and a slight frequency shift. In the time domain, we can simulate the microscopic classical equations of motion for a large number of phonon modes and compare the results to the standard Markovian limit obtained from Brownian motion theory; this is illustrated in Fig. 2, which also includes the non-Markovian regime. At low temperatures β^{−1} ≪ ν, the sideband at −ν is suppressed and the thermal spectrum can be approximated as S_th(ω) = [2Γ_r(ω)ω/ν]Θ(ω). This two-time correlation function of the momentum quadrature will be required later in the calculation of molecular spectra in Section IV.
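The following sketch (our own construction) compares the two regimes numerically. The explicit form of χ(ω) used here is our reading of the Fourier-transformed Langevin pair Q̇ = νP, Ṗ = −ν̃Q − Γ ∗ P + ξ, and Γ_i(ω) = Γ_m ω/ω_max is the linear-in-frequency form quoted later in Section IV; all parameter values are illustrative.

```python
import numpy as np

def Gamma_of_omega(w, Gamma_m, omega_max):
    """Assumed kernel: Gamma_r with the sqrt cutoff, Gamma_i linear in omega."""
    inside = np.abs(w) < omega_max
    Gr = np.where(inside, Gamma_m * np.sqrt(np.clip(omega_max**2 - w**2, 0, None)) / omega_max, 0.0)
    Gi = np.where(inside, Gamma_m * w / omega_max, 0.0)
    return Gr + 1j * Gi

def chi(w, nu, Gamma_m, omega_max):
    # P(omega) = chi(omega) xi(omega), with nu~ ~ nu for this illustration
    return w / (w * Gamma_of_omega(w, Gamma_m, omega_max) + 1j * (nu * nu - w**2))

w = np.linspace(-3, 3, 6001)
nu, Gm = 1.0, 0.05
markov = np.abs(chi(w, nu, Gm, omega_max=50.0))**2   # flat-kernel (Markovian) limit
nonmk  = np.abs(chi(w, nu, Gm, omega_max=1.1))**2    # cutoff close to resonance
print("peak position (Markovian):    ", w[np.argmax(markov)])
print("peak position (non-Markovian):", w[np.argmax(nonmk)])
```

The printed peak positions reproduce the qualitative statement above: the Markovian sidebands sit at ±ν, while the finite cutoff shifts and narrows the non-Markovian resonance.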
D. Collective vibrational effects
A collection of impurity molecules sitting close to each other within the same crystal will see the same phonon bath and can, therefore, undergo a collective vibrational relaxation process. This is similar to the phenomenon of subradiance/superradiance of quantum emitters commonly coupled to an electromagnetic environment, where the rate of photon emission from the whole system can be smaller or larger than that of an individual, isolated emitter. In order to elucidate this aspect, we follow the approach sketched above, i.e. we eliminate the phonon modes to obtain a set of coupled Langevin equations for two molecules situated 2j sites apart from each other (given explicitly in Appendix C). The mutually induced (small) energy shift Ω = Σ_k α_{k,1}α_{k,2}/ω_k and the mutual damping kernels Γ_12(t) = ν_2 Σ_k (α_{k,1}α_{k,2}/ω_k) cos(ω_k t)Θ(t) and Γ_21(t) = ν_1 Σ_k (α_{k,1}α_{k,2}/ω_k) cos(ω_k t)Θ(t) are strongly dependent on the intermolecular separation 2j (see Appendix C for the full expressions), whereas the individual decay terms Γ_1 and Γ_2 are given by the expressions derived previously. Importantly, the noise terms ξ_1 and ξ_2 are now also not independent of each other but correlated according to a separation-dependent expression specified in Appendix C. In the continuum limit N → ∞, the collective interaction kernels can be approximated with the aid of higher-order Bessel functions (assuming identical molecules, ν_1 = ν_2, and consequently Γ_12(t) = Γ_21(t)) as Γ_12(t) ≈ Γ_m J_{4j}(ω_max t)/t Θ(t). In Fig. 3(a), we plot the collective decay kernel as a function of time and intermolecular separation 2j. The collective effects do not occur instantaneously but in a highly time-delayed fashion [cf. Fig. 3(b)]. We can interpret the collective interaction as an exchange of phonon wavepackets between the two molecules, where the wavepackets travel with the group velocity v_g = ∂ω/∂q of the crystal (lattice constant a) at a maximum speed v_g^max ≈ aω_max/2 (the high-frequency components towards the band edge are slower). This leads to an approximate time of τ = 4j/ω_max for a wavepacket to propagate from one molecule to the other.
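A short sketch (our own illustration) of the time-delayed character of the collective kernel. We use the continuum form Γ_12(t) ≈ Γ_m J_{4j}(ω_max t)/t suggested by the f_{4j} expression in Appendix C; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, general order

omega_max, Gamma_m = 1.0, 0.05
t = np.linspace(1e-6, 400.0, 40001)
for j in (2, 5, 10):
    G12 = Gamma_m * jv(4 * j, omega_max * t) / t      # collective kernel (assumed form)
    t_peak = t[np.argmax(np.abs(G12))]                # onset of the first (largest) lobe
    print(f"j={j:2d}: kernel peaks near t={t_peak:6.1f}, "
          f"wavepacket estimate 4j/omega_max={4 * j / omega_max:6.1f}")
```

The printed peak times track the propagation estimate τ ≈ 4j/ω_max quoted above, i.e. the kernel stays essentially zero until the phonon wavepacket has crossed the separation.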
The collective interaction will also lead to a modification of the vibrational lifetimes of the molecules, which we describe in the following. To this end, one can again proceed with a Fourier analysis of Eqs. (13). The expression for the non-Markovian collective interaction kernel in frequency space (between −ω_max and +ω_max) can be written in terms of the Chebyshev polynomials of the first (T_n) and second kind (U_n) (see Appendix C). We are interested in the real part of this expression, which gives rise to a collectively induced modification of the vibrational lifetime, while the imaginary part again corresponds to a frequency shift. In Figs. 3(c) and (d) we plot the real and imaginary parts of Γ_12(ω) for small distances j, respectively.
In the Markovian limit ω_max ≫ ν everything becomes flat in frequency space and one can approximate Γ_12(ω) = Γ_12(0) = Γ_m and consequently (Γ_12 ∗ P_i) ≈ Γ_m P_i with i = {1, 2}. A diagonalization can be performed by moving to the collective quadratures P_+ = P_1 + P_2 and P_− = P_1 − P_2 (and identically for the positions), for which the equations of motion decouple. While one of the collective modes undergoes relaxation at an increased rate 2Γ_m, the orthogonal collective mode can eventually be decoupled from the phononic environment. Of course, as the derivation we have performed is restricted to one-dimensional crystals, it would be interesting to explore this effect in three-dimensional scenarios, where both longitudinal and transverse phonon modes have to be considered, with effects stemming from the molecular orientation as well as the influence of anharmonic potentials. A recent theoretical work also discusses phonon-bath-mediated interactions between two molecular impurities immersed in nanodroplets with respect to the rotational degrees of freedom of the molecules [48].
IV. FUNDAMENTAL SPECTRAL FEATURES
Let us now consider a molecule driven by a coherent light field. We will make use of and extend the formalism of Ref. [15] to compare the effects of Markovian versus non-Markovian vibrational relaxation, the phonon imprint on spectra, and temperature effects. To derive the absorption profile of a laser-driven molecule, one can compute the steady-state excited-state population P_e = ⟨σ†σ⟩ = η_ℓ(⟨σ⟩ + ⟨σ⟩*)/(2γ). The average steady-state dipole moment ⟨σ⟩ can be written formally as an integral over the vibrational displacement correlations (note that we assume weak driving conditions η_ℓ ≪ γ, such that the laser drive only probes the linear response of the dipole). The important quantity to be estimated is the correlation function of the displacement operators of the molecular vibration, ⟨B(t)B†(t′)⟩, which is fully characterized by the Huang-Rhys factor λ² and the second-order momentum correlation functions. The stationary correlation ⟨P(t)²⟩ = ⟨P²⟩ = 1/2 + n̄ includes the temperature of the environment and does not depend on the details of the decay process. The two-time correlations ⟨P(t)P(t′)⟩ (and consequently the vibrational linewidths of the resulting optical spectrum) are crucially determined by the details of the dissipation model derived in Section III. In order to capture the non-Markovian character of the vibrational relaxation, we extend the method used in Ref. [15] by computing correlations in the Fourier domain and then transforming to the time domain.
A. The non-Markovian vibrational relaxation regime
Let us first consider the imprint of the particularities of the vibrational relaxation process onto the absorption and emission spectra when molecule-light interactions are taken into account. For the calculation of the momentum correlation function ⟨P(t)P(t′)⟩ one has to evaluate the integral in Eq. (12), where the susceptibility weighted with the thermal spectrum is given by the general non-Markovian expression, with Γ_i(ω) = Γ_m ω/ω_max between −ω_max and +ω_max. As discussed in the previous section, the real part of Γ(ω) determines the decay rate while the imaginary part leads to a frequency shift. Generally, performing the integral over the expression in Eq. (19) is difficult, since the line shapes can be very far from simple Lorentzians. However, assuming a good oscillator (Γ_m ≪ ν) and consequently a sharply peaked susceptibility that only picks up frequencies around the vibrational resonance, we can obtain an effective modified frequency ν′ and decay rate Γ′ in the non-Markovian regime (assuming however ω_max > ν), with ν′ = [ν² + Γ_i(ν)ν]^{1/2} and Γ′ = Γ_r(ν′). By expanding Eq. (19) around the poles of the denominator, ω = ±ν′ + δ, and assuming |δ| ≪ |ν′|, one can then calculate the temperature-dependent momentum correlation function in the non-Markovian regime as a function of the time delay τ = t − t′. This allows for an analytical evaluation of the integral in Eq. (17) (see Appendix G for the detailed calculation) and leads to the steady-state excited-state population of Eq. (21), where we introduced the weights L(n) = e^{−λ²(1+2n̄)} λ^{2n}/n! and B(n, l) = C(n, l)(n̄ + 1)^{n−l} n̄^l, with C(n, l) the binomial coefficient. One immediately obtains the result for the Markovian limit by replacing ν′ → ν and Γ′ → Γ_m. Figures 4(a),(b) show a comparison between the momentum correlation function and the resulting steady-state population (normalized by the steady-state population of a resonantly driven two-level system, P_0 = η_ℓ²/γ²) in the Markovian and non-Markovian regimes (for fixed Γ_m). We can see that non-Markovianity leads to modified spectral positions and linewidths of the vibronic sidebands, while the ZPL is not affected by the dissipation process. The denominator of Eq. (21) contains a sum over Lorentzians, with a series of blue-shifted lines with index n arising from the electron-vibration interaction, weighted by a Poissonian distribution. Thermal occupation of the vibrational states can counteract this effect, however, by leading to red-shifted lines in absorption [see Fig. 4(c)] with index l, weighted by a binomial distribution. As shown in Fig. 4(d), for large vibrational relaxation rates Γ_m ≫ γ the sidebands are suppressed and absorption and emission of the molecule mostly occur on the ZPL transitions |g, m_ν⟩ ↔ |e, m_ν⟩. While at zero temperature the ZPL is solely determined by n = 0, at finite temperatures all terms with n = 2l can contribute to it.
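The double sum can be evaluated numerically, as in the following sketch (our own illustration). The Lorentzian centers at (n − 2l)ν′ relative to the ZPL are our assumption, chosen so that terms with n = 2l contribute to the ZPL as stated above; the overall normalization is likewise illustrative. Note that the weights are properly normalized: Σ_{n,l} L(n)B(n, l) = 1.

```python
import numpy as np
from math import comb, factorial

def P_e(delta, lam2, nbar, gamma, Gp, nup, eta=1e-3, nmax=12):
    """Sum of Lorentzians with weights L(n)*B(n,l), widths gamma + n*Gp/2 and
    assumed centers (n - 2l)*nup; delta is the laser detuning from the ZPL."""
    tot = np.zeros_like(delta)
    for n in range(nmax + 1):
        L = np.exp(-lam2 * (1 + 2 * nbar)) * lam2**n / factorial(n)
        for l in range(n + 1):
            B = comb(n, l) * (nbar + 1)**(n - l) * nbar**l
            w = gamma + n * Gp / 2
            c = (n - 2 * l) * nup
            tot += L * B * w / (w**2 + (delta - c)**2)
    return eta**2 * tot

delta = np.linspace(-3, 6, 2001)
spec = P_e(delta, lam2=0.5, nbar=0.3, gamma=0.01, Gp=0.2, nup=1.0)
print("ZPL / first blue sideband height:",
      spec[np.argmin(np.abs(delta))] / spec[np.argmin(np.abs(delta - 1.0))])
```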
An important quantity is the Franck-Condon factor f_FC, which measures the reduction of the ZPL intensity due to the coupling to internal vibrations. This factor is given by f_FC = e^{−λ²(1+2n̄)} and does not depend on the vibrational relaxation of the molecule. Using the fact that (n̄ + 1)/n̄ = e^{βν}, one can express Eq. (21) as a sum over just a single index in the limit 2λ²√(n̄(n̄ + 1)) ≪ 1 (see Appendix G for the derivation), with I_n(x) denoting the modified Bessel functions of the first kind and N̄ = √(n̄(n̄ + 1)). This expression is similar to the result known from standard Huang-Rhys theory for emission and absorption [49], but it now additionally includes the vibrational relaxation Γ′. The ZPL contribution (n = 0) at resonance is thus simply given by P_e = η_ℓ² f_FC/γ². The emission spectrum can be calculated from the Fourier transform of the two-time correlations ⟨σ†(τ)σ(0)⟩. Considering the decay of an initially excited molecule, one finds that the emission spectrum is simply the mirror image (with respect to the ZPL) of the absorption spectrum, which is why we restrict ourselves to the calculation of the absorption profile.

B. Phonon imprint on the spectra

So far we have only accounted for the coupling of the electronic transition to the intramolecular vibration and have neglected the effect of electron-phonon coupling. However, this can become a dominant mechanism at larger temperatures, where all acoustic and optical phonon modes are thermally activated (> 50 K) and the probability of a ZPL transition is very small. To include electron-phonon coupling, the expression for the steady-state dipole moment [cf. Eq. (17)] has to also account for the displacement of the electronic excited state caused by the phonons. Here, the coupling to phonons additionally leads to a renormalization of the electronic transition frequency, ω̃_0 = ω_0 − Σ_k λ_k² ω_k (polaron shift). The expression in Eq. (23) now jointly contains all of the effects: electron-phonon coupling, electron-vibron coupling and vibrational relaxation (through the correlation function ⟨B(t)B†(t′)⟩). Since we consider the phonon modes to be independent of each other, the displacement correlation function of the phonons can be factorized. Replacing the sum over k with an integral over ω in the continuum limit (and neglecting phonon decay, as it does not influence the spectra in the continuum limit) yields Eq. (24), where we have introduced the spectral density of the electron-phonon coupling J(ω) = Σ_k |λ_k ω_k|² δ(ω − ω_k) = n(ω)λ(ω)²ω², with n(ω) the density of states. In the one-dimensional derivation considered here we obtain a spectral density whose electron-phonon coupling constant λ^{1D}_{e-ph} is derived in Appendix E and depends, among other things, on the displacement of the crystal atoms upon excitation of the molecule, as well as on the spring constants between the molecule's atoms and the neighboring crystal atoms. Again, the cutoff at ω_max arises naturally from the dispersion of the crystal. In the continuum limit considered here, this spectral density would lead to a divergence of the integral in Eq. (24) due to the high density of low-frequency phonons, a well-known problem for 1D crystals [50,51]. This issue can be addressed by considering only a finite-sized 1D crystal with a minimum phonon frequency cutoff ω_min > 0. Alternatively, one can consider a spectral density stemming from a 3D density of states, for which the electron-phonon coupling constant λ^{3D}_{e-ph} carries units of s².
In Figs. 5(a) and (b) we plot the resulting absorption spectrum of the ZPL for 1D and 3D densities of states, whereby the exact shape of the phonon wing is determined by the spectral density function J(ω). While analytical expressions for the integral in Eq. (24) are difficult to obtain in the continuum case, we can express the absorption spectrum of the ZPL including the phonon sideband in terms of discrete lines, where the sum runs over all {n_k} = n_1, . . . , n_N and {l_k} = l_1, . . . , l_N. This can be seen as a generalization of the result in Eq. (21) to many modes, where the N phonon modes are indexed by k and the function L_k(n_k) accounts for the displacement of the excited state while the binomial distribution B_k(n_k, l_k) accounts for the thermal occupation of each mode. As one can see in Figs. 5(a) and (b), thermal occupation of the phonons leads to red-shifted phonon sidebands in absorption and eventually to a symmetric absorption spectrum around the zero-phonon line in the limit of large temperatures. Note that we did not explicitly include the phonon decay γ_k^ph here, as it does not influence the absorption spectra in the continuum limit (the phonon peaks overlap and are not resolved); however, one can easily account for a finite phonon lifetime by including it in the momentum correlation functions. Similarly to the Franck-Condon factor for vibrons, one defines the Debye-Waller factor f_DW = ⟨D†⟩² = exp[−∫_0^∞ dω J(ω)ω^{−2} coth(βω/2)], which measures the reduction of the ZPL intensity due to the scattering of light into phonons. In Fig. 5(c) we show the behavior of f_DW in the 3D case for different coupling strengths at low temperatures, revealing a stronger temperature dependence for larger couplings. The total reduction of the ZPL intensity as compared to the two-level-system case is then given by the product f_FC · f_DW.
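The Debye-Waller integral is easy to evaluate numerically, as in the following sketch (our own illustration). The assumed form J(ω) = c ω³ with cutoff ω_max stands in for the paper's 3D spectral density; only the qualitative trend, a stronger ZPL reduction for larger coupling and temperature, is the point here.

```python
import numpy as np

def f_DW(c, beta, omega_max=1.0, n=20000):
    """f_DW = exp(-int_0^omega_max dw J(w)/w^2 * coth(beta*w/2)), J(w) = c*w^3 (assumed)."""
    w = np.linspace(1e-6, omega_max, n)
    integrand = c * w / np.tanh(beta * w / 2.0)    # J/w^2 * coth = c*w*coth(beta w/2)
    return np.exp(-np.trapz(integrand, w))

for c in (0.1, 0.5, 1.0):                          # coupling strengths (illustrative)
    print([round(f_DW(c, beta), 3) for beta in (50.0, 10.0, 2.0)])  # low -> high T
```

The integrand stays finite at ω → 0 (coth(βω/2) ≈ 2/(βω) cancels one power of ω), which is precisely why the 3D case is free of the low-frequency divergence discussed for 1D.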
C. Dephasing
Within the model we consider, where all interactions stem from a harmonic treatment of both the intramolecular vibrations and the crystal motion, the zero-phonon linewidth of the electronic transition is largely independent of temperature. In reality, accounting for higher-temperature effects requires contributions quadratic in the phononic displacements, as has been pointed out theoretically and observed experimentally [52,53]. However, even in the linear regime, the fact that vibronic and electron-phonon couplings do not lead to significant dephasing is a non-trivial result. One could, e.g., expect that the Holstein Hamiltonian for electron-phonon coupling, H_H = (ω_0 − Σ_k √2 λ_k ω_k q_k)σ†σ + Σ_k ω_k c_k†c_k, which sees a stochastic shift of the excited electronic level, should lead to a dephasing of the ground-excited coherence ⟨σ⟩. One reason is the similarity to the pure dephasing of a two-level transition subjected to a noisy laser, undergoing evolution with the Hamiltonian [ω_0 + φ(t)]σ†σ, where the frequency is continuously shaken by a white-noise stochastic term of zero average obeying ⟨φ(t)φ(t′)⟩ = γ_deph δ(t − t′). It is straightforward to show that the time evolution of the coherence in this case becomes ⟨σ(t)⟩ = ⟨σ(0)⟩ e^{−iω_0 t} e^{−γ_deph t}, such that the correlations of the noise indicate the increase in the linewidth of the transition [54]. Similarly, one could expect that the zero-averaged quantum noise stemming from the shaking of the electronic transition in the Holstein Hamiltonian would lead to the same kind of effect. However, computing the exact time evolution of the coherence in the interaction picture [with Hamiltonian H̃_H(t)], where the time-ordered integral can be resolved by a second-order Magnus expansion, confirms the result already known from the polaron picture. The correlation (for a single mode ω_k) shows a cosine term similar to the dephasing case, but one which does not continuously increase in time. For small times t ≪ ω_k^{−1}, the cosine term can be expanded and the dephasing rate can be approximated by γ_deph = λ_k²(n̄_k + 1/2)ω_k² t, while for larger times the rate goes to zero (the time scale is set by γ^{−1}). In the continuum limit, the time-dependent dephasing rate is γ_deph(t) = −ℜ[φ(t)]. In accordance with Figs. 5(a),(b), we can see that linear Holstein coupling can consequently lead to a temperature-dependent zero-phonon line if there is a large density of low-frequency (long-wavelength) phonon modes with ω_k < γ, which is the case in 1D but not in higher dimensions. This peculiarity of dephasing in 1D has already been discussed in the literature [50,51,55]. It is, however, also well established within the literature that the major contribution to the experimentally observed temperature-dependent broadening of the zero-phonon line is caused by a higher-order electron-phonon interaction of the form H^quad_el-phon = σ†σ Σ_{kk′} β_{kk′} q_k q_{k′} [52,55-57], with β_{kk′} the coupling constant of the quadratic interaction. This form of the interaction can stem either, within the harmonic assumption, from a difference in curvatures between the ground- and excited-state potential surfaces, or from anharmonic potentials.
V. MOLECULAR POLARITONICS
It is currently of great interest to investigate the behavior of hybrid platforms containing organic molecules interacting with confined light modes, such as those provided by optical cavities [16,58,59] or plasmonic nanostructures [60,61]. Such light-dressed platforms have been studied both at the single- and few-molecule level [12,16,60] as well as in the mesoscopic, collective strong-coupling limit [62-64]. In these cases, the strong light-matter coupling leads to the formation of polaritonic hybrid states with both light and matter components. Experimental and theoretical works are currently exploring fascinating enhanced properties such as exciton and charge transport [65-69], superconductive behavior [70,71] and modified chemical reactivity [72-76]. There is also recent interest in the modification of nonadiabatic light-matter dynamics at so-called conical intersections, leading to fast nonradiative decay of electronic excited states [42,77]. It has recently been shown that the Purcell regime of cavity QED can result in a strong modification of the branching ratio of a single molecule and suppress undesired Stokes lines [12]. Recent theoretical works account for the vibronic coupling of molecules by solving a Holstein-Tavis-Cummings Hamiltonian, which leads to the occurrence of polaron-polariton states, i.e. light-matter states where the hybridized states between the bare electronic transition and the light field additionally get dressed by the vibrations of the molecules [15,78-84]. Many models rely on numerical simulations based on following the evolution of state vectors under simplified assumptions, including only vibronic interactions and finite-temperature effects. We employ here the approach of the last section and add a Jaynes-Cummings interaction of a molecule in the phononic environment with a localized cavity mode. A weak laser drive maps the intracavity molecular polaritonics effects onto the cavity transmission profile, identifying polariton cross-talk effects at any temperature. Furthermore, we map the combined effect of vibronic and electron-phonon interactions onto the cavity output field.
A. Cavity transmission
We will consider a cavity mode driven with amplitude η_c and start with a set of coupled Langevin equations for the electric field operator a as well as the polaron operator σ̃(t) = D†(t)B†(t)σ(t), in a frame rotating at the laser frequency ω_ℓ. Here we define the effective cavity input A_in = η_c/√(2κ) + a_in, with zero-average input noise a_in but non-vanishing correlation ⟨a_in(t)a_in†(t′)⟩ = δ(t − t′). The electronic transition is also affected by a white-noise input σ_in with non-zero correlation ⟨σ_in(t)σ_in†(t′)⟩ = δ(t − t′). We can formally integrate Eq. (31b) and substitute it into Eq. (31a), where we take averages and assume factorizability between the optical and vibronic/phononic degrees of freedom, which is valid if the timescales of vibrational relaxation and cavity decay are separated, e.g. Γ_m ≫ κ. We notice that the second term in Eq. (32) represents a convolution, since the correlation functions ⟨D(t)D†(t′)⟩ and ⟨B(t)B†(t′)⟩ only depend on the time difference t − t′. The normalized cavity transmission amplitude T(ω) = ⟨A_out(ω)⟩/⟨A_in(ω)⟩ can then be derived from input-output relations, where H(ω) is the Fourier transform of H(t) and describes the optical response of the molecule to the light field, including electron-phonon, electron-vibron and vibron-phonon coupling. If we neglect electron-phonon interactions (λ_e-ph = 0) and assume, for the sake of simplicity, Markovian decay for the vibration (this can also be extended to the non-Markovian regime, see Section IV), H(ω) acquires the form of Eq. (34). Again, this expression indicates a series of sidebands with strengths determined by the Huang-Rhys factor λ² and dependent on the thermal occupation n̄. In the case of large vibrational relaxation Γ_m ≫ γ (corresponding to the typical experimental situation), however, those sidebands are suppressed and the cavity mostly couples to the ZPL transition. We can then define an effective Rabi coupling for the ZPL which takes into account the reduction of the oscillator strength due to the Franck-Condon and Debye-Waller factors. In Figs. 6(a) and (b) we plot the cavity transmission at resonance ω_c = ω_ℓ for increasing thermal occupation, with and without the influence of phonons, and find that the splitting of the polariton modes is well described by Eq. (35). This also manifests itself in the transmission signal in the Purcell regime, characterized by weak coupling g < |κ − γ|/2 but large cooperativity C = g²/(κγ) ≫ 1, which is a more realistic regime in currently available single-molecule experiments [12,16]. In Fig. 6(c) we compare the transmission of a pure two-level system (obtained by setting λ = 0, λ_e-ph = 0) with that of a molecule in a solid-state environment.
Here the ZPL appears as a dip in the transmission profile, with an increased width γ(1 + C_eff) proportional to the effective cooperativity C_eff = g_eff²/(κγ). As compared to the two-level-system case, the coupling to vibrons and phonons leads to a reduction in both the width and the depth of the antiresonance. If the cavity bandwidth is comparable to the maximum phonon frequency ω_max, the imprint of the phonon wing can also be detected in the transmission signal of the cavity, which is relevant for plasmonic scenarios characterized by large bandwidths [60] [see Fig. 6(c)]. The sidebands of the vibrons typically lie at frequencies outside the bandwidth of the cavity, ν ≫ κ, and are consequently unmodified.
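A compact sketch of the transmission calculation (our own construction, assuming the standard input-output form T(ω) = κ/[κ − i(ω_c − ω) + g²H(ω)] and a molecular response reduced to the ZPL, H(ω) ≈ f_FC f_DW/[γ − i(ω − ω_0)]) reproduces the effective Rabi coupling g_eff = g√(f_FC f_DW) discussed above; all parameter values are illustrative.

```python
import numpy as np

kappa, gamma, g = 0.1, 0.01, 0.5
fFC, fDW = 0.6, 0.8                     # illustrative reduction factors
omega0 = omegac = 0.0                   # ZPL and cavity on resonance
w = np.linspace(-1.5, 1.5, 3001)

H = fFC * fDW / (gamma - 1j * (w - omega0))          # ZPL-only molecular response
T = kappa / (kappa - 1j * (omegac - w) + g**2 * H)   # assumed input-output form

mag = np.abs(T)
is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
print("polariton peaks at:", w[1:-1][is_peak])
print("expected splitting 2*g_eff =", 2 * g * np.sqrt(fFC * fDW))
```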
B. Vibrationally mediated polariton cross-talk
As shown in the previous sections, vibronic and electron-phonon couplings reduce the oscillator strength of the molecule and lead to decoherence, and are consequently considered detrimental. However, such couplings can also lead to interesting new physics: in Ref. [15] it was already shown that vibrations can couple upper and lower polaritonic states in a dissipative fashion, resulting in an effective transfer of population from the upper to the lower polariton and consequently an asymmetric cavity transmission profile, with a suppressed upper polaritonic peak (at zero temperature, n̄ = 0) and dominant emission occurring from the lower polariton [this can also be seen in Figs. 6(a) and (b)]. We derive here a more general expression for the population transfer between polaritons, showing that for finite thermal occupation n̄ of the vibrational mode a transfer from the lower to the upper polariton can also be activated.
This can be interpreted as an exchange interaction mediated by either the destruction or the creation of a vibrational quantum. From this one can derive equations of motion for the populations of the upper and lower polaritonic states, with the hybridized decay rates γ_± = (κ + γ)/2 of the upper and lower polaritonic states; the term ℑ⟨U†L(b† + b)⟩ is the one responsible for the population transfer between the polaritons. In the limit of fast vibrational relaxation, Γ_m ≫ κ, this can be turned into a set of rate equations with an effective excitation transfer rate κ_+ from the upper to the lower polariton and a transfer rate κ_− from the lower to the upper polariton (for the detailed calculation see Appendix J). Under the assumption of weak vibronic coupling compared to the splitting between the upper and lower polaritonic states, λν ≪ 2g, the rates can be calculated to first order (again assuming Markovian decay for the vibration for the sake of simplicity). Energy transfer between the polaritons can consequently occur if the Rabi splitting ω_+ − ω_− ≈ 2g is roughly equal to the vibrational frequency. In the case of zero temperature (n̄ = 0), the above equations reduce to the results presented in [15], which used a Lindblad decay model for the vibration instead of a Brownian noise model. The ratio κ_−/κ_+ = n̄/(n̄ + 1), which can be inferred from the polariton peak heights (for normalized Lorentzians the height and width are connected) and which tends to unity in the limit n̄ ≫ 1, can be seen as a direct measure of temperature, as it does not depend on any other parameters. While for single molecules the condition ω_+ − ω_− ≈ ν is difficult to achieve for vibrational modes in the THz range, it can be reached in the collective strong-coupling regime of many molecules, where the coupling grows as g√N, or for single molecules with phononic modes in the GHz regime. We also note that, in a similar fashion to the linear coupling, quadratic electron-phonon and vibronic couplings also give rise to a vibrationally mediated polariton cross-talk, with coupling H^quad_int = β(U†L + L†U)Q²/2 (for a single vibrational mode). To this end, one could again derive effective rate equations for the quadratically mediated population transfer between the polaritons, in a similar fashion as for the linear coupling case.
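The resulting rate-equation dynamics is easy to integrate numerically, as in the sketch below (our own illustration). Only the detailed-balance ratio κ_−/κ_+ = n̄/(n̄ + 1) is taken from the text; the absolute down-rate κ_+ and the decay parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, gamma = 0.1, 0.01
g_pm = (kappa + gamma) / 2            # hybridized polariton decay gamma_+-
nbar, kp = 0.5, 0.05                  # thermal occupation; assumed down-rate kappa_+
km = kp * nbar / (nbar + 1)           # up-rate fixed by detailed balance

def rhs(t, P):
    Pu, Pl = P
    return [-g_pm * Pu - kp * Pu + km * Pl,    # upper polariton
            -g_pm * Pl - km * Pl + kp * Pu]    # lower polariton

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0])  # start with the upper polariton excited
print("final populations (upper, lower):", sol.y[:, -1])
```

Starting from an excited upper polariton, population flows predominantly downward, and for n̄ ≫ 1 the two transfer rates approach each other, washing out the asymmetry of the transmission profile.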
VI. DISCUSSION AND CONCLUSIONS
We have provided a new approach based on quantum Langevin equations for the analysis of the fundamental quantum states of molecules and their coupling to their surroundings. These features, which lie at the heart of molecular polaritonics, go well beyond the electronic degrees of freedom and address phenomena such as electron-vibron and electron-phonon couplings as well as vibron-phonon interactions resulting in the relaxation of molecular vibrations. In particular, we have provided analytical expressions for spectroscopic quantities such as molecular absorption and emission inside and outside optical cavities in the presence of vibrations and phonons at any temperature. Moreover, we have presented a model of vibrational relaxation that takes into account the structure of the surrounding phonon bath and makes a distinction between Markovian and non-Markovian regimes. We have demonstrated that the vibrational relaxation of a molecule is crucially determined by the structure of the bath, especially by the maximum phonon frequency ω_max. For two molecules embedded in the same crystalline environment, we have shown that the vibrational modes of the spatially separated molecules can interact with each other, resulting in collective dissipative processes that allow for weaker relaxation of collective vibrations. In the strong-coupling regime of cavity QED, we have derived temperature-dependent transfer rates for vibrationally mediated cross-talk between upper and lower polaritonic states, i.e. hybrid light-matter states that are normally uncoupled in cavity QED studies of atomic systems. In this work, we based our model on first-principles derivations of the relevant coupling strengths between a single nuclear coordinate of a molecule embedded in a 1D chain. However, the calculations could readily be extended to 3D scenarios and compared with ab-initio calculations for real materials. We point out that our theory could also be relevant for vacancy centers in diamond, where similar interactions between electronic degrees of freedom and both localized and delocalized phonon modes occur [85,86]. In the future we want to address the influence of higher-order interactions such as quadratic electron-phonon and vibron-phonon couplings, which are known to play an important role at elevated temperatures. It could also be interesting to consider the cavity modification of the nonradiative relaxation of molecules at conical intersections [42]. We also plan to investigate the collective radiation states of dense molecular ensembles in confined electromagnetic environments, such as occur, e.g., in organic semiconductor microcavities.
Note added. Recently, we became aware of a related study [87].
VII. ACKNOWLEDGMENTS
We acknowledge financial support from the Max Planck Society and from the German Federal Ministry of Education and Research, co-funded by the European Commission (project RouTe), project number 13N14839, within the research program "Photonik Forschung Deutschland" (C. S., V. S. and C. G.).

Appendix A

We are mostly interested in the coupling between the phonons and the relative motion of the molecule (which gives rise to vibrational relaxation), as well as the coupling between the electronic excitation and the phonons. In the expanded equations above, we can see that the first term couples the relative motion of the molecule x_rm to the motion of the lattice atoms x_ℓ, while the second term couples x_ℓ to the electronic excitation σ†σ, and the last term is a constant energy shift. Taking the left and right atoms together, we then arrive at the interaction Hamiltonians, where for simplicity we assumed equal masses m_L = m_R and used that for symmetric molecules Δx_L = −Δx_R.
Bulk solution
The contribution of the bulk adds as an additional harmonic term (considering just nearest-neighbour interactions), with the main assumption that the presence of the molecule does not significantly change the bulk modes. Newton's equations of motion for the displacements of the lattice atoms can be arranged in matrix form, with x = (x_1, . . . , x_{2N+1}), where we assume a periodic lattice and neglect the edge modes in our analysis. The resulting coupling matrix is a tridiagonal Toeplitz matrix which can easily be diagonalized (D = T^{−1}MT), with the k-th eigenvalue ω_k² for k ∈ [1, 2N + 1] and sinusoidal normalized eigenvectors, such that in the diagonal basis (u = T^{−1}x) the equation of motion for the k-th mode is that of an independent harmonic oscillator. We can now express the couplings from Eqs. (A9) in terms of the normal modes of the crystal. Furthermore, we consider only nearest-neighbour coupling for the molecule, i.e. k_Rℓ, k_Lℓ = 0 for ℓ ≠ {N, N + 2}, where we used that Δx_{N+2} = −Δx_N and defined Δk in terms of the nearest-neighbour spring constants k_{R(N+2)} and k_{LN}. We now quantize the Hamiltonians by introducing the position operators (and analogous momentum operators) in terms of the dimensionless quadratures [q_k, p_{k′}] = iδ_{kk′}, which introduces the couplings α_k; the Hamiltonian of the bulk can then be expressed as H_bulk = Σ_k ω_k(p_k² + q_k²)/2. Figure 8 shows a plot of the dispersion ω_k as well as of the vibron-phonon coupling α_k (the electron-vibron coupling shows a similar behavior).
Quadratic interaction
In the previous sections we always assumed identical curvatures of the electronic ground and excited states. Here we show that non-identical curvatures give rise to a quadratic electron-vibron interaction (and similarly for the electron-phonon interaction). Assuming different vibrational frequencies of the electronic ground state (ν) and excited state (ν̃), the molecular Hamiltonian [Eq. (A6)] is modified accordingly. Quantizing with x_rm = √(1/(2μν))(b† + b) and p_rm = i√(μν/2)(b† − b), this gives rise to an electron-vibron interaction whose second part accounts for the squeezing of the vibrational wavepacket when going from the ground to the excited state, with coupling strength β = (ν̃² − ν²)/(2ν).
Appendix B: Vibrational relaxation
The evolution of the molecule's vibrational mode can be calculated from Hamilton's equations of motion, Q̇ = νP and Ṗ = −νQ + Σ_k α_k q_k, while the evolution of the phonon bath degrees of freedom is governed by q̇_k = ω_k p_k and ṗ_k = −ω_k q_k + α_k Q. We can derive an effective equation of motion for Q and P by eliminating the bath degrees of freedom. We can write Eqs. (B2) in matrix form, v̇ = Mv + v_inhom, with v = (q_k, p_k)^T and v_inhom = (0, α_k Q)^T. The solution follows from M = TΛT^{−1} with Λ = diag(iω_k, −iω_k). With this we can derive the solution for q_k(t) and subsequently for Ṗ (after integration by parts), where we introduce the fluctuating force ξ(t) = Σ_k α_k [q_k(0) cos(ω_k t) + p_k(0) sin(ω_k t)]. One can easily check that the second term in the resulting sum vanishes in the continuum limit, while the first term is a renormalization of the vibrational frequency, ν̃ = ν − ν_s. Using the expression for α_k derived in the previous section, we can calculate ν_s explicitly (for N ≫ 1). Altogether, the vibrational relaxation can be written as Ṗ = −ν̃Q − (Γ ∗ P) + ξ, where ∗ denotes the convolution (Γ ∗ P)(t) = ∫_0^∞ dt′ Γ(t − t′)P(t′) and Γ(t) = Σ_k α_k²(ν/ω_k) cos(ω_k t)Θ(t). In the continuum limit this can be approximated by Γ(t) ≈ Γ_m J_1(ω_max t)/t, where J_n(x) are the Bessel functions of the first kind. The random force has zero average but non-vanishing correlations and a non-vanishing commutator, where we denote by ⟨•⟩ = Tr[• ρ_th] the thermal average. In the continuum limit N → ∞ [using the prescription Σ_k → (L/π)∫dq → ∫dω n(ω), where πk/(2N + 1) = qL/(2N + 1) = qa], we can rewrite the correlation function as an integral over frequency. The density of states in 1D is given by n(ω) = (2N/π)(ω_max² − ω²)^{−1/2}; multiplying this by the vibron-phonon coupling in the frequency domain, we can write the correlation function in terms of the Markovian decay rate Γ_m = 2νν_s/ω_max. We note that Γ_m actually increases with ω_max and is proportional to Δk² (in our derivation we assumed the reduced mass of the molecule to be equal to that of the lattice atoms, μ ≈ m_0). We can see that we obtain an effective frequency-dependent decay rate Γ_r(ω) = Γ_m [√(ω_max² − ω²)/ω_max] Θ(ω_max − ω)Θ(ω_max + ω) (the real part of the Fourier transform of Γ(t)). In the limit ω_max → ∞ this becomes the standard result for a harmonic oscillator in a thermal bath.

Appendix C: Collective vibrational relaxation

We now consider two molecular impurities inside the 1D chain, located at positions N + 1 + j and N + 1 − j (distance 2j). Again we assume that the presence of the two molecules does not significantly change the bulk modes. From the Hamiltonian we can calculate the equations of motion for the molecular coordinates as well as for the bath degrees of freedom. Following the same procedure as in Appendix B, we can solve for q_k and plug the result into the equations of motion for the system variables, where we introduce the input noises ξ_{1/2}(t) = Σ_k α_{k,1/2}[cos(ω_k t)q_k(0) + sin(ω_k t)p_k(0)]. Again integrating by parts, we find the equations for Ṗ_{1/2}, whose two noise terms are mutually correlated. We now still have to determine the coupling coefficients α_{k,1} and α_{k,2}. Assuming that the two molecules are placed at q_{N+1−j} and q_{N+1+j}, the couplings are determined by the neighboring atoms q_{N+1−j±1} and q_{N+1+j±1}, where we denote the neighboring couplings for the first molecule by ±1 and for the second molecule by ±2. With this we can calculate the coupling coefficients [using that sin α − sin β = 2 cos((α + β)/2) sin((α − β)/2)] and subsequently the product α_{k,1}α_{k,2} [using that cos(α) cos(β) = [cos(α + β) + cos(α − β)]/2]. We can finally write Eqs.
(C11) as a set of coupled equations (neglecting the terms that vanish in the continuum limit), where we defined the mutual energy shift Ω = Σ_k α_{k,1}α_{k,2}/ω_k. Let us for simplicity assume two identical molecules, ν_1 = ν_2. The collective decay kernel can then be approximated in the continuum limit by Γ_12(t) ≈ Γ_m J_{4j}(ω_max t)/t. Taking the Fourier transform of Eqs. (C17), we obtain a set of algebraic equations (again assuming identical molecules). The Fourier transform of Γ_12 can be obtained in two steps. First, the Fourier transform of the product f̃_4j(t)Θ(t), where f̃_4j(t) = f_4j(ω_max t) = ω_max J_4j(ω_max t)/(ω_max t), follows from F(f̃_4j)(ω) = (1/ω_max)F(f_4j)(ω/ω_max). The Fourier transform of the term containing the Bessel function involves the Chebyshev polynomial of the second kind, U_{n−1}; with y = ω′/ω_max and x = ω/ω_max, for −1 < x < 1 (meaning −ω_max < ω < ω_max) the remaining integral involves the Chebyshev polynomial of the first kind, T_n [88], from which we obtain the collective decay component Γ_12(ω). In the case ω_max → ∞, we obtain F(Γ_12 ∗ P_i)(ω) ≈ Γ_m P_i(ω), which results in Markovian equations of motion. Considering the coordinates Q_+ = Q_1 + Q_2, P_+ = P_1 + P_2 and Q_− = Q_1 − Q_2, P_− = P_1 − P_2, we obtain equations of motion for two independent oscillators, the first of which is protected from vibrational decay.
Appendix D: Fundamental vibron-phonon processes
To investigate the evolution of a single vibron state we first set up the framework. Starting from the Hamiltonian, we go to the interaction picture with U = e^{iH_0 t} and H_0 = νb†b + Σ_{k=1}^N ω_k c_k†c_k. With the initial state in the Schrödinger picture given by |1_ν, vac_ph⟩, we obtain |φ⟩ = U|1_ν, vac_ph⟩ = e^{iνt}|1_ν, vac_ph⟩. Following the Schrödinger equation i∂_t|φ⟩ = H̃|φ⟩, we acquire the Dyson series. Evaluating the first order of the Dyson series, in the case where there is a resonant component ω_j = ν with j ∈ {1, . . . , N}, and since lim_{ω→0}(e^{iωt} − 1)/ω = it, we obtain a contribution growing linearly in time, which can emerge whenever ν ∈ (0, ω_max]. No resonance condition can be fulfilled in the case ν > ω_max, and the off-resonant terms of Eq. (D4) then describe the dynamics. Besides the single-phonon resonances in the case ν ≤ ω_max, weak multi-phonon resonances can also be found. For our initial state with one excitation in the vibrational mode, these terms appear starting from the third-order term of the Dyson expansion. These terms are small with respect to the resonances starting at first order in the case ν ≤ ω_max, and in total are comparable to the off-resonant terms.
Appendix E: Spectral density of the electron-phonon coupling

We denote by J(ω) = n(ω)λ(ω)²ω² = Σ_k |λ_k ω_k|² δ(ω − ω_k) the spectral density of the electron-phonon coupling. From Eq. (A19c) we can derive the 1D form, which leads to a divergence of the integral at small frequencies due to the 1/ω term (in 1D the spectral density is always proportional to ω at low frequencies). Considering a 3D density of states instead, with group velocity v_g = ∂ω/∂q and q(ω) the inverted dispersion relation in 3D (we make the Debye assumption of a linear dependence between wavevector and frequency), we obtain a spectral density in which we introduce the 3D electron-phonon coupling constant λ^{3D}_{e-ph}, which has units of s². Figure 9 shows the schematic behavior of the spectral densities in 1D and 3D.
Appendix G: Calculation of the absorption spectrum

Let us first consider the purely vibrational part and ignore the electron-phonon part (corresponding to setting all λ_k = 0). The expectation value of the coherence in steady state can then be evaluated term by term. Using that (n̄ + 1)/n̄ = e^{βν}, we can also rewrite the displacement correlation function of Eq. (G3) using the generating function of the modified Bessel functions I_n(x), which yields an expression similar to that known from Huang-Rhys theory [49]. Including the phonon modes, the collective phonon-mode displacement correlation for N phonon modes can be expressed (neglecting phonon decay) through the weights L_k(n_k) = (λ_k^{2n_k}/n_k!) e^{−λ_k²(1+2n̄_k)} and B_k(n_k, l_k) = C(n_k, l_k)(n̄_k + 1)^{n_k−l_k} n̄_k^{l_k}, with C(n, l) the binomial coefficient. Together with the vibrational modes, the total absorption spectrum can then be expressed in terms of discrete lines, as a sum over the vibrational indices (n, l) and the phonon indices ({n_k}, {l_k}) of Lorentzians of width γ + nΓ_m/2, weighted by the products L(n)B(n, l) Π_{k=1}^N L_k(n_k)B_k(n_k, l_k).
"Physics"
] |
Analysis and compensation for errors in electrical impedance tomography images and ventilation-related measures due to serial data collection
Electrical impedance tomography (EIT) is increasingly being used as a bedside tool for monitoring regional lung ventilation. However, most clinical systems use serial data collection which, if uncorrected, results in image distortion, particularly at high breathing rates. The objective of this study was to determine the extent to which this affects derived parameters. Raw EIT data were acquired with the GOE-MF II EIT device (CareFusion, Höchberg, Germany) at a scan rate of 13 images/s during both spontaneous breathing and mechanical ventilation. Boundary data for periods of undisturbed tidal breathing were corrected for serial data collection errors using a Fourier based algorithm. Images were reconstructed for both the corrected and original data using the GREIT algorithm, and parameters describing the filling characteristics of the right and left lung derived on a breath by breath basis. Values from the original and corrected data were compared using paired t-tests. Of the 33 data sets, 23 showed significant differences in filling index for at least one region, 11 had significant differences in calculated tidal impedance change and 12 had significantly different filling fractions (p = 0.05). We conclude that serial collection errors should be corrected before image reconstruction to avoid clinically misleading results. Electronic supplementary material The online version of this article (doi:10.1007/s10877-016-9920-y) contains supplementary material, which is available to authorized users.
Introduction
Electrical Impedance Tomography (EIT) images regional internal impedance changes related to physiological function using measurements from a series of surface electrodes. It can achieve continuous, real-time, non-invasive, bedside monitoring of lung ventilation [1,2]. In recent years there has been a surge of interest in its potential for monitoring regional lung ventilation [3,4], and in particular in its application to the management of acute respiratory distress syndrome in infants and adults (IRDS, ARDS). EIT has the potential to become a tool for optimizing ventilator therapy, leading to a reduced incidence of ventilator-induced lung injury. Commercial CE compliant systems are available for clinical use (www.Swisstom.com, www.draeger.com, http://www.timpel.com.br), but these are not currently configured for use with neonates.
The majority of EIT systems in clinical usage are functionally similar to the original Sheffield system [5], which collects data sequentially from different electrode combinations, a configuration with many practical advantages. However, it is assumed during image reconstruction that the physiological signal is quasi-static for the duration of each frame. This is often not the case. If we consider a typical system with 16 electrodes operating at 13 frames per second, the following consequences arise: for a neonate with a breathing rate of up to 60 breaths per minute, and an even faster heart rate of 80-150 beats per minute, physiological changes will occur during the time it takes to collect one frame of data. It has been demonstrated in a single case report [6] that this introduces errors of up to 4 % in the reconstructed images, and that these errors are not uniformly distributed. It was calculated that a frame rate 50 times higher than the frequency of interest would be needed to reduce this effect to less than the smallest difference the system could measure [6]. Alternatively, due to the systematic nature of the errors, they can be reduced using a mathematical correction [6]. Such a correction would enable the continued use of existing data collection hardware, and an improvement in the accuracy of existing data sets. It should be noted that the 'frequency of interest' may be higher than the normal repetition rate of the signal if it is not sinusoidal, e.g. during inspiration there may be rapid filling followed by a slower filling phase.
This paper presents a detailed study of the effect of this error on clinical parameters derived from EIT images and the implementation of this mathematical correction method as a standalone tool. It also defines the minimum specification required for future EIT systems to monitor infants and children's lung function.
Data correction
Three approaches have been proposed: linear interpolation, phase correction in the frequency domain (both on the boundary voltage data) [6], and use of a regularization prior within the reconstruction algorithm which takes into account the temporal and spatial effects in the sensitivity matrix used [7]. The intention of this work was to create a standalone tool, which researchers and clinicians could use with their existing data collection, image reconstruction and analysis software.
The use of a regularization prior did not meet this criterion, as it would require changes to the image reconstruction software used; therefore no attempt has been made to verify the accuracy of such an approach. The frequency domain phase correction is computationally more expensive than linear interpolation, but modelling shows it to be more accurate, particularly for those EIT applications most likely to benefit from this correction (data from neonates and animal models of neonates, collected using older EIT systems with frame rates of 10-20 frames per second). The signal from a single electrode combination was modelled as a 1 Hz sine wave, representative of neonatal breathing. This was sampled twice at 13 Hz (data1 and data2), with the sampling for data2 starting 1/26th of a second after that for data1. data1 represents the data values at the start of the data frame (the 'true' or 'target' values), and data2 represents the data values which would be measured by this representative electrode combination approximately halfway through the data collection sequence. Linear and phase corrections were then applied to data2 to correct for this half-frame delay, thus aligning the datasets. The point-by-point percentage difference between data1 and data2 was then calculated for each case.
Based on this modelling it can be seen that, without correction, errors of up to 25 % are present (Fig. 1); with linear correction, errors of up to 4 % remain; and with phase correction, excluding the first and last second, errors are less than 0.5 %. Errors after phase correction are inversely related to the sample length, so data correction should be applied to complete data files, not individual breaths. The model was also evaluated with a range of sine wave frequencies from 0.5-6 Hz, and it was found that the frequency domain phase correction consistently produced the smallest residual error, with the most pronounced differences at higher frequencies. This indicates that the phase correction method gives the optimum reduction in the serial data collection error, and it is the method used in this paper.
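The modelling comparison described above can be reproduced with a few lines of Python; the following is our own sketch of the procedure (the paper's implementation is in Matlab), with percentage errors expressed relative to the unit sine amplitude.

```python
import numpy as np

fs, f0, T = 13.0, 1.0, 30.0                        # frame rate, breathing rate, duration
t = np.arange(0, T, 1 / fs)
data1 = np.sin(2 * np.pi * f0 * t)                 # 'true' values at frame start
data2 = np.sin(2 * np.pi * f0 * (t + 0.5 / fs))    # measured half a frame late

# Linear correction: interpolate the delayed samples back onto the frame-start times
lin = np.interp(t, t + 0.5 / fs, data2)

# Phase correction: shift by -0.5 sample via a frequency-dependent phase factor
F = np.fft.rfft(data2)
freqs = np.fft.rfftfreq(len(data2), 1 / fs)
phase = np.fft.irfft(F * np.exp(-2j * np.pi * freqs * 0.5 / fs), n=len(data2))

def pct_err(x):
    m = slice(int(fs), -int(fs))                   # exclude first/last second, as above
    return 100 * np.max(np.abs((x - data1)[m]))

print("uncorrected error (%):", pct_err(data2))    # roughly 24-25 %
print("linear error (%):     ", pct_err(lin))      # a few %
print("phase error (%):      ", pct_err(phase))    # far below 0.5 %
```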
Reciprocity
To assess the quality of the data, a reciprocity check is applied as part of the data correction software. Reciprocity theory states that if current is injected on electrodes a and b and voltage measured on electrodes c and d, the same result should be obtained as when measuring on a and b whilst injecting on c and d [8]. However, this may be violated if there are non-idealities in the system, e.g. electrode contact impedances that are not perfectly matched. Changes in the physiological signal between the two measurements could also contribute to reciprocity errors. Therefore, since many EIT systems collect full data sets including reciprocity information, this theorem can be used to provide a check on the quality of data recordings before images are reconstructed, and to provide an indication of the error reduction achieved by the correction for serial data collection [6].
There is only limited research on the effect of reciprocity errors on the quality of reconstructed images. Hartinger et al. [8] demonstrated that a 'quadrature reciprocity error' of 0.19 (corresponding to a reciprocity error of approximately 44 %) adversely affected the quality of reconstructed images; no smaller errors were considered. At the lower limit, reciprocity errors of less than 5 % can be obtained from human subjects. However, several of the data sets used to produce images for journal articles, and later contributed to a public repository (http://eidors3d.sourceforge.net/), have large reciprocity errors on a few electrode combinations. We can conclude from this that, until further research into this is undertaken, reciprocity errors should be minimized as much as possible, preferably to below 5 %.
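A reciprocity check of this kind can be implemented as a small helper, sketched below (our own code; the pairing of reciprocal electrode combinations is hardware-specific, so the index pairs here are a placeholder assumption to be replaced by the actual protocol table).

```python
import numpy as np

def reciprocity_error(frame, pairs):
    """frame: 1D array of boundary voltages for one EIT frame.
    pairs:  list of (i, j) index pairs that are reciprocal combinations.
    Returns the percentage reciprocity error per pair."""
    i, j = np.array(pairs).T
    v1, v2 = frame[i], frame[j]
    return 100 * np.abs(v1 - v2) / np.maximum(np.abs(v1 + v2) / 2, 1e-12)

frame = np.random.default_rng(0).normal(1.0, 0.02, 208)   # synthetic frame
pairs = [(k, 207 - k) for k in range(104)]                # placeholder pairing
err = reciprocity_error(frame, pairs)
print("combinations with >5 % reciprocity error:", 100 * np.mean(err > 5), "%")
```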
Filling index (FI)
FI indicates whether a lung region fills faster or slower than others [9]. It is calculated from the regional variations in shape of the inspiratory waveform (Eq. 1). If all lung regions behaved the same (homogeneously), they would all have the value 1 for this index. If the value is <1, that portion of lung filled faster than the others, which could be symptomatic of hyperinflation; if >1, the region filled relatively slowly, which could indicate tidal recruitment [9]. Clinicians wish to avoid both. The effect of serial data collection is theorized to mimic these changes: using adjacent drive and measurement, all measurements for one drive combination are recorded before moving on to the next, and if electrodes are placed anticlockwise starting on the sternum, the final measurements are most sensitive to the right anterior region (Fig. 2).
The data relating to the right anterior region is collected at a later time in the breathing cycle than that to which it is attributed, and would thus appear to indicate a faster filling (FI < 1), i.e. to lead the global average filling curve. Conversely, the first data measurements in each frame are particularly sensitive to the left anterior quadrant, and so this region would appear to lag behind the global average filling curve (FI > 1).
Tidal impedance change (ΔZ)
In common with the papers previously mentioned [2,4], tidal ΔZ is the magnitude of the impedance change between end-expiration and end-inspiration, either global (ΔZ_T) or for a specific region-of-interest (ROI), e.g. the left anterior ROI (ΔZ_LA).
Filling fraction (FF)
Various measures of the tidal impedance change of one region relative to another are found in the literature. 'Fractional ventilation' is used by Heinrich et al. [10], who divided the thorax into two lung regions. Others, including Frerichs et al. [11] and Tingay et al. [12], used 64 lung regions (32 slices in each hemi-thorax) and used the phrase 'fractional ventilation' to represent the equivalent parameter. The regional change in tidal ventilation within each quadrant, calculated from the tidal impedance amplitude and expressed as a percentage of the summed global impedance amplitude for each recording, has also been used [13] to compare the ventilation within each quadrant to the global value, and there are several variations of this [9,14,15]. These methods all use the same principle, but vary according to the number of regions considered and whether the answer is expressed as a percentage.
Data
EIT boundary voltage data sets were retrospectively analysed for this study. The first set was obtained from 19 spontaneously breathing neonates (mean age 30 days (range: 5-128 days), mean body weight 3029 g (range: 2150-4300 g)) treated in a neonatal ICU. Four of the neonates were also examined during the application of mechanical ventilation. In addition, data were obtained from clinical trials and experimental studies performed on a spontaneously breathing neonate, an adult human subject (age 31 years, body weight 82 kg) [16] and mechanically ventilated pigs [18,19]. These data are held in a public domain repository known as EIDORS ('Electrical Impedance and Optical Tomography Reconstruction Software', eidors3d.sourceforge.net).
Measurements were collected using a commercial EIT system (Goe-MF II EIT system, CareFusion, Hoechberg, Germany) with 16 electrodes placed in a single plane giving 208 electrode combinations per data frame using an adjacent drive protocol. Sections of data between 3 and 60 breaths duration of steady tidal breathing or equivalent were selected, i.e. the longest continuous section available for each subject. This variability is a consequence of the highly irregular breathing pattern in neonates, with sighs and apnoeic phases.
Lag correction
The EIT boundary voltage measurements were processed as follows, using software written in Matlab [www.mathworks.com]: for each image frame (208 voltage measurements), a Fast Fourier Transform (FFT) was applied; correction for lag was then made by applying a frequency-dependent phase adjustment, taking the relative delay on the nth data point to be (n−1)/208th of the frame period, before applying an inverse FFT [6]. A sketch of this procedure is given below.

Fig. 2 The blue waveform depicts the changes in lung volume occurring during ventilation. The shaded rectangle represents the duration of one frame of data collection. During image reconstruction, all this data is traditionally assumed to relate to the start of that data collection period (t0), but most is measured later, after the physiological signal has changed. During the frame, the current application rotates around the chest, shown here schematically as three blue circles at three different time points, and the voltages measured are most sensitive to the properties of the nearby tissue (depicted as red ovals). Hence, after image reconstruction, it appears that the lung volume waveform over the right anterior region leads that of the left anterior region.
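The following is our own Python sketch of the lag correction described above (the paper's implementation is in Matlab); channel n of each frame is treated as measured (n−1)/208 of a frame period late and is advanced accordingly.

```python
import numpy as np

def lag_correct(data, frame_rate=13.0, n_meas=208):
    """data: array (n_frames, n_meas) of boundary voltages, one column per
    electrode combination. Returns the data re-aligned to each frame start."""
    n_frames = data.shape[0]
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
    out = np.empty_like(data, dtype=float)
    for n in range(n_meas):
        delay = (n / n_meas) / frame_rate           # seconds late for channel n (0-based)
        F = np.fft.rfft(data[:, n])
        # multiply by exp(-i 2 pi f delay) to advance the channel by its delay
        out[:, n] = np.fft.irfft(F * np.exp(-2j * np.pi * freqs * delay), n=n_frames)
    return out
```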
The percentage of electrode combinations with a median absolute reciprocity error greater than 5 % was calculated using the lag-corrected data. Additionally, the absolute percentage difference between the original and corrected data was calculated as the mean (±standard deviation) over all electrode combinations.
Image reconstruction
For each original and corrected data set separately, time points at peak inspiration and expiration were determined. The data from all electrode combinations were averaged and low-pass filtered to remove heart-beat related effects, and the stationary points were identified. These were considered 'end-inspiration' if the impedance at that time was greater than the mean, and the immediately preceding minimum was deemed 'end-expiration'. The points were reviewed for accuracy of breath detection, and regions of tidal breathing manually identified. Images were reconstructed using the GREIT algorithm [17] with the included neonate or pig mesh, as appropriate, using the mean of all selected breaths as the reference. The resultant images were rasterised onto a rectangular grid. The 'breathing periodicity' was calculated as the mean number of data frames between sequential end-expiration times, that is, the data collection rate normalized to the breathing rate.
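The breath-detection step can be sketched as follows (our own implementation, not the study's Matlab code); the filter design values and the synthetic breathing/cardiac frequencies in the demonstration are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, argrelextrema

def detect_breaths(data, frame_rate=13.0, cutoff_hz=1.5):
    """data: array (n_frames, n_meas). Averages channels, low-pass filters to
    suppress cardiac components, then returns the filtered global signal and
    the indices of candidate end-inspiration (above-mean maxima) and minima."""
    g = data.mean(axis=1)
    b, a = butter(2, cutoff_hz / (frame_rate / 2))
    gf = filtfilt(b, a, g)
    maxima = argrelextrema(gf, np.greater)[0]
    minima = argrelextrema(gf, np.less)[0]
    end_insp = maxima[gf[maxima] > gf.mean()]
    return gf, end_insp, minima

t = np.arange(0, 60, 1 / 13.0)
demo = (np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.sin(2 * np.pi * 2.2 * t))[:, None]
gf, ei, ee = detect_breaths(demo)
print("breaths detected:", len(ei))   # roughly 48 for 0.8 Hz over 60 s
```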
Image parameters
Posterior and anterior regions of each lung were defined: left anterior (LA), right anterior (RA), left posterior (LP) and right posterior (RP) (Fig. 3). For each data frame, for images reconstructed from the original and corrected data separately, the sum of the pixel values (relative impedance, Z) of each lung quadrant was determined. The following parameters were then calculated on a breath-by-breath basis for each of the 4 lung quadrants (see the fitting sketch after this list): (1) Filling index (FI): the inspiratory phase of each breath was fitted to a model of the form Z = a·Z_g^FI + c (Eq. 1). The vector Z is the relative impedance of the relevant lung quadrant from each image during inspiration and is assumed to be a function of the vector Z_g, the average of the 4 quadrants; FI, a and c are constants derived during the fitting.
(2) ΔZ: change in relative impedance between end-expiration and end-inspiration (arbitrary units). (3) FF: the quadrant ΔZ expressed as a fraction of the total, ΔZ/ΔZ_T, where ΔZ_T is the summed ΔZ of the 4 quadrants. Paired t-tests were conducted between the original and corrected results for each ROI, for each subject, with null hypotheses of the form: the original and corrected values of the parameter do not differ.
Results
Of the 19 spontaneously breathing infants analysed (Online resource 1: Table 1), 12 showed significant changes in FI within at least one quadrant, 6 had significant changes in ΔZ and 7 significant changes in FF. The mean breathing periodicity was 18.5 ± 6.0 frames per breath; 49 ± 11 % of inspiratory breathing flags were shifted 1 data collection frame later by data correction, and 55 ± 13 % of expiratory flags. In all cases lag correction significantly reduced the frame-by-frame reciprocity error. Four infants (numbers 15-18), who were also mechanically ventilated (Online resource 1: Table 2), all showed a very highly significant increase in FI within the left anterior region, and 3 showed very significant decreases in FI within the right anterior and right posterior regions. Decreases were also seen in the 4th data set, but these were not significant. One data set also showed significant decreases in ΔZ and FF within the left anterior region. The mean breathing periodicity was 32.9 ± 5.2 frames per breath; 51 ± 8 % of inspiratory breathing flags were shifted 1 data collection frame later by data correction, and 58 ± 15 % of expiratory flags. In all cases lag correction significantly reduced the frame-by-frame reciprocity error. Ten other data sets were analysed, and similar results were found (Online resource 1: Table 3). Figure 4 illustrates the results for one of the mechanically ventilated neonates, number 16 in Online resource 1: Table 2. It shows: (a) the mean regional impedance as a function of time for the original and corrected data (inspiratory section only), and boxplots of each parameter in turn: (b) FI, (c) ΔZ and (d) FF values. For each region, the FI of the corrected data shows a significant difference from that for the original data and is also nearer to 1, indicating less spatial inhomogeneity in the corrected data. Both right lung regions have a less negative FI, whilst both left lung regions show a less positive FI. Reconstructed images for the first 6 frames indicate earlier impedance changes in the right lung when using the original data than with the corrected data (Fig. 5). A colour version of this figure is included as supplementary material online (Online resource 2). For this data set, ΔZ and FF are significantly different in the left anterior region only. The other data sets show the same trends, with significant changes in FI in at least one region for 23 of the 33 data sets. Eleven showed significant changes in ΔZ and 12 had significant changes in FF.
Discussion
The results presented in this paper confirm that systems that measure data serially can significantly alter the interpretation of reconstructed EIT images if the data collection rate per image is not sufficient to capture the change in physiology. In all the data analysed in this paper, end-expiration and end-inspiration were found to be shifted for at least 33 %, and typically more than half, of the breaths. The mechanism associated with the shift, and its variability, can be explained as follows: consider two sine waves of identical frequency and amplitude that represent the volumetric changes in the left and right lungs respectively, but with a phase difference equivalent to one wave being recorded with a delay of 0.5 of the frame duration (Fig. 6). After correction, the right lung will have the same phase as the left (thin line with stars). However, before correction, end-inspiration (maximum amplitude of the sine wave) is calculated from the global average (i.e. the average of the two signals, thick line) and occurs earlier than it does in the corrected data. To complicate things further, breathing times are rounded to the nearest frame, generating an additional timing error of up to ±0.5 of a frame in both cases. Thus, the original and corrected end-inspiration times will sometimes synchronise, and sometimes be one frame apart, depending on the instantaneous phase relationship between the physiological signal and data acquisition.
This has implications for other parameters including ΔZ. All measurements will be subject to a variable error due to the measurements not being synchronised with end-inspiration and end-expiration. For example, if there are 15.7 frames per breath (Fig. 6), errors of up to ~2 % could be present, with the errors always being a reduction. The slower the data acquisition relative to the breathing rate, the higher this error becomes (e.g. at 4 frames per breath, the worst case occurs when the data points are at π/4, 2π/4 etc., which gives a 15 % error ((1 − sin(π/4))/2), rising to 100 % at the Nyquist limit).
In addition to this we have the serial collection effect. At the time when the mean global impedance change is maximal (frame 4.4, Fig. 6), data relating to the right lung are from the early expiration phase, whereas data from the left lung are from late inspiration, so both ΔZs will be less than the true value. Theoretically, the true ΔZ will be obtained after correction. However, if the peaks do not exactly coincide with the start of a data acquisition frame, the rounding effect will remain. So on average, ΔZ would increase following correction. However, the subtle interactions between serial collection effects and peak/data acquisition synchrony, combined with the non-sinusoidal shape of the breathing cycle, will cause these errors to partially counteract each other, and ΔZ may actually be lower for a particular quadrant. This is present in the results: in more than 60 % of cases the corrected ΔZ was greater than the original ΔZ (resulting in a negative difference), and the average difference was negative for all quadrants. For one of the spontaneously breathing neonates (age: 43 days, body weight: 3185 g) the median ΔZ changed from 79 to 88 (arbitrary units) and the mean change was -6.3 (original − corrected); this is a change of approximately 8 %.
FI is the derived parameter most susceptible to errors due to serial collection timing. This is unsurprising, as it is explicitly dependent on the timing of the most rapidly changing phase of the breathing cycle. Changes of 0.1-0.2 have been demonstrated in most of the data sets. These are of similar magnitude and direction to the differences between left and right lung regions reported in rats [9] and in adult humans [18]. Thus serial collection errors could either mask, or be misinterpreted as, physiologically significant effects. To prevent this, serial error correction must be applied before clinical diagnoses can be made.
Significant changes in FF were seen in 40 % of the data sets. Positive and negative changes were seen, and there is no significant relationship between polarity and region. As discussed earlier, there are two interacting errors in the original data: one related to serial collection and one to rounding to the nearest frame. After correction, only the rounding error remains and should largely cancel out; e.g. if the time of end-inspiration is rounded down, then all the data will be slightly less than the peak by the same fraction of peak intensity, unless the time course of the impedance change in each region is significantly different, i.e. FI ≠ 1. However, if before correction the end-inspiration time was rounded down, the left-hand lung regions would have the largest errors, whereas if rounded up the right-hand regions would be the most inaccurate; thus the direction of the original rounding determines the polarity of the FF changes. The same holds true for end-expiration, and these errors are additive.

Fig. 5 Reconstructed images for the first 6 frames of the breath in Fig. 4, corresponding to the last inspiration within the analysed time period, showing the apparently quicker filling of the right lung being reduced by the correction (white denotes increased impedance relative to baseline, arbitrary units, and is indicative of air entering the lung). For both sets of images, the extent and intensity of the increase is greater over the right lung region than the left from 0.23 s. However, the difference is less marked when the corrected data is reconstructed
The data were inspected to look for predictors of significant changes in the parameters (FI, ΔZ and FF). Whilst no clear trends were found, the highest contributing factors are likely to be the breathing periodicity and the number of breaths averaged; there are hints of this in the results, but a much larger sample size, with tighter inclusion criteria, would be needed to confirm this.
Although the reconstruction algorithms may vary between different serial data collection systems, these results and the underpinning theory are universally applicable. Mathematical modelling shows that this serial error can only be safely ignored for data collection rates of more than 50 frames per breath (or 50 frames per cardiac cycle, if that signal is large or of interest) [6].
The correction tool used for this paper is tailored for use with *.get files (Goe-MF II EIT system, CareFusion, Hoechberg, Germany) obtained with the standard 16-electrode, 208-measurements-per-frame protocol with any frame rate and number of frames. However, it could be modified for use with any serially collected EIT data. The software should also be used, where possible, to check the quality of data with respect to reciprocity before starting the main data collection runs. Further work is needed to establish the limit at which reciprocity errors significantly affect reconstructed images. In the meantime a conservative threshold for reciprocity error warnings has been set, which may be unnecessarily stringent.
Extremely Fast pRF Mapping for Real-Time Applications
Population receptive field (pRF) mapping is a popular tool in computational neuroimaging that allows for the investigation of receptive field properties, their topography and their interrelations in health and disease. Furthermore, the possibility to invert population receptive fields provides a decoding model for reconstructing stimuli from observed cortical activation patterns. This has been suggested to pave the road towards pRF-based brain-computer interface (BCI) communication systems, which would be able to directly decode internally visualized letters from topographically organized brain activity. A major stumbling block for such an application is, however, that the pRF mapping procedure is computationally heavy and time consuming. To address this, we propose a novel and fast pRF mapping procedure that is suitable for real-time applications. The method is built upon hashed-Gaussian encoding of the stimulus, which significantly reduces the required computational resources. After the stimulus is encoded, mapping can be performed using either ridge regression for fast offline analyses or gradient descent for real-time applications. We validate our model-agnostic approach in silico, as well as on empirical fMRI data obtained from 3T and 7T MRI scanners. Our approach is capable of estimating receptive fields and their parameters for millions of voxels in mere seconds. This method thus facilitates real-time applications of population receptive field mapping.
Introduction
The retinotopic organization of the human visual cortex has intrigued neuroscientists ever since the beginning of the twentieth century, when visual field maps were first discovered in soldiers suffering from occipital wounds (Fishman, 1997). With the advent of functional magnetic resonance imaging (fMRI) in the early 1990s (Rosen and Savoy, 2012), it became possible to map retinotopy non-invasively (Sereno et al., 1995; DeYoe et al., 1996; Engel et al., 1997). Sereno et al. (1995) pioneered a phase-encoding procedure that allowed for the systematic investigation of polar angle and eccentricity distributions. More recently, Dumoulin and Wandell (2008) spearheaded the population receptive field (pRF) mapping approach, which provided an expandable, parametric model of receptive fields. This allowed researchers to study additional properties of receptive fields and their topography as well as relationships between receptive field properties.
The pRF approach has, for instance, enabled researchers to understand the relationship between eccentricity and the size of the receptive fields along the visual hierarchy (Dumoulin and Wandell, 2008;Amano et al., 2009;Harvey and Dumoulin, 2011;Silva et al., 2018), to investigate neural plasticity and visual development from childhood to adulthood (Dekker et al., 2019;Gomez et al., 2018) and to study the dynamic changes of receptive fields in response to attention (de Haas et al., 2014). Furthermore, pRF modelling has aided researchers' investigations of pathology such as Alzheimer's disease (Brewer and Barton, 2014), schizophrenia (Anderson et al., 2017), albinism (Ahmadi et al., 2019) and even blindness (Georgy et al., 2019). Additionally, the ability to estimate receptive field parameters is crucial for a number of applications. For instance, receptive fields can serve as a target for transcranial magnetic stimulation (Sack et al., 2009) or provide a spatial forward model for computational models (Peters et al., 2012). Furthermore, receptive fields can be inverted to provide a decoding model for reconstructing perceived, as well as imagined, visual stimuli (Thirion et al., 2006;Senden et al., 2019).
The latter has been suggested to pave the road towards pRF-based braincomputer interface (BCI) communication systems able to directly decode internally visualized letters from topographically organized brain activity (Senden et al., 2019). This is hindered, however, by the method's immense consumption of computational time and resources. This issue largely remains unaddressed, although some recent work (Thielen et al., 2019) has proposed a fast deep-learning based mapping algorithm (DeepRF). The DeepRF method deploys a deep convolutional neural network (ResNet) which receives a time-series response as input and predicts the corresponding pRF parameters. Once the network is trained, pRF parameters can be estimated simply using a rapid forward pass. This method is indeed faster than standard methods such as grid-search and achieves faithful estimation of pRF parameters with an average computational time of 0.01 to 0.03 seconds per voxel. However, the procedure requires the generated simulated data (for training) and the empirical data to have the same experimental design. Hence, for empirical data with a new experimental design, the network needs to be trained again and the training of the deep neural network can take up to several hours. Moreover, the fMRI data typically contains a large number of voxels. Therefore, despite achieving low computational time per voxel, the total computational time for all voxels is on the order of several minutes. This makes the approach unfeasible for real-time analysis. With the aim to enable estimation of receptive fields in real-time, we propose here a novel model-agnostic procedure which can be used offline (using ridge regression) as well as online (using gradient descent).
The method relies on regularized linear regression whose basis set is a hashed-Gaussian encoding of the stimulus-evoked response. Specifically, the stimulus space is exhaustively partitioned as a set of features where each feature uniquely encodes the stimulus by computing the overlap between the stimulus and a set of randomly positioned Gaussians. This type of encoding considerably reduces the memory requirements with a low performance loss and thereby accelerates the calculations.
Using two previously acquired datasets from 3 Tesla and 7 Tesla MR systems, we show that the proposed approach works extremely fast. It is able to estimate receptive field shapes of millions of voxels within seconds. This allows the selection of visually responsive voxels through cross-validation and subsequent estimation of receptive field parameters within about one minute even if the data consists of more than 4 million voxels.
Tile Coding and Hashing
To reduce computation time as well as to lower memory requirements, we encode the stimulus using tile coding and hashing (Albus, 1975, 1981). Tile coding is a linear function approximation used in reinforcement learning (Sutton et al., 1998) to deal with large and continuous state spaces. In tile coding, the state space is exhaustively partitioned into subregions called tiles. Usually, the presence of an entity within a tile (in this case, the presence of a stimulus in a region of the visual field) is encoded in a binary fashion. However, it is also possible to encode features using radial basis functions, which have the additional benefit of varying smoothly. Memory requirements can be reduced further by hashing a group of individual, non-contiguous tiles into a single tile. Figure 1 depicts tile coding and hashing of sample stimuli. The presence of a stimulus is encoded as the extent of overlap between the stimulus and hashed tilings. For our purposes, we use a 2-D isotropic Gaussian as the radial basis function. Subsequently, we hash by combining five randomly selected Gaussians into a single tile, leading to a total of 250 tilings. The 5 Gaussian tiles within a tiling may or may not overlap. We normalized each tile to ensure that the area under its surface is equal to one. The code used in this paper is publicly available at https://github.com/ccnmaastricht/real_time_pRF
Encoding Stimuli
Using hashed Gaussians as tiles, it is possible to encode retinotopic stimuli. First, the overlap between a binary indicator function and a tiling matrix Γ (pixels-by-tiles) is computed. The binary indicator function S (time-by-pixels) marks the position of the stimulus aperture at each moment in time. Subsequently, the computed overlap is convolved with a canonical two-gamma hemodynamic response function (HRF) h to obtain the encoded stimulus φ:

φ = (SΓ) * h (1)
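A minimal sketch of both steps, assuming a square stimulus grid and a given HRF vector, could look as follows (the FWHM-to-σ conversion and random placement scheme are illustrative choices, while the 250 tilings of 5 Gaussians each follow the description above):

```python
import numpy as np

def hashed_gaussian_tiling(n_pix, n_tilings=250, gauss_per_tile=5,
                           fwhm=0.15, seed=0):
    """Build a tiling matrix Gamma (pixels-by-tiles): each tile is the sum of
    several randomly positioned 2-D isotropic Gaussians, normalized to unit area."""
    rng = np.random.default_rng(seed)
    sigma = fwhm * n_pix / 2.355            # FWHM given relative to stimulus resolution
    y, x = np.mgrid[0:n_pix, 0:n_pix]
    gamma = np.zeros((n_pix * n_pix, n_tilings))
    for t in range(n_tilings):
        tile = np.zeros((n_pix, n_pix))
        for cx, cy in rng.uniform(0, n_pix, size=(gauss_per_tile, 2)):
            tile += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        gamma[:, t] = (tile / tile.sum()).ravel()   # unit area under each tile
    return gamma

def encode_stimulus(S, gamma, hrf):
    """phi = (S Gamma) * h: overlap of the binary aperture with each tile,
    convolved with a canonical HRF along the time axis."""
    overlap = S @ gamma                             # (time, tiles)
    return np.apply_along_axis(lambda v: np.convolve(v, hrf)[:len(v)], 0, overlap)
```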
Ridge Regression
We use ridge regression for fast offline pRF mapping (i.e. after all functional volumes have been acquired). Specifically, the BOLD activity response is modeled by

B = φθ + ε,

where θ are the estimated weights and ε denotes the residuals. Note that, prior to computing θ, both φ and the BOLD data B are z-normalized. In order to estimate θ, the discrepancy between the measured and predicted BOLD response (φθ) needs to be minimized. Therefore, we define the error or loss as

E = ||B − φθ||² + λ||θ||²,

where, in order to avoid over-fitting, we use L2 regularization and λ denotes the regularization factor. The gradient of the error with respect to θ is

∂E/∂θ = −2φᵀ(B − φθ) + 2λθ.

By setting ∂E/∂θ → 0 and solving for the optimal θ, we get

θ = (φᵀφ + λI)⁻¹ φᵀB.

Receptive fields can now be straightforwardly obtained by multiplying the tiling matrix with the estimated θ: W = Γθ. These raw receptive fields are then subjected to post-processing, as they contain anomalous pixel intensities. These can be removed by first normalizing the raw receptive fields to the range [0, 1] and then shrinking them by raising them to the power of some positive integer (the shrinkage factor). This shrinks noisy pixel intensities close to 0 while leaving those close to 1 unaffected (figure 2), yielding cleaner receptive fields. Figure 2: The effect of shrinking a raw receptive field. a, Raw receptive field displaying undesirably large pixel intensities. b, The receptive field after shrinkage with a factor of 9. c, The corresponding ground truth receptive field.
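A compact sketch of this closed-form solution and the shrinkage post-processing, with illustrative variable names, is:

```python
import numpy as np

def ridge_prf(phi, B, gamma, lam=1.0, shrink=6):
    """Offline mapping step (sketch).

    phi   : (time, tiles) encoded stimulus, z-normalized
    B     : (time, voxels) BOLD data, z-normalized
    gamma : (pixels, tiles) tiling matrix
    Returns post-processed receptive fields W of shape (pixels, voxels)."""
    n_tiles = phi.shape[1]
    # theta = (phi^T phi + lam I)^-1 phi^T B, solved without explicit inverse
    theta = np.linalg.solve(phi.T @ phi + lam * np.eye(n_tiles), phi.T @ B)
    W = gamma @ theta                       # raw receptive fields
    W = W - W.min(axis=0)                   # normalize each field to [0, 1]
    W = W / W.max(axis=0)
    return W ** shrink                      # shrink noisy intensities toward 0
```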
Similarity Metric
In order to compare the receptive fields obtained from ridge regression with corresponding ground-truth/grid-search receptive fields, we use the Jaccard Index (or Jaccard Similarity). Since the Jaccard Index (JI) is a conservative metric, we derive a Null-model from a resampling procedure for a better interpretation. Specifically, for each estimated receptive field, we pair it with a random ground-truth/grid-search receptive field and compute the JI. The average over these pairs is the JI of one randomization. We repeat this procedure 1000 times to obtain a Null-distribution of randomized JIs. We refer to the mean of Null-distribution as the baseline.
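A minimal sketch of the metric and its resampling baseline follows; the binarization threshold is an illustrative assumption, since the procedure for binarizing continuous receptive fields is not spelled out above:

```python
import numpy as np

def jaccard(a, b, thresh=0.5):
    """Jaccard index between two receptive fields, binarized at `thresh`."""
    A, B = a > thresh, b > thresh
    return np.logical_and(A, B).sum() / max(np.logical_or(A, B).sum(), 1)

def null_baseline(est, truth, n_rand=1000, seed=0):
    """Mean JI over randomly paired estimated/reference fields,
    averaged across `n_rand` randomizations (the Null-distribution mean)."""
    rng = np.random.default_rng(seed)
    n = len(est)
    means = [np.mean([jaccard(est[i], truth[j])
                      for i, j in enumerate(rng.permutation(n))])
             for _ in range(n_rand)]
    return np.mean(means)
```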
Online Gradient Descent
For online pRF mapping we use gradient descent to iteratively update θ with each acquired volume. In this case, we define the loss at the nth time point as

E_n = ½ (b_n − φ_n θ)²,

where b_n is the measured BOLD sample and φ_n the corresponding row of the encoded stimulus. The gradient of the loss function with respect to the parameters θ is

∂E_n/∂θ = −φ_nᵀ (b_n − φ_n θ).

At each time point, θ is updated by a factor (the learning rate η) of the gradient. Note that, unlike ridge regression, a regularization term is not needed in this case, as gradient descent is effectively regularized by the learning rate (see Appendix A). Considering the nth time point, the update can be computed as

θ_{n+1} = θ_n + η φ_nᵀ (b_n − φ_n θ_n).

Similar to the offline method, prior to tile coding and hashing, the stimulus needs to be convolved with the HRF. Furthermore, both the BOLD signal B and the encoded stimulus φ need to be z-normalized. However, in an online setting this needs to be performed in real-time. Real-time z-normalization requires real-time estimation of the mean and variance of a signal, which can be done using Welford's online algorithm (Welford, 1962). Once the current mean x̄(t) and variance σ²(t) have been estimated, the current z-score can be estimated as z(t) = (x(t) − x̄(t)) / σ(t).
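A sketch of the per-volume update together with Welford-based z-normalization could look as follows (class structure and names are illustrative; the constant factor of the gradient is absorbed into η, and in practice one normalizer per signal would be kept):

```python
import numpy as np

class OnlinePRF:
    """Per-volume pRF weight update with running z-normalization (sketch)."""

    def __init__(self, n_tiles, n_vox, eta=0.1):
        self.theta = np.zeros((n_tiles, n_vox))
        self.eta = eta
        self.n = 0
        self.mean = 0.0   # Welford accumulators; broadcast over array inputs
        self.m2 = 0.0

    def z(self, x):
        """Welford's online z-normalization of an incoming sample."""
        self.n += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.n
        self.m2 = self.m2 + delta * (x - self.mean)
        sd = np.sqrt(self.m2 / max(self.n - 1, 1))
        return (x - self.mean) / sd if np.all(sd > 0) else x * 0

    def update(self, phi_n, b_n):
        """theta <- theta + eta * phi_n^T (b_n - phi_n theta)."""
        err = b_n - phi_n @ self.theta           # (voxels,) prediction error
        self.theta += self.eta * np.outer(phi_n, err)
        return self.theta
```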
Voxel Selection
Since not all measured voxels are visual, and hence may not carry significant information, a voxel selection procedure is desirable. We evaluate voxels in terms of the cross-validated Pearson correlation coefficient (fitness) between their predicted and measured BOLD responses. To account for temporal autocorrelation in the BOLD response, we use a blocked cross-validation procedure (Roberts et al., 2017). Specifically, the data is split into p windows along the time axis. Ridge regression is performed on window 1 and the estimated θ values are used to predict the BOLD response for the remaining p − 1 windows. This is followed by ridge regression on windows 1 and 2 and predicting the BOLD response in the remaining p − 2 windows. This procedure continues until ridge regression is performed on windows 1 to p − 1 and the BOLD response is predicted for the p th (last) window. The overall fitness for each voxel is then given by the mean of fitness values computed for each split. The data used in this paper has 304 time points. We split the data into 4 windows of equal length and retain voxels whose fitness falls within the top 1 %.
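An illustrative sketch of this expanding-window scheme, with the top-1 % selection, is given below (the correlation is computed over all held-out windows of a split jointly, which approximates the per-split averaging described above):

```python
import numpy as np

def blocked_cv_select(phi, B, n_windows=4, lam=1.0, top_frac=0.01):
    """Expanding-window cross-validation for voxel selection (sketch).
    Fit on windows 1..k, predict on the remaining windows, average fitness."""
    T, n_tiles = phi.shape
    edges = np.linspace(0, T, n_windows + 1, dtype=int)
    scores = []
    for k in range(1, n_windows):
        tr, te = slice(0, edges[k]), slice(edges[k], T)
        theta = np.linalg.solve(phi[tr].T @ phi[tr] + lam * np.eye(n_tiles),
                                phi[tr].T @ B[tr])
        pred = phi[te] @ theta
        # Pearson correlation per voxel between prediction and measurement
        p = (pred - pred.mean(0)) / pred.std(0)
        b = (B[te] - B[te].mean(0)) / B[te].std(0)
        scores.append((p * b).mean(0))
    fitness = np.mean(scores, axis=0)
    n_keep = max(1, int(top_frac * B.shape[1]))
    return np.argsort(fitness)[-n_keep:]     # indices of the top 1 % voxels
```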
Fast pRF Parameter Estimation
Post-processed receptive fields obtained from our ridge regression and gradient descent methods can be readily used to estimate the parameters of an isotropic Gaussian pRF model (i.e. the x-location, y-location and size) using a fast procedure. Since the peak pixel intensity of a Gaussian receptive field is at its center, we estimate the x- and y-coordinates of post-processed model-free receptive fields by finding the location of their peak pixel intensity. To estimate the size of receptive fields, our procedure utilizes the relationship between the standard deviation, eccentricity and mean pixel intensity of an isotropic Gaussian embedded in a finite image. Specifically, given a Gaussian at a fixed location, mean pixel intensity increases as a function of its standard deviation. Furthermore, in a finite image and assuming a fixed size, mean pixel intensity decreases as the Gaussian is progressively moved toward the edge of the image. Therefore, for a given image size, we generate isotropic Gaussians with 25 different standard deviations, located at 25 eccentricities along an axis of 45°, and compute their mean pixel intensities. This allows us to perform a linear regression with mean pixel intensity and eccentricity predicting receptive field size. We then use the resulting regression weights together with the previously estimated locations and mean pixel intensities of our receptive fields to obtain an estimate of their size.
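The procedure can be sketched as follows, assuming stimulus units in degrees of visual angle; the 25 × 25 grid of sizes and eccentricities follows the description above, while the remaining choices are illustrative:

```python
import numpy as np

def fit_size_regression(n_pix, fov):
    """Regress sigma on (mean intensity, eccentricity) using synthetic
    isotropic Gaussians placed along the 45-degree axis (sketch)."""
    y, x = np.mgrid[0:n_pix, 0:n_pix]
    deg_per_pix = fov / n_pix
    rows = []
    for sigma in np.linspace(0.5, fov / 2, 25):
        for ecc in np.linspace(0, fov / 2, 25):
            c = n_pix / 2 + ecc / deg_per_pix / np.sqrt(2)   # along 45 degrees
            g = np.exp(-((x - c) ** 2 + (y - c) ** 2) /
                       (2 * (sigma / deg_per_pix) ** 2))
            rows.append([g.mean(), ecc, 1.0, sigma])
    A = np.array(rows)
    w, *_ = np.linalg.lstsq(A[:, :3], A[:, 3], rcond=None)   # linear regression
    return w                        # weights for [mean intensity, ecc, bias]

def estimate_params(W, n_pix, fov, w):
    """Peak location gives (x, y); the regression weights give sigma."""
    deg_per_pix = fov / n_pix
    py, px = np.unravel_index(W.argmax(), (n_pix, n_pix))
    x0 = (px - n_pix / 2) * deg_per_pix
    y0 = (py - n_pix / 2) * deg_per_pix
    sigma = w @ [W.mean(), np.hypot(x0, y0), 1.0]
    return x0, y0, sigma
```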
Simulated Data
We simulate fMRI data for a V1-like cortical sheet extending 55 mm along and approximately 40 mm orthogonal to the horizontal meridian in both hemispheres. Since such a sheet is akin to a flattened cortical mesh, model units are referred to as vertices rather than voxels. Each vertex in the model is a 0.5 mm isotropic patch whose receptive field center is directly related to its position on the surface in accordance with a complex-logarithmic topographic mapping (Schwartz, 1980;Balasubramanian et al., 2002) with parameter values (a = 0.7, α = 0.9; Polimeni et al., 2005). The shape of model receptive fields is given by a 2-dimensional Gaussian with (µ x , µ y ) being the receptive field center and σ its size. Below an eccentricity of e = 2.38 all model vertices have a receptive field size of σ = 0.5 whereas they exhibit a linear relationship with eccentricity (σ = 0.21e) beyond this cutoff (c.f. Freeman and Simoncelli, 2011).
A simulated fMRI signal (sampled at a rate of 0.5 Hz) for each vertex is obtained by first performing element-wise multiplication between the receptive field of a vertex and the effective stimulus presented per time point, summing the result and subsequently convolving the obtained signal with the canonical two-gamma hemodynamic response function. Two sources of distortion are added to the signal. First, a spatial smoothing kernel is applied to simulate the point-spread function of BOLD activity on the surface of the striate cortex (Shmuel et al., 2007). Second, autocorrelated noise generated by an Ornstein-Uhlenbeck process with variance σ²_noise = 0.5 is added. The smoothing kernel is independently applied to the clean signal and the noise before the two are combined. We simulate both 3T- and 7T-like signals by adjusting the full-width at half-maximum of the spatial smoothing kernel (3.5 mm and 2 mm for 3T and 7T, respectively; c.f. Shmuel et al., 2007) and the time constant of the Ornstein-Uhlenbeck process (2.25 s and 1 s for 3T and 7T, respectively).
Three Tesla Empirical Data
This dataset, previously described in Senden et al. (2014), comprises a retinotopy run obtained from three participants (all male, age range = 27-35 years, mean age = 32 years). During this run a bar aperture (1.5° wide) revealing a flickering checkerboard pattern (10 Hz) was presented in four orientations. For each orientation, the bar covered the entire screen in 12 discrete steps (each step lasting 2 s). Within each orientation, the sequence of steps (and hence of the locations) was randomized, and each orientation was presented six times. Furthermore, within each presentation four bar stimuli were replaced with mean luminance images for four consecutive steps. These data were acquired on a Siemens 3T Tim Trio scanner equipped with a 32-channel head coil (Siemens, Erlangen, Germany) using a gradient-echo echo-planar imaging sequence (31 transversal slices; TR = 2000 ms; TE = 30 ms; FA = 77°; FoV = 216 × 216 mm²; 2 mm isotropic resolution; no slice gap; GRAPPA = 2) and are publicly available (Senden et al., 2014). Preprocessing consisted of slice scan time correction, (rigid body) motion correction, linear trend removal, and temporal high-pass filtering (up to 2 cycles per run).
Seven Tesla Empirical Data
This dataset, previously described in Senden et al. (2019), comprises retinotopy as well as passive viewing of letter stimuli obtained from six participants (2 female, age range = 21-49 years, mean age = 30.7 years). During the retinotopy run a bar aperture (1.33° wide) revealing a flickering checkerboard pattern (10 Hz) was presented in four orientations. For each orientation, the bar covered the entire screen in 12 discrete steps (each step lasting 3 s). Within each orientation, the sequence of steps (and hence of the locations) was randomized, and each orientation was presented six times. During the passive viewing run four letters ('H', 'T', 'S' and 'C') were presented in an 8° by 8° bounding frame for a duration of 6 s, and their shape was filled with a flickering checkerboard pattern (10 Hz). These data were acquired on a Siemens Magnetom 7T scanner (Siemens; Erlangen, Germany) equipped with a 32-channel head coil (Nova Medical Inc.; Wilmington, MA, USA) using a high-resolution gradient-echo echo-planar imaging sequence (82 transversal slices; TR = 3000 ms; TE = 26 ms; generalized auto-calibrating partially parallel acquisitions (GRAPPA) factor = 3; multi-band factor = 2; FA = 55°; FoV = 186 × 186 mm²; 0.8 mm isotropic resolution). In addition, this dataset includes five functional volumes acquired with opposed phase encoding directions to correct for EPI distortions that occur at higher field strengths (Andersson et al., 2003). Preprocessing further consisted of (rigid body) motion correction, linear trend removal, and temporal high-pass filtering (up to 3 cycles per run).
For visualization purposes, we also include anatomical data for subject 3. Anatomical data were acquired with a T1-weighted magnetization prepared rapid acquisition gradient echo (Marques et al., 2010) sequence [240 sagittal slices, matrix = 320 × 320, voxel size = 0.7 mm isotropic, first inversion time TI1 = 900 ms, second inversion time TI2 = 2750 ms, echo time (TE) = 2.46 ms, repetition time (TR) = 5000 ms, first nominal flip angle = 5°, and second nominal flip angle = 3°]. Anatomical images were interpolated to a nominal resolution of 0.8 mm isotropic to match the resolution of the functional images. In the anatomical images, the grey/white matter boundary was detected and segmented using the advanced automatic segmentation tools of BrainVoyager 20, which are optimized for high-field MRI data. A region-growing approach analyzed local intensity histograms, corrected topological errors of the segmented grey/white matter border, and finally reconstructed meshes of the cortical surfaces (Kriegeskorte and Goebel, 2001; Goebel et al., 2006).
Real-time Processing
To mimic a real-time scenario, we limited the preprocessing to trilinear 3D rigid-body motion correction, which was applied in a simulated real-time setup using Turbo-BrainVoyager (TBV) (v4.0b1, Brain Innovation B.V., Maastricht, The Netherlands). The data was accessed directly from TBV using a network interface providing transfer speeds suitable for real-time applications. The receiver was implemented in MATLAB (version 2019a, The MathWorks, Inc., Natick, MA, USA) using Java-based TCP/IP interfaces.
Results
All experiments were performed using MATLAB (version 2019a, The MathWorks, Inc., Natick, MA, USA) running on an HP Z440 workstation with an Intel Xeon processor (E5-1650 v4, 32 GB RAM) and an Ubuntu 20.04 operating system. The set of hyperparameters (learning rate η = 0.1, shrinkage factor = 6 and FWHM = 0.15) remained the same for all experiments, except for the reconstruction of perceived letter shapes, where a shrinkage factor of 9 was used. All the figures generated using MATLAB (including parts of Figures 1 and 2) were exported using export_fig (Altman, 2020).
Simulated Data
The fast, ridge-based mapping procedure was first tested on simulated data to investigate whether it faithfully recovers known population receptive field shapes and their parameters. Overall, the mean Jaccard Similarity (JS) between the estimated and ground-truth receptive field shapes was 0.3452 (95 % CI [0.3409, 0.3495]) and 0.3920 (95 % CI [0.3877, 0.3963]) for simulated 3T and 7T data, respectively. For comparison, the corresponding Null-model JS values were 0.0418 and 0.0410, respectively. There is thus good correspondence between estimated and ground-truth receptive field shapes, which is also apparent from the sample receptive fields shown in figure 3. Next, we examined the correspondence between receptive field parameters obtained with the two methods. While receptive fields mapped using ridge regression are not exactly Gaussian, estimated parameters nevertheless show an excellent correspondence with ground-truth parameters for both simulated 3T and 7T data (see figures 4 and C.18, respectively, as well as table 1). Please note that despite the high correlation, the receptive field size tends to be slightly overestimated by our method. The size of mapped receptive fields can be adjusted using the shrinkage factor; however, for the sake of comparison, we use a constant shrinkage factor across the datasets. Next, we evaluated the ridge-based mapping approach in terms of its computational performance. To that end we measured both memory consumption and the computational time required for the mapping procedure itself as well as for subsequent parameter estimation. Computational times were estimated using MATLAB's stopwatch utility. The execution time measured using this utility can be affected by many unknown variables pertaining to memory, processor, caching, MATLAB's just-in-time compiler, etc., which may influence the measurement each time a subroutine is executed. Therefore, we report computational times as a mean over 100 runs. Memory requirements were estimated using GNU/Linux's pmap command. The memory requirements reported here are calculated as mem_max − mem_0, where mem_max is the maximum amount of memory consumed during the procedure and mem_0 is the memory occupied by MATLAB before starting the procedure (which includes loading of data into memory and other background processes occupying memory). Memory consumption during the procedure was logged every 0.1 seconds using GNU/Linux's watch command. Note that since here we are only interested in computational performance, we test the mapping procedure on randomly generated data of size 304-by-N, where N is the number of voxels. Memory consumption was averaged over 100 repetitions of the procedure. As can be appreciated from figure 5, the ridge-based mapping procedure is extremely fast (less than 10 s).
The computational time only starts to increase as the needed memory exceeds the available memory. As a consequence, virtual memory gets consumed which slows down the mapping procedure. Memory consumption scales linearly with the number of voxels and allows for estimation of ∼ 1.75 and ∼ 3.5 million voxels on systems with 8GB and 16GB of RAM, respectively.
Empirical Data
Following up on the simulation results, we tested the ridge-based mapping procedure on previously acquired empirical data. Similar to the simulated data, we assess our method in terms of its ability to estimate pRF shapes and their parameters as well as its computational performance. Since ground-truth receptive field shapes and parameters are not known for empirical data, we assess our method on its ability to produce estimates that are consistent with a grid-search pRF mapping procedure. Sample receptive fields estimated in the 3T and the 7T empirical data are shown in figures 6 and 7, respectively. Retinotopic surface maps for a representative subject in the 7T dataset are shown in figure 8. These results qualitatively indicate good agreement between our method and the grid-search approach in terms of receptive field shapes and parameters. Quantitatively, we observe that the Jaccard similarities between receptive fields estimated using the ridge-based and grid-search methods consistently exceed those expected based on the Null model. The JS is particularly high for subjects 03, 05 and 06 in the 7T empirical dataset. In terms of correspondence between the pRF parameters obtained from the fast procedure and those obtained from grid-search, the correlation coefficients shown in Table 3 indicate that correspondence is generally good. This is also apparent from scatter plots showing the correspondence between pRF parameters obtained from our method and grid-search in representative subjects (see figures 9a and 9b for the 3T and 7T dataset, respectively). We again evaluate computational performance in terms of computational times and memory consumption. We estimate both based on 100 runs for each dataset. Since each subject has a different number of voxels, for each run a subject was chosen randomly. The computational time is computed separately for ridge regression, cross-validation and parameter estimation. The mean computational times or execution times for both datasets are reported in tables 4a and 4b.
The computational times suggest that our algorithm is extremely fast in mapping receptive fields. The actual mapping procedure happens within a second for the 3T dataset and in a few seconds for the 7T dataset. Cross-validation, which selects the best voxels, finishes in a couple of seconds for the 3T dataset and takes less than a minute for the 7T dataset. The estimation of pRF parameters (for all voxels) also takes only a few seconds for both datasets. This means that receptive fields and their pRF parameters are readily available for further analysis.

Figure 7: Comparison of ridge-estimated and ground-truth receptive field parameters for 7T data. a) Small (top) and large (bottom) estimated and ground-truth receptive fields for subject 1. b-f) Same as panel a for subjects 2 to 6, respectively.

Figure 8: Exemplary eccentricity and polar angle maps in both hemispheres of subject 3 in the 7T dataset. The upper row shows maps obtained using our fast parameter estimation procedure whereas the bottom row shows maps obtained using a grid-search procedure. In accordance with the correlation results between maps (see table 3b), the two polar angle and eccentricity maps are visually highly similar.

Figure 9: Fast procedure vs grid-search estimated pRF parameters for a) 3T (subject 1) and b) 7T (subject 3) data, respectively. A line with a slope of 1 is included as a reference.
Online Gradient Descent
To demonstrate the capability of online gradient descent to work in a real-time setting, we mimicked a real-time scenario using Turbo-BrainVoyager (as described in section 2.5). We show in Appendix A that ridge regression and online gradient descent yield similar receptive fields through hyperparameter sharing. Therefore, we do not provide an evaluation of the ability of the method to reliably estimate receptive field shapes and parameters. Instead, we evaluate its performance in terms of whether estimated receptive fields are suitable for projecting cortical activity back into the visual field. For that purpose we utilize data acquired as subjects passively viewed letter shapes, previously described in Senden et al. (2019). The reconstructions obtained from our approach (see figure 10) are recognizable and comparable to those obtained from receptive fields resulting from grid-search. The mean computational times per time volume per subject for the 3T and 7T datasets are reported in tables 5a and 5b, respectively. MATLAB uses a just-in-time compiler, which has to load a subroutine into memory and compile it the first time it is executed; this often causes the first iteration to be slower. Therefore, we exclude the execution time of the first time volume when computing the mean and standard deviation and report it separately. The average computational time per acquired volume is less than the repetition time (2000 ms for 3T and 3000 ms for 7T), which means that the receptive fields are updated before the next time volume is acquired. This is especially useful in a real-time setting where analysis needs to be done as the data are being acquired. Figure 11 depicts how memory requirements scale with computational time. The computational time only starts to increase when the needed memory exceeds the available memory. Generally, up to 1 million voxels can be comfortably estimated in less than 1500 ms while requiring less than 2 GB of RAM.
Poorly estimated receptive field size
At larger eccentricities our approach shows poor correspondence with the grid-search algorithm in terms of receptive field size. This is surprising given the good correspondence between estimated and ground-truth receptive field sizes for simulated data. One potential reason for the discrepancy between our (model-free) approach and the grid-search approach is that the latter assumes receptive fields to have a circular shape. If receptive fields are not circular, a grid-search method may estimate receptive field sizes inaccurately. Several studies have suggested that receptive fields become increasingly elongated at higher eccentricities (Greene et al., 2014; Silson et al., 2018; Lee et al., 2013), rendering this a viable explanation for the discrepancy. An alternative explanation, assuming receptive fields are generally circular, is that the model-based grid-search procedure can accurately capture sizes of receptive fields located beyond the visual field of view (the region of the visual field covered by the stimulus) whereas our model-free procedure cannot. Indeed, our model-free procedure would produce a smaller, elongated receptive field located within the field of view if a large receptive field is located outside the field of view. Below we explore both possibilities.
Anisotropic Model
We investigate the ability of our approach to capture elongated receptive fields by generating simulated data (similar to section 2.4.1) using anisotropic Gaussians as ground-truth receptive fields. We vary σ_y as a ratio of σ_x such that the ratio between σ_x and σ_y increases with eccentricity. We first obtain σ_x as described in section 2.4.1. We then compute σ_y = ratio · σ_x, where ratio is σ_x rescaled to the range [0.5, 3]. We generate simulated 3T and 7T data with this anisotropic model, with the remaining simulation parameters as described in section 2.4.1. We define the standard deviation σ of such anisotropic receptive fields as the geometric mean of σ_x and σ_y, that is, σ = √(σ_x σ_y). Using the geometric mean ensures that the area of an ellipse with semi-minor axis σ_x and semi-major axis σ_y is the same as that of a circle with radius σ.
To examine whether or not our approach reliably captures the shape of the receptive fields, we visually inspect them. Figures 12 and 13 show that our approach is generally able to capture anisotropic receptive field shapes and sizes rather well. The corresponding correlation coefficients are reported in Table 7. However, as receptive fields become more elongated, our method tends to slightly underestimate their size. Interestingly, the grid-search method assuming isotropic receptive fields tends to somewhat overestimate receptive field sizes at large eccentricities. In conjunction, these effects can account for the discrepancy between the ridge-based and grid-search pRF mapping procedures. In order to analyze our approach quantitatively, we compute the JS between estimated receptive fields, ground-truth receptive fields and the receptive fields obtained from grid-search (see Table 6). Note that the grid-search method yields pure Gaussians containing no anomalous activations, whereas our method yields anomalous activations surrounding the receptive field. Even slight anomalies get penalized in the JS, thus accounting for the overall better fit observed for the grid-search method.
Receptive Fields Beyond the Field of View
Next we examine to what extent our approach fails to effectively map receptive fields that (partially) lie beyond the field of view. For such receptive fields our approach maps the part that is within the field of view as an anisotropic receptive field with a relatively smaller size. The grid-search method, on the other hand, estimates these receptive fields correctly (see figure 14). This can account for the discrepancy between receptive field estimates between our method and the grid-search procedure.
In order to map larger receptive fields, we recommend using a stimulus that covers a larger field of view. Figure 15 shows the relationship between the field of view, receptive field eccentricity and the maximum reliable estimate of receptive field size using our method. We define a metric which allows us to determine the largest estimated receptive field that is also predicted with high accuracy. We first normalize the sizes of the estimated (σ_r) and ground-truth (σ_t) standard deviations to the range [0, 1]. Then, we select the largest reliable standard deviation as argmax over σ_r of σ_r,norm (1 − |σ_t,norm − σ_r,norm|). We simulate receptive fields located at a range of eccentricities [0, 30] with a range of sizes [0.5, 30] and estimate them with stimuli covering a range of fields of view [5, 25]. As can be appreciated from the figure, the accuracy of receptive field sizes obtained from our fast parameter estimation procedure depends on the field of view and on eccentricity. This should be taken into account when interpreting results obtained from our method.

Figure 14: Fast procedure vs grid-search estimated pRF parameters for simulated 3T data. A line with a slope of 1 is included as a reference.

Figure 15: Largest reliably estimated receptive field size as a function of eccentricity and field of view. The surface plot was smoothed using smooth2 (Hilands, 2020)
Discussion and Conclusion
We propose a fast and model-free approach for receptive field mapping and pRF parameter estimation that is suitable for real-time applications. A voxel-to-pixel map is typically a huge data matrix which requires much computer memory to store and operate on, rendering operations slow. To reduce the data by more than 90 %, we encode the stimulus using tile coding and hashing. This lowers memory requirements and hence strongly reduces computational time. We evaluated our approach on simulated as well as empirical data in terms of computational times, fidelity of estimated receptive field shapes and parameters, and the suitability of estimated pRF shapes for projecting cortical activity back into the visual field.
We find that our approach is extremely fast at mapping pRFs and estimating their parameters, with computational times on the order of seconds and ~1 minute, respectively. Specifically, because our approach can successfully estimate receptive field shapes for large numbers of voxels in mere seconds, it is straightforward to identify the best performing (i.e. visually responsive) voxels by conducting a quick cross-validation procedure. This allows limiting parameter estimation to these voxels and thus keeps computational time low for this process as well. It also eliminates the need for a pre-defined mask. Furthermore, cross-validation is performed in batches and we provide the option of adjusting the batch size, which can further speed up parameter estimation.
In terms of fidelity, we observe excellent correspondence between estimated pRFs and ground-truth pRFs, both in terms of shapes and parameters, for simulated data. For empirical data, we observe excellent correspondence between pRF locations estimated with our procedure and those obtained using grid-search on an isotropic Gaussian model. However, for pRF size (standard deviation) the results of the two methods correspond less well; in particular, for larger eccentricities correspondence is poor. This is because our method finds anisotropic (elongated) receptive fields. Similar observations were reported by Lee et al. (2013), who suggested that receptive fields tend to appear anisotropic towards the edge of the stimulus space. The authors argue that when a receptive field partially lies outside the stimulus space, the part of the receptive field that lies inside may be incorrectly identified as having an oval shape. This would be in line with recent studies arguing that receptive fields are generally circular in shape (Lerma-Usabiaga et al., 2020). Other studies have suggested that receptive fields do become increasingly elongated at higher eccentricities. To investigate how both possibilities affect our mapping procedure, we conducted additional, unplanned analyses.
First, we simulated data based on elliptical receptive fields. We observe excellent correspondence between pRF parameters from our approach and ground-truth parameters. Our algorithm estimates the size of such elliptical receptive fields better than the grid-search method. This means that our method is more flexible and freer in capturing the shape of receptive fields than model-based methods. As such, our method is in principle able to capture the true shape of a receptive field. However, an analysis of how receptive field size estimates are affected by eccentricity and the visual field of view revealed that estimates are only accurate within a certain region of the visual field of view. The flexibility of our method comes at the cost of an inability to deal with large, circular receptive fields that lie beyond the field of view (i.e. outside the region of stimulation). This is in line with the observation that linear encoding methods (such as ridge regression) fail to reliably estimate large receptive fields (Lage-Castellanos et al., 2020); or rather, receptive fields that partially lie beyond the field of view.
From the results shown in figure 15 it is possible to derive up to which eccentricity receptive field size estimates are reliable, given the field of view of a particular experimental setup. In order to map receptive fields outside of that region, we recommend either using a larger stimulus space or using a model-based algorithm, such as a Levenberg–Marquardt algorithm or the grid-search method, to fit pRF parameters. To benefit from the fast procedures described here as well as the accuracy of grid-search, it is recommended to utilize the cross-validation procedure included in our method to identify visually responsive voxels and hence reduce the total number of voxels for which grid-search needs to be performed.
Importantly, for the purpose of projecting cortical activations back into the visual field, the true shape of receptive fields at the edges of the visual field does not matter. Indeed, as can be seen from figure 10, our model-free approach faithfully reconstructs the letter shapes from their associated BOLD activity. Recognizable reconstructions of these shapes were possible even though the data underwent real-time preprocessing, which is generally considered to be of lower quality than offline preprocessing. This highlights that our method is suitable for real-time applications such as content-based BCI letter-speller systems.
In that context it is also important to highlight that the results reported here were obtained using a single set of hyperparameters (learning rate, FWHM and shrinkage factor), except for the reconstruction of perceived letter shapes, where we used a higher shrinkage factor. While the choice of hyperparameters can affect the mapping procedure and parameter estimation (see Appendix B), the set of hyperparameters used here produced robust results across participants, field strengths and preprocessing procedures.
In conclusion, we present an extremely fast and flexible pRF mapping approach which can be either used in parallel with data acquisition (online gradient descent) or after the data has been fully acquired (ridge regression). This opens the door for real-time applications that rely on pRF estimates such as BCI speller systems. We also propose a fast method to estimate pRF parameters. A limitation of this method and model-free approaches in general is that receptive fields partially lying beyond the stimulus space are dealt with poorly. This can be remedied by combining fast estimation of receptive fields with a subsequent grid-search step.
Acknowledgements
recommend using a small value of the learning rate (< 1). Shrinkage does not have any effect on the mapping itself, since it is applied after the mapping procedure to remove abnormal pixels surrounding the receptive fields. Using a large shrinkage value will reduce the size of the receptive field. FWHM, however, has a direct effect on the mapping procedure. Figure B.17 shows how a combination of FWHM and shrinkage affects the shape of the mapped receptive fields. Using a large FWHM results in a large overlap between the stimulus and the hashed Gaussians, thereby over-encoding the presence of the stimulus. As a result, the mapping procedure overestimates the size of the receptive fields. This effect is clear from figure B.17, where we visually compare the receptive fields (for the same voxel) obtained using different values of FWHM. Shrinkage and FWHM have opposite effects on the receptive fields; hence it is important to use a balanced choice of FWHM and shrinkage in order to obtain optimal receptive fields.

Figure B.16: Estimated objective function model for a) the learning rate and the FWHM (the shrinkage was set to 6); b) the shrinkage and the FWHM (the learning rate was set to η = 0.1). The hyperparameters were optimized for the Jaccard Distance between mapped receptive fields and ground-truth receptive fields based on 3T-like simulated data. The optimization was performed using Bayesian Optimization and stopped after 40 evaluations.

Figure B.17: The effect of the FWHM of hashed Gaussians on mapped receptive fields. The receptive field on the top left is the ground-truth receptive field based on 3T-like simulated data. The remaining receptive fields were mapped using different FWHMs in the range [0.1, 1]. The learning rate was kept constant at 0.1 and shrinkage was not used. Note that we use FWHM relative to the resolution of the stimulus space, hence it is restricted to the range [0, 1].

Figure C.18: Scatter plots between the pRF parameters (location and size) estimated using the fast estimation technique and the ground-truth pRF parameters for 7T-like simulated data. Voxels lying beyond the radius of the measured visual field (maximum eccentricity) were ignored when estimating pRF parameters.
Equine dendritic cells generated with horse serum have enhanced functionality in comparison to dendritic cells generated with fetal bovine serum
Background Dendritic cells are professional antigen-presenting cells that play an essential role in the initiation and modulation of T cell responses. They have been studied widely for their potential clinical applications, but for clinical use to be successful, alternatives to xenogeneic substances like fetal bovine serum (FBS) in cell culture need to be found. Protocols for the generation of dendritic cells ex vivo from monocytes are well established for several species, including horses. Currently, the gold standard protocol for generating dendritic cells from monocytes across various species relies upon a combination of GM-CSF and IL-4 added to cell culture medium which is supplemented with FBS. The aim of this study was to substitute FBS with heterologous horse serum. For this purpose, equine monocyte-derived dendritic cells (eqMoDC) were generated in the presence of horse serum or FBS and analysed for the effect on morphology, phenotype and immunological properties. Changes in the expression of phenotypic markers (CD14, CD86, CD206) were assessed during dendritic cell maturation by flow cytometry. To obtain a more complete picture of the eqMoDC differentiation and assess possible differences between FBS- and horse serum-driven cultures, a transcriptomic microarray analysis was performed. Lastly, immature eqMoDC were primed with a primary antigen (ovalbumin) or a recall antigen (tetanus toxoid) and, after maturation, were co-cultured with freshly isolated autologous CD5+ T lymphocytes to assess their T cell stimulatory capacity. Results The microarray analysis demonstrated that eqMoDC generated with horse serum were indistinguishable from those generated with FBS. However, eqMoDC incubated with horse serum-supplemented medium exhibited a more characteristic dendritic cell morphology during differentiation from monocytes. A significant increase in cell viability was also observed in eqMoDC cultured with horse serum. Furthermore, eqMoDC generated in the presence of horse serum were found to be superior in their functional T lymphocyte priming capacity and to elicit significantly less non-specific proliferation. Conclusions EqMoDC generated with horse serum-supplemented medium showed improved morphological characteristics, higher cell viability and exhibited a more robust performance in the functional T cell assays. Therefore, horse serum was found to be superior to FBS for generating equine monocyte-derived dendritic cells.
Background
Dendritic cells are antigen-presenting cells specialized in the uptake and presentation of antigens to T cells [1]. They are the only antigen-presenting cells capable of inducing primary immune responses in naïve T cells and are thus pivotal for the development of T cell responses [2,3]. The function of dendritic cells is reflected in a number of specific properties. Their distinct shape with many cellular processes offers a large surface area for antigen recognition and uptake [4]. Furthermore, the high surface expression of MHC class II in combination with high levels of costimulatory molecules allows for optimal stimulation of T cells. Initially, studies using dendritic cells were hindered by difficulties in obtaining sufficient numbers of these cells, as their frequency is very low (<1%). The discovery that granulocyte-macrophage colony stimulating factor (GM-CSF) was the key cytokine needed to differentiate viable dendritic cells from murine blood [5] allowed the development of standardized methods to generate large numbers of dendritic cells ex vivo from hematopoietic progenitors. In humans, dendritic cells can be generated from peripheral blood CD14+ monocytes by using GM-CSF and Interleukin-4 (IL-4) [6,7]. Due to the higher frequency of CD14+ cells, this method has been widely used to generate dendritic cells for experimental purposes in the human field and for immunotherapy. Monocyte-derived dendritic cells (MoDC) were shown to be homogeneous and could be fully matured using autologous monocyte-conditioned medium [8,9] or, alternatively, a cocktail of inflammatory cytokines, namely IL-1β, Tumor necrosis factor-α (TNF-α), IL-6 and Prostaglandin E2 (PGE2) [10]. The generation of MoDC has been described in a number of domestic animal species such as cattle [11], pigs [12], sheep [13] and horses [14][15][16][17].
Fetal bovine serum (FBS) represents an important source of nutrients for in vitro cell growth, metabolism and proliferation [18] and is widely used in cell culture media. Prior to the emergence of variant Creutzfeldt-Jakob disease as a result of the bovine spongiform encephalopathy (BSE) crisis at the end of the last century, FBS was considered reasonably safe for humans. In animals, however, the safety of FBS was always more questionable, with more animal diseases potentially being transmissible between species. The main advantage of using serum from unborn animals is the absence of interfering substances like inflammatory molecules, hormones or exogenous antigens, including feed-derived components. However, FBS batches are known to be heterogeneous in their performance and need to be batch-tested. Moreover, diluted and altered FBS has been sold recently in Europe, underlining the challenges of FBS selection [19]. Accordingly, FBS production is increasingly subject to regulations and restrictions, not least to protect animals under the 3R guidelines and reduce unnecessary pain, suffering, distress or lasting harm.
In recent years, the ex vivo generation of dendritic cells for the induction of anti-tumor responses has been a focal point of cancer immunotherapy research [20][21][22][23]. When generating dendritic cells for clinical applications, such as tumor vaccines, reproducibility and safety are of paramount importance. The use of FBS, a poorly defined cocktail of proteases and other active substances, has always been less than ideal, and for both humans and animal species the use of xenogeneic reagents needs to be avoided.
The utility of autologous serum or serum-free media for the generation of MoDC has been questioned [24][25][26]. In horses, the use of homologous serum for cell culture has been described widely in systems other than DC generation or maintenance. Hamza et al. [27] used autologous serum in cell culture for functional assays involving equine peripheral blood mononuclear cells (PBMC). The use of horse serum for the culture of primary equine bronchial fibroblasts [28] has also been described.
In the present study, we examined the morphology, viability, phenotype and functional properties of eqMoDC generated under different serum conditions. For this purpose, three FBS batches from two different manufacturers were compared to horse serum produced in one of our laboratories. The data demonstrate that eqMoDC generated in the presence of heterologous horse serum perform as well as or better than dendritic cells generated with FBS.
Horses and blood samples
Blood samples were collected from the jugular vein of six healthy horses (4 geldings, 2 mares) using sodium-heparin vacutainers (Vacuette®; Greiner, St. Gallen, Switzerland). The horses were of diverse breeds (Warmblood, Freiberger, Icelandic horse) and spanned a large age range (4-25 years, mean age 12.7 years). They had been vaccinated yearly against equine influenza and tetanus, and dewormed regularly. The horses belonged to the Swiss Institute of Equine Medicine, Vetsuisse Faculty, University of Bern.
Horse and fetal bovine sera
For the preparation of horse serum (HS), blood was collected from a healthy horse into Serum Clot Activator vacutainers (Vacuette®; Greiner, St. Gallen, Switzerland). HS was separated by leaving the blood to clot for 2 h, followed by centrifugation at 2684×g (Rotanta 46 RSC centrifuge, Hettich AG, Bäch, Switzerland) for 10 min at 4°C and heat inactivation for 30 min at 56°C in a water bath. Serum was then stored at -20°C until used.
Endotoxin contamination was assessed in all serum-supplemented media using a qualitative in vitro end-point endotoxin assay (ToxinSensor™, GenScript, Piscataway, NJ, USA). Lipopolysaccharide (LPS) levels were below 0.06 I.U./ml in all media. Differentiation into eqMoDC was induced by the addition of 25 ng/ml recombinant (r.) equine GM-CSF and 10 ng/ml r. equine IL-4 (both Kingfisher Biotech Inc., St. Paul MN, USA), and cells were cultured for 3 days. Cells were monitored daily by light microscopy for changes in morphology.
Viability assessment
The viability of three-day-old immature eqMoDC was assessed using the Alexa Fluor 488 annexin V/Dead Cell Apoptosis Kit for Flow Cytometry (Invitrogen Life Technologies, Paisley, UK) according to the protocol provided by the manufacturer. Briefly, cells were harvested from the cell culture plates and washed, and an aliquot of cells was suspended in 0.4% Trypan Blue for counting using a hemocytometer (Neubauer). The cells were resuspended in annexin-binding buffer, incubated with 5 μl AF488 annexin V and 1 μl of 100 μg/ml propidium iodide for 15 minutes at room temperature, and then analysed immediately by flow cytometry.
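To illustrate the gating logic behind such an annexin V/propidium iodide readout, the following Python sketch classifies simulated flow-cytometry events into viable, early apoptotic and dead fractions. The event matrix, the gate thresholds and the resulting percentages are all hypothetical; real gates would be set against unstained and single-stained controls rather than fixed values.

```python
import numpy as np

# Hypothetical per-event fluorescence intensities:
# column 0 = AF488 annexin V, column 1 = propidium iodide (PI).
rng = np.random.default_rng(42)
events = rng.lognormal(mean=2.0, sigma=1.0, size=(10_000, 2))

# Illustrative gate thresholds (in practice set against stained controls).
ANNEXIN_GATE, PI_GATE = 10.0, 10.0

annexin_pos = events[:, 0] > ANNEXIN_GATE
pi_pos = events[:, 1] > PI_GATE

viable = ~annexin_pos & ~pi_pos          # annexin V- / PI-
early_apoptotic = annexin_pos & ~pi_pos  # annexin V+ / PI-
dead = pi_pos                            # PI+ (late apoptotic or necrotic)

print(f"viable: {viable.mean():.1%}, early apoptotic: "
      f"{early_apoptotic.mean():.1%}, dead: {dead.mean():.1%}")
```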
Transcriptome analysis of eqMoDC populations
RNA was extracted from cell pellets of 1 × 10⁶ cells using the RNAqueous micro Kit (Life Technologies, Qiagen, Crawley, UK) and stored at -80°C. RNA quality was assessed with the RNA 6000 Pico LabChip kit on the Agilent 2100 Bioanalyzer (Agilent Technologies, Berkshire, United Kingdom). The Ovation PicoSL WTA System v2 kit (NuGEN, Leek, The Netherlands) was used to amplify cDNA from 50 ng total RNA. The MinElute Reaction Cleanup Kit (Qiagen) option was used to purify the cDNA, and 1 μg was then labelled using a one-color DNA labelling kit (NimbleGen, Madison, USA). For each sample, 4 μg labelled cDNA was hybridised to NimbleGen 12 × 135 K custom equine arrays (Roche, Madison, USA). Three biological repeats were analysed for each data set. Hybridised arrays were scanned at 2 μm resolution with the Agilent High-resolution C Microarray Scanner (Agilent, Wokingham, UK). Microarray images were processed using DEVA v1.2.1 software (Roche, Madison, USA) to obtain a report containing the signal intensity values corresponding to each probe. The raw data were pre-processed using the DEVA v1.2.1 software by log2 transformation followed by RMA normalisation and summarisation to yield a signal intensity value for each probe set. The data set was then filtered by variance, and Principal Component Analysis (PCA) was performed using Qlucore v2.0 software (Qlucore, Lund, Sweden). Variance levels were set using the δ/δmax method.
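As a rough illustration of the preprocessing chain described above (log2 transformation, normalisation, variance filtering, then PCA), the following sketch applies analogous steps to a simulated intensity matrix. The matrix dimensions and the 90% variance cut-off are invented for illustration, and simple probe-wise centring stands in for the RMA normalisation and summarisation performed in DEVA.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical probe-level intensities: rows = samples (Mo, iMoDC and
# mMoDC under each serum condition), columns = probes.
intensities = rng.lognormal(mean=6.0, sigma=1.0, size=(24, 1_350))

# Log2 transformation, then centre each probe across samples
# (a stand-in for the RMA normalisation used in the study).
log_expr = np.log2(intensities + 1.0)
log_expr -= log_expr.mean(axis=0)

# Variance filtering: keep only the most variable probes before PCA.
variances = log_expr.var(axis=0)
filtered = log_expr[:, variances >= np.quantile(variances, 0.90)]

# Three-component PCA, analogous to the 3D PCA separating Mo/iMoDC/mMoDC.
pca = PCA(n_components=3)
scores = pca.fit_transform(filtered)
print(scores.shape, pca.explained_variance_ratio_)
```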
EqMoDC maturation and functional T cell stimulation assays
For antigen-presentation assays, immature eqMoDC were re-suspended at 2 × 10⁵ cells per 150 μl of RPMI 1640 complete medium containing FBS or HS and incubated for 90 min at 37°C either with 20 μg/ml tetanus toxoid (Schweizerisches Serum und Impfinstitut Bern, Switzerland) as a recall antigen, or with 20 μg/ml of the primary antigen ovalbumin (OVA, kindly provided by the Swiss Institute for Allergy and Asthma Research, University of Zürich, Davos, Switzerland), or with medium alone as a control. After antigen uptake, eqMoDC were washed and cultured in a 96-well round-bottom tissue culture plate (Sarstedt, Nümbrecht, Germany) at 2 × 10⁴ eqMoDC per well in quadruplicate.
Fresh CD5+ T lymphocytes were enriched by positive selection using MACS technology as mentioned above, employing an anti-equine CD5 mAb (clone CVS5, AbD Serotec, Kidlington, UK), and re-suspended in RPMI 1640 medium containing FBS or HS, respectively. The purity of CD5+ lymphocytes after bead selection was assessed by flow cytometry and shown to be > 94%. 10⁵ autologous CD5+ T lymphocytes were added to the MoDC in the 96-well plate and co-cultured for 5 days at 37°C/5% CO₂. 5 μCi/well [³H]-thymidine (Perkin Elmer, Waltham MA, USA) was added for the last 18 h of culture. DNA was then harvested onto a glass fibre filter plate and thymidine incorporation was measured on a scintillation counter (Inotech, LabLogic Systems Inc., Brandon FL, USA).
A mixed leukocyte reaction (MLR) was performed in addition to the autologous co-cultures by incubating matured eqMoDC with CD5+ T lymphocytes from another horse.
Statistical analysis
Statistical analyses were carried out using the software program NCSS 8 (NCSS, Kaysville, Utah 84037, USA). Descriptive statistics showed that the data were not normally distributed. Therefore, non-parametric tests were used. The proportion of non-viable immature MoDC (Fig. 2), as well as the yield of immature eqMoDC (Table 1), was compared between the four serum culture conditions by using a non-parametric paired sample Wilcoxon (signed rank) test. The same test was used to determine differences in surface marker expression between eqMoDC before and after maturation (Fig. 3). A one-way ANOVA with Tukey-Kramer test for multiple comparisons was used to assess significant differences between the individual serum conditions within one maturation state (Fig. 3).
Non-parametric paired sample Wilcoxon (signed rank) test was again used to assess differences in serum conditions with regard to induction of T cell proliferation by antigen-primed MoDC (Figs. 5 and 6). Overall, p-values ≤ 0.05 were considered significant.
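As a minimal illustration of the paired, non-parametric comparison used here, the sketch below runs a Wilcoxon signed-rank test with scipy. The six paired values are hypothetical stand-ins for per-horse measurements (e.g. the proportion of non-viable eqMoDC) under HS versus one FBS batch, not data from the study.

```python
from scipy.stats import wilcoxon

# Hypothetical paired measurements from the same six horses:
# percentage of non-viable eqMoDC under HS vs. one FBS batch.
hs = [4.1, 6.3, 3.8, 9.0, 5.2, 4.7]
fbs = [9.8, 12.4, 7.1, 21.5, 11.0, 8.9]

stat, p = wilcoxon(hs, fbs)  # paired, non-parametric signed-rank test
print(f"W = {stat}, p = {p:.3f}, significant = {p <= 0.05}")
```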
Morphology of differentiating eqMoDC differs between cells generated in the presence of HS or FBS
During differentiation from monocytes, MoDC undergo a distinct change in morphology. Equine CD14+ monocytes presented as round cells that adhered quickly to cell culture plates. After 24 hours of stimulation, tight cell clusters began to form. These became larger and denser as differentiation progressed (Fig. 1), and cellular dendrites could be seen protruding from the clusters by that time. After another 24 to 48 hours, the differentiated eqMoDC detached from the clusters and could be identified as large, heterogeneously shaped cells with distinctive long dendrites. Marked differences in the morphology of the differentiating eqMoDC between serum conditions were observed. Cluster formation seemed to be impaired in all cells cultured with FBS in comparison to cells cultured in the presence of HS, where the vast majority of cells had formed very distinct, dense clusters (Fig. 1a). Cluster formation was less clear with both FBS from Biochrom (Fig. 1b, c) and almost absent with FBS batch A15-101 (Fig. 1d). Also, in the clusters that were present in FBS-treated cells, fewer dendrites could be seen in comparison to HS. With FBS S0113 in particular, several long spindly cells were seen that could be an indicator of excess IL-4 in the culture [7]; however, identical concentrations of IL-4 were used for all serum conditions. When looking at individual cells at 40× magnification (Fig. 1, small inlaid pictures), no significant differences could be determined between serum conditions. An intriguing finding with FBS A15-101 was the presence of large round objects enclosed by a clearly visible membrane, as indicated by the arrow in Fig. 1d.
Immature eqMoDC generated in the presence of HS exhibit a higher viability
It is essential for dendritic cells generated in vitro to maintain viability and thus be able to perform their functional tasks in downstream assays without becoming saturated with apoptotic material. Table 1 shows the relative cell counts of differentiated eqMoDC following 3 days of incubation, starting with an identical number of cells for all serum conditions. Cell yield was significantly lower with the A15-101 serum in comparison to HS. To assess differences in viability between the individual serum conditions, immature eqMoDC were stained with annexin V and propidium iodide and analysed by flow cytometry for the presence of apoptotic and/or dead cells. Figure 2 shows that cells cultured in the presence of any of the three FBS batches exhibited a significantly higher percentage of dead cells than cells cultured with HS. While the differences between the individual FBS batches were non-significant, the highest proportion of dead cells was observed in the cells incubated with the A15-101 serum.
Surface marker expression of eqMoDC reflects maturation status and is comparable between serum conditions
Differentiation and maturation of eqMoDC in vitro is reflected by changes in the expression of surface markers. To compare our findings with previous results and established knowledge [17], CD14, CD86 and CD206 were selected for analysis by flow cytometry on eqMoDC before and after maturation, and for comparison of eqMoDC generated under the different serum conditions. While CD14 is a monocyte marker that should be markedly down-regulated during the differentiation from monocytes to MoDC, CD206 is a marker of immature MoDC that is down-regulated upon activation. CD86 was expected to be up-regulated during maturation/activation of MoDC. As expected, no significant difference could be observed in CD14 expression between immature and mature eqMoDC under any serum condition (Fig. 3a). Interestingly, CD14 remained significantly more highly expressed in immature eqMoDC obtained under the influence of the FBS batches S0113 or S0613 than in the presence of the A15-101 FBS batch. For CD86 expression, no significant difference between serum conditions could be observed (Fig. 3b). Nonetheless, eqMoDC incubated with the maturation cocktail showed an up-regulation of CD86 under all serum conditions; this up-regulation was statistically significant for FBS batches A15-101, S0113 and S0613. The difference in median values between immature and mature eqMoDC was particularly low for HS, owing to a considerable expression of CD86 already on immature eqMoDC. CD206 was clearly expressed upon differentiation of immature eqMoDC under all culture conditions, albeit at lower levels in the presence of FBS A15-101 (median 65.2%, range 55.1-79.6%) compared to FBS batch S0113 (83.8%, 76.7-87.2%), FBS batch S0613 (84.5%, 73.0-89.6%) and HS (82.1%, 76.6-86.9%), although this difference was not statistically significant. A significant down-regulation of CD206 upon maturation was observed in all four serum conditions (Fig. 3c).
Transcriptome comparison of eqMoDC preparations and populations
While the above results demonstrate that HS provided results similar to FBS for the differentiation of eqMoDC, there is a considerable lack of antibodies in the horse system to perform a more comprehensive analysis. Aiming to test for hidden differences, we resorted to gene expression profiling using an equine-specific microarray analysis. Expression profiles of all three cell types were analysed by 3D PCA, which demonstrated that while monocytes (Mo), immature eqMoDC (iMoDC) and mature eqMoDC (mMoDC) are distinct populations, there was no detectable effect of the serum conditions on the gene expression profiles of the different cell types (Fig. 4).
Induction of T cell proliferation by antigen-primed MoDC
We analysed the eqMoDC generated under the different serum conditions for their ability to induce proliferation of T cells following antigen presentation. After the uptake of either tetanus toxoid as a recall antigen, or OVA as a primary antigen, MoDC were incubated with autologous or heterologous (MLR) CD5+ T cells and the proliferation of T cells was measured. While no proliferation could be observed in T cells incubated with medium only under the HS condition (median (range) 107 (67-170) cpm), T cells incubated with FBS showed a notable background proliferation (median (range) 4384 (300-20667) cpm with A15-101; 10709 (1935-36199) cpm with S0113; 5385 (1049-57954) cpm with S0613, respectively) (Fig. 5a). Similarly, non-specific proliferation of T cells co-incubated with non-primed MoDC was significantly higher in all FBS conditions (median (range) 15924 (5843-55404) cpm with A15-101; 12545 (5657-58043) cpm with S0113; 17719 (14952-72847) cpm with S0613, respectively) than in the HS condition (median (range) 6993 (2844-24414) cpm) (Fig. 5b). As expected, the strongest induction of T cell proliferation was observed with heterologous MoDC in the MLR control (Fig. 6a), followed by autologous tetanus toxoid-primed MoDC presenting a recall antigen (Fig. 6b). T cell proliferation was least pronounced when using the primary antigen OVA (Fig. 6c).
Discussion
Dendritic cells are key players in the immune system, particularly competent in modulating immune responses [1]. They are thus promising tools both for cancer immunotherapy and for limiting immune responses in the treatment of allergic reactions [30,31]. The ability to generate these otherwise scarce immune cells in large quantities from progenitors, and in particular from monocytes, re-initiated research in this field around 20 years ago [6,7,32]. The identification of distinct DC populations in vivo, among them DC specialized in cross-presenting antigens to CD8+ cytotoxic T cells [33], has widened the opportunities to study specialized subsets of DC. MoDC remain the first choice for personalized therapeutic approaches, such as loading DC ex vivo, as they were shown to substitute for all DC functions, including cross-presentation [34]. Fetal bovine serum (FBS) is widely used in cell culture media but has come under more intense scrutiny in recent years: as well as being a possible source of disease transmission, its use in therapeutic vaccines may lead to adverse immune reactions against FBS [35,36]. The problem of batch variability, including contamination with LPS (which is detrimental to MoDC differentiation), was immediately recognised as an issue for the therapeutic use of DC [37].
Early work to replace the 5-10% FBS commonly used to generate human MoDC by an equal amount of human sera (either autologous or batch tested) has not been successful (Steinbach et al., unpublished; various personal communications). This was explained by the plasticity of monocytes as uncommitted myeloid cells during differentiation allowing them to acquire a macrophage rather than a DC phenotype [38]. It was accordingly suggested to replace the 10% FBS by 1% autologous plasma, but while the phenotypical and functional data were analogous to FBS-derived MoDC, the DC yield obtained was very low (around 20% compared to FBS based protocols) with a significantly reduced enrichment [9]. This was offset by larger scale production [39], which, however, does not address the issue of cell debris from dead cells in such cultures.
In horses, previous studies generating MoDC under the influence of GM-CSF and IL-4 used FBS [14,16,17], and pilot studies have also shown the potential use of such ex vivo generated MoDC for the treatment of tumors [40]. However, previous experience, where FBS-specific IgE was induced through MoDC application [36], eliminated such equine MoDC for the purpose of allergen immunotherapy.
In a preliminary experiment, we compared the use of autologous and heterologous equine serum with FBS from PAA (A15-101), which was used in one of our laboratories for the maintenance of cell lines. Intriguingly, while both equine serum preparations delivered encouraging results, the A15-101 FBS led to morphologically more heterogeneous populations with giant cells that we presumed to be the result of cell death followed by phagocytosis, not matching previous descriptions [14,17]. Since the results obtained with the two serum preparations from horses were very similar, we decided to generate a heterologous serum to achieve better standardisation for subsequent experiments. In addition, it became necessary to expand our study to other FBS batches able to reproduce previous results [17]. Thus, we decided to conduct a small study comparing three different FBS batches with HS during the generation of equine MoDC and their application in functional assays.
The aim was to determine whether equine MoDC could be successfully generated using freshly prepared horse serum rather than commercial serum, which had previously failed to deliver equine MoDC (Steinbach et al., unpublished). The differentiation from monocytes to MoDC is characterised by the formation of tight cell clusters, where cells gradually become non-adherent and develop typical dendrites [6,41]. It was notable that equine MoDC differentiated with HS developed larger clusters faster than monocytes incubated with FBS, which showed only limited clustering by day 2 (Fig. 1). It is not known whether the formation of cell clusters, and thus close cell-to-cell contact, is necessary for differentiating monocytes to become fully functional, but eqMoDC cultured in the presence of HS were also significantly more viable. It is thus likely that cell-cell signalling within clusters promotes the viability of cells in culture. Not surprisingly, and in line with the preliminary data, FBS batch A15-101 showed the highest proportion of non-viable cells. These results, however, do not explain whether HS contains additional viability factors which are lacking in FBS, or whether the changes and additions made to FBS batch A15-101 were detrimental to eqMoDC differentiation. Interestingly, there seems to be an inherent variation between individual horses in the proportion of non-viable cells that was independent of the serum used: regardless of the serum condition, the same horse delivered the highest, and another the lowest, proportion of dead cells. Thus, it is reasonable to propose that individual disposition (from genetics to an individual's current health status) likely affects the generation of eqMoDC ex vivo. This resonates with an earlier study showing that monocytes from lupus erythematosus patients required more GM-CSF and IL-4 to obtain viable DC [42].
As with the morphology and viability, clear differences were observed in the phenotype between the four tested sera: again, eqMoDC incubated with the two FBS batches from Biochrom displayed very similar phenotypes and maturation patterns, likely due to the similar composition expected of high-performance FBS batches. Slightly surprising, though, was the relatively high level of CD14 remaining on MoDC generated with the Biochrom FBS. Since we tested all media for LPS, this can be excluded as the causative factor. MoDC generated with HS expressed slightly higher levels of CD86 already at the immature stage. Overall, though, all cells displayed a phenotype in accordance with MoDC differentiation and maturation, and only trends were observed that made cells treated with HS preferable, i.e. more in line with the published gold standard [17], to those treated with FBS. Accordingly, it was not surprising that in the whole transcriptome analysis the three differentiation stages were clearly separable, whereas samples from the different sera clustered strongly together, similar to previous results [17]. This is not astonishing, since across a whole population of cells of the same lineage, only minute shifts in gene expression will suffice to induce the changes in protein expression and morphology observed by flow cytometry. This emphasises that all four sera delivered equine MoDC of acceptable quality.
It is important to consider that, ultimately, DC are not defined by the presence or absence of certain markers, but by their functional ability to stimulate T cells. Here, HS clearly demonstrated an advantage by not inducing non-specific proliferation. This result, observed with all batches of FBS, may reflect their xenogeneic and antigenic nature, with T cells in adult horses reacting against foreign serum components or exogenous agents. To exclude the latter, we tested the FBS batches for the presence of pestivirus RNA (a common contaminant of FBS) and can exclude this as a factor. However, as all horses in this study were regularly vaccinated, a sensitisation to foreign proteins present in these vaccines may well have occurred, as has been described for equine vaccines before [43]. The strongest proliferation was induced in the heterologous mixed leukocyte reaction (MLR), followed by an antigen-specific recall response against tetanus toxoid. This was to be expected compared to a primary antigen like OVA, where the response relies on the activation of naïve rather than memory T cells. Using the stimulation index to determine the specific reaction above the background, the best performance was observed with HS.
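The text does not spell out how the stimulation index was computed; one common definition, assumed in the sketch below, is the ratio of antigen-induced proliferation to the background proliferation of non-primed co-cultures. The cpm values used are hypothetical.

```python
def stimulation_index(antigen_cpm: float, background_cpm: float) -> float:
    """Ratio of antigen-specific to background proliferation; values well
    above 1 indicate a specific response above the non-primed co-culture."""
    return antigen_cpm / background_cpm

# Hypothetical median cpm values for one serum condition.
print(stimulation_index(antigen_cpm=45_000, background_cpm=6_993))  # ~6.4
```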
Conclusion
It can be concluded that eqMoDC generated in the presence of HS showed improved morphological characteristics and higher cell viability, and exhibited a more robust performance in the functional T cell assays. While HS did not perform significantly better in all assays, it is the overall pattern of results that favours HS for the generation of eqMoDC. While PAA's FBS (A15-101) did not perform worst in all experiments, its inferior performance in the morphology and viability assays, and the lack of clarity surrounding its composition and thereby its functional reproducibility, exclude this product completely from use [19]. While FBS can in general be considered further for in vitro research, the results here re-emphasize the need for batch testing. These results are very encouraging for the clinical application of equine MoDC and confirm a recent report using cells generated with horse serum for recall responses [44]. However, the effect of autologous serum or different serum conditions on the phenotype and function of equine MoDC had not been systematically investigated previously. Prior to clinical application, though, horse sera need to be tested extensively for extraneous agents, such as the widespread equine hepaciviruses, or treated to inactivate them. Thus, the serum-free generation of MoDC would be desirable, but this has proven inefficient, resulting in very limited cell numbers [8,45], and is still a matter of debate [24-26, 46, 47]. With the recent progress in defining serum-free media for various purposes (discussed in [48]), this goal may be achieved in the near future, but further studies are required to ensure good compliance with MoDC functionality as well.
"Biology",
"Medicine"
] |
Cancer pain treatment during the COVID-19 pandemic: institutional recommendations
Pain is one of the most frequent and feared symptoms in patients with cancer. A recent meta-analysis (1) revealed a pain prevalence of 39.3% after curative treatment, 55.0% during anticancer treatment, and 66.4% in patients with advanced disease. Among cancer patients, 38.0% reported moderate to severe pain (1,2), causing functional status impairment and poor quality of life. Despite the marked improvement in the treatment of cancer (2), 30% of patients still develop refractory pain, requiring invasive procedures to achieve partial or complete relief (3).
Interventional pain procedures can be classified as neuromodulatory or neuroablative. Neuromodulation is the functional interruption of pain pathways by intraspinal administration of drugs (4). Epidural infusion of opioids plus local anesthetics is a common example of neuromodulation in the postoperative period. In contrast, neuroablative procedures are used to treat chronic cancer pain and consist of the physical interruption of pain pathways by surgical, chemical, or thermal means (4). Ablative procedures promote better pain control and quality of life than pharmacological treatment, but they require hospitalization and are performed with the use of fluoroscopy, computed tomography, and/or ultrasound.
The COVID-19 pandemic has forced pain specialists and institutions to balance the risks of infection against the benefits of pain procedures (5,6). Thus, the purpose of these recommendations is to establish institutional routines that may reduce the risk of contamination of patients and health professionals during the COVID-19 pandemic when performing invasive pain procedures.
General Recommendations
1. Outpatient appointments: full consideration should be given to minimizing the number of patients congregating in a waiting room. The use of telemedicine is recommended for the follow-up of outpatients (5,6).
2. In-hospital visits: to preserve health resources and protective equipment, it is essential to reduce the number of people examining the patients. Specialists' consultations should be limited to the essential. When an interventional procedure is indicated, the pain specialist should examine the patient (5,6).
3. During clinical evaluation, the pain specialist should wear surgical masks and gloves. For high-risk patients, professionals should protect themselves by wearing particulate-filtering respirators (N95).
4. Individuals at high risk of COVID-19 infection should be tested before hospitalization (7), within three to five days before the procedure. Currently, with community transmission, all cases can be suspected positive for COVID-19, even asymptomatic patients.
5. Unlike procedures for patients with non-oncological pain (5,6), interventional pain procedures for cancer patients should not be postponed. Cancer patients are usually at risk of worsening of their clinical condition, and sometimes such procedures are the best option to improve their quality of life.
6. During the interventional procedure, staff should wear full protective garments, including an N95 mask, ocular protection, and double gloves. Ensure that patients wear a surgical mask in addition to the usual surgical gowns. Further, ensure that the fluoroscopy and ultrasound devices have protective covers, and reduce the number of people present during the procedure (5,6). A negative-pressure operating room should be used for the procedure.
7. Pain procedures can be classified (5,6) as:
a. Urgent procedures: intrathecal pump refill or malfunctioning of neurostimulators; intrathecal catheter infection.
b. Semi-urgent procedures: refractory cancer pain; patients hospitalized due to pain; suspected opioid abuse.
8. If hospitalization is needed for an outpatient procedure, RT-PCR for SARS-CoV-2 and chest tomography should be performed (7). The patient should be kept hospitalized for the shortest duration possible.
Therapeutic recommendations:
1. Steroid injections: the injection of intra-articular steroids is associated with an increased risk of influenza infection (9). Following lumbar facet joint injections, cortisol levels are suppressed for an average of 4.4 days (10). Although COVID-19 induces an exaggerated immune response, steroids are only recommended for refractory shock (11). One should consider the risk/benefit of steroid injections and reduce the dose, especially in high-risk patients, during the current COVID-19 pandemic (5,6).
2. Non-steroidal anti-inflammatory drugs (NSAIDs): at the beginning of the spread of the SARS-CoV-2 infection, European doctors advised against the use of ibuprofen or other NSAIDs, due to the risk of increasing levels of angiotensin-converting enzyme (ACE) and thus worsening COVID-19 (12). However, these results have not been proven. It should be noted that NSAIDs can mask some early symptoms of the disease, such as fever and myalgia (5).
3. Assess the risk/benefit of administering opioids. Opioids act on the hypothalamic-pituitary-adrenal (HPA) axis and activate the sympathetic nervous system (SNS). The SNS innervates lymphoid organs, such as the spleen, and this activation induces the release of biological amines that suppress the proliferation of splenic lymphocytes and the cytotoxicity of NK cells (13). Additionally, the prolonged use of opioids increases the activity of the HPA axis and the production of glucocorticoids, which also decreases the cytotoxicity of NK cells (14). On the other hand, pain itself is immunosuppressive, and withholding opioids because of possible immunosuppression can be even more devastating.
4. Interventional procedures reduce opioid consumption and improve the quality of analgesia (4). However, most invasive procedures for cancer patients are performed on an inpatient basis, which increases exposure to infection. The best option is to use common sense and evaluate case by case, especially during board discussions.
Cancer patients are immunocompromised and more susceptible to infections than the general population. These patients are older, have higher angiotensin-converting enzyme-2 (ACE2) expression, and have more comorbidities (15). They are at higher risk of adverse outcomes (16), including intensive care admission, the requirement for mechanical ventilation, or death (16). Moreover, these patients are twice as likely to be diagnosed with COVID-19 as the general population (17,18). A pragmatic approach is required when deciding whether to offer interventional therapies for treating cancer pain. The potential benefits and possible risks need to be weighed in a scenario where social isolation and confinement at home are guidelines established by global health entities (19). Neuroablation can provide long-term pain control and should be considered for treating severe cancer pain (3). Therefore, the implementation and optimization of the pain control protocol described above would positively affect the quality of life of our patients while minimizing risks during the COVID-19 pandemic.
"Medicine",
"Economics"
] |
An Analysis of Taboo Words in Blink 182’s Song Lyrics of “Enema of the State” Album
This paper describes the findings of a study of the taboo words used in Blink 182's song lyrics. The rationale was that taboo words are forbidden in speaking and writing, but have recently become the subject of specialized publications as they frequently appear in some contexts of speech, writing, and even song. The objectives of the study were to find out what kinds of taboo words are used in Blink 182's song lyrics and how often taboo words appear in Blink 182's song lyrics from the Enema of the State album. A descriptive qualitative method was used. All of the song lyrics in the Enema of the State album (12 songs) were taken as the sample of the study. The data were then analyzed using the procedures of data analysis based on Maleong: transcription, verification, classification, interpretation, and conclusion. The findings indicated that the total number of words in all the song lyrics in the Enema of the State album was 2,520, consisting of 33 taboo words and 2,487 non-taboo words. Of the 12 songs, 3 songs used no taboo words: Aliens Exist, Adam's Song, and Wendy Clear. These findings imply that taboo words are gaining popularity in songs.
I. Introduction
Language as a tool of communication is essential among living creatures, particularly for human beings. The use of good or standard language results in good relationships and helps the speaker convey his feelings or thoughts. On the other hand, the use of bad language may give bad impressions or even cause disgust in society.
Bad impressions may derive from words called taboo words. Uttering taboo words is still forbidden because many people consider them especially sacred or vulgar, as culturally defined. In addition, most people think that taboo words may not be uttered at all because they are impolite, vulgar, and violate a moral code. As a result, a person who uses taboo words may be considered uneducated, immature, or immoral.
However, taboo words are still favored by some. This is due to the fact that some taboo words are connected to a belief in the magical nature of language and have power in themselves. Some musicians like to use taboo words in song lyrics to make the songs more interesting and different from others. Such facts make some taboo words widely used.
In this study, the researcher chose the band Blink 182 because it is one of the famous American bands that use taboo words. Its music is a mixture of rock, rap, and hip hop in one composition. The band comes from San Diego, California, formed in 1990. All of the members of Blink 182 are white Americans: Mark Hoppus (bass/vocals), Tom DeLonge (guitar/vocals), and Scott Raynor (drums). In 1998, however, the original drummer Scott Raynor left the band and was replaced by Travis Barker (drums).
Blink 182 has released many albums. Its first full-length album was Cheshire Cat, on Grilled Cheese in 1995. Later, in 1996, it released Map of the Universe on Lime/Parloplan. With 1997's Dude Ranch, a new dose of hardcore thrash music, Blink 182 set out to conquer the hearts and souls of America. The new line-up appeared on Enema of the State, which hit record store shelves in the summer of 1999. A live album was released in late fall 2000, followed by Take Off Your Pants and Jacket in June 2001. Among those albums, Enema of the State is the most famous, and thousands of copies of the cassettes have been sold. Given such a big release and its fame, this study set out to find the kinds of taboo words found in Blink 182's song lyrics and their types.
II. Review of the Related Literature
2.1 Definitions of Taboo Words
Oxford (1995: 1213) defines taboos as laws, norms, or personal beliefs specifying situations in which behaviors or topics should not be performed or communicated about. In addition, Oxford defines: "Taboo words are words that are often considered offensive, shocking, or rude because they refer to sex, the body or race." Furthermore, Matthews (1997: 371) defines: "Taboo words are words known to speakers but avoided in some, most, or all forms or contexts of speech, for reasons of religion, decorum, politeness." Moreover, Fromkin (1993: 303) states: "Certain words in all societies are considered taboo. They are not to be used, or at least, not in 'polite company'. The word 'taboo' was borrowed from Tongan, a Polynesian language, in which it refers to acts that are forbidden or to be avoided." Eschholz (1978: 230) explains: "In every language, there seems to be certain 'unmentionable words' of such strong connotation that they cannot be used in polite discourse. The taboo words are words which are not to be used and forbidden to talk about." In short, taboo words are words that cannot be uttered at all, whether in formal or informal situations, because they go against the norms; they can also have bad impacts on their users. It can be concluded that taboo words are forbidden in songs and conversations. In other words, it is forbidden to use or talk about taboo words because they can violate a moral code, and the people using them are considered uneducated or immature.
2.2 Kinds of Taboo Words
Fromkin (1990:269) says "Words relating to sex, sex organs, and natural bodily functions can be categorized as taboo words in many cultures". He further states three kinds of taboo words namely: 1. Words having to do with parts of the body.
Words having to do with anatomy or parts of the body usually have bad connotations and sound rough or dirty. Accordingly, the use of such taboo words is prohibited in most societies, especially among polite people. Examples of these words are breast, titties, and motherfuckers. 2. Words having to do with sexual matters.
Sex or talking about sexual matters is also categorized as taboo, with words such as horny, pussy, and mound. These kinds of words create taboo meanings when used in sexual matters. As a consequence, in most communities this type of word cannot be spoken openly.
3. Swearing words
Someone tends to utter a particular term to express his anger or displeasure toward someone else. Commonly, people utter the word fuck, a popular word for expressing anger. Other words expressing unpleasant feelings or anger include bitch, ass, shit, and many others. These kinds of words are commonly called swearing. Eschholz (1978) classifies taboo words into five categories:
1. Sex and anatomy words, e.g. ass, fuck.
2. Excretion and bodily function words, e.g. shit, piss, a lot of crap.
3. Religion words, e.g. hell, damn.
4. Name words, e.g. animal, dog.
5. Interesting words, e.g. be hot, miss, love, making love, kiss.
Based on the explanation above, the taboo word classification used in this study was based on Eschholz's theory, because Fromkin's explanation of the kinds of taboo words is included in Eschholz's theory.
III. Review of the Related Findings
Several previous researchers have analyzed this topic. First, Rihana (1997) from Prayoga School of Foreign Languages, in her thesis "Taboo-Words in a Modern Novel: A Preliminary Investigation", analyzed the common taboo words found in the novel. Second, Panjaitan, Lidik and Eric, Syahrial and Kasmaini (2013) from Fakultas Keguruan dan Ilmu Pendidikan Universitas Bengkulu, in their journal article "A Sociolinguistics Analysis of Taboo Words in Kreayshawn's Song Lyrics", analyzed each taboo word based on its categories and its relation to sociolinguistics. Next, Yasa Febrianuswantoro and Emalia Iragiliati (2013) from the State University of Malang wrote an article on "The Use of Taboo Words Between Main Characters Seen in the Conviction Movie", aimed at identifying taboo words and the linguistic forms of taboo words related to sexual organs, the supernatural, excretion, religious matters, and death. The data of that study were taken from the Conviction movie and analyzed based on the theory of impoliteness (Culpeper, 1996) and of taboo words and face work (Brown and Levinson, 1987). The results showed that not all taboo words indicated the positive impoliteness superstrategy. These previous studies served as guidance for the writer in doing her analysis.
IV. Methods of the Research
This research was conducted through descriptive qualitative research. Bobby (2004: 01) defines: "A descriptive study reports the way things are. It is also used to summarize, organize and simplify data". Gay (2005: 208) also states: "Descriptive research involves collecting numerical data to test hypotheses or answer questions concerning current status conducted either through self-reports collected through questionnaires or interviews... or through observation". The statements above mean that the descriptive method is a research method that collects data through interviews, questionnaires, or observation.
Then, Patton (1990: 372) defines: "... qualitative analysis is a new stage of fieldwork in which analysts must observe their own processes even as they are doing the analysis. The final obligations of analysis are to analyze and report the actual findings. The extent of such reporting will depend on the purpose of the study." Equally, Picciano (2005: 01) defines: "Qualitative research is empirical research, in which the researcher explores relationships using textual, rather than quantitative data, case study, observation, and ethnography. Results are not usually considered generalizable, but are often transferable." Gay (2005: 208) defines: "Qualitative research is the collection and analysis of extensive narrative data to gain insights into a situation of interest not possible using other types of data." It can be concluded that descriptive qualitative research uses observation as one of the ways to collect data. Non-participant observation was used in the process of collecting data.
The population of this research was the taboo words in Blink 182's song lyrics of the "Enema of the State" album. The sample was Blink 182's song lyrics of the "Enema of the State" album published in 1999. Total sampling was used, meaning that all of the songs (12 songs) constituted the sample. The data were collected using an observation instrument. This technique made it easy to obtain the data directly; therefore, the data gained were valid and objective.
Based on the explanation above, the researcher listened to and read the text of Blink 182's song lyrics of the "Enema of the State" album on the tape recorder. The researcher then marked the taboo words in Blink 182's song lyrics of the "Enema of the State" album in each category.
The data were analyzed based on Maleong's theory in five steps (transcribing, verifying, classifying, interpreting, and concluding), as follows; a tally sketch of the classification step is given after this list: 1. Writing the transcription of the song lyrics of the "Enema of the State" album. 2. Verifying the song lyrics of the "Enema of the State" album. 3. Classifying the song lyrics into five categories (sex and anatomy words, excretion and bodily function words, religion words, name words, and interesting words), using a tally to record the frequency of each type's appearance. 4. Interpreting the song lyrics in each class. 5. Drawing the conclusion.
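To make the classification-and-tally step concrete, the following Python sketch counts category hits over a lyric line. The category lexicon is a hypothetical reconstruction from the words reported later in this paper, not the researcher's actual word lists, and the simple token matching ignores context-dependent cases such as love or kiss used in non-taboo senses.

```python
from collections import Counter

# Hypothetical lexicon following Eschholz's five classes; the actual
# word lists were compiled by the researcher.
CATEGORIES = {
    "sex and anatomy": {"fuck", "fucken", "ass", "sex", "sexual",
                        "sodomy", "tit", "tits", "porn"},
    "excretion and bodily function": {"bitch", "suck", "sucks", "underwear"},
    "religion": {"hell", "swear"},
    "name": {"dog", "mutt"},
    "interesting": {"love", "kiss"},
}

def tally(lyrics: str) -> Counter:
    """Count taboo-word occurrences per category in one lyric string."""
    counts = Counter()
    for token in lyrics.lower().split():
        word = token.strip(".,!?\"'()")
        for category, words in CATEGORIES.items():
            if word in words:
                counts[category] += 1
    return counts

print(tally("Fuck this place, I lost the war, I hate you all"))
# Counter({'sex and anatomy': 1})
```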
V. Findings
The data were taken from Blink 182's song lyrics of the "Enema of the State" album. In analysing them, the researcher used Eschholz's theory, which classifies taboo words into five types, namely sex and anatomy words, excretion and bodily function words, religion words, name words, and interesting words.
The following table shows the totals of taboo and non-taboo words in Blink 182's song lyrics of the "Enema of the State" album. There were 12 songs on the Blink 182 album "Enema of the State", published in 1999, with a total duration of 70:14 minutes. The total number of words was 2,520: 33 taboo words and 2,487 non-taboo words. Of the 12 songs, the writer found 3 song lyrics without taboo words: Aliens Exist, Adam's Song, and Wendy Clear.
The frequency of each kind of taboo word in Blink 182's song lyrics of the "Enema of the State" album was tallied (interesting words: love 4, kiss 1; total: 33 taboo words). Based on these results, the most dominant taboo words were sex and anatomy words (fucken/fuck, ass, sex/sexual, sodomy, tit, and porn), followed by religion words (hell and swear), excretion and bodily function words (bitch, suck, and underwear), interesting words (love and kiss), and lastly name words (dog and mutt). Almost all kinds of taboo words were used in Blink 182's song lyrics of the "Enema of the State" album. The data are interpreted in more detail below, starting from the most dominant category.
Sex and Anatomy Word
Data 1: Dumpweed: She's a dove, she's a fucken nightmare. Dysentery Gary: He's a fucken weasel.
Fuck the guy who took and ran away. Fuck this place, I lost the war, I hate you all, your mom's a whore.
The word fuck/fucken appeared six times in the song lyrics. There are several definitions of the word fuck/fucken. In the dictionary (Webster, 2000), the researcher found two meanings: a person with whom one engages in sexual intercourse, or a word used to express anger, disappointment, and frustration.
Based on the data above, the word fuck/fucken expresses anger, disappointment, and frustration. It can be seen from the context that the lyrics show anger toward a person and a place. This taboo word is often used by people to show their anger.
Data 2: Don't Leave Me: She said "don't let my door hit your ass". Going Away to College: I acted like an ass.
The word ass means the buttocks or anus, or a stupid or silly person. The taboo word ass in the song lyrics of "Don't Leave Me" refers to a part of the body that cannot be mentioned in speech or songs except in certain situations, such as a medical context.
The taboo word ass in the song lyrics of "Going Away to College" means a stupid or silly person. Using this word is not polite. This is not a problem for people who do not know English very well; they sing the songs whether they know the meaning or not, and perhaps they use the word in daily life.
Data 3:
The Party Song: where I put on some porn or have sex on the phone. Mutt: He's not that old, I've been told, strong sexual goal.
When we first hear the word sex/sexual, what comes to mind is always a negative perception, even though sex has another meaning, that is, gender or the state of being male or female. However, based on the data 3 above, sex/sexual means anything connected with sexual gratification or reproduction, or the urge for these, especially the attraction of those of one sex for those of the other.
In these song lyrics, it is clear that this word is taboo because it is connected with sexual gratification or reproduction. The use of this word in song lyrics makes someone imagine what they should not imagine.
Data 4:
What's My Age Again?: This state looks down on sodomy. Sodomy means any sexual intercourse held to be abnormal (anal intercourse, especially between two males). The use of this taboo word in song lyrics may expose teenagers to abnormal sexual intercourse, whether homosexual or lesbian. Some people who do not know about this may try or practice it on someone else, which increases the number of crimes.
This crime often happens to children who are still studying in elementary school and junior high school. They commit this crime after watching and listening to taboo things without knowing and understanding the meaning of what they do.
Data 5: The Party Song: Her volume of make-up, her fake tits were tasteless. Another word for tit is a woman's breast; this sense is vulgar. The word tit refers to a part of a woman's anatomy that cannot be mentioned in this context, as the word has a vulgar meaning. This can be seen from the word tasteless, which makes the meaning of tit even more vulgar.
Data 6:
The Party Song: Where I'd put on some porn or have sex on the phone.
Porn is a short form of pornography or pornographic, which means writings and pictures intended primarily to arouse sexual desire. It is quite dangerous for children and teenagers, since it is the first step for them toward knowing taboo, vulgar, and dirty things. They may start to write, see, and act on such things, which badly influences the young generation and, in turn, the country.
Religion Word
Data 7: Going Away to College: But I'd go through hell for you. What's My Age Again?:
What the hell is ADD? What the hell is caller ID? What the hell is wrong with me? The word hell, related to religion, has something to do with a place believed in some religions to be the home of devils and wicked people after death. The word hell becomes taboo because it is used in the wrong context; in these song lyrics, it is used to express annoyance. Only an uneducated person uses this word.
Data 8: Anthem: I'll pack my bags, I swear I'll run, wish my friends were 21.
In the dictionary, the meaning of the word swear is to say or promise something very seriously, or to use offensive words, especially when angry. In the text above, both meanings of the word apply. Besides, the word swear also expresses a person's anger.
Interesting Word
Data 9: Going Away to College: To fall in love or break it off. And if young love is just a game. Dysentery Gary: Cause I love your little motions. The Party Song: I wasn't out looking for love or affection. Love means a strong feeling of deep affection for something. There are many kinds of love, namely love for ourselves, love for parents, love for friends, love for special friends, and many others. In this case, the song lyrics talk about love for a special friend: a boy who loves a girl or a girl who loves a boy. It means that they have strong feelings for each other and the feeling is meaningful.
Data 10: Going Away to College: She kissed me after class. The word kiss is categorized as a taboo word of the interesting type. Looking at its meaning directly, kiss means to touch something with the lips to show love, affection, or respect, or as a greeting. From the sentence, they are not spouses, but they kiss one another, and in some cultures it is forbidden for a woman to kiss a man in this way. Kissing can be done to parents, to children, or to someone else; in that sense, kissing is not taboo because the meaning is just a greeting or a show of respect.
Excretion and Bodily Function Word
Data 11: What's My Age Again?: And that's about the time that bitch hung up on me. Lexically, bitch is a woman, especially a cruel and unpleasant one, that is, a woman whose bad attitude and behavior make people look down on her. The word bitch is used to insult someone, especially a woman, or to complain about something unkind and unpleasant.
Data 12: Dysentery Gary: Life just sucks, I lost the one, I'm giving up, she found someone. All the Small Things: Work sucks, I know. The word suck is often used when someone is angry at someone or something. In the dictionary (Oxford, 2004: 1211), suck means an act of sucking (at something) or to squeeze or roll something with the tongue while holding it in the mouth.
In the phrase above, life just sucks has a taboo meaning, as the word conveys a negative sense. This word influences the users' identity (good becomes bad), their life (educated becomes uneducated), and their attitude (moral becomes immoral).
Data 13: The Party Song: she wasn't wearing underwear, at least I prayed. She wasn't wearing underwear and you'll discover.
Clothes worn under other clothes and next to the skin are called underwear; they include bras, pants, and tights. The word underwear becomes taboo because of the words before it: she wasn't wearing underwear can make people imagine something that should not be imagined. It can cause many people to become immoral and make the number of crimes and instances of free sex increase quickly.
Name Words
Data 14:
Dysentery Gary: where is my dog? Girls are such a drug. Anthem: I think he humped the dog. A dog, a common animal with four legs, is often kept by human beings as a pet, hunter, or guard; dogs may also be wild. In the sentences above, the word dog implicitly means a male or female person, especially one who has done something unpleasant or wicked, comparable to bitch.
Using the word dog as in the sentences above is very harsh because it carries a negative meaning, describing someone in a degraded position. This influences teenagers in choosing the right words to speak.
Data 15: Mutt (title of the song). Mutt means a mixed-breed dog. Using the name of an animal as the title of a song makes the song less valued by some people. They may look down on the song, the singer, and the creator of the song. Many people involved in making the song can also be badly affected, since the quality of a song can be judged from its title.
VI. Conclusion
After analyzing the taboo words in Blink 182's song lyrics of the "Enema of the State" album, two conclusions can be drawn. Firstly, the total number of words in Blink 182's song lyrics of the "Enema of the State" album is 2,520: 33 taboo words and 2,487 non-taboo words. Of the 12 songs, only three do not use taboo words. Secondly, the kinds of taboo words are sex and anatomy words, excretion and bodily function words, religion words, name words, and interesting words. The most dominant taboo words are sex and anatomy words, followed by religion words, excretion and bodily function words, interesting words, and lastly name words.
"Linguistics"
] |
Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm
In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for the physical design components: partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across these components of physical design using a hierarchical approach based on evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria, such as power, clock, speed, and cost. A hybrid evolutionary algorithm is applied at each phase to achieve the objective, because an evolutionary algorithm that includes one or more local search steps within its evolutionary cycle can obtain the minimization of area and interconnect length. This approach combines, in a hierarchical design, a genetic algorithm and simulated annealing to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.
Introduction
Physical design automation has been an active area of research for at least three decades. The main reason is that the physical design of chips has become a crucial and critical design task due to the enormous increase in system complexity and continuing advances in electronic circuit design and fabrication. Commonly used high-level synthesis tools allow designers to automatically generate huge systems simply by changing a few lines of code in the functional specification. Nowadays, open source codes simulated in open source software can automatically be converted to hardware description codes, but the automatically generated codes are not optimized. Synthesis and simulation tools often cannot cope with the complexity of the entire system under development. Designers therefore often want to concentrate on critical parts of a system to speed up the design cycle. Thus, the present state-of-the-art design technology requires a better solution, offering fast and effective optimization [1]. Moreover, fabrication and packaging technology makes ever smaller feature sizes and larger die dimensions possible, allowing a circuit to accommodate several millions of transistors; however, logical circuits are restricted in their size and in the number of external pin connections.
The technology therefore requires partitioning a system into manageable components and arranging the circuit blocks without wasting empty space. Implementing a large circuit directly, without optimization, occupies a large area; hence, the large circuit must be split into small subcircuits. This minimizes the area of the manageable system and the complexity of the large system. When the circuit is partitioned, the number of connections between two modules (partitions) should be minimal. It is a design task addressed here by applying a hierarchical algorithmic approach to typical combinatorial optimization problems, such as dividing a large circuit system into smaller pieces. Figure 1 shows the design flow for the proposed approach.
The method of finding block positions and shapes while minimizing area is referred to as floorplanning. The input to floorplanning is the output of system partitioning and design entry. Floorplanning paves the way to predicting interconnect delay by estimating interconnect length. This is possible because both interconnect delay and gate delay decrease as the feature size of the circuit chips is scaled down, but at different rates. The goals of floorplanning are to (a) arrange the blocks on a chip, (b) decide the location of input and output pads, (c) decide the location and number of power pads, (d) decide the type of power distribution, and (e) decide the location and type of clock distribution.
Placement is much more suited to automation than floorplanning. The goal of a placement tool is to arrange all the logic cells within the flexible blocks on a chip. Ideally, the objectives of placement are to (a) minimize all the critical net delays, (b) make the chip as dense as possible, (c) guarantee the router can complete the routing step, (d) minimize power dissipation, and (e) minimize cross-talk between signals. The most commonly used objectives are (a) minimizing the total estimated interconnect length, (b) meeting the timing requirements for critical nets, and (c) minimizing the interconnect congestion.
Once the floorplanning of the chip and the placement of the logic cells within the flexible blocks are completed, it is time to make the connections by routing the chip. This is still a hard problem that is made easier by dividing it into smaller problems. Routing is usually split into global routing followed by detailed routing. Global routing does not finalize the connections; instead, it plans them so they can be completed efficiently. There are two types of areas to global route: one inside the flexible blocks and one between the blocks. The global routing step determines the channels to be used for each interconnect. Using this information, the detailed router decides the exact location and layer for each interconnect. The objectives are to minimize the total interconnect length and area and to minimize the delay of critical paths [2]. Figure 15 shows the overall area minimized using the hybrid evolutionary algorithm.
When the physical design components (partitioning, floorplanning, placement, and routing) are combined and optimized in terms of area, cost-increasing criteria such as the power and clock speed of each module can be controlled, and these sub-objective criteria can also be optimized to a further extent. In the last three decades, many interchange-based methods have been used, which often converge to local optima. Later, mathematical approaches were introduced together with heuristic models that produced better results, but each has its own advantages and disadvantages. Since a vast number of solutions are possible for this kind of problem, stochastic optimization techniques are commonly used. Many techniques have been proposed, such as global search algorithms (GSA) combined with local search algorithms (LSA) to produce better results.
Global optimization techniques such as the genetic algorithm (GA), which borrows the concept of generations from biological systems, have been used for physical design problems like circuit partitioning, floorplanning, placement, and routing. Genetic algorithms have been applied to many graph-based problems because the genetic analogy can be applied easily to such problems. Many researchers have proposed approaches to minimize circuit feature size using GA. Theodore Manikas and James Cain reported that GA requires more memory but takes less time than simulated annealing [3]. Sipakoulis et al. noted that enhancements such as different crossover operators, mutation, or different fitness functions can still be made to achieve optimal solutions [4]. This means that GA theory still offers room for new developments that can help find optimal solutions for physical design problems. This work proposes a hybrid evolutionary algorithm to solve the graph-based physical design component problems. The method includes the usual genetic algorithm steps, namely selecting a population, performing crossover on the selected chromosomes, and, when necessary, mutation to obtain better, more stable solutions. This work hybridizes two evolutionary algorithms, the genetic algorithm and simulated annealing, so that each overcomes the other's disadvantages. Algorithms of this type, which follow a general iterative heuristic approach, are commonly called hybrid evolutionary algorithms or memetic algorithms.
This work addresses circuit partitioning with the objective of reducing delay, floorplanning with the objective of reducing silicon area, placement with the objective of minimizing layout area, and routing with the objective of minimizing interconnect length. The main objectives of area optimization and interconnect length reduction are achieved by incorporating a hybrid evolutionary algorithm (HEA) into the VLSI physical design components.
Graphical Representation of Physical Design Components
2.1. Partitioning. Circuit partitioning reduces big circuits to small subcircuits and results in a better routing area for the layout. The circuit partitioning problem belongs to the class of NP-hard optimization problems [5]. To measure connectivity, we draw on the mathematics of graph theory. As Figure 2 illustrates, the problem can be considered a graph partitioning problem in which each module (gate, logic cell, etc.) is taken as a vertex (node or point) and each connection between two logic cells is represented as an edge [6].
The algorithm starts with gates placed on the graph as vertices, and an initial population is chosen as different permutations of the vertices of the given graph. Given an unweighted connected graph G = (V, E) on a set of vertices V and edges E, let k ≥ 2 be a given integer; find a partition V_1, V_2, V_3, ..., V_k of the vertex set such that each induced subgraph G_i = (V_i, E_i), for i = 1, 2, 3, ..., k, is connected. In the horizontal constraint graph, the vertex weight equals the width of the block but is zero for the source and sink nodes; the vertical constraint graph is constructed similarly, as shown in Figure 6.
Floorplanning
The vertical constraint graph G_v(V, E) is constructed for Figure 5 using the "above" constraint and the height of each block. The corresponding constraint graphs G_h(V, E) and G_v(V, E) are shown in Figures 6 and 7. Both G_h(V, E) and G_v(V, E) are vertex-weighted acyclic graphs, so a longest-path algorithm can be applied to find the x and y coordinates of each block; the coordinates of a block are those of its lower-left corner.
2.4. Routing. The classical approach in routing is to construct an initial solution using constructive heuristic algorithms. A final solution is then produced using iterative improvement techniques: a small modification is accepted if it reduces the cost; otherwise, it is rejected. Constructive heuristic algorithms produce an initial solution from scratch. They take a very small amount of computation time compared to iterative improvement algorithms and provide a good starting point for them (SM91). However, the solution generated by constructive algorithms may be far from optimal, so an iterative improvement algorithm is performed next to improve the solution.
Although iterative improvement algorithms can produce a good final solution, their computation time is large. Therefore, a hierarchical approach in the form of multilevel clustering is used to reduce the complexity of the search space. A bottom-up technique gradually clusters cells at several levels of the hierarchy. At the top level, a genetic algorithm is applied, and several good initial solutions are injected into the population.
A local search technique with dynamic hill climbing capability is applied to the chromosomes to enhance their quality. The system tackles some of the hard constraints imposed on the problem with an intermediate relaxation mechanism to further enhance the solution quality.
This problem is a particular example of the graph partitioning problem. Exact and approximation algorithms that run in polynomial time do not exist for graph partitioning problems, which makes it necessary to solve the problem using heuristic algorithms. The genetic algorithm is a heuristic technique, and a natural choice, that seeks to imitate biological reproduction and its capability to collectively solve a given problem. GA maintains several alternative solutions to the optimization problem, which are considered individuals in a population. These solutions are coded as binary strings, called chromosomes. The initial population is constructed randomly. The individuals are evaluated using a partitioning-specific fitness function. GA then uses these individuals to produce a new generation of, hopefully, better solutions. In each generation, two of the individuals are selected probabilistically as parents, with selection probability proportional to their fitness. Crossover is performed on these individuals to generate two new individuals, called offspring, by exchanging parts of their structure; each offspring thus inherits a combination of features from both parents. The next step is mutation: an incremental change is made to each member of the population with a small probability. This ensures that GA can explore new features that may not yet be in the population and makes the entire search space reachable despite the finite population size. The basic foundation of the algorithm is to represent each vertex in the graph as a location that can represent a logic gate, with a connection represented by an edge. A minimal sketch of this procedure is shown below.
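The following is a hedged sketch of this genetic loop for k-way graph partitioning; the encoding (a vertex-to-partition assignment), the cut-size fitness, and all rates are illustrative choices, not the exact operators or parameters of this work.

```python
import random

def cut_size(graph, assign):
    # Number of edges whose endpoints lie in different partitions.
    return sum(1 for u, v in graph["edges"] if assign[u] != assign[v])

def fitness(graph, assign):
    # Lower cut size -> higher fitness; +1 avoids division by zero.
    return 1.0 / (1.0 + cut_size(graph, assign))

def crossover(p1, p2):
    # One-point crossover on the vertex-to-partition assignment string.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(assign, k, rate=0.01):
    # Reassign each vertex to a random partition with small probability.
    return [random.randrange(k) if random.random() < rate else a for a in assign]

def ga_partition(graph, k=2, pop_size=40, generations=200):
    n = len(graph["nodes"])
    pop = [[random.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(graph, ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            # Roulette-wheel (fitness-proportional) selection with replacement.
            a, b = random.choices(pop, weights=fits, k=2)
            c1, c2 = crossover(a, b)
            nxt += [mutate(c1, k), mutate(c2, k)]
        pop = nxt[:pop_size]
    return max(pop, key=lambda ind: fitness(graph, ind))
```

For a two-way partition (k = 2), the assignment string reduces to the binary chromosome described above.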
Global Optimization Using GA
Genetic algorithms are optimization strategies that imitate the biological evolution process.A population of individuals representing different problem solutions is subjected to genetic operators, such as selection, crossover, and mutation, that are derived from the model of evolution.Using these operators the individuals are steadily improved over many generations and eventually the best individual resulting from this process is presented as the best solution to the problem.
Consider the graph G = (V, E) with |V| = n vertices and an integer k with 1 < k < n/4. Initialize a randomly generated population P of m elements, numbered 1 to m; each parent p_1 to p_m belongs to the population P. Perform two-point crossover on parents p_i and p_j from population P using the fitness function f(x) = k · c(x)/n, where c(x) is the number of nodes of the partition with maximum cardinality among the k partitions. Let c(i) and c(j) be the children obtained from p_i and p_j, respectively. If c(i) does not satisfy the fitness criterion (c(i) is not in P), then choose g randomly with g > i and swap c(g) and c(i); copy the first k elements of c(i) into V_1 and V_3. If c(j) does not satisfy the fitness criterion (c(j) is not in P), then choose h randomly with h > j and swap c(h) and c(j).
Copy the first k elements of c(j). Check the fitness of c(i), c(j), c(g), and c(h), and repeat the process with c(i) and c(j), c(g) and c(h), to obtain new offspring. The new offspring can have higher or lower fitness values depending on the parents; offspring with lower fitness can be discarded to reduce the number of cycles. In this work, the pure genetic algorithm is combined with simulated annealing to produce the optimal result. The algorithm starts with the generation of an initial random population; it is essential to set the initial population, the number of generations, the crossover type, and the mutation rate. The first step of the genetic algorithm is the selection process, which is based on the fitness function through which chromosomes are selected for crossover. Crossover is the reference point for the next-generation population. Crossover techniques used in genetic algorithms include one-point crossover, two-point crossover, cut-and-splice crossover, uniform crossover, half-uniform crossover, and so forth, depending on the need. After crossover comes the mutation process, which maintains genetic diversity from one generation to the next: the genetic sequence is changed from its original sequence by generating a random variable for each bit of the sequence. After mutation, offspring with sufficient fitness are placed in the new population for further iteration. The next step is to apply the local optimization algorithm within this genetic algorithm, as noted before; the local optimization is applied in three ways: (a) before the crossover, (b) after the crossover, and (c) before and after the crossover.
Exhaustive Hybridization. A few solutions are selected from the final generation and improved using local search. Figure 16 shows simulated results for the final generation.
Intermediate Hybridization. After a predetermined number of GA iterations, local search is applied to a few random individuals. This is done to escape local maxima and obtain a better solution. This work uses the intermediate memetic approach; a minimal schematic of these hybridization hooks is sketched below.
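A hedged schematic of the hybridization hooks follows; `local_search` stands for the simulated-annealing refinement described in the following sections, and the operator signatures are assumptions for illustration.

```python
import random

def memetic_step(pop, fitness, crossover, mutate, local_search, mode="after"):
    """One generation of the hybrid GA. `mode` selects where local search runs
    relative to crossover: 'before', 'after', or 'both'."""
    parents = random.sample(pop, 2)  # selection stub; fitness-proportional in practice
    if mode in ("before", "both"):
        parents = [local_search(p) for p in parents]
    c1, c2 = crossover(*parents)
    children = [mutate(c1), mutate(c2)]
    if mode in ("after", "both"):
        children = [local_search(c) for c in children]
    # Reduction: keep the fittest individuals for the next generation.
    return sorted(pop + children, key=fitness, reverse=True)[:len(pop)]
```

In the intermediate variant used in this work, `local_search` would be invoked only every predetermined number of generations rather than in every step.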
Creation of Initial Population.
The initial population is constructed from randomly created routing structures, that is, individuals. First, each of these individuals is assigned a random initial row number r_ind. Let U = {u_1, ..., u_i, ..., u_m} be the set of all pins of the channel that are not connected yet, and let C = {c_1, ..., c_j, ..., c_n} be the set of all pins having at least one connection to another pin. Initially, |C| = 0. A pin u ∈ U is chosen randomly among all elements in U. If C contains pins {c_p, ..., c_q, ..., c_v} (with 1 ≤ p < v ≤ n) of the same net, a pin is randomly selected among them. Otherwise, a second pin of the same net is randomly chosen from U and transferred into C. Both pins are connected with a so-called random routing. Then u is transferred into C. The process continues with the next random selection of u ∈ U until |U| = 0. The creation of the initial population is finished when the number of completely routed channels is equal to the population size |P|. As a consequence of our strategy, these initial individuals are quite different from each other and scattered all over the search space.
Populations and Chromosomes.
In GA-based optimization, a set of trial solutions is assembled as a population. The parameter set representing each trial solution, or individual, is coded to form a string, or chromosome, and each individual is assigned a fitness value by evaluation of the objective function. The objective function is the only link between the GA optimizer and the physical problem.
Calculation of Fitness.
The fitness of an individual in partitioning is based on the delay of the module. The fitness is given by a weighted evaluation of the maximum delay d(s), where s identifies a particular subgraph, d_max is a predetermined maximum value, and w is the weighting factor. Delay is measured as the difference between the final time and the initial time. The sum of the weighting factors equals one. The complete fitness function for partitioning is based on the total delay; assuming an individual is fully feasible and meets all constraints, its fitness value F ≤ 1, with smaller values being better.
At the beginning, a set of randomly generated Polish expressions is given for floorplanning to compose a population. The fitness of an individual in floorplanning is based on the area of the module. The area of a block is given by the general formula A = l · w, where l stands for the length of the module and w for its width. The fitness of the individual is given by a weighted evaluation of the maximum area A(s), where s identifies a particular subgraph, A_max is a predetermined maximum value, and w is the weighting factor; the sum of the weighting factors equals one. For routing, the fitness F(i) of each individual i ∈ P is calculated to assess the quality of its routing structure relative to the rest of the population P. The selection of mates for crossover, and of the individuals transferred into the next generation, is based on these fitness values. First, two functions F_1(i) and F_2(i) are calculated for each individual i ∈ P, where F_1(i) depends on row(i), the number of rows of individual i, and F_2(i) depends on: acc_n(i), the length of net n along the preferred direction of the layer; opp_n(i), the length of net n opposite to the preferred direction of the layer; a cost factor for the preferred direction; n_ind, the number of nets of individual i; v_ind, the number of vias of individual i; and a cost factor for vias. The final fitness F(i) is derived from F_1(i) and F_2(i) in such a way that area minimization, that is, the number of rows, always predominates over the net length and the number of vias. After the evaluation of F(i) for all individuals of the population, these values are scaled linearly, as described, in order to control the variance of the fitness in the population.
In placement, the cells present in the module are connected by wire. The interconnect length required for a connection is estimated as L = Σ_{i,j} w_{ij} (|x_i − x_j| + |y_i − y_j|), where w_{ij} is the weight of the connection between cells i and j, |x_i − x_j| is the distance between the two cells in the x direction, and |y_i − y_j| is the distance between the two cells in the y direction.
The interconnect length of each net in the circuit is estimated using a Steiner tree, and the total interconnect length is then computed by adding the individual estimates: L_total = Σ_{k=1}^{N} L_k, where L_k is the interconnect length estimate for net k and N denotes the total number of nets in the circuit. A small code sketch of these estimates follows below.
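To make the two estimates concrete, the following sketch computes the weighted cell-to-cell estimate above and a per-net bounding-box (half-perimeter) total; the data structures (cell coordinates, pair weights, nets as pin lists) are illustrative assumptions, not the representation used by the tool.

```python
def weighted_length(cells, weights):
    """Sum of w_ij * (|x_i - x_j| + |y_i - y_j|) over connected cell pairs.
    `cells` maps cell id -> (x, y); `weights` maps (i, j) -> w_ij."""
    return sum(w * (abs(cells[i][0] - cells[j][0]) + abs(cells[i][1] - cells[j][1]))
               for (i, j), w in weights.items())

def total_hpwl(nets, cells):
    """Total half-perimeter wirelength: for each net (a list of cell ids),
    add the semiperimeter of the bounding box of the net's pins."""
    total = 0.0
    for pins in nets:
        xs = [cells[p][0] for p in pins]
        ys = [cells[p][1] for p in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total
```

The half-perimeter form is a cheap lower-bound proxy that is commonly substituted for the Steiner estimate when many candidate solutions must be evaluated.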
Parents.
Following this initialization process, pairs of individuals are selected (with replacement) from the population in a probabilistic manner weighted by their relative fitness and designated as parents.
Children.
A pair of offspring, or children, is then generated from the selected pair of parents by the application of simple stochastic operators. The principal operators are crossover and mutation. Crossover occurs with a probability p_cross (typically 0.6-0.8) and involves randomly selecting a crossover site and combining the two parents' genetic information. The two children produced share the characteristics of the parents as a result of these recombination operators. Other recombination operators are sometimes used, but crossover is the most important. Recombination (e.g., crossover) and selection are the principal ways that evolution occurs in a GA optimization. Eventually a local minimum is reached, where flipping any single element in the solution results in a loss of objective value. Although these algorithms are simple, many complex improvements have been made in CAD tools, involving large dynamic memory and linked-list usage.
To refine the solution obtained by GA, local search (LS) is applied. It can be used before or after crossover; it can also be used during parent selection, and before or after mutation, to improve the fitness of the individuals (Algorithm 1).
Optimization by Simulated Annealing
The simulated annealing algorithm is applied as the local search process, even though SA is not strictly a local search algorithm. Here, simulated annealing is performed on the finally generated offspring to improve their fitness. This method is called intermediate MA.
Simulated annealing is a stochastic computational method for finding global extrema of large optimization problems. It was first proposed as an optimization technique by Kirkpatrick et al. in 1983 [15] and Cerny in 1984 [16]. The optimization problem is formulated by describing a discrete set of configurations (i.e., parameter values) and the objective function to be optimized; the problem is then to find a configuration vector that is optimal. The optimization algorithm is based on a physical annealing analogy. Physical annealing is a process in which a solid is first heated until all particles are randomly arranged in a liquid state, followed by a slow cooling process. At each (cooling) temperature, enough time is spent for the solid to reach thermal equilibrium, where energy levels follow a Boltzmann distribution. As the temperature decreases, the probability tends to concentrate on low-energy states. Care must be taken to reach thermal equilibrium before decreasing the temperature. At thermal equilibrium, the probability that a system is in a macroscopic configuration with energy E is given by the Boltzmann distribution. The behavior of a system of particles can be simulated using a stochastic relaxation technique developed by Metropolis et al. [17]. A candidate configuration for the next time step is generated randomly, and the new candidate is accepted or rejected based on the difference between the energies associated with the two states. The acceptance condition is determined as follows: given a current state i of the solid with energy level E_i, generate a subsequent state j randomly (by a small perturbation), and let E_j be the energy level at state j.
(i) If E_j − E_i ≤ 0, then accept state j as the current state. (ii) Otherwise, accept state j with probability exp(−(E_j − E_i)/(k_B T)), where k_B is the Boltzmann constant. One feature of the Metropolis formulation of simulated annealing is that a transition out of a local minimum is always possible at nonzero temperature. Another equally interesting property of the algorithm is that it performs a kind of adaptive divide-and-conquer: gross features of the system appear at higher temperatures, while fine features develop at lower temperatures. For this application, the implementation by Ingber [18] was used. Each solution corresponds to a state of the system; cost corresponds to the energy level; the neighborhood corresponds to the set of subsequent states that the current state can reach; and the control parameter corresponds to temperature. This mapping is sketched in code below.
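As a hedged illustration of this mapping, the following is a generic simulated annealing skeleton; the starting temperature, termination temperature, cooling rate, and moves-per-temperature defaults echo the example values quoted in the partitioning subsection below, while `cost` and `neighbor` are problem-specific callbacks (assumptions, not the exact implementation of this work).

```python
import math
import random

def simulated_annealing(initial, cost, neighbor,
                        t0=4_000_000.0, t_min=0.1, alpha=0.95, moves=100):
    """Generic SA skeleton: downhill moves are always accepted; uphill moves
    are accepted with probability exp(-dC/T) (Metropolis criterion)."""
    state, c = initial, cost(initial)
    t = t0
    while t > t_min:
        for _ in range(moves):            # fixed number of moves per temperature
            cand = neighbor(state)
            dc = cost(cand) - c
            if dc <= 0 or random.random() < math.exp(-dc / t):
                state, c = cand, c + dc
        t *= alpha                        # geometric cooling schedule
    return state
```

In practice the cooling rate alpha would itself vary with the temperature range, as described below.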
Partitioning Based on Simulated Annealing. The basic procedure in simulated annealing is to start with an initial partitioning and accept all perturbations or moves that result in a reduction in cost. Moves that result in a cost increase are also accepted, but with a probability that decreases with the size of the cost increase and also decreases in the later stages of the algorithm, as given in (11). A parameter T, called the temperature, is used to control the acceptance probability of the cost-increasing moves. The simulated annealing algorithm for partitioning the modules is described here. The cells are partitioned using simulated annealing so as to minimize the estimated interconnect length. There are two methods for generating new configurations from the current configuration [19]: either a cell is chosen randomly and placed in a random location on the chip, or two cells are selected randomly and interchanged. The performance of the algorithm was observed to depend on r, the ratio of displacements to interchanges; experimentally, r is chosen between 3 and 8. A temperature-dependent range limiter is used to limit the distance over which a cell can move. Initially, the span of the range limiter is twice the span of the chip; in other words, there is no effective range limiter in the high-temperature range. The span decreases logarithmically with the temperature: the vertical and horizontal window spans W_v(T) and W_h(T) are scaled from their initial values W_v(T_0) and W_h(T_0), where T is the current temperature and T_0 is the initial temperature. A sketch of this move generation follows below.
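As an illustrative sketch of the two move types, the generator below chooses displacements r times more often than interchanges; the chip dimensions, the placement encoding, and the default r value (within the experimentally suggested 3-8 range) are assumptions, and the range limiter is omitted for brevity.

```python
import random

def generate_move(placement, chip_w, chip_h, r=5):
    """Return a perturbed copy of `placement` (cell id -> (x, y)).
    Displacements are chosen r times more often than interchanges."""
    new = dict(placement)
    cells = list(new)
    if random.random() < r / (r + 1):
        # Displacement: move one random cell to a random location.
        c = random.choice(cells)
        new[c] = (random.uniform(0, chip_w), random.uniform(0, chip_h))
    else:
        # Interchange: swap the positions of two random cells.
        a, b = random.sample(cells, 2)
        new[a], new[b] = new[b], new[a]
    return new
```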
The wirelength cost is estimated using the semiperimeter method, with weighting of critical nets and independent weighting of horizontal and vertical wiring spans for each net: C = Σ_n [w_h(n) · S_h(n) + w_v(n) · S_v(n)], where S_v(n) and S_h(n) are the vertical and horizontal spans of net n's bounding rectangle and w_h(n) and w_v(n) are the weights of the horizontal and vertical wiring spans. When critical nets are assigned a higher weight, the annealing algorithm tries to place the cells interconnected by critical nets close to each other. Independent horizontal and vertical weights give the user the flexibility to prefer connections in one direction over the other. The acceptance probability is given by exp(−ΔC/T), where ΔC is the cost increase and T is the current temperature. When the cost increase grows, or when the temperature decreases, the acceptance probability gets closer to zero; thus, the probability that exp(−ΔC/T) exceeds random(0, 1) (a random number between 0 and 1) is high when ΔC is small and T is large. At each temperature, a fixed number of moves per cell is allowed, specified by the user. The higher the maximum number of moves, the better the results obtained; however, the computation time increases rapidly. There is a recommended number of moves per cell as a function of the problem size: for example, for 200-cell and 3000-cell circuits, 100 and 700 moves per cell are recommended, respectively. The annealing process starts at a very high temperature, for example T_0 = 4,000,000, so that most moves are accepted. The cooling schedule is represented by T_{k+1} = α(T_k) · T_k, where α(T) is the cooling rate parameter, determined experimentally. In the high and low temperature ranges, the temperature is reduced rapidly (e.g., α(T) ≈ 0.8); in the medium temperature range, however, the temperature is reduced slowly (e.g., α(T) ≈ 0.95). The algorithm is terminated when T is very small, for example when T < 0.1. Within each temperature range, the number of moves is determined experimentally; once set, it is fixed for the remainder of the schedule.
Floorplanning Based on Simulated Annealing. This section describes optimal floorplanning based on the simulated annealing algorithm. Assume that a set of modules is given and that each module can be implemented in a finite number of ways, characterized by its width and height. Important issues in the design of a simulated annealing optimization are (1) the solution space, (2) the movement from one solution to another, and (3) the cost evaluation function.
The leaf cells correspond to the operands and the internal nodes correspond to the operators of the Polish expression. Figure 3 shows the floorplan module. A binary tree can also be constructed from a Polish expression by using a stack, as shown in Figure 4. The simulated annealing algorithm moves from one Polish expression to another. A floorplan may have different slicing tree representations; for example, the tree in Figure 4 represents the floorplan given in Figures 8, 9, and 10. There is a one-to-one correspondence between a floorplan and its normalized Polish expression, whereas unnormalized expressions lead to a larger solution space and some bias towards floorplans with multiple tree representations. The movements are designed so that they generate only normalized Polish expressions; thus, in effect, the algorithm moves from one floorplan to another. Starting from an arbitrary floorplan, it is possible to visit all floorplans using the movements; if some floorplans could not be reached, there would be a danger of losing valid floorplans from the solution space. Starting from any floorplan, the modules can move to another floorplan according to the given conditions. The cost function is a function of the floorplan, or equivalently of the Polish expression, and has two components: area and wirelength. The area of a sliceable floorplan can be computed easily using the floorplan sizing algorithm [21]. The wirelength cost can be estimated from the perimeter of the bounding box of each net, assuming that all terminals are located at the center of their module. In general, there is no universally agreed-upon method of cost estimation; for simulated annealing, the cost function must be cheap to evaluate, because thousands of solutions need to be examined. Figures 8, 9, and 10 show a series of movements that lead to a solution. We implemented the exponential function for the accept method. A sketch of the stack-based area evaluation is given below.
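The stack-based evaluation of a slicing floorplan can be sketched as follows; 'V' and 'H' denote vertical and horizontal cut operators, a notational assumption since the original operator symbols were lost in extraction, and the module dimensions are made-up examples.

```python
def slicing_area(polish, dims):
    """Area of a slicing floorplan given a postfix Polish expression.
    `polish` is a token list like ['1', '2', 'V', '3', 'H'];
    `dims` maps module name -> (width, height)."""
    stack = []
    for tok in polish:
        if tok == 'V':                     # vertical cut: blocks side by side
            w2, h2 = stack.pop(); w1, h1 = stack.pop()
            stack.append((w1 + w2, max(h1, h2)))
        elif tok == 'H':                   # horizontal cut: blocks stacked
            w2, h2 = stack.pop(); w1, h1 = stack.pop()
            stack.append((max(w1, w2), h1 + h2))
        else:                              # operand: a module
            stack.append(dims[tok])
    w, h = stack.pop()
    return w * h

# Example: modules 1 and 2 side by side, with module 3 stacked on top.
print(slicing_area(['1', '2', 'V', '3', 'H'],
                   {'1': (2, 3), '2': (1, 3), '3': (3, 1)}))   # -> 12
```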
Placement Based on Simulated Annealing. The simulated annealing algorithm mimics the annealing process used to gradually cool molten metal to produce high-quality metal structures: (i) an initial placement is improved by iterative swaps and moves, (ii) swaps are accepted if they improve the cost, and (iii) swaps that degrade the cost are accepted under some probability conditions, to prevent the algorithm from being trapped in a local minimum, so that it can reach a globally optimal solution given enough time.
The advantages of using SA are an open cost function, wirelength cost, and timing cost; its main disadvantage is slowness. The purpose of our algorithm is to find a placement of the standard cells such that the total estimated interconnection cost is minimized. The placement algorithm is divided into four principal components [22].
Initial Configuration.
Initially, the circuit is decomposed into individual cells, and the input and output cells of each cell are identified.
The annealing procedure then starts by placing the cells on the chip randomly. Finally, the total area of the circuit is calculated and the cells are placed accordingly, at equal distances from each other. Since the cells are placed randomly, the distances between them, and hence the lengths of their interconnections, will be large. Next, the algorithm uses move-generation functions to reach an optimal placement for the chip: (a) move a cell to a random position, and (b) swap the positions of two cells. The algorithm uses both strategies randomly: 50% of the move generation is done through random moves (a) and the rest (50%) through swapping (b).
Cost Function.
The cost function in the algorithm comprises two components. C_1 is a measure of the total estimated wirelength: for any cell, we find the wirelength by calculating the horizontal and vertical distance between it and its output cell, and the summation is taken over all cells in the circuit. When a cell is swapped, two cells may come to overlap each other. Let O denote the overlap between two cells; this overlap is clearly undesirable and should be minimized. To penalize overlap severely, we square it, so that larger overlaps incur larger penalties; C_2 denotes the total squared overlap of a chip. Thus, when we generate a new move, we calculate the cost function for it; if the new move has a cost lower than the previous best move, we accept it as the best move. If we find a solution that is non-optimal, we do not reject it completely: we define an Accept function, a probabilistic acceptance function that determines whether to accept a move or not, implemented here as an exponential function. We accept non-cost-optimal solutions to give the annealing schedule a chance to move out of any local minimum it may have hit; if a certain annealing schedule hits a local minimum and we never accept non-cost-optimal solutions, the annealing cannot reach the global minimum. By the nature of the accept function used here, the probability of accepting non-cost-optimal solutions is higher at the beginning of the annealing schedule; as the temperature decreases, so does this probability, since the perturbations of a circuit are larger at higher temperatures than at lower ones. A sketch of the overlap penalty is given below.
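A minimal sketch of the squared-overlap penalty C_2 follows; cells are assumed to be axis-aligned rectangles given as (x, y, width, height), and combining it with the wirelength term C_1 (e.g., via a weighting factor) is also an assumption of this sketch.

```python
def overlap(a, b):
    """Overlap area of two axis-aligned cells given as (x, y, w, h)."""
    ox = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    oy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ox * oy

def overlap_penalty(cells):
    """C2: squared pairwise overlaps, so larger overlaps are
    penalized much more heavily, as described above."""
    ids = list(cells)
    return sum(overlap(cells[a], cells[b]) ** 2
               for i, a in enumerate(ids) for b in ids[i + 1:])
```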
Routing Based on Simulated Annealing. Let A and B be copies of the mates and let D be their descendant. First, a cut column c is randomly selected with 1 ≤ c < n_ind, where n_ind represents the number of columns of the individuals. Individual A (B) transfers to D the routing structure located to the left (right) of the cut column and not touched by it. Assume that the part of A (or B) to be transferred into D contains rows not occupied by any horizontal segments; then the row count of A (or B) is decremented by deleting such rows until no empty row is left. The initial row number of D is equal to the maximum of the row numbers of A and B. The mate that now contains fewer rows than D is extended with additional row(s) at random position(s) before transferring its routing structure to D.
The routing of the remaining open connections in D is done in a random order by our random routing strategy. If the random routing of two points does not lead to a connection within a certain number of extension lines, the extension lines are deleted and the channel is extended at a random position r_add with 1 ≤ r_add ≤ r_ind. If repeated extension of the channel still does not enable a connection, D is deleted entirely and the crossover process starts again with a new random cut column applied to A and B. The creation of D is finished by deleting all rows in D that are not used for any horizontal routing segment [23,24].
The reduction strategy simply chooses the fittest individuals of P(t) to survive into the next generation P(t+1). The selection strategy is responsible for choosing the mates among the individuals of the population P.
In standard terminology, our selection strategy is stochastic sampling with replacement: any individual i ∈ P is selected with a probability proportional to its fitness, p_i = F(i) / Σ_{j∈P} F(j). The two mates needed for one crossover are chosen independently of each other, and an individual may be selected any number of times in the same generation.
Experimental Results
This work compares the performance of the combined physical design automation tool on different benchmarks of the physical design components. The iterative heuristic technique combines GA and SA. Execution across all levels is, on average, 45% faster than a simple genetic algorithm. The objective of each physical design component is discussed below together with its results, which were obtained by simulating each element of the physical design components using the general iterative heuristic approach. The experiments were executed on an Intel Core i3 processor with a clock speed of 3.3 GHz running Windows XP.
6.1. Partitioning. Delay (ps) is the delay of the most critical path, t (s) is the total run time, and t_best (s) is the execution time in seconds for reaching the best solution. The objective of area minimization is thus furthered by reducing the delay in circuit partitioning (see Table 1 and Figure 11).
Floorplanning. In floorplanning, wirelength and CPU time are compared. This heuristic approach reduces the wirelength by 0.5 mm on average when compared with fast simulated annealing. When the wirelength is reduced, the floorplan area is obviously reduced as well (see Table 2 and Figure 12).
Placement. The initial population is generated and the fitness function evaluated. Based on that fitness, parents are selected for crossover; after this, the normal mutation and inversion operations take place. In addition, local search is applied to each subpopulation to refine the fitness of each individual and obtain the optimal solution. Cells gives the number of elements in the circuit; nets gives the number of interconnections (see Table 3 and Figure 13).
6.4. Routing. The method is also surprisingly fast, even when compared to tools that perform pattern routing. Improving routing congestion is a significant concern for large and dense designs. If routing fails, the design team must modify the placement or possibly increase the chip size to introduce additional routing resources. In fixed-die design, if additional space is available, the impact of increased routing area will generally be limited to increased wirelength and power consumption; if additional space is not available, routing failure may increase the cost of a design substantially.
The results obtained on the popular benchmarks show a reduced final interconnect length (see Table 4 and Figure 14).
Conclusion
By reducing the wirelength, the manufacturing cost of very large scale integrated chips can be reduced. This paper exploits the advantages of the memetic algorithm, which proved to be 45% faster than the simple genetic algorithm, reducing the delay and area in partitioning and floorplanning, respectively, and thereby the wirelength. In hybrid approaches, local search techniques explore the solution space close to the sample points by applying specialized heuristics. When problem-specific knowledge is included during the creation of individuals, as in our approach, it is possible to identify unfavourable or redundant partial solutions and consider only the most promising ones. Therefore, each individual in our hybrid genetic algorithm encodes a set of high-quality solutions, the best of which is a local optimum. The implementation of multiple objectives in the algorithm enables obtaining a near-optimal solution. After a predetermined number of GA iterations, local search by simulated annealing is applied to a few random individuals to obtain the optimal solution. In the future, the performance of the algorithm should be tested on further benchmarks.
Figure 1: Design flow for the proposed approach.
Figure 11: Comparison of GA delay and hybrid delay.
Figure 13: Comparison of GA and hybrid interconnect length.
Figure 14: Comparison of GA and hybrid layout area.
Figure 15: Overall area minimized using hybrid evolutionary algorithm.
Figure 16: Simulated results for final generation.
Table 1: Partitioning optimization of GA compared with the hybrid algorithm.
Table 2: Floorplanning optimization of GA compared with the hybrid algorithm.
Table 3: Placement optimization of SA compared with the hybrid algorithm.
Table 4: Routing optimization of SA compared with the hybrid algorithm.
"Engineering",
"Computer Science"
] |
Hot electrons in water: injection and ponderomotive acceleration by means of plasmonic nanoelectrodes
We present a theoretical and experimental study of a plasmonic nanoelectrode architecture that is able to inject bunches of hot electrons into an aqueous environment. In this approach, electrons are accelerated in water by ponderomotive forces up to energies capable of exciting or ionizing water molecules. This ability is enabled by the nanoelectrode structure (extruding out of a metal baseplate), which allows for the production of an intense plasmonic hot spot at the apex of the structure while maintaining the electrical connection to a virtually unlimited charge reservoir. The electron injection is experimentally monitored by recording the current transmitted through the water medium, whereas the electron acceleration is confirmed by observation of the bubble generation for a laser power exceeding a proper threshold. An understanding of the complex physics involved is obtained via a numerical approach that explicitly models the electromagnetic hot spot generation, electron-by-electron injection via multiphoton absorption, acceleration by ponderomotive forces and electron-water interaction through random elastic and inelastic scattering. The model predicts a critical electron density for bubble nucleation that nicely matches the experimental findings and reveals that the energy transfer from the plasmonic hot spot to the free electron cloud is much more efficient (17 times higher) in water than in a vacuum. Because of their high kinetic energy and large reduction potential, these proposed wet hot electrons may provide new opportunities in photocatalysis, electrochemical processes and hot-electron driven chemistry.
INTRODUCTION
The possibility of generating free electrons in water has attracted interest for several decades in many different fields of chemical and physical science because of their extremely high reactivity 1 . Indeed, free electrons can be considered the most powerful and simple reducing agents in chemistry, showing huge reduction potentials, exceeding −5 eV with respect to the normal hydrogen electrode 1 . Free electrons have a fundamental role in many photochemical and electrochemical processes, and they participate as a trigger or an intermediate state in an extremely wide variety of chemical, biological, and physical processes. However, although intense study has been dedicated to them since the 1970s 2 , many aspects remain unclear. The difficulties originate from the wide range of energy and time scales involved in these processes: the time landscape can span from femto- to micro-seconds, making computational methods such as molecular dynamics less effective. Most of the current knowledge comes from experiments on the radiolysis of water produced by a high-energy electron beam or intense laser radiation, which usually yield very different and complex outcomes.
Recently, electron-driven processes have gained further attention because of their favorable combination with plasmonic nanostructures 3-5 , opening the path to plasmon-driven photo-electrocatalytic processes 6,7 . The latter appear greatly promising because they combine the capability of plasmonic nanostructures to harvest optical energy and produce so-called hot electrons 6-10 , namely energetic electrons that are not in thermal equilibrium with their environment. As a result, plasmonic hot spots are emerging as an ideal tool for triggering electro-photochemical reactions that otherwise present very low efficiencies 6 . The injection of hot electrons into vacuum 11-13 , solid 9,14-16 or liquid 17-21 media is being extensively investigated for many applications. However, whereas the physics of injection into solid devices is largely understood 9 , the physical and chemical behavior of hot electrons in liquids is far more complex 17-21 and still presents many issues that must be clarified for this field to evolve. Towards this goal, it would be extremely useful to have a controllable and effective source of free electrons in water; more precisely, electrons that are not transferred to molecules adsorbed onto the metal surface but are directly injected into the water environment. Furthermore, it would be important to be able to increase the kinetic energy of injected electrons above the thresholds for water excitation (4-6 eV) and water ionization (10-12 eV) 1 .
In this work, we report experimental and theoretical results on hot-electron injection in water and the ponderomotive acceleration of the electrons by means of plasmonic nanoelectrodes illuminated by femtosecond laser pulses. Namely, a plasmonic hot spot is exploited to enhance multiphoton absorption at a nanotip-like electrode, causing electron injection from the metal into the water. We show that once free electrons are injected into water, they can be accelerated by the plasmonic field through the ponderomotive process 13 . Under proper conditions, the free-electron kinetic energy can exceed tens of eV, that is, above the thresholds for water excitation, ionization, and secondary electron avalanche generation. As described below, we implemented a comprehensive multiphysics model adopting an electron-by-electron simulation approach that yields good agreement with the experimental data and provides clear, visual insight into the whole process. Importantly, we show that elastic collisions of free electrons with water molecules make the free-electron cloud more confined, thus enhancing the ponderomotive energy transfer. The process is therefore more effective in water than in a vacuum (a factor of 17 enhancement).
The experimental setup is sketched in Figure 1. As shown in the figure, we exploited a 3D vertical plasmonic nanoantenna protruding from planar electrodes 22-24 . This configuration has the following important advantages: (i) the injection of electrons into water can be verified and quantitatively measured by following the electrical current flowing through the electrodes; (ii) heat generated at the antenna tips is dissipated into the electrode and substrate without damaging the antennas, as often occurs when nanoparticles are used in similar experiments 18 ; and (iii) the electrode acts as a metal reservoir, compensating the charging effects caused by hot-electron emission 25 . In this manner, there is no decrease in the efficiency of the system, in contrast to plasmonic nanoparticles, where charge-carrier recombination must be counterbalanced to preserve the efficiency of the emission phenomenon 26 . In other words, free electrons can be steadily generated at each optical cycle and then accelerated without being affected by the restoring force that typically rules the exciton dynamics.
Importantly, the optical setup enables the observation of cavitation bubbles for laser powers higher than a certain threshold and demonstrates the effectiveness of this electron acceleration, as described below.
MATERIALS AND METHODS
Gold planar electrodes were evaporated on quartz samples and then connected to external pads by means of gold tracks. Arrays of gold vertical nanotubes (1800-nm-tall, 90-nm outer radius, 60-nm inner radius, 3-μm pitch) were fabricated by means of secondary electron lithography 22,27 , a technique based on focused ion beam milling of a silicon nitride membrane coated by an S1813 resist layer. Gold tracks finally bring the contact outside the sample. In this way, it is possible to measure the electron current at the nanoantenna/water interface during laser excitation. To electrically insulate the conductive tracks from the deionized water, a 2-μm SU-8 photoresist passivation layer was deposited on the whole sample. This passivation layer was patterned by optical lithography to expose the planar electrodes with the nanoantennas to the water. A scanning electron microscope image of an antenna array is shown in the inset of Figure 1, which also shows a schematic of the electro-optical measurement setup.
In the experiment, a tunable laser (Coherent MIRA900 with Coherent Verdi G10 pump, Coherent Inc., Santa Clara, CA, USA) was used as the light source, with the wavelength tuned in the near infrared at 850 nm and pulsed emission with 200-fs pulse width at a 76-MHz repetition rate. The pulsed beam is then chopped at 780 Hz with a chopper wheel and fed to an upright WiTec microscope. The laser is focused onto the nanoantenna tip by means of a 60× immersion objective (NA = 1) that produces a laser spot with a beam waist of ~700 nm, as estimated experimentally by a Gaussian fit of the intensity profile. The sample with the nanoantennas is immersed in MilliQ grade deionized water and is electrically connected to a transimpedance amplifier with 10^7 V/A gain (Femto DHPCA-100, FEMTO Messtechnik GmbH, Berlin, Germany). A platinum wire immersed in the deionized water acts as counter-electrode for the current measurements; all measurements of photocurrent are made without the application of a bias between the platinum counter-electrode and the sample with nanoantennas. The transimpedance output is fed to the input of a lock-in amplifier (Stanford Research Systems SR830, Stanford Research Systems, Inc., Sunnyvale, CA, USA), which locks the signal to the chopper frequency. The SU-8 passivation on the sample ensures that there are no leakage currents between the platinum wire and the other gold surfaces on the sample. A rotating gradual filter wheel was used to change the laser intensity while the current at the nanoantenna/water interface was measured. By means of a photodiode connected to the same oscilloscope as the output of the lock-in amplifier, we were able to simultaneously measure the laser intensity and the corresponding generated current. For each laser intensity setting, we measured the generated current while the excitation spot was moved between nanoantennas and the planar gold substrate to allow the corresponding measured currents to be compared.
To simulate the complex physics involved in our system, we developed a model implemented in the COMSOL Multiphysics simulation environment. Its description in full detail is reported in the Supplementary Information.
The electromagnetic field distribution around the nanoantenna (an example is reported in Figure 2b) is obtained by solution of the time-harmonic Helmholtz equation, assuming a normally impinging, linearly polarized plane wave with unitary amplitude. The time-dependent electromagnetic field produced by the focused pulsed beam, E(x,t), is then obtained by properly renormalizing the field to the spatial and temporal maximum of the pulse and multiplying by the temporal pulse shape provided by the laser datasheet (in our case a sech²-like time dependence). In particular, the range of laser powers considered is 1-6 mW. All other parameters of the model are fixed to match the experimental ones.
The electron photoinjection in water is modeled by adopting the experimentally found photocurrent dependence on the impinging power, i = A·P³ (see also the 'Results and Discussion' section). The input to the charged-particle tracing simulation is the photoemission current density j(x, t) = A′|E(x, t)|⁶, where the constant A′ is obtained from the fit constant A upon proper renormalization (Supplementary Information). This procedure avoids explicit modeling of the photoemission process, which critically depends on the local work function of the gold-water interface and, in turn, on the gold roughness and the space-dependent temperature distribution. The correct space and time dependence of the current density is given by the electric-field enhancement distribution. Electrons are injected according to j(x, t) with an initial kinetic energy ε_initial = 3ħω − W ≈ 0.65 eV.
With respect to the current literature, here we do not use macroscopic plasma-fluid-dynamics equations to model the effect of the free electrons in water. Instead, we work at a more fundamental level, explicitly modeling the electron-by-electron injection and dynamics in the water environment in the presence of the plasmonic hot spot. The electron trajectories are obtained by solving the equation of motion subject to the time-dependent force produced by the plasmonic electromagnetic field and to stochastic deviations produced by collisions with water molecules. Following a very recent paper in the literature 28 , we take the details of the integral and differential cross sections for the angular deviation in elastic collisions from interpolation/extrapolation of the reported experimental data. For inelastic collisions, we take into account the ionization and excitation differential inverse mean free paths, calculated from the electron energy-loss function, as described extensively in references 29-33 .
We observe that the model considers the water properties at a molecular level as follows: the ionization pathways for the five molecular orbitals (1a 1 , 2a 1 , 1b 2 , 3a 1 , 1b 1 ) of the H 2 O molecule in the liquid phase, five excitation levels ( Ryd C+D, diffuse bands), exchange effects and semi empirical lowenergy corrections to improve the reliability of the model at low energies 29 . Recombination is neglected in the model because it occurs in times much longer than the pulse duration 21 . We highlight the importance of the developed model because the overall number of electrons produced by primary and secondary emission is not very large at the considered impinging light intensities, ranging from few tens to some hundreds of thousands of electrons. Therefore, at the lowest powers, it is expected that macroscopic transport equations do not provide a realistic picture. However, the limited electron number allows an explicit electron track simulation to be conducted at reasonable times. Figure 2a reports the measured currents (i) read from the electrode as a function of the laser power (P) impinging onto the sample. The measured photocurrent is the result of the charge transfer between the gold antenna and water when hot electrons are ejected from the former and transferred to the latter during laser excitation. We compare the typical current read from the electrode by on-axis illumination of one antenna (blue circles) with the one obtained from flat gold. The antenna yields a current larger by a factor 40, for P43.5 mW. Clearly, this is related to the plasmonic hot spot produced at the antenna termination, as shown in Figure 2b, where we report a finite element simulation of the electric field norm around the antenna. The left color scale reported refers to the case P = 5 mW at the peak of the pulse, and the right scale shows the enhancement with respect to the impinging beam field at focus. Two different regimes are clearly found within the spanned power range. Below a certain threshold of P = P*≈3.5 mW, the current follows a power law dependence on the input power, reaching maximum values of 12 nA. From a fit of the data with a functional form of i(t) = AP n , we obtain A = 0.231 ± 0.029 nA/(mW) n and n = 3.023 ± 0.11. Such a dependence suggests that the main emission mechanism in this power range can be identified as 3-photon absorption 32 . This absorption is expected because the impinging photons at λ = 850 nm have an energy of 1.46 eV, that is, the simultaneous absorption of three photons is required to exceed the gold-water work function, whose value is W = 3.72 eV 34 .
RESULTS AND DISCUSSION
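As an aside, the power-law exponent can be recovered from (P, i) data with a simple log-log linear regression; the numbers below are synthetic placeholders generated from the fitted constants, not the measured dataset.

```python
import numpy as np

# Synthetic (P in mW, i in nA) placeholder data built from the fit constants.
P = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
i = 0.231 * P ** 3.02

# Fit i = A * P**n by linear least squares in log space:
# log i = log A + n * log P.
n, logA = np.polyfit(np.log(P), np.log(i), 1)
print(f"A = {np.exp(logA):.3f} nA/mW^n, n = {n:.2f}")
```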
Above P*, the current shows irregular oscillatory behavior around the saturation value of 11 nA. Visual inspection by optical microscopy reveals the formation of a cavitation bubble for powers P > 8 mW (Figure 2a, inset). However, the refresh time of our camera is relatively long, 160 ms, and according to the recent literature the cavitation bubble dynamics at threshold conditions is much faster (~100 ns); thus, we expect the actual threshold for cavitation to be at P < 8 mW. On the other hand, the sudden departure from the power-law behavior observed at P* = 3.5 mW suggests that this is the threshold for nanobubble formation. Indeed, the bubble partially screens the nanoelectrode ends and changes the local refractive index, producing both a decrease in the number of photons reaching the antenna apex and a reduction of the expected field enhancement 17-21,35 . These effects likely compensate for the increase in impinging light power, thus producing the observed saturation in the i-P curve.
The calculated peak fluence corresponding to the threshold power P = 3.5 mW is ~5.5 mJ cm⁻². As a comparison, for an 800-nm wavelength, 200-fs-long pulse, the fluence required for optical breakdown in pure water has been reported to be ~800 mJ cm⁻² (Refs 18, 21), while that yielding relevant plasma-related bubble formation at off-resonant gold nanoparticles is ~200 mJ cm⁻² (Refs 17, 18). The fluence causing bubble formation at a resonant gold nanoparticle has been reported to be much lower, ~9 mJ cm⁻²; however, in this case the effect has been related to the huge energy absorption and consequent temperature increase within the gold nanoparticle, leading to damage or fragmentation of the particle itself 17,35 . This effect can reasonably be excluded in the present case because of the efficient heat dissipation provided by the gold baseplate. Indeed, no alteration of the structures or of the current response was observed in the considered power range (for more details, see the Supplementary Information). Nano- and micro-bubble generation by resonant gold nanoparticles has been investigated in the literature for continuous-wave laser excitation 36 . There, the plasmonic electric fields due to a continuous-wave laser are orders of magnitude lower and cannot extract electrons from the gold-water interface; bubble formation is instead produced by the large temperature increase originating from energy absorption.
When plasmonic nanostructures are excited with visible/near-infrared laser pulses in the femto-/picosecond regime, they can emit electrons into free space by means of photoelectric emission. Such a process has been extensively studied in vacuum for the generation of highly energetic electron bunches [11-13]. Explicit modeling of the electrons photoemitted from illuminated plasmonic sources has been presented by Dombi et al. [11-13]. In those papers, the authors propose a model consisting of three separate steps: electromagnetic absorption in the plasmonic structure, electron injection by multiphoton or field emission, and ponderomotive acceleration by the plasmon-enhanced electromagnetic field. According to that well-established model, once the electron is emitted into vacuum, its energy equals its initial kinetic energy plus the potential energy arising from its immersion in the electric field. Because the electron is free to move, the potential energy is converted into kinetic energy. This process is usually called ponderomotive acceleration and has been investigated extensively under vacuum conditions. In this work, we invoke the same description to explain the free-electron dynamics in the water environment, showing that the presence of elastic collisions makes the process much more effective.
Let us now consider the generation of free electrons in water, which is usually achieved by using strongly focused femtosecond pulses without plasmonic structures. A widely used schematic description [21] considers water, on femtosecond timescales, to behave as an amorphous semiconductor with a band gap Δ = 6.5 eV [37]. Once a free electron is produced in water by laser ionization (excited to the conduction band, in the formalism of semiconductor physics), it can gain kinetic energy through a process called inverse bremsstrahlung. If the electromagnetic field is strong enough, the electron gains sufficient energy to produce secondary electrons through impact ionization, and above a suitable laser power threshold this may result in avalanche generation and plasma formation. Under these conditions, a strong energy transfer from the plasma to the water produces a vapor bubble. In particular, Vogel et al. [21] determined that, for λ = 800 nm, a density ρ* ≈ 0.236 nm⁻³ defines the cavitation threshold in pure water. The free-electron density is, therefore, a crucial parameter. More recently, it has been shown that inserting noble-metal nanoparticles within the focus of a high-energy laser pulse in water strongly reduces the threshold laser fluence for bubble nucleation [17-20,38].
This phenomenon has been explained by the intense electromagnetic field enhancement surrounding the metal nanoparticles, which, depending on the light intensity, may either induce water ionization and the subsequent generation of a highly absorptive plasma [17-20] or simply determine a strong light absorption in the nanoparticle, with consequent heat transfer to the water environment [38].
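To make the interplay between multiphoton seeding and avalanche growth concrete, here is a minimal sketch of the generic seeded-avalanche rate equation used in such breakdown models, dρ/dt = η_mp I^k + η_casc I ρ, integrated over a Gaussian pulse envelope. All coefficients and pulse parameters are illustrative placeholders, not values from this work.

```python
import numpy as np

# Generic seeded-avalanche rate equation (cf. the breakdown framework of Ref. 21):
#   drho/dt = eta_mp * I(t)**k + eta_casc * I(t) * rho
# Recombination is neglected, as in the text. Time is measured in units of the
# pulse duration, and all coefficients are illustrative dimensionless placeholders.
k = 3                      # photon order of the multiphoton seeding term
eta_mp = 1e-4              # multiphoton ionization coefficient (arb. units)
eta_casc = 15.0            # cascade (impact-ionization) coefficient (arb. units)

t = np.linspace(-2.0, 2.0, 4000)   # time in pulse-duration units
dt = t[1] - t[0]
I = np.exp(-2.77 * t**2)           # Gaussian intensity envelope, peak = 1

rho = 0.0
for It in I:                       # explicit Euler integration
    rho += (eta_mp * It**k + eta_casc * It * rho) * dt

# Once seeded by the multiphoton term, the cascade term drives roughly
# exponential growth of the free-electron density -- the threshold behavior
# that makes rho the crucial parameter for bubble nucleation.
print(f"final electron density (arb. units): {rho:.3e}")
```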
In Figure 3, we report the results of the simulation of 200-fs pulses for different values of the focused light power. The time dependence of the impinging electric field at the tip apex is reported in Figure 3a. The simulation allows for the direct study of the evolution of the electron cloud that develops around the antenna. In Figure 3b, we report four snapshots at t = 200, 250, 350 and 500 fs for P = 5 mW. Primary and secondary electrons are colored in blue and red, respectively. As mentioned above, the crucial parameter to be monitored during the simulation is the free-electron density, ρ. We calculate the free-electron density by counting the number of electrons within the local mesh elements. In Figure 3b, the volume plots superimposed on the electron-cloud plots enclose the mesh elements where ρ exceeds 0.1 nm⁻³. The maximum above-threshold electron density is found close to the plasmonic hot spot on the metal surface and is confined within a distance of a few nm. Figure 3c reports the time evolution of the spatial maximum of ρ for impinging powers ranging from 1 to 6 mW. The figure shows that ρ rapidly fluctuates with the time-varying electric field and increases strongly and nonlinearly with the impinging power. Figure 3d reports the time maxima of the curves of Figure 3c as a function of P. According to the calculations, the maximum density exceeds the literature critical density of 0.23 nm⁻³ required for breakdown at P = 4 mW, and further increases, roughly exponentially, to almost 10 times this value at P = 6 mW. The predicted threshold value matches the experimental result very well. The minor quantitative mismatch can reasonably be attributed to charge-accumulation effects that grow pulse after pulse and likely occur at the considered pulse repetition rate (76 MHz) [21] but are neglected in the simulation.
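The density bookkeeping described above (counting electrons per local volume element) can be sketched as follows; this is a minimal illustration on a regular 1-nm voxel grid rather than the unstructured mesh used in the actual simulation, and the electron positions are randomly generated stand-ins.

```python
import numpy as np

# Estimate the local free-electron density by counting electrons per voxel.
# Synthetic cloud hugging the metal surface: exponential decay along z >= 0.
rng = np.random.default_rng(0)
n_electrons = 3000
xy = rng.normal(0.0, 5.0, size=(n_electrons, 2))   # nm, lateral spread
z = rng.exponential(3.0, size=n_electrons)         # nm, distance from surface
pos = np.column_stack([xy, z])

voxel = 1.0                                        # nm -> voxel volume = 1 nm^3
idx = np.floor(pos / voxel).astype(int)
idx -= idx.min(axis=0)                             # shift to non-negative indices
counts = np.zeros(idx.max(axis=0) + 1)
np.add.at(counts, tuple(idx.T), 1)                 # histogram electrons per voxel

rho_max = counts.max() / voxel**3
print(f"max local density: {rho_max:.2f} nm^-3")
print(f"voxels above the 0.1 nm^-3 isosurface: {(counts > 0.1).sum()}")
```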
In Figure 4a, we show the number of primary and secondary electrons generated by the pulse as a function of time, while Figure 4b reports the electron numbers at the last simulated time versus power. These plots reveal that the secondary electrons grow much faster than the primary ones, that is, faster than the net current. At the end of the pulse, the number of secondary electrons exceeds that of the primary ones for P > 2 mW, and with increasing power they rapidly become the large majority of the free electrons in water. From the log plot, it is clear that the growth is approximately exponential for P above 3.5 mW. This result is consistent with the experimental observation of a very clear power threshold in the i-P curve of Figure 2a. At this power, we count 790 emitted primary electrons per pulse, while the secondary electrons exceed 2000 in number.
We remark that the ionization-number explosion is entirely produced by the plasmonic hot spot, which at the same time determines an enhanced photoemission from the metal surface and enables free-electron acceleration up to energies high enough to ionize the water molecules. Importantly, we found that a key role is played by elastic scattering in water, which prevents the electron cloud from moving far away from the metal surface and keeps it close to the hot spot. This process is shown in Figure 4c-4j, where we compare the distributions of electron distances from the gold surface in the absence of collisions (Figure 4c and 4g), in the presence of elastic collisions only (Figure 4d and 4h), and including both elastic and inelastic collisions (Figure 4e and 4i). The distributions are shown for P = 5 mW at three time instants, t = 250 (green), 350 (blue) and 500 fs (black). The red line reports a normalized plot of the plasmonic electric field norm at the hot spot as a function of the distance from the metal surface. As shown in the figure, in the absence of collisions (as is the case in a vacuum environment [13]) the electrons gain kinetic energy only during the first optical cycles, quickly moving away from the hot-spot location and reaching a maximum kinetic energy of just 10 eV. The presence of elastic collisions dramatically changes the electron spatial and energy distributions, keeping the electron cloud extremely close to the hot spot, with a consequent strong and repeated acceleration of the electrons. We note that the effectiveness of the acceleration depends on both the strong field and the strong field gradient produced by the hot spot; in a strong but uniform oscillating field, electrons would experience zero average acceleration. The inclusion of ionization events in the simulation results in an even stronger confinement of the electron cloud, within a maximum distance of 50 nm at t = 500 fs. This is consistent with literature data on the electron penetration range [1]. The overall energy transfer to the water for P = 5 mW turns out to be 35 fJ, whereas in the absence of collisions we obtain 2 fJ, namely a factor of 17 lower. Thus, despite its extreme localization, the plasmonic field enhancement is efficiently exploited for energy transfer to the electrons for most of the pulse duration, with the result that the critical laser fluence required for breakdown is two orders of magnitude lower than that required in pure water (5.5 mJ cm⁻² and 800 mJ cm⁻², respectively, as reported above) [21]. Moreover, it is a very important point that the energy transfer from the plasmonic field to the free electrons is much more effective when the process occurs in water (or another liquid) rather than in vacuum. Note that, unlike the case of nanoparticles floating in water, in the proposed structure electron photoemission from the metal-water interface plays a significant role. In fact, at the threshold power P = 3.5 mW, for example, the experimental current of 12 nA corresponds to the injection of ~1100 electrons per pulse, which spread to an average distance of 30 nm from the metal surface, as shown in Figure 4c. As can easily be calculated, if an analogous number of electrons were photoemitted by a gold nanoparticle, an electric field of the order of 10⁸ V m⁻¹ or higher would arise between the electron cloud and the positively charged particle (assuming roughly a particle radius of 100 nm and a uniform particle-cloud distance of 30 nm).
The actual value is likely to be higher because of the strong localization of the electron emission at the particle poles, where the field is higher. These field values are comparable to the impinging light field (Figure 3c), thus canceling out the plasmonic field enhancement responsible for the electron injection.
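The order-of-magnitude estimate quoted above can be reproduced in a few lines; the sketch below simply evaluates the Coulomb field of ~1100 elementary charges at the stated distance (100 nm particle radius plus 30 nm cloud separation), treating the particle as a point charge.

```python
import math

# Back-of-envelope check of the ~1e8 V/m field between the emitted electron
# cloud and the positively charged nanoparticle (numbers from the text).
e = 1.602176634e-19          # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
N = 1100                     # electrons emitted per pulse at P = 3.5 mW
r = 100e-9 + 30e-9           # particle radius + cloud distance, m

E = N * e / (4 * math.pi * eps0 * r**2)   # Coulomb field of a point charge
print(f"E ~ {E:.2e} V/m")                 # ~1e8 V/m, as quoted above
```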
We remark that the interest of our study does not lie in bubble generation itself, which was used only to prove the existence of ponderomotive acceleration. We also remark that here the primary electrons are ejected and accelerated by the plasmonic field and must be distinguished from hydrated or solvated electrons, which refer to electrons that are slowing down (hydrated) or have already thermalized (solvated) in the water environment, that is, trapped in a cavity formed by water molecules [1].
The proposed plasmonic architecture turns out to be a localized, efficient and well-controlled source of accelerated free electrons that are not in thermal equilibrium with the water molecules. In analogy with hot electrons in solid media, they could be termed wet hot electrons. Interestingly, such electrons are rapidly separated from the emitting electrode. In fact, they can travel across the water environment, where they can react with the solute without being affected by the metal surface. This simplified configuration may help to clarify reaction mechanisms in which the participation of the electrode material is undesired. Furthermore, the spatial separation between the hot carriers and the emitting electrode prevents both hot-carrier recombination and electrode degradation. Being extremely reactive because of their kinetic energy and huge reduction potential, these electrons can be very useful for triggering and investigating many chemical and physical processes that are currently poorly understood or otherwise extremely inefficient. Such hot electrons can make important contributions in many fields in which free electrons play a major role, such as photocatalysis and electrochemical processes [39], hot-electron-driven chemistry [6], water radiolysis [40], hydrogen generation [41] (including that from nuclear waste) [40], fundamental studies on hydrated and solvated electrons [1], hyperthermia with gold nanoparticles and plasmonic photothermal therapy [42], DNA damage [1] and others not reported here for brevity.
CONCLUSIONS
In this work, we presented experimental data and a numerical model that describe hot-electron injection and acceleration in water by photoexcitation of 3D plasmonic nanoelectrodes. The injection was experimentally monitored by measuring the electric current flowing into the water, whereas the ponderomotive acceleration of the electrons was confirmed by the observation of cavitation bubbles.
We implemented a multiphysics model that considers the electromagnetic field distribution around the antenna, the electron photoemission, the ponderomotive acceleration of the electrons and their interaction with water molecules through elastic scattering, inelastic scattering and secondary-electron generation by means of ionization events. The model results were found to be in very good agreement with the experimental data, and the model can be useful for further investigations of electron injection in liquids, leading to more efficient generation and exploitation of hot electrons in various fields of chemistry, physics and biology. Interestingly, the model directly reveals how elastic scattering helps to keep the electron cloud overlapped with the plasmonic hot spot, thus determining a much more efficient (by a factor of 17) energy transfer to the electrons than in the case of emission in vacuum. Moreover, the use of 3D plasmonic antennas connected to flat electrodes offers an effectively infinite reservoir of electrons that would allow the long-term and steady generation of wet hot electrons. The latter result may be of great importance when continuous-wave illumination or sunlight is used to trigger electron injection, thus opening a path toward energy production without the requirement of catalysts or reducing agents.
"Physics"
] |
Vortex algebra by multiply cascaded four-wave mixing of femtosecond optical beams.
Experiments performed with different vortex pump beams show, for the first time, the algebra of the vortex topological-charge cascade that evolves during nonlinear wave mixing of optical vortex beams in Kerr media, due to the competition of four-wave mixing with self- and cross-phase modulation. This leads to the coherent generation of complex singular beams within a spectral bandwidth larger than 200 nm. Our experimental results are in good agreement with frequency-domain numerical calculations that describe the newly generated spectral satellites.
Introduction
As an intriguing phenomenon in nature, vortices have become an important topic in many fields of physics, spanning from fluid dynamics [1] and optics [2] to Bose-Einstein condensates [3]. Extensive research on both linear and nonlinear singular waves has been performed, illustrating the universality of vortices in the physical domain: topological-charge conservation using the concept of pseudo angular momentum was demonstrated in the harmonic generation of acoustic vortices [4], leading to the formation of angular shock waves [5]. Transfer of angular momentum from light to excitons in GaN was demonstrated by Ueno et al. [6]. Particularly interesting is the analogy between optical vortices and their atomic counterparts, coherent vortex wave functions in Bose-Einstein condensates [7], especially because angular momentum can be transferred between both [8]. Due to the close analogy between the Gross-Pitaevskii equation that governs BEC dynamics and the nonlinear Schroedinger equation of nonlinear optics, the results presented here are also applicable to BECs and superfluids: while four-wave mixing of wave functions in BECs [9] and the generation of vortices by different methods [3,10,11,12] have been demonstrated, the combination of both has not yet been observed. In this paper we present results of the analogous process in optics and show, for the first time, cascaded nonlinear angular-momentum mixing and coherent transfer of phase singularities over multiple orders.
In the optical domain, vortices are identified as helical phase profiles within a light beam, with a characteristic dependence exp(imφ) on the transverse angular coordinate φ [13]. The central singular point of this helix possesses no defined phase, and therefore the intensity must vanish there, leading to a characteristic cusp-like vortex core [14]. Such beams carry photon angular momentum, which can also be transferred to matter [15]. The angular momentum is proportional to the topological charge (TC) m of the optical vortex, associated with the total phase change m·2π after one revolution around the core. Optical vortex beams have found various useful applications, namely in optical tweezers [16], coronagraphs [17] and as potential information carriers in data processing [18]. Of particular interest are nonlinear processes involving vortex beams, where conservation of the total orbital momentum determines the particle-like dynamics of the filaments resulting from the modulational-instability (MI) induced vortex break-up [19]. Conservation of the total orbital momentum also plays a profound role in second-harmonic generation [20], parametric down-conversion [21] and stimulated Raman scattering [22] involving optical vortices.
Because of their high peak power, short laser pulses are highly beneficial for nonlinear optics. However, most nonlinear vortex experiments to date use relatively long pulses or cw lasers, because most methods for vortex generation suffer from chromatic aberrations. Nevertheless, depending on the spectral extent of the incident pulses, optical vortices can also be imprinted on femtosecond laser beams using spectrally compensating techniques [23,24,25] or spiral phase plates [26]. Such generation of dispersion-free high-intensity vortex beams enables studies of nonlinear vortex propagation in a much wider range of (even weakly) nonlinear materials. As such, nonlinear vortex-beam filamentation in air [27] and water [28] has been investigated.
An important benefit of using high-intensity short pulses is the possibility to observe cascaded nonlinear processes, such as cascaded Raman scattering [29,30] and nondegenerate four-wave mixing [31,32,33]. To date, no cascaded nonlinear four-wave mixing process with singular optical beams, as predicted in [29], has been experimentally demonstrated. While the generation of white-light supercontinuum from vortex beams has been attempted in glasses [34], the breakup of the vortex ring into single filaments fully destroyed the spatial coherence of the beam. An important practical issue to address here is precise control of the nonlinearity strength, required to reduce the effect of filamentation and improve the coherent transfer of the phase throughout the nonlinear cascade.

Fig. 1. Algebra of vortex beams with increased topological charge due to cascaded four-wave mixing. The experimental double-peak input spectrum is shown in red, together with simulated intensity (top) and phase (bottom) profiles after nonlinear propagation for the mixing of a Gaussian beam with a vortex of unit TC. The magnitude of the TC changes by 1 with the order of the cascading process, which can be seen in the steeper phase spirals further away from the pump beams. The intensity profile in 3rd order already shows distortion due to the strong intensity dependence.
Vortex four-wave mixing
Here we demonstrate the generation of broad-spectrum singular beams through cascaded four-wave mixing (FWM). By careful intensity control and the use of dual-frequency pump pulses, we are able to identify the process of cascaded FWM and reduce the (multi-)filamentation of the vortex beam. Experiments performed with vortex beams of different TC show cascaded nonlinear TC mixing up to 3rd order and are in excellent agreement with frequency-domain numerical simulations. Starting with pump pulses of 43 nm bandwidth, vortices can be observed within >200 nm after nonlinear propagation. FWM is a third-order nonlinear process, in which four optical fields interact in a nonlinear Kerr medium: ω_d = ω_a + ω_b − ω_c. In our case, initially only two distinct pump beams of frequencies ω_0 > ω_1 are present. Energy conservation dictates the resulting photon energy when combining any three photons of those pump beams, generating spectral satellites ω_n = ω_0 − nΔω. This process becomes cascaded if the intensity of the generated side-bands is sufficiently high, leading ultimately to a frequency comb in which side-bands are spaced by the difference frequency Δω = ω_0 − ω_1, similarly to what has been observed in ring microcavities, see, e.g., [35]. The TC conversion in the cascaded process has to obey a transformation law analogous to the one for the frequency: m_n = m_0 − nΔm with Δm = m_0 − m_1 [29,31]. Labelling the pump beams ω_0 = "blue" and ω_1 = "red" (for obvious reasons) with TCs m_0, m_1, the TC of the spectral satellites n = −1, ±2, ±3, ... can thus be calculated. Each spectral satellite carries a defined TC, which represents an arithmetic progression of the topological charge in the spectral satellites, as shown in Fig. 1.
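The satellite algebra stated above is simple enough to tabulate directly; the snippet below evaluates m_n = m_0 − nΔm and ω_n = ω_0 − nΔω for one of the pump configurations used in the experiment, with the pump wavelengths (775 nm and 805 nm) taken from the setup section.

```python
# Tabulate the topological charge and frequency of the FWM spectral satellites:
#   omega_n = omega_0 - n * (omega_0 - omega_1),  m_n = m_0 - n * (m_0 - m_1)
c = 299792.458  # speed of light in nm * THz

def satellites(lam0_nm, lam1_nm, m0, m1, orders=(-3, -2, -1, 2, 3)):
    w0, w1 = c / lam0_nm, c / lam1_nm        # pump frequencies, THz
    dw, dm = w0 - w1, m0 - m1
    out = []
    for n in orders:                          # n = 0, 1 are the pumps themselves
        wn = w0 - n * dw
        out.append((n, m0 - n * dm, c / wn))  # (order, TC, wavelength in nm)
    return out

# Counter-rotating pump vortices m0 = +1, m1 = -1 (the largest TC step, dm = 2):
for n, m, lam in satellites(775, 805, +1, -1):
    print(f"order n = {n:+d}: TC = {m:+d}, wavelength ~ {lam:.0f} nm")
```

Running this reproduces the charges quoted later in the experimental results, e.g. TC = +7 for the 3rd-order satellite on the blue side and TC = −5 on the red side.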
Experimental setup
In order to generate pump beams with sufficient spectral separation, an 11.5 mJ, 38 fs, 1 kHz Ti:Sapphire amplified pulse is split with a dichroic beam splitter (cut-on wavelength 800 nm) (see Fig. 2). The resulting spectral peaks are centered at 775 nm ("blue") and 805 nm ("red"). A helical phase with TC m = ±1 can be imprinted on either or both beams with 16-step, AR-coated spiral phase plates (also called vortex lenses, VLs). After additional spectral filtering and recombination with a low-dispersion broadband beamsplitter, the remaining average power is 2.0 W, with approximately equal intensity in both beams.
The vortex beams are then focused with an f = 2 m spherical mirror (FM1 in Fig. 2) into a gas cell. The Fresnel reflection from the entrance window is used to ensure spatial overlap of the pump beams in the focus. Temporal overlap is achieved by maximizing the white-light emission when the cell is filled with Argon. For the actual measurement, the gas cell is evacuated and nonlinear wave mixing is observed in the 3-mm-thick fused silica entrance window only. In this way, deteriorating effects, especially due to plasma in the focus, are avoided, and the peak intensity within the window can be controlled by adjusting the distance between the focusing mirror FM1 and the gas cell. Nonlinear effects in the exit window play a negligible role due to the large distance from the focus and the smaller window thickness (1 mm). The diffraction length of the beam is L_D = 4 cm, while the nonlinear length L_NL (corresponding to a nonlinear phase shift of 1) can be tuned from a few hundred microns to several centimetres. By adjusting the ratio L_NL/L_D we can control the amount of spectral broadening and delay the development of the modulational instability across the beam profile. With the setup adjusted for low fluctuation at the output, the peak intensity inside the entrance window is estimated to be 1.1 × 10^10 W/cm². This results in a nonlinear phase shift B = k_0 ∫_0^L n_2 I_peak(z) dz ≈ 0.1 inside the entrance-window glass. After the gas cell, the beam is recollimated and interfered with a reference beam from a gas-filled hollow-core fibre (see Fig. 2). This beam, usually used for few-cycle pulse generation, possesses a near-Gaussian spatial profile and covers the full visible spectral range, thus making it a suitable reference for interferometric spatial phase measurements.
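As a sanity check on the quoted nonlinear phase, the sketch below evaluates B ≈ k_0 n_2 I_peak L for the stated intensity and window thickness, assuming a constant intensity over the window; the fused-silica nonlinear index n_2 is an assumed textbook value (~2.5 × 10⁻¹⁶ cm²/W), not a number given in the text.

```python
import math

# Estimate the B-integral B = k0 * n2 * I_peak * L for the entrance window.
lam = 790e-9                 # central wavelength, m (between the two pump peaks)
k0 = 2 * math.pi / lam       # vacuum wavenumber, 1/m
n2 = 2.5e-20                 # fused silica nonlinear index, m^2/W (assumed value)
I_peak = 1.1e10 * 1e4        # 1.1e10 W/cm^2 converted to W/m^2
L = 3e-3                     # window thickness, m

B = k0 * n2 * I_peak * L
print(f"B ~ {B:.2f} rad")    # ~0.07, consistent with the quoted B ~ 0.1
```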
The interference pattern and intensity profiles are observed on another CCD camera in slightly focused geometry. As a signature of the vortex helical phase, the dislocation is visible as a fork-like splitting of the interference stripes. The direction of the splitting ("fork up"/"fork down") corresponds to the direction of the angular phase slope (clockwise/counter-clockwise), whereas the number of fork tines, m + 1, indicates the topological charge (for integer values of m).
The images are recorded after spectral edge-pass filters with cut-on wavelengths corresponding to the gaps between the expected central wavelengths of the spectral satellites. This method is justified because the generation efficiency is expected to decrease rapidly with the cascading order of the process. This way, only the dominant part of the spectrum close to the filter edge contributes to the interference pattern. The signal vanishes when either of the two arms in the first interferometer is blocked, as expected for a nonlinear generation mechanism. For the outermost spectral regions, multiple shots (5-10) had to be integrated. This leads to blurring of the interferograms, but the TC can still be determined from the different number of fringes at the top and bottom of the vortex ring. Apart from that, the data shown are all single-shot measurements.
Experimental results
Fig. 3 shows the obtained phase-dependent interference patterns and intensity profiles for three different fundamental scenarios. The top row [Fig. 3(a)] shows pump beams with equal TC, m_0 = m_1 = +1, i.e., Δm = 0. This is an important case because all spectral satellites have the same topological charge (m_n = +1), which allows the generation of a white-light vortex continuum in the multiply cascaded process. Although a single vortex beam of sufficient bandwidth could be used in this case, we keep the two-peak spectrum for consistency. Indeed, vortices of charge m_n = +1 are observed throughout the entire accessible spectral bandwidth of the beam, which is only limited by increasing disintegration of the intensity profile at very remote wavelengths. The respective numbers of the spectral satellites together with their theoretical central wavelengths are denoted in the top row.
The mixing of a vortex with a Gaussian beam is shown in the second row. In this case, the topological charge increases/decreases by one with the order of the satellite peak. Fig. 3(b) shows only the case of a "red" Gaussian mixed with a "blue" vortex, because the results are qualitatively similar for the reversed case. Interesting to note, however, is that we observe stronger generation of satellites on the spectral side adjacent to the vortex pump. The measured beam profiles for this case are also given in Fig. 3(c). Within the outermost spectral satellites, disintegration of the vortex ring by the modulational instability is observed [28], due to larger sensitivity to shot-to-shot fluctuations. Finally, we examine the case of two counter-rotating vortices of charge m_0 = +1 and m_1 = −1 [fourth row, Fig. 3(d)]. In this case, since Δm = 2, we observe the expected increase (decrease) of the vortex charge by 2 with the order of the cascaded process. We are able to record interference patterns up to 3rd order (charge +7) on the blue side, and up to 2nd order (charge −5) on the red side of the spectrum. For this largest TC difference between the two pump beams, we observe the cleanest beam profiles, with very low disintegration of the vortex intensity ring. This effect can partially be attributed to the almost identical divergence of the two pump beams due to the same modulus of the TC. It is known that vortices of TC |m| > 1 are unstable against perturbation even in the linear regime. In all cases where we generate higher-order vortex charge states |m| > 1, we observe decay into singly charged vortices. This decay is a general feature of vortices generated by nonlinear processes, where the nonlinearity acts as a perturbation of the background beam. In self-focusing Kerr media as well as attractive BECs, this leads to break-up of the vortex ring into spiralling filaments, which can finally collapse [36,37]. In most cases, however, the fundamental vortices remain close together, so that a dark core can be observed. Since relatively stable vortices are observed in our experiment and the frequency spectrum is equidistant, we conjecture that the spatiotemporal spiralling predicted in [29] is a likely, though not yet characterized, feature of our observations.
Numerical Simulations
The theoretical model describing the interaction between the different vortex beams is a set of 10 coupled nonlinear Schroedinger-type equations in the spectral domain, governing the slowly varying amplitudes A_n with respective wavenumbers k_n. The model accounts for linear dispersion and diffraction, as well as nonlinear self- and cross-phase modulation, four-wave mixing and saturation of the nonlinearity via γ = γ_0/(1 + I/I_sat). For simplicity, the terms H_n and the corresponding phase mismatches Δk_i are given for a 4-wave model only (2 pump beams, n = 0, +1, and 2 signal beams, n = −1, +2). The initial values for the simulation were chosen as follows: all spectral components except for the central pump beams (n = 0, +1) are set to an effective zero (ten orders of magnitude weaker than the pump components). The pump beams are modeled either as fundamental r-vortices with a continuously varying azimuthal phase or as tanh-vortices with a 16-level stepped phase profile, as in the experiment.
In the tanh case, the core width r_0 is chosen to reflect the experimentally measured beam profiles, with r_0 < 10 r_BG. The longitudinal coordinate is normalized to the diffraction length L_D^0 = k_0 r_0² of the OV beam at frequency ω_0, and the intensity in the simulations is kept identical to the one needed to form a one-dimensional dark spatial soliton of the same width r_0 (calculated by inverting the sign of γ). All relevant FWM nonlinear terms H_n in the model equations were generated using a program for symbolic computations and were subsequently exported to program code written in Objective-C, which implemented a modification of the split-step Fourier method. The computational grid for each wave spanned 1024×1024 grid points. The numerical results, obtained after 2 L_D^0 of free-space propagation from the vortex lens (VL) to the entrance of the nonlinear medium of length 1 L_D^0, followed by one diffraction length of free-space propagation to the observation plane, are summarized in Fig. 4. Cases a/ and c/ correspond to pump vortices of equal and opposite topological charges, respectively, whereas case b/ presents the results for vortex and Gaussian pump beams. The pump beams are chosen to have central wavelengths of 770 nm and 800 nm. All necessary refractive indices (and wave numbers) are calculated according to the revised Sellmeier equations [38] at the indicated pump and signal wavelengths, which are also used as separators in Fig. 4. The second row of numbers in the same figure shows the estimated conversion efficiencies (100% initial signal in each pump wave and 0.1% integration accuracy). For better visibility, we present in Fig. 4 only six newly generated spectral components, in which the vortices are clearly formed. Inspecting the phase profiles of the generated spectral satellites, one can see that their TCs follow the expected relation m_n = m_0 − nΔm with Δm = m_0 − m_1, where ω_0 corresponds to 770 nm, ω_1 to 800 nm, and n is the (cascading) order of the process. In Fig. 5 we present results obtained after 1 L_D^0 of free-space propagation of the pump beams carrying r-vortices from the VL to the entrance of the nonlinear medium (NLM) of length 0.5 L_D^0, followed by 1 L_D^0 of free-space propagation to the observation plane. Because for r-vortices the widths of the vortex core and the background beam are coupled (see Eq. (4)),
the length of the NLM was chosen to be shorter in order to keep the slowly varying envelope approximation of the model equations valid, by keeping the pump-induced focusing of the satellite beams reasonably weak. Qualitatively, the results for the FWM TC transfer with r-vortices confirm those for tanh-vortices. We also developed a 28-component model able to account more accurately for the broadband pump. Since it confirms the main predictions of the presented 10-component model, we only mention that its results for the output supercontinuum spectrum reproduce the measured one fairly well when the simulations are started with the measured input pump spectra. These results confirm that the generation of ultra-broad-spectrum vortex beams takes place mainly through the cascaded four-wave frequency-mixing process, whereas spectral broadening due to nonlinear self- and cross-phase modulation remains relatively weak. While our model neglects the full spatiotemporal dynamics of the process and accounts only in a simplified way for the influence of the plasma generated in the gas cell, in view of the presented experimental data it accurately captures the spectral reshaping and the spatial structure of the newly generated spectral satellites in the output beam.
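For readers unfamiliar with the numerical scheme mentioned above, here is a minimal single-field split-step Fourier sketch in Python: one (2+1)D nonlinear Schroedinger equation with a unit-charge tanh-vortex input. It is a toy illustration of the propagation method only; the actual model couples 10 (or 28) such fields through the FWM terms H_n, which are omitted here, and all parameters are illustrative.

```python
import numpy as np

# Toy symmetrized split-step propagation of a single field with Kerr nonlinearity:
#   dA/dz = (i/2k0) (d2/dx2 + d2/dy2) A + i*gamma*|A|^2 A
# (the coupled FWM terms of the full 10-component model are omitted).
N, Lbox = 256, 40.0                          # grid points and box size (units of r0)
k0, gamma, dz, steps = 1.0, 0.1, 0.05, 200   # illustrative parameters

x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

# Unit-charge tanh-vortex on a Gaussian background beam.
A = np.tanh(r) * np.exp(1j * phi) * np.exp(-(r / 10.0) ** 2)

kx = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
KX, KY = np.meshgrid(kx, kx)
lin_half = np.exp(-1j * (KX**2 + KY**2) / (2 * k0) * (dz / 2))  # half-step diffraction

for _ in range(steps):
    A = np.fft.ifft2(lin_half * np.fft.fft2(A))       # half linear step
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)     # full Kerr phase step
    A = np.fft.ifft2(lin_half * np.fft.fft2(A))       # half linear step

# Check the topological charge by unwrapping the phase around a small circle.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
rows = N // 2 + (8 * np.sin(theta)).astype(int)
cols = N // 2 + (8 * np.cos(theta)).astype(int)
winding = np.unwrap(np.angle(A[rows, cols]))
print("measured TC ~", round((winding[-1] - winding[0]) / (2 * np.pi)))  # -> 1
```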
Discussion and Conclusions
The presented results agree well with the performed frequency-domain numerical simulations. In contrast to the numerical simulations, the experiment is performed with pump beams of finite bandwidth, which gives rise to competing four-wave mixing between frequencies of the same pump pulse. This effect increases the bandwidth of the pump pulses (and of satellites of sufficient intensity), yet it only generates TC already present in the respective spectral peak. The basic interaction scenario between adjacent spectral peaks is therefore unaffected, as long as spectral overlap (interference) remains small. To account for the finite bandwidth of the pump pulses, the numerical simulation was extended to use more than one frequency per pump beam. This quasi-pulse regime, however, qualitatively preserves all features predicted by the dual-frequency pump model.
The presented method is applicable as long as the acquired nonlinear phase remains reasonably low, so that self-focusing of beam inhomogeneities remains limited. Since the nonlinear interaction is stronger in the most intense parts of the beam, a homogeneous background beam (super-Gaussian or flat-top) appears desirable here. Obviously, a trade-off between achieved bandwidth and vortex-ring integrity has to be made due to the modulational instability [28]. One approach to limiting deteriorating effects is the use of saturable nonlinear media, e.g. via competing higher-order nonlinearities, which would make the process more controllable [39].
In conclusion, we have demonstrated broadband cascaded mixing of vortex beams in a self-focusing Kerr medium. The nonlinear generation process, although not phase-matched, is efficient enough to allow the observation of vortices over a bandwidth larger than 200 nm. This constitutes the first measurement of topological charge for a multiply cascaded four-wave mixing process with vortex beams. Topological-charge conservation in the nonlinear wave-mixing process is found to be fulfilled, and decay of higher-order vortices into fundamental vortices has been observed due to the instability arising from the nonlinear self-focusing. The presented results constitute basic scenarios for the interaction between fundamental topological modes, which can be seen as "building blocks" for generating complex coherent broadband wave fields with a defined phase structure; these could be used, e.g., as elaborate pump/probe beams in coherent-control applications or for the excitation and manipulation of BECs. In the case of identical pump beams (which can be seen as a special case of a single broadband vortex pump beam), the four-wave mixing even preserves the TC, thus rendering the method suitable for the generation of supercontinuum white-light vortex beams. Unlike Raman scattering, FWM can be employed in a collinear geometry, eliminating the need for additional angular-dispersion compensation.
Fig. 3. Experimental interferograms and beam profiles for different pump beams after nonlinear four-wave mixing. The pump beams are highlighted in grey. The respective central frequencies and wavelengths of the spectral satellite/pump beam, along with the cascading order of the FWM process, are denoted on top. a) Two vortices of equal TC, m_0 = m_1 = +1; b) vortex and Gaussian, TCs m_0 = +1 and m_1 = 0; c) corresponding intensity profiles for case b); d) two vortices of opposite TCs, m_0 = −m_1 = +1. Contrast has been enhanced slightly for the −3rd-order interferograms. The number in each box indicates the TC of the generated vortices.
Fig. 4. tanh-vortices: intensity (odd rows) and phase (even rows) of the pump waves at 770 nm and 800 nm and of 6 of the newly generated waves in the observation plane (one diffraction length L_D^0 away from the exit of the nonlinear medium of length L_D^0). The outermost spectral components are left out due to already too strong diffraction. Case a/ - pump vortices with identical charges; case b/ - pump vortex and Gaussian beams; case c/ - pump vortices of opposite charges. Separators: central wavelengths of the simulated waves and estimated conversion efficiencies. Some 16% of the total computational window is shown in each frame. See text for further details.
Fig. 5. r-vortices: intensity (odd rows) and phase (even rows) of the pump waves at 770 nm and 800 nm and of the newly generated waves in the observation plane (one diffraction length L_D^0 away from the exit of the nonlinear medium of length 0.5 L_D^0). Case a/ - pump vortices with identical charges; case b/ - pump vortex and Gaussian beams; case c/ - pump vortices of opposite charges. Separators: central wavelengths of the simulated waves and estimated conversion efficiencies. Some 16% of the total computational window is shown in each frame. See text for further details.

… thanks the Institute of Optics and Quantum Electronics, FSU-Jena, Germany, for the warm hospitality during his research stay. This work was partially supported by the National Science Foundation (NSF)-Bulgaria and the Australian Research Council. P.H. acknowledges funding by the DFG. D.N.N. thanks A. Desyatnikov and Y.S. Kivshar for useful discussions.
"Physics"
] |
Load-Balancing Method for LEO Satellite Edge-Computing Networks Based on the Maximum Flow of Virtual Links
With the increasing number of satellites in orbit, traditional scheduling methods can no longer satisfy the increasing data demands of users. The timeliness of remote sensing images with large data volumes is poor in the backhaul process through low-earth-orbit (LEO) satellite networks. To address the above problems, we propose an edge-computing load-balancing method for LEO satellite networks based on the maximum flow of virtual links. First, the minimum rectangle composed of computing nodes is determined by the source and destination nodes of the transmission task under the configuration of the 2D-Torus topology of LEO satellite networks. Second, edge computing virtual links are established between computing nodes and users. Third, the Ford-Fulkerson algorithm is used to obtain the maximum flow of the topology with virtual links. Finally, a strategy is generated for computing and transmission resource allocation. The simulation results show that the proposed method can optimize the total capacity of the multi-node information backhaul in the remote sensing scenario of LEO satellite networks. The effectiveness of the proposed algorithm is verified in several special scenarios.
… proposed a game-theory-based approach to optimize computational offloading in satellite edge-computing networks. Wang et al. [21] proposed a joint offloading and resource … The third is research on performance evaluation. Kim and Choi [24] studied the propagation- and queuing-delay performance of satellite edge-computing networks under the uplink/downlink packet error rate. Existing methods are mainly based on mixed-integer programming, which has high time complexity. Satellite networks with high mobility differ from terrestrial networks: the satellites are in periodic high-speed motion, so the scheduling problem must be solved quickly.

In this paper, we study an edge-computing load-balancing method for LEO-satellite-network backhaul tasks, which has low time complexity and engineering achievability. The main contributions of this study are summarized as follows:

• We designed an LEO satellite network edge-computing architecture that combines the optimization of transmission and computing. The architecture models the relationship between transmission and computing resources.

• We proposed a minimum-rectangle computing-node selection method for 2D-Torus networks. The method selects the nodes for offloading the computation of sensing information on its way back to the ground station.
• We proposed a computational load-balancing algorithm based on the maximum flow of virtual links. The algorithm determines the size of the data processed by each routing node.

The remainder of this paper is organized as follows. In Section II, the application scenario, network model, transmission model, and calculation model are presented. In Section III, the problem model to be optimized is formulated, and an edge-computing load-balancing method based on the maximum flow of virtual links is proposed. Simulation results and discussion are provided in Section IV. Finally, Section V concludes the paper.
Following the above scenario description, a real-time information-acquisition-and-transmission LEO constellation with Earth observation, onboard processing and routing is modeled as follows.
A. CONSTELLATION SCENARIO

We consider an application scenario in which an Earth observation satellite obtains image information and transmits the data back to the ground station through LEO satellite networks. This scenario is illustrated in FIGURE 1.
The space segment consists of a single-layer Walker constellation. The constellation configuration is Walker-Delta, where the number of orbital planes is M_p and the number of satellites per orbit is M_s. It has a relatively stable topology. The main function of the system is to monitor global disasters. After the detection information is generated by the Earth observation satellite, transmission and computing resources are called upon within a predetermined time window so that the detection information is processed and transmitted to a limited area in real time.
The Walker-Delta constellation configuration is represented by adjacency matrices A^Sat, where the element a^Sat_{i,j} ∈ A^Sat is the capacity of the crosslink from node i to node j at instant t, which can be expressed as
where v_i, v_j represent the crosslink … where the first item is called a task. The ratio between different tasks is called the weight. It is assumed that all tasks originate from the set of sending nodes V_T and eventually flow to the set of receiving nodes in a limited area. This forms the task weight matrix B, where β_{i,j} ∈ B represents the proportion of traffic sent by the observation node v_i ∈ V_US to the ground node v_j ∈ V_UD. The i-th row represents all tasks sent by v_i; the j-th column represents all tasks received by v_j, satisfying: … This indicates the number N_US of transmission satellites that simultaneously access the observation task at the same time. The binary variable k = 1 indicates that the node has been accessed by the observation task.
We consider that the processing of observation information mainly involves preprocessing a large number of Earth observation images. Data processing can reduce the size of the backhaul data by extracting feature information from the data. For the information received by a single satellite node S_i, D_i represents the size of the original data and F_i the size of the processed data. If satellite S_i performs edge computing, we define the calculation transfer ratio ρ_i = (D_i − F_i)/F_i. At the same time, a decision variable is defined to select the calculation mode: l_i = 1 indicates that edge-computing processing is performed on the data, whereas l_i = 0 represents no processing. The data size of the information generated after the original information passes through satellite S_i is … The processing time at satellite S_i is … In this study, it is assumed that the set of all low-orbit satellites is S and the set of ground stations is G. The adjacency matrix over all nodes in the network is A ∈ N × N,
where N = N_S + N_G. The matrix A^S ∈ N_S × N_S represents the crosslink connectivity matrix of the LEO satellite network; A^S(i, j) = 1 indicates that there is a connected crosslink between satellite i and satellite j. Similarly, A^R ∈ N_S × N_G and A^T ∈ N_G × N_S represent the connected downlink/uplink between the satellites and ground stations, and A^G ∈ N_G × N_G represents the connection relationship between the ground stations.
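Returning to the computing model above: from ρ_i = (D_i − F_i)/F_i it follows that F_i = D_i/(1 + ρ_i), so the per-node output data size can be written compactly. The helper below is a hypothetical illustration of that bookkeeping; the variable names follow the text, and the example numbers are ours.

```python
def output_data_size(D_i: float, rho_i: float, l_i: int) -> float:
    """Data size leaving satellite S_i.

    From rho_i = (D_i - F_i) / F_i it follows that F_i = D_i / (1 + rho_i).
    With l_i = 1 the node runs edge computing and forwards F_i;
    with l_i = 0 it forwards the raw data D_i unchanged.
    """
    return D_i / (1.0 + rho_i) if l_i == 1 else D_i

# Example: a 100 GB image set with transfer ratio 4 shrinks to 20 GB when processed.
print(output_data_size(100.0, 4.0, 1))  # -> 20.0
print(output_data_size(100.0, 4.0, 0))  # -> 100.0
```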
The channel capacity is the maximum data rate for reliable transmission. The power- and bandwidth-limited Gaussian channel capacity is given by … C_l limits the maximum data rate R_i of information transmitted over the channel. The communication delay can then be defined as … where T^comm_i is the propagation time between the i-th node and the (i + 1)-th node. … 2) The size of the data transmitted during the task is less than or equal to the size of the data available on the satellite at the instant of the task.
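The power- and bandwidth-limited Gaussian channel capacity referenced above has the standard Shannon form C = B log2(1 + SNR); the snippet below evaluates it for an illustrative link budget, whose numbers are placeholders rather than values from this work.

```python
import math

def gaussian_capacity(bandwidth_hz: float, signal_w: float, noise_psd_w_hz: float) -> float:
    """Standard Shannon capacity of a power- and bandwidth-limited Gaussian channel:
    C = B * log2(1 + P / (N0 * B)), in bits per second."""
    snr = signal_w / (noise_psd_w_hz * bandwidth_hz)
    return bandwidth_hz * math.log2(1.0 + snr)

# Illustrative crosslink budget (placeholder numbers, not from the paper):
B = 500e6        # 500 MHz bandwidth
P_rx = 1e-10     # received signal power, W
N0 = 1e-20       # noise power spectral density, W/Hz
print(f"C = {gaussian_capacity(B, P_rx, N0) / 1e9:.2f} Gbit/s")  # ~2.2 Gbit/s
```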
III. PROBLEM FORMULATION AND PROPOSED METHOD
The problem is NP-hard. Objective (9) indicates that the optimization goal is to maximize the capacity of the information backhaul per unit time. Constraint (10) indicates that the data entering a node are conserved with respect to the data processed by the node and the data flowing out of it. Constraint (11) indicates that the data transmitted or received in a single task are less than or equal to the data generated by the task. Constraint (12) indicates that when the task data are backhauled, the transmission decision variable I_a is set to one. Constraint (13) limits the maximum data capacity that can be transmitted per unit time on a single link. Constraint (14) limits the data capacity transmitted on a single link for a single observation task. Constraint (15) indicates that the difference between the computing-resource occupancy of any two nodes participating in the calculation cannot exceed the constraint C_0. We propose a method for selecting routing nodes. First,
the LEO satellite network topology is generated according to the constellation position and adjacency relationships per unit time. Second, the source and destination nodes of the task are determined and a minimum routing rectangle is generated. If the minimum rectangle does not exist, the routing neighborhood is adopted to generate the extended minimum rectangle. Finally, all nodes in the minimum rectangle are selected as the path nodes for the information backhaul. The specific algorithm is shown in Algorithm 1.
Algorithm 1 Multiple Shortest-Path Node Selection Algorithm
Input: source node position P_sn, destination node position P_dn, network topology T_L
Output: set of selected nodes N_r
Begin
1  Calculate the network topology T_L
2  Bring in the source node position P_sn and destination node position P_dn
3  Find the shortest path R_sp between P_sn and P_dn on T_L
4  if R_sp is a line segment do
5      Find the extended minimum rectangle R_sd of P_sn and P_dn according to Definition 2
6  else do
7      Find the minimum rectangle R_sd of P_sn and P_dn according to Definition 1
8  end if
9  Output the set of selected nodes N_r in R_sd
10 End
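One possible concrete reading of Algorithm 1 on a 2D-Torus grid is sketched below: the minimum rectangle is taken as all nodes spanned by the shortest wraparound intervals between source and destination in each dimension, and a degenerate (line-segment) rectangle is widened by one ring as a stand-in for the extended minimum rectangle. The function names and the widening rule are our own illustrative choices, not the paper's Definitions 1 and 2.

```python
def shortest_interval(a: int, b: int, size: int) -> list[int]:
    """Indices covered by the shortest wraparound arc from a to b on a ring."""
    fwd = (b - a) % size
    if fwd <= size - fwd:
        return [(a + k) % size for k in range(fwd + 1)]
    return [(a - k) % size for k in range((size - fwd) + 1)]

def minimum_rectangle(src, dst, planes: int, sats: int):
    """Nodes (plane, slot) in the minimum routing rectangle on a
    planes x sats 2D-Torus; degenerate rectangles are widened by one ring
    (an assumed stand-in for the paper's 'extended minimum rectangle')."""
    rows = shortest_interval(src[0], dst[0], planes)
    cols = shortest_interval(src[1], dst[1], sats)
    if len(rows) == 1:                      # source and destination share a plane
        rows = sorted({(rows[0] - 1) % planes, rows[0], (rows[0] + 1) % planes})
    if len(cols) == 1:                      # ... or share an in-plane slot
        cols = sorted({(cols[0] - 1) % sats, cols[0], (cols[0] + 1) % sats})
    return [(r, c) for r in rows for c in cols]

# 20 planes x 11 satellites per plane, as in the simulation section:
nodes = minimum_rectangle((2, 3), (5, 9), planes=20, sats=11)
print(len(nodes), "candidate routing nodes")  # 4 x 6 = 24 here
```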
After selecting the routing nodes for the information backhaul, it is necessary to allocate the computing and transmission resources of each node according to the observation tasks and resource occupancy. We propose a resource-allocation method based on the maximum flow of virtual links. First, according to the computing nodes and node adjacencies selected by Algorithm 1, a routing topology between the source and destination nodes is generated. Second, according to the computing-resource occupancy of each node, a virtual link between each node and the user is established, and the routing topology is updated. Third, with the available computing resources of each node as the independent variable, all routing nodes are traversed in equal proportions up to the full-load state, and a maximum-flow search is performed to obtain the maximum capacity of the network topology that satisfies the constraints. Finally, the flow result is output as the allocation strategy for transmission and computing resources. The specific algorithm is shown in Algorithm 2. The maximum flow from the source node to the destination node is computed with the Ford-Fulkerson algorithm, which searches for augmenting paths to increase the flow, i.e., paths with positive residual capacity that can reach the source node.
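To make the virtual-link construction concrete, here is a minimal Edmonds-Karp (BFS-based Ford-Fulkerson) sketch: each routing node gets an extra "virtual" edge toward the sink whose capacity equals its data-processing rate R_p, so the resulting maximum flow jointly accounts for transmission and computing capacity. The graph layout and all capacities are illustrative placeholders, not values from the paper.

```python
from collections import deque, defaultdict

def max_flow(cap, source, sink):
    """Edmonds-Karp: repeatedly BFS for an augmenting path with positive
    residual capacity, then push the bottleneck flow along it."""
    flow = 0
    while True:
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        bottleneck, v = float("inf"), sink        # find the path bottleneck
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = sink                                  # update residual capacities
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy 2x2 rectangle: crosslinks plus a virtual edge (capacity = R_p) per node.
cap = defaultdict(lambda: defaultdict(int))
for u, v, c in [("s", "a", 10), ("s", "b", 10), ("a", "d", 6), ("b", "d", 6)]:
    cap[u][v] = c
for node, r_p in {"a": 3, "b": 2}.items():   # virtual links model edge computing:
    cap[node]["d"] += r_p                    # processed data also reaches the user
print(max_flow(cap, "s", "d"))               # -> 17 instead of 12 without computing
```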
Algorithm 2 Resource Allocation Algorithm Based on the Maximum Flow of Virtual Links
Input: set of selected nodes N_r, task weight matrix B, minimum rectangle R_sd, computing difference constraint C_0
Output: information backhaul throughput C_d, resource allocation strategy
Begin
1  for each node, C_p = 1 : floor(f_CPU / z) do
2      Find the occupied computing resource R_u in N_r and calculate the processing rate R_p
…
8      Establish the virtual link between the node and the user, with link capacity R_p
9      Add the virtual link to the minimum rectangle R_sd
10     Update the topology R_sd
11     Calculate the maximum flow of the topology
…
End

It should be emphasized that the above process of selecting and allocating computing resources applies only to a single snapshot; for each snapshot in a sequence, the procedure determines the computing difference constraint C_0, finds the occupied computing resource R_u, calculates the processing rate R_p, and calculates C_d using Algorithm 2.

The parameters of the simulation are set as follows. A Walker-Delta LEO satellite network composed of 220 satellites is used in the simulation, with a total of 20 orbital planes and 11 satellites per orbital plane. The orbital height is H = 1000 km and the orbital inclination angle is 60°. The sampling interval of the simulation snapshot is 5 s. In this section, the performance of the algorithm is characterized using three metrics: the backhaul throughput, the delay of information backhaul, and the average CPU occupancy rate. The strategy given by our algorithm is compared with the always-transmission strategy and the always-computing strategy. The always-transmission strategy transmits all the raw data back to the user through the LEO satellite network; the always-computing strategy sends the processed feature information of all data to the user. There is no difference in the time complexity of the three methods. In the simulation, it is assumed that all nodes are in an idle state. After selecting a fixed source node, we select different destination nodes to verify the performance of the algorithm under different numbers of computing nodes. We select 2D-Torus network topologies ranging from 2×2 to 5×6. The network is simulated as shown … When the source node is far from the destination node in the topology, a large number of optional routing nodes can meet the computational requirements of the task. There is an intersection between the always-transmission curve and the always-computing curve; at this point, the processing capability of the multi-node computing network and the downlink of the last hop for information backhaul have reached a dynamic balance.
FIGURE 8. Delay of information backhaul with different numbers of routing nodes.
We use the delay of information backhaul to characterize the time from information generation to the user acquiring the information. FIGURE 8 shows the information backhaul delay for different numbers of routing nodes. The x-axis represents the number of routing nodes occupied by the information backhaul; the y-axis represents the delay of the information backhaul. It can be seen that the delay obtained by our strategy is better than that of the other two strategies in the same scenario. For a given amount of remote-sensing image data, the delay of information backhaul is inversely proportional to the information backhaul throughput.

We use the average CPU occupancy rate to represent the computing-resource occupancy of the routing nodes in a single task. FIGURE 9 shows the average CPU occupancy rate of the routing nodes for different numbers of routing nodes. The x-axis represents the number of routing nodes occupied by the backhaul information; the y-axis represents the average CPU occupancy rate. It can be seen that the always-transmission strategy only needs to perform packet routing-table lookup and forwarding, so it requires almost no computing resources. With an increase in the number of computing nodes, the curve of our strategy and the curve of the always-computing strategy both show an inflection point where they decrease from the full-load state. Because our strategy balances the occupancy of the computing resources well, its drop point appears earlier.

… the algorithm when the computing resources of some nodes are occupied. FIGURE 11 shows the resource-allocation strategy for a multi-node information backhaul when the computing resources of some nodes are occupied. The green nodes are the source nodes where the tasks are initiated. The orange node is the destination node for the information backhaul. The red nodes represent nodes with 30% of their computing resources occupied, and the purple nodes those with 50% occupied. The blue links represent crosslinks, and the yellow link represents the downlink. The arrows represent the transmission direction of the information flow. The value of R_p on a node represents the data-processing rate of the node per unit time. The value R_t / C_l on a link represents the current transmission rate R_t of the link and the maximum available transmission capacity C_l of the link. It can be observed that the algorithm in this study can quickly provide an optimal strategy under complex constraints.

We study the load-balancing problem of transmission and …
"Computer Science"
] |
The effectiveness of Japanese public funding to generate emerging topics in life science and medicine
Understanding the effectiveness of public funds in generating emerging topics will assist policy makers in promoting innovation. In the present study, we aim to clarify the effectiveness of grants in generating emerging topics in life sciences and medicine since 1991, with regard to Japanese researcher productivity and grants from the Japan Society for the Promotion of Science. To clarify which grant amounts and categories are more effective in generating emerging topics, from both the PI and investment perspectives, we analyzed awarded PIs' publications containing emerging keywords (EKs; the elements of emerging topics) before and after funding. Our results demonstrated, first, that in terms of grant amounts, while PIs tended to generate more EKs with larger grants, the most effective investment from the investor's perspective was found in the smallest amount range per PI (less than 5 million JPY/year). Second, in terms of grant categories, we found that categories providing smaller amounts to diverse researchers without excellent past performance records were more effective from the investment perspective in generating EKs. Our results suggest that offering smaller, widely dispersed grants rather than large, concentrated grants is more effective in promoting the generation of emerging topics in life science and medicine.
Introduction
Emerging topics (ETs) in basic research, covering emerging technologies, methodologies, issues, and scientific concepts, are reported in scientific articles and become fundamental resources for innovation [1-3]. Meanwhile, in research and development fields, new topics constantly and cyclically emerge, mature, converge, and fade out [2,4,5]. In the face of such a synergistic and dynamic situation, funding strategies that support the efficient generation of ETs, especially successful and high-impact ones, are critical for policy making.
For industry, the outcomes and knowledge from research activities undertaken by universities and public research institutions supported by public funds are an important source of information both for generating R&D and for completing existing projects [6,7]. For example, patents, especially in life science and medicine, tend to cite scientific articles supported by public funds [8,9]. Beyond published articles, the success of industrial innovation based on public scientific outcomes also requires well-managed collaborative research projects and communication between industry and researchers in public institutions [10,11]. Nevertheless, scientific articles supported by public funds remain a significant resource for innovation [10,12,13]. Diverse studies have reported the effects of public funds on the productivity and citation impact of scientific articles, as systematically reviewed by Aagaard et al. [14]. A major point of discussion in past studies is whether funds should be concentrated only on excellent researchers or distributed equally among all researchers; in other words, is big science or small science better? In empirical studies, both "too small" and "too large" research grants have been reported as inappropriate for guaranteeing balanced productivity/impact and funding streams [15-17]. At the same time, the issue of investing solely in researchers with excellent track records remains controversial [18-20].
While studies focusing on citation impact have well demonstrated the association between funds and high-impact research outcomes, the association between funds and the generation of novel or emerging topics has been poorly evaluated. This is due to the extensive lag between publication and recognition of research articles reporting highly novel or emerging topics [21]. Indeed, articles containing novel topics tend to be built on rare combinations of prior work [22,23] and tend to appear in lower-impact journals, increasing the lag time between publication and citation [24]. However, these articles are eventually cited at a higher rate than articles containing less-novel topics [24]. When we previously compared journal impact factors with the frequency of emerging keywords (elements of ETs) per article in a journal, a slight correlation could be found only in the range where impact factors were less than 20 [25]. Thus, any evaluation focusing on articles with high citations over short time periods can hardly uncover the effectiveness of funding on generating novel topics and ETs over the medium or long term.
Another missing viewpoint in past studies is overall return on investment. Many studies have reported the average or median number of publications/citations per awarded researcher, as well as correlations between funding and productivity per awarded researcher [19,20,[26][27][28][29][30][31][32][33]. While these analyses have clarified the effectiveness of funding on the awarded researcher side, the effect across all researchers remains unclear, since it is well known that about 15% to 20% of researchers produce 50% of publications; analyses per awarded researcher thus ignore a significant portion of researchers with regard to total investment efficiency [34][35][36][37]. Ideally, investor agencies (public and private) need to analyze the total amount of their investment versus the total productivity reported by the awarded researchers as a group, while excluding bias generated by a minority of hyperproductive scientists.
In this study, we aim to clarify the effectiveness of public funding on the generation of ETs by analyzing emerging keywords (EKs; see Conceptual framework), the elements of emerging topics, across major academic/industrial fields in life science and medicine within Japan. We specifically targeted Grants-in-Aid (GiAs) offered by the Japan Society for the Promotion of Science (JSPS), categorized into life science and medicine-related fields and starting between 1991 and 2013 (see Materials and methods). Several previous studies investigated the associations between funding and scientific outputs in Japan by analyzing awarded GiAs [19,33,[38][39][40]. However, how the sizes and categories of grants affect the generation of scientific novelty and impact is still poorly addressed. In addition, overall return on investment is rarely mentioned in published studies. Thus, in this study, we investigated the effectiveness of these funds on the generation of EKs, as well as of highly successful emerging keywords (HS-EKs; see Materials and methods), by comparing the number of EKs reported by the awarded PIs before and after the start of funding (EKs and HS-EKs reported from 1988 to 2018). At the same time, we also analyzed the overall effectiveness of funding on the generation of EKs and drew differential conclusions from the viewpoints of both the PI and investor sides.
Conceptual framework: emerging keywords as elements of emerging topics
Scientometric publications centered on ET research have increased notably within the last 10 years and are now generating interest in policy circles [2,41,42], as reported in Science and Technology Studies [43,44]. The study of ETs requires a rigorous definition, provided by Rotolo et al. (2015), in which emergence is operationalized through scientometric methodologies grouped into five main categories: 1) indicators and trend analysis, 2) citation analysis, 3) co-word analysis, 4) overlay mapping, and 5) combinations of these methodologies. Recent studies by Xu et al. (2019) and other research groups have shown a shift from citation-based to machine learning-based approaches for analyzing and predicting future emerging topics [45-47]. However, a crucial challenge is to develop a method for identifying emerging topics at their early stages, without machine learning-based analyses, to clarify the role of investment in fostering scientific advancements and novel technologies [48,49].
We previously demonstrated that retrospective co-word analysis of vast datasets remains effective for comprehending the mechanisms that generate emerging topics [4,50,51]. Our in-house scientometric method for quantifying past and current ETs in life science and medicine via PubMed, currently the main repository for these types of articles [5], was listed as a representative method for co-word analysis by Rotolo et al. [2]. For ETs, scientific specialties, first reported by Braam et al. [52], are defined by discrete sets of subject-related issues, problems, methods, and concepts that draw focus from researchers regardless of background (Fig 1). In this manner, research topics become aggregations of the specific keywords that best represent them.
With this model, we previously defined 'Emerging Keywords (EKs)' from the Medical Subject Headings (MeSH) [53] attached to PubMed articles as terms included in the top 5% by incremental rate in a given year [5]. With this operation, we can select MeSH terms as EKs which should be considered emergent at a particular time in the past. We can then finally cluster EKs that co-appeared in the same articles to generate ETs [5]. Our definition to identify emerging topics is frequently used [41,45,54,55].
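A minimal sketch of this co-word grouping step, assuming each article's MeSH annotations are available as a set of terms; the connected-components criterion and the `networkx` usage are illustrative stand-ins for the clustering procedure of [5], whose exact rules may differ.

```python
import networkx as nx

def cluster_eks_into_topics(article_terms, eks):
    """Group emerging keywords (EKs) that co-appear in the same articles into
    candidate emerging topics (ETs). `article_terms` is an iterable of
    per-article MeSH-term sets (hypothetical input format)."""
    g = nx.Graph()
    g.add_nodes_from(eks)
    for terms in article_terms:
        present = sorted(t for t in terms if t in eks)
        for i, a in enumerate(present):
            for b in present[i + 1:]:
                g.add_edge(a, b)  # edge = co-appearance in one article
    return [set(c) for c in nx.connected_components(g)]
```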
By using the number of EKs as an indicator, it becomes possible to assess the level of new-idea development. The accumulation of new EKs, together with the dropping of pre-existing EKs from the keyword clusters, finally generates new ETs (Fig 2), with more than 70% of total EKs generated in this manner [4]. Since an EK is a component of an ET, counting EKs in this study makes it possible to specifically assess where new ideas are developed and who develops them.
Additionally, by tracking the temporal changes of EKs, we can quickly grasp changes in technology and research subjects related to innovation from changes in topic elements such as subject-related materials, phenomena, techniques, and devices (Fig 1). For instance, the advent of the post-genomic era after 2000, due to the development of large-scale analysis techniques in the life sciences and medical science fields, is well known to have been a significant innovation. By utilizing our EK tracking method, we clarified that this movement was already happening in the mid-1990s [4]. We also successfully captured the development of nanotechnology and the advancement of RNA technology in their respective fields at early stages [5]. Thus, we can trace the trend from the co-occurrence of EKs in a few papers at the early stages of topic development (the budding stage of the topic among a few researchers) to the co-occurrence of the same EKs in many papers as a result of topic development (its spread among researchers).
In summary, this keyword-based approach is more granular than citation-based metrics as it allows for the identification of specific keywords that are associated with innovation. This approach also has an advantage in its timeliness to capture the early stages of innovation. Finally, by using a keyword-based approach, we can capture the process from the formation of ET at the initial budding stage to igniting and establishing innovation in the research community at the level of topic elements.
JSPS grants and PI datasets
The Japan Society for the Promotion of Science (JSPS), now supervised by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), has been offering competitive, peer-reviewed GiAs for decades across all types of scientific research in Japan. The original GiAs were initiated in 1939 and remain the largest source of public curiosity-driven science grants in Japan, covering all academic fields from the humanities and social sciences to the natural sciences [33,38,56]. Notably, GiAs weight basic research more heavily than applied research (National Institute of Science & Technology Policy 2004). Researchers belonging to institutions officially approved by MEXT can apply for GiAs as Principal Investigators (PIs), and these awards include both direct costs paid to promote research activities and indirect costs paid to the institutions to support research activities. Unlike grants in the US, GiAs do not cover the salaries of the PIs themselves, but they can pay personnel expenses for researchers and staff.
GiAs offer diverse curiosity-driven funding programs according to project purpose and targeted researchers through the design of grant categories. For example, GiAs for Scientific Research types (A), (B), and (C) were initiated in 1996 to target creative/pioneering research conducted by one researcher or jointly by multiple researchers. Currently, type (A) is for 3 to 5 years with 20 million to 50 million yen total, type (B) is for 3 to 5 years with 5 million to 20 million yen total, and type (C) is for 3 to 5 years with 5 million yen or less total (https://www.jsps.go.jp/english/e-grants/grants01.html). Where subcategories exist within the same category, applicants are filtered through a selective merit process whose difficulty increases with award amount. From 2001, a new category with larger grant sizes (50 million to 200 million yen total), the GiA for Scientific Research (S), was additionally started to support creative/pioneering research conducted by one or a relatively small number of researchers. In 2003, GiAs for Young Scientists were initiated to support young and early-career researchers (currently renamed GiA for Early-Career Scientists, with modified qualification requirements). The GiA for COE Research, which existed from 1995 to 2005, offered over 1 billion yen per project, aiming to cultivate a competitive academic environment among Japanese universities by providing targeted support for the creation of world-class research and education bases. The basic information on all GiAs investigated in this study is summarized in S1 and S2 Tables, with acceptance proportions indicated.
JSPS grant information was retrieved from the KAKEN Database of Grants-in-Aid for Scientific Research (https://kaken.nii.ac.jp/en/index/) on March 19th, 2021. We collected grants whose projects started between 1991 and 2013 and were categorized into a specific Research Category listed in S1 Table. To restrict the grants to life science and medicine-related fields, we selected grant information categorized under specific Research Fields in the database, as shown in S3 Table. Since the GiA for COE Research was not classified into any Research Field, we manually selected only those grants related to life science and medicine.
The PI name and affiliation in English for each awarded grant were retrieved from the KAKEN Researcher Database of Grant-in-Aid for Scientific Research (https://nrid.nii.ac.jp/en/index/) on January 1st, 2022, using the researcher IDs from the grant information retrieved from the KAKEN Grants site, since the KAKEN Grants website frequently lists names and affiliations only in Japanese. From the retrieved grant dataset, we excluded any grant information without English PI names and affiliations in both the KAKEN Grant and KAKEN Researcher databases. By this operation, we selected 182,810 grants out of the 209,732 grants collected as described above.
Article and associated MeSH terms dataset
Medical Subject Headings (MeSH) is a popular keyword database developed by the US National Library of Medicine. Content-specific MeSH headings are attached to each article under the supervision of professional curators and are typically used by PubMed to support literature searches [53]. MeSH terms attached to articles published between 1987 and 2020 were collected from PubMed (https://www.ncbi.nlm.nih.gov/pubmed/) on December 8th, 2021, covering a total of 26,061,316 articles. We applied the 2021 version of the MeSH tree structure to analyze the hierarchy of MeSH terms. Details on MeSH term identification and attachment can be found in our previous reports [4,5]. This operation obtained 256,680 kinds of terms, totaling 1,190,037,134 occurrences spanning 1987 to 2020.
Emerging keywords and highly successful emerging keywords
The method to identify EKs from MeSH terms was previously reported [5]. Briefly, the increment rate $I_n(t)$ of MeSH term $n$ in year $t$ was calculated as

$$I_n(t) = \frac{X_n(t)}{Y_n(t)},$$

where $X_n(t)$ is the total number of appearances of MeSH term $n$ on PubMed in years $t+1$ and $t+2$, and $Y_n(t)$ is the total number of appearances of MeSH term $n$ in years $t-1$, $t$, $t+1$, and $t+2$. EKs were defined as the terms ranked in the top 5% of $I_n(t)$ in year $t$.
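As a concrete rendering of this definition, a short sketch follows; the yearly count table is a hypothetical stand-in for the PubMed-derived MeSH dataset.

```python
def increment_rate(counts, term, t):
    """I_n(t) = X_n(t) / Y_n(t): appearances in years t+1 and t+2 over
    appearances in years t-1 through t+2. `counts[term][year]` is a
    hypothetical yearly appearance table."""
    x = counts[term].get(t + 1, 0) + counts[term].get(t + 2, 0)
    y = sum(counts[term].get(yr, 0) for yr in (t - 1, t, t + 1, t + 2))
    return x / y if y else 0.0

def emerging_keywords(counts, t, top_fraction=0.05):
    """Terms ranked in the top 5% by increment rate in year t."""
    ranked = sorted(counts, key=lambda n: increment_rate(counts, n, t),
                    reverse=True)
    return set(ranked[:max(1, int(len(ranked) * top_fraction))])
```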
In the present study, when a particular MeSH term was counted, those terms located in the higher positions of the hierarchy were additionally counted since higher-level terms are inclusive of lower-level terms. We thus collected 97,864 kinds of MeSH terms as EKs, totaling 48,211,167 occurrences that spanned the years 1988 to 2018.
An HS-EK was previously defined as satisfying the following criteria: 1) the number of its appearances 10 years after being designated an EK is at least 10 times larger than in its initial year, and 2) the total number of its appearances after 10 years is more than 100 [4,51]. While this criterion is arbitrarily set, in practice it allows us to recover various MeSH terms related to Nobel Prize-winning topics, such as 'Oncogene Proteins, Viral' in 1980 for the Nobel Prize-winning topic of "the cellular origin of retroviral oncogenes" in 1989, and 'Apoptosis' in 1991 for "genetic regulation of organ development and programmed cell death" in 2002 [4]. Using these criteria, we identified 3,556 kinds of highly successful emerging keywords out of 225,485 kinds of emerging keywords that appeared from 1989 to 2010.
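A hedged sketch of this two-part filter; the counting window for "appearances after 10 years" follows one plausible reading (the count in year t+10), and the count table is the same hypothetical structure as above.

```python
def is_highly_successful(counts, term, ek_year):
    """HS-EK test: the appearance count 10 years after the EK year must be
    (1) at least 10x the initial-year count and (2) greater than 100.
    Using the year-(t+10) count, rather than a cumulative total, is an
    assumption about the windowing in [4,51]."""
    initial = counts[term].get(ek_year, 0)
    later = counts[term].get(ek_year + 10, 0)
    return initial > 0 and later >= 10 * initial and later > 100
```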
Identification of articles published by grant-awarded PIs
In PubMed, author names frequently appear as the family name plus the initials of the given names (e.g., Ohniwa RL), and affiliations, such as institutions, were mostly attached only to first or corresponding authors before 2014 [51]. In this study, we identified articles published by a particular PI as those containing the PI's family name plus given-name initials together with the institution name in the same article. We thus accepted the risk of counting different authors as the same author. However, since corresponding authors are usually the PIs and first authors usually belong to the PI's lab, we considered this preferable to exhaustive analyses to determine perfect research institution matches. Our previous exhaustive study on researcher dynamics in generating ETs on PubMed showed that identifying articles using only the author name (family name plus initials) demonstrated the same tendencies as full-affiliation results after 2015 [51], supporting our criterion for the current exhaustive analyses.
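The matching heuristic could be sketched as follows; the record structure and field names are hypothetical, and the real pipeline parses PubMed author and affiliation fields.

```python
def matches_pi(article, family_name, initials, institution):
    """Attribute an article to a PI when an author string of the form
    'Family AB' and the PI's institution both appear in the same record.
    This accepts the homonym risk discussed in the text."""
    target = f"{family_name} {initials}".lower()
    has_name = any(author.lower() == target for author in article["authors"])
    has_affiliation = institution.lower() in article["affiliation"].lower()
    return has_name and has_affiliation

# Hypothetical usage:
record = {"authors": ["Ohniwa RL", "Suzuki T"],
          "affiliation": "University of Tsukuba, Tsukuba, Japan"}
print(matches_pi(record, "Ohniwa", "RL", "University of Tsukuba"))  # True
```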
Effectiveness of investment to generate emerging topics
To inclusively evaluate the overall effectiveness of a particular grant in contributing to ET generation, we introduced the following equation to estimate the Effectiveness of Investment (EI) of a specific grant category $g$:

$$\mathrm{EI}_g = \frac{E_g + H_g}{I_g},$$

where $E_g$ is the total number of EKs reported by the awarded PIs in the six years after grant category $g$ began, $H_g$ is the total number of HS-EKs reported by the awarded PIs in the six years after grant category $g$ began, and $I_g$ is the total invested amount in grant category $g$.
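Assuming the simple ratio form given above (a reconstruction; if the original equation weighted E and H differently, the function would change accordingly), the computation per category is direct:

```python
def effectiveness_of_investment(total_eks, total_hseks, total_invested):
    """EI_g = (E_g + H_g) / I_g under the ratio form assumed above:
    EKs plus HS-EKs reported by awarded PIs in the six years after the
    grant category began, divided by the total invested amount."""
    return (total_eks + total_hseks) / total_invested if total_invested else 0.0

# Hypothetical comparison of two categories (amounts in million JPY):
print(effectiveness_of_investment(600, 12, 450))    # smaller-grant category
print(effectiveness_of_investment(2400, 30, 4800))  # larger-grant category
```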
Relatively small grants more effectively generate articles and emerging keywords
Our aim was to evaluate the effectiveness of Grants-in-Aid (GiAs) offered by JSPS with special attention to the generation of ETs in life science and medicine fields. As such, we analyzed the number of EKs (elements of ETs) reported by the awarded PIs before and after the start of funding cycles. We targeted all grants beginning between 1991 and 2013, as listed in S1 Table, and compared the number of EKs and the number of articles reported by the PIs three years before and six years after the start of funding. First, we calculated the average numbers of EKs in articles published by the awarded PIs across the range of total grant amounts (Fig 3A). In all amount ranges, the average numbers of both published EKs and articles increased within three years after the funding began and again after four to six years. Here, higher amounts (especially those up to 50 million JPY and over 100 million JPY) correlated with higher EK and article production after the funding began. Although there was no difference in the average number of EKs generated by grants of 20 to 50 million JPY and 50 to 100 million JPY (where the average number of articles was even lower than in the 20 to 50 million JPY range), it is likely that PIs receiving more money will generate more articles and EKs.
However, this trend did not scale proportionally: doubling grant money did not double EK production (Fig 3), so investors should evaluate the effectiveness of the total amount of investment on EK and article production. Fig 4A shows the total investment in each amount range versus the total number of EKs and articles published within six years after the funding began. While there is a linear relationship between the numbers of publications and accompanying EKs in the ranges up to 5 million JPY, this linearity is lost above 5 million JPY. These results suggest a "too-large" cutoff beyond which grants are not effective for EK generation, and that grants of less than 5 million JPY are more effective in Japan.
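One way to make this investor-side check concrete is to fit a line on the small-grant ranges and measure how far the larger ranges fall below it; all numbers below are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical (total investment, total EKs) per amount range, million JPY.
investment = np.array([1.0, 2.5, 5.0, 20.0, 50.0, 100.0])
total_eks = np.array([120.0, 300.0, 600.0, 1400.0, 2600.0, 3800.0])

# Fit a line on the ranges up to 5 million JPY only...
mask = investment <= 5.0
slope, intercept = np.polyfit(investment[mask], total_eks[mask], 1)

# ...then compute the shortfall of each range from that line; large positive
# values above 5 million JPY indicate the lost linearity described for Fig 4A.
shortfall = (slope * investment + intercept) - total_eks
for amt, s in zip(investment, shortfall):
    print(f"{amt:>6.1f} M JPY: shortfall {s:>8.1f} EKs")
```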
The relationship between total investment amounts and the production of highly successful emerging keywords
ETs, in general, are expected to influence science and society through impact and innovation. However, we previously demonstrated that many EKs (the core elements of ETs) fade away without any appreciable influence [4]. Therefore, it is valuable to analyze funding effectiveness at generating successful EKs. HS-EK identification recovers various MeSH terms related to Nobel Prize-winning topics [4,51]; HS-EKs thus represent elements of high-impact ETs.
Our study focused on grants which began between 1991 and 2004 for HS-EKs and tracked HS-EKs for six years (until 2010). Unlike the EKs shown in Fig 3A, the average HS-EK generation over three years increased only up to the grant amount range of 50 million JPY and actually decreased above 50 million JPY (Fig 3B). The average numbers of HS-EKs generated in all amount ranges were lower four to six years after funding started (Fig 3B). Therefore, it is likely that, for further generation of HS-EKs, PIs are better off receiving smaller grants (less than 50 million JPY) over short periods.
To evaluate the effectiveness from the investor side, we compared the relationship between the total amount of investment and the total number of generated HS-EKs within six years after funding began (Fig 4B). As with the generation of EKs and articles, a good linear fit was found over ranges up to 5 million JPY, but not beyond, where higher amounts diverged further from the line of best fit. Thus, for the generation of HS-EKs, grant amounts of less than 5 million JPY are effective from an investor standpoint.
The productivity of articles, EKs, and HS-EKs in different grant categories
In Japan, each JSPS grant category has its own purpose (e.g., concentration on excellent researchers, equal distribution, encouraging start-ups, supporting continuous research, or targeting only young scientists), regardless of its budget size. Consequently, we next evaluated the effectiveness of each grant category in generating articles, EKs, and HS-EKs by the awarded PIs. Fig 5 shows the average number of articles, EKs, and HS-EKs generated in each grant category. For articles and EKs, except for the GiA for COE Research and the GiA for Specially Promoted Research, the totals increased within three years after the funding began and further increased within three to six years. In contrast, for HS-EKs, such tendencies were found only in the GiA for COE Research and the GiA for General Scientific Research (A), with most other GiAs showing their highest HS-EK productivity within three years after the funding began. On the contrary, HS-EK productivity in the GiA for Specially Promoted Research was highest before the funding began. This HS-EK trend might imply that the research projects proposed by each PI, consisting of potentially high-impact topics, are concluded and result in published papers within three years after funding initiation. It also suggests a potential inability of the PIs to generate new high-impact topics beyond that timeframe.
To investigate which particular grant categories are more productive in the generation of articles, EKs, and HS-EKs after grant funding, we plotted the average grant amount received by the PIs against the average numbers of articles, EKs, and HS-EKs within six years after the funding began (Fig 6). Linear fittings of all the categories showed that GiA for Co-operative Research (B), GiAs for Exploratory Research, GiA for Cancer Research, and GiA for General Scientific Research (A) appeared in the upper regions of the fit lines for articles, EKs, and HS-EKs. It is likely that these grant categories were relatively successful in generating output from funding.
To evaluate investment efficiency, we also compared the total invested amount in each grant category with the total number of articles, EKs, and HS-EKs generated within six years by the awarded PIs (Fig 7). The linear fittings demonstrated that GiA for Co-operative Research, GiA for Co-operative Research (B), GiA for Cancer Research, and GiAs for Exploratory Research also seemed to be comparatively successful grant categories in terms of the effort balance between investors and PIs.
Grant distribution of moderate amount categories to a variety of researchers is important
In order to determine research policy, consideration of investment effectiveness on various aspects of research productivity is important. In the above analyses, we separately evaluated grant effectiveness to generate EKs and HS-EKs. To inclusively evaluate which category is the most effective at generating emerging topics from the investor side, we introduced a simple equation to assess the effectiveness of the total investment on scientific outcomes related to the generation of emerging topics (EI; see Materials and methods).
The result demonstrated that the best grant category was GiA for Co-operative Research (B), followed by GiA for Challenging Exploratory or Exploratory Research, GiA for Encouragement of Scientists, GiA for General Scientific Research (C), and GiA for Scientific Research (C) (Fig 8). The sizes of these grants are relatively small; each is less than 5 million JPY in total over three years (S1 Table). This is consistent with our results on the amount ranges of the grants (Figs 3 and 4). Of note, these grant categories did not target only researchers with excellent publication records. As shown in Fig 3, PIs receiving relatively small grants generated smaller numbers of published articles, EKs, and HS-EKs before starting the funded project. Thus, for the investor side in Japan, moderate, short-term grant distribution to a wide variety of researchers is likely a more effective strategy to promote the generation of ETs.
When categorizing each grant into research funding types (those aimed at establishing and accelerating a new academic field, those supporting continuous research without limiting the field, and/or those emphasizing new exploration), each classification includes both relatively high and low EI values (Fig 8). This suggests that the funding amount has a greater influence than the characteristics of the grant category.
Smaller grants to many researchers are preferable in Japan
In this study, we evaluated the effectiveness of grants in generating ETs by analyzing the articles, EKs, and HS-EKs reported by the awarded PIs. Both the amount-range- and category-dependent analyses demonstrated that grants of less than 5 million JPY are more effective at generating ETs. Since the average number of published articles, EKs, and HS-EKs decreased progressively with grant budget size (Fig 3), such small grants were not concentrated only on researchers with excellent past records. Although competitive (acceptance rates varied from 10% to 30%) (S2 Table), good proposals had higher chances of receiving funding even if they were not submitted by top-tier researchers.
In this study, we focused on grants from the 1990s to the 2010s, but none persisted throughout the entire period. It is particularly well known in life sciences and medicine that there has been a significant shift in research approaches between the pre-2000s and the post-2000/post-genomic era. We previously reported that post-genomic era research requires more manpower plus focused and sustained effort [5,51]. We also demonstrated a mode-shift within the scientific culture of life science and medicine in generating ETs before and after the genomic era, i.e., the "progressive stage," with fruitful, novel findings facilitated by the identification and manipulation of genes until the late 1990s. This contrasts with the "re-evaluation stage," overlapping with the modern "post-genomic era," which focuses more on re-analyzing old topics by leveraging newly developed methods such as computational techniques, large-scale analyses, and nano-scale analyses [5,51]. Some of our targeted grants existed only until the mid-1990s (e.g., GiA for Co-operative Research [B] and GiA for General Scientific Research [C]), while others came into being only after the mid-1990s (e.g., GiA for Scientific Research [C]). However, regardless of the mode-shift, smaller grants ranked in the top 5 performers of our EI evaluation (Fig 8). Therefore, we conclude that smaller grants distributed to many researchers without excellent performance records are the most effective way to promote the generation of ETs. Taken together, our analysis indicates that "small" science is better in life science and medicine research from the investor or policy maker viewpoint in Japan.
Other implications for research policy in Japan
Another implication concerns the effectiveness of funding in enabling the awarded PIs to generate ETs. In this study, PIs receiving larger grants generated more articles, EKs, and HS-EKs after the grants began (Fig 3). PIs capturing higher grant funding also tended to have excellent past performance at generating articles, EKs, and ETs (Fig 3). Thus, if the investor side can disregard the overall return on investment, more funding to PIs with better performance records can facilitate the generation of more ETs. In this regard, "big science" does have a place in life science and medicine from the standpoint of supporting productive PIs in Japan.
Public support for young and/or early-career researchers is considered a keystone factor in their future performance and careers. A dataset analysis from the Netherlands showed that the number of publications by recipients of early-career grants was slightly higher than that of non-recipients, but the citation impact of publications between the groups was similar, especially regarding highly cited articles [57]. In the case of Japan, when focusing on individual PI performance, competitive project funding for early-career researchers led to lower generation of novelty than block funding [40]. In our results, the average number of articles, EKs, and HS-EKs per young/early-career PI (GiAs for Young Scientists, GiAs for Encouragement of Scientists, and GiA for Research Activity Start-up) tended to be relatively low compared with the other GiAs (under the average line of fit, except HS-EKs for GiA for Research Activity Start-up) (Fig 6). In contrast, the overall returns on investment of these grants in generating publications, EKs, and HS-EKs were relatively high (over the line of fit) (Fig 7). The EI rankings of GiAs for Young Scientists and GiAs for Encouragement of Scientists were 8 and 3 out of 17 grant categories, respectively (Fig 8) (GiA for Research Activity Start-up was excluded because of its 2010 start date). Thus, from an investor viewpoint, support for young and/or early-career researchers in Japan seems to be functioning.
Concluding remarks and possible future research
Our results demonstrated that, in terms of grant amounts, while PIs tended to generate more EKs with larger grants, the most effective investment from the investor's perspective was found in the smallest amount range per PI. It is noteworthy that this study clarified the asymmetric structure between the individual PI perspective and the investor side with respect to the effectiveness of grant concentration. Second, in terms of grant categories, we found that categories providing smaller amounts to diverse researchers without excellent past performance records were more effective, from the investment perspective, at generating EKs. This result emphasizes the importance of diversity for the generation of scientific novelty. It is possible that smaller grants allow researchers to take more risks, explore new ideas, and build relationships with other researchers to get their work noticed. Another interpretation is that pools of many smaller grants could foster greater diversity of research objects, materials, and methods. Investigations of these small-grant researchers will be valuable to clarify why smaller grants are more effective.
In this study, we restricted our analysis to awarded PIs to isolate results from effects due to co-investigators on the same grants, and we additionally excluded grant-funded researchers, including postdocs, project-based faculty members, technicians, etc. In general, since larger grants tend to include more researchers in separate groups, counting the publications, EKs, and HS-EKs reported by researchers other than the PIs could bias assessments, especially for larger grants. This point is crucial for future studies to evaluate the true effectiveness of science funding.
This study only analyzed researcher-supporting GiAs in the life and medical science fields offered by the JSPS, which mainly focuses on basic research. In Japan, there are other public grants to support applied research and/or research for industrialization, such as CREST (Creating REvolutionary technological seeds for Science and Technology innovation) and SAKIGAKE (which promotes individual research to nurture the seeds of future innovation and organize unique, innovative networks), offered by the Japan Science and Technology Agency. Analyses of outcomes supported by such grants may clarify the differential role of public funds in generating ETs in applied research fields.
Geographical differences should also be considered, as multiple studies have focused on grant strategies in developed countries [14]. It is reasonable that domestic culture may make these results country-specific; to extrapolate more generalizable conclusions, analyses of the situations in developing countries may be fruitful for discussing the priority of concentrated versus dispersed styles of funding.
Finally, funders must be careful about using metrics to decide where to allocate funding. Our index based on EKs and HS-EKs evaluates only one aspect of scientific productivity and creativity. In general, citation analyses of articles and patents are widely accepted means of assessing the effectiveness of public funds [8,9,14]. Thus, combinational analyses with the citation impact of both scientific articles and patents would also be valuable for revealing the significance of emerging topics in understanding social innovation and the impact of public-grant-supported research. Moreover, since this study focuses only on life science and medicine, it would be valuable to investigate other fields to seek common, interdisciplinary mechanisms that generate ETs in science.
Supporting information S1 | 7,889 | 2023-08-17T00:00:00.000 | [
"Medicine",
"Economics"
] |
Determinants of Self-service Technology Adoption
With the tremendous growth of self-service technologies (SSTs) in many industries, SSTs in the context of service provision are recognized as effective and important technologies to minimize investment costs and maximize service quality. By reviewing and integrating literature from several fields, the present paper attempts to provide an understanding of the links between SST characteristics (perceived risk, perceived ease of use, and perceived usefulness), consumer technology readiness, social pressures (coercive, normative, and mimetic), and SST adoption. Eight hypotheses from a conceptual model developed to predict and explain consumer intentions towards SST usage were tested with data collected from senior undergraduate and graduate students majoring in business. Through structural equation modeling (SEM), the findings indicated that SST characteristics, consumer technology readiness, and social pressures were crucial determinants of SST adoption. Beyond the empirical confirmation of the hypotheses, several practical implications for service marketers and future research directions for scholars are offered.
INTRODUCTION
Self-service technologies (SSTs) have been prevalently applied in many industries, including the airline, banking, travel, hotel, financial, and retailing sectors, since automated teller machines (ATMs) were introduced several decades ago. Today, these SSTs not only provide a variety of self-services to consumers, including automated hotel checkout, flight ticket checkout at kiosks or online, internet shopping, paying bills online, banking via ATMs, and self-scanning checkouts at grocery or discount stores (Bitner et al., 2002; Elliott et al., 2008), but also produce tremendous economic value (Burrows, 2001). For example, the dollar value of self-checkout transactions in North America grew from $525 billion in 2007 to around $1.3 trillion in 2011 (Lee and Greg, 2011).
SSTs are technological interfaces enabling consumers to become service coproducers rather than only service receivers (Meuter et al., 2005). Not only do SSTs shift the traditional service pattern that completely separates production and consumption, but they also change the role and the behavior of consumers. For companies, SSTs can enhance organizational competitiveness (Bitner, 2001; Cunningham et al., 2009; Meuter et al., 2000; Messinger et al., 2009) and, more importantly, minimize costs and provide better, more efficient, customized services (Burrows, 2001; Cheng et al., 2006; Weijters et al., 2007). Like companies, consumers can also obtain benefits from SSTs, including avoidance of employee moods, coverage of fluctuations in service demand, time and money savings, reduced dependency on time and location, quick responses to complaints, and a more consistent service without human employee contact (Cheng et al., 2006; Weijters et al., 2007).
As previously described, SSTs can provide benefits to companies and consumers, but overcoming resistance to SST adoption in transactions between service providers and consumers remains a great challenge (Cunningham et al., 2009; Gerrard et al., 2006): shifting consumers' existing habits away from the traditional service pattern is the most prominent obstacle to getting consumers to adopt SSTs for the first time (Elliott et al., 2008; Meuter et al., 2005).
Based on the technology acceptance model (TAM) by Davis (1989), perceived ease of use and perceived usefulness are identified as significant influences on the intentions of technology users: the easier and the more useful a technology, the higher the degree to which it is accepted (Davis, 1989; Davis et al., 1989). However, these two determinants are insufficient to lead consumers to adopt SSTs, because SST use involves consumers' information privacy and security (Featherman et al., 2010; Meuter et al., 2005).
During the process of self-checkout transactions, for example, consumers need to enter sensitive personal information (for example, credit card number, social security number, telephone number, and address) on websites or at kiosks. Therefore, SST adoption involves safety issues (Featherman et al., 2010; Laukkanen et al., 2008).
Research on SSTs indicates that Asian consumers are less likely to use internet banking, due to lack of adequate security and privacy (Elliott et al., 2008). Therefore, a deeper understanding of the possible relationship between SST characteristics and SST adoption is needed.
In the technology context, TAM was originally developed to predict and explain the technology-adopting behavior of individuals in the work environment, but TAM is unable to fully predict individuals' intended technology usage in marketing settings (Lin et al., 2007). This is because individuals in marketing settings are not mandated to use a technology by organizational objectives and may be freer to select among numerous available alternatives. That is, functional and technical issues alone cannot fully explain consumer SST acceptance (Laukkanen et al., 2008).
To fully explain SST adoption of consumers in marketing settings, consumer propensity to use SSTs should be addressed (Lin et al., 2007;Matthing et al., 2006;Parasuraman, 2000;Parasuraman and Colby, 2001;Xu, 2007). Among many models, technology readiness (TR) by Parasuraman (2000) appears to be the most widely cited to explain consumer propensity to accept technology-based products or services. Consequently, the present study attempts to explain consumer intentions towards SST adoption through TR.
Finally, the social contagion theory addresses the important role of social pressures in influencing innovation usage: individuals exposed to a social environment are more likely to develop beliefs, attitudes, and behaviors consistent with those of that environment (Shi et al., 2008). The institutional theory likewise indicates that individuals in a social network consciously or unconsciously take an action due to social pressures. Therefore, social pressures, which play an essential role in influencing SST adoption, should be addressed. However, relatively few studies have examined social pressures in SST adoption (Shi et al., 2008). To fill this gap, the study applies the institutional theory to posit that coercive, normative, and mimetic social forces are also significant determinants of SST adoption.
As discussed previously, the purpose of this study is to empirically test and validate the model of consumer SST adoption (Figure 1) based on the combination of SST characteristics (perceived risk, perceived ease of use, and perceived usefulness), consumer TR, and three social pressures (coercive, normative, and mimetic forces). The study then reviews the integrated model and develops hypotheses from it. Structural equation modeling (SEM) is used to test the model, and the empirical findings are explained. Implications for research and practice are also discussed and are expected to guide service providers in strategy formulation and marketing policy decisions for SST design and introduction. Finally, limitations and future research directions are provided.
Self-service technologies (SSTs) characteristics
Technology acceptance by an individual is hypothetically determined by his or her voluntary intentions towards adopting a technology (Davis, 1989; Davis et al., 1989). Failure to provide users with motivating factors for adoption will result in technology resistance (Ellen et al., 1991; Ram and Sheth, 1989). Evidence shows that consumer resistance to an innovation is caused by functional and psychological barriers (Ram and Sheth, 1989). Functional barriers are linked to innovation characteristics and are categorized into the risk barrier, the value barrier, and the usage barrier (Laukkanen et al., 2008; Ram and Sheth, 1989). The risk barrier is related to consumer perceived risk, while the value barrier and the usage barrier are related to perceived usefulness and perceived ease of use, respectively (Ram and Sheth, 1989).
Perceived risk
Personal awareness of risk, rooted in the desire to avoid identity theft and the selling or transmission of confidential personal information (for example, credit card numbers), discourages an individual from SST acceptance (Elliott et al., 2008; Janda et al., 2002; Laukkanen et al., 2008; McKechnie et al., 2006; Roy et al., 2001; Salisbury et al., 2001). Therefore, individual perception of risk is one of the key determinants of SST adoption (Laukkanen et al., 2008; Meuter et al., 2005; Mitchell, 1999). Perceived risk (PR) is defined as the overall amount of uncertainty perceived by a consumer in a particular purchase situation (Pavlou, 2003). Under uncertain or ambiguous circumstances, PR evokes psychological anxiety and may negatively affect the consumer decision-making process (Featherman et al., 2010; Ranaweera et al., 2008; Taylor, 1974). Substantial evidence also illustrates that PR leads consumers to an unwillingness to adopt online service transactions (Featherman et al., 2010; Laukkanen et al., 2008), because of feelings of threat and anxiety and an increase in psychological and learning costs (Ellen et al., 1991; Stone and Grønhaug, 1993).
In the context of SST adoption, a lack of face-to-face interaction or unfamiliarity with the characteristics of a SST increases consumers' risk perceptions and further reduces their motivation and the likelihood of SST trial (Elliott et al., 2008; Ram and Sheth, 1989). Therefore, we hypothesize: H1: Perceived risk will negatively impact consumer intention towards SST adoption.
Perceived usefulness
In the context of SST adoption, perceived usefulness (PU) refers to an individual's subjective belief that using a technology will not only increase job-related productivity, performance, effectiveness, or profitability, but also achieve time and money savings and eventually enhance quality of life (Davis, 1989; Davis et al., 1989; Lu et al., 2003; Wu and Wang, 2005).
Previous empirical studies indicate that a considerable increase in job performance from technology usage leads individuals in the workplace to accept a technology. Similarly, if a SST in market settings offers superior performance-to-price compared to alternatives, it is worthwhile for consumers to change the way they perform tasks (Laukkanen et al., 2008). However, the impact of PU on SST usage must be replicated and reconfirmed in this model. Therefore, the hypothesis is framed as follows: H2: Perceived usefulness will positively impact consumer intention towards SST adoption.
Perceived ease of use
Perceived ease of use (PEOU) refers to an individual's subjective perception that using a technology or system will be free of effort (Davis, 1989). Earlier research on technology acceptance suggests that PEOU is commonly identified as a key determinant in the successful introduction of a technology (Lin et al., 2007; Lu et al., 2003; Moore and Benbasat, 1991; Wu and Wang, 2005). A lack of ease of use of an innovation, or increasing complexity of its usage interface, results in individual resistance to that innovation (Moore and Benbasat, 1991; Ram and Sheth, 1989; Wu and Wang, 2005).
In the context of SST usage, PEOU is also a potential catalyst for increasing the likelihood of SST usage (Wang et al., 2003). Conversely, a complicated, inconvenient, and difficult SST is perceived to discourage consumers from adopting it (Gerrard et al., 2006; Laukkanen et al., 2008; Meuter et al., 2005). Therefore, we hypothesize: H3: Perceived ease of use will positively influence consumer intention towards SST adoption.
Prior studies also suggest that PEOU has an indirect effect on intention via PU (Lin et al., 2007; Lu et al., 2003; Moore and Benbasat, 1991; Wu and Wang, 2005). This is because the easier a technology is to use, the more useful it can be and the higher the degree of adoption. Since the ease of use of a SST likely conveys its benefits (usefulness and value) to consumers, the hypothesis is framed as follows: H4: Perceived ease of use will have a positive impact on perceived usefulness.
Technology readiness
The technology readiness index (TRI) by Parasuraman (2000) is a multifaceted framework adopted to describe differences in consumer beliefs about technology in general (Parasuraman and Colby, 2001). Different personal traits lead to different individual beliefs about various aspects of technology acceptance (Matthing et al., 2006; Walczuch et al., 2007; Xu, 2007). Technology readiness (TR) is defined as "people's propensity to embrace and use new technologies for accomplishing goals in home life and at work" (Parasuraman, 2000: 308) and is also viewed as an overall state of mind resulting from a gestalt of mental enablers and inhibitors that collectively determine a person's predisposition to use new technologies. Based on personal openness to technology, the TR construct comprises four sub-dimensions: optimism, innovativeness, discomfort, and insecurity. Optimism refers to a positive view of technology and a belief in increased control, flexibility, and efficiency in home life and at work due to technology, whereas innovativeness is a tendency to be a technology leader.
Discomfort is a perception of lacking control over technology and a feeling of being overwhelmed by it, whereas insecurity involves distrusting technology and skepticism about its ability to work properly. In the context of technology usage, therefore, optimism and innovativeness are drivers, while discomfort and insecurity are inhibitors (Lin et al., 2007; Parasuraman, 2000; Parasuraman and Colby, 2001; Walczuch et al., 2007).
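A common convention for combining the four sub-dimensions into a single TR score is to reverse-score the two inhibitors and average; this scoring rule is an assumption made here for illustration, not taken from the study itself.

```python
def technology_readiness(optimism, innovativeness, discomfort, insecurity,
                         scale_max=5):
    """Overall TR as the mean of the four sub-dimension mean scores, with the
    inhibitors (discomfort, insecurity) reverse-scored on the 5-point scale.
    An assumed convention; the study's exact scoring may differ."""
    reverse = lambda score: (scale_max + 1) - score
    return (optimism + innovativeness
            + reverse(discomfort) + reverse(insecurity)) / 4.0

print(technology_readiness(4.2, 3.8, 2.1, 2.5))  # -> 3.85 (hypothetical scores)
```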
Prior empirical studies on technology-based services suggest that individuals with higher TRI are more likely to accept and adopt SSTs, while those with lower TRI are less likely to do so (Elliott et al., 2008; Lin et al., 2007; Ranaweera et al., 2008; Parasuraman, 2000; Theotokis et al., 2008; Sophonthummapharn and Tesar, 2007; Walczuch et al., 2007). However, results of the study by Lin et al. (2007) reveal that TR has no direct impact on intentions towards using a specific e-service. To bridge this gap, the next hypothesis is framed as follows: H5: Consumers' technology readiness propensities will have a positive impact on their intentions towards SST adoption.
Social pressures
Based on the social contagion theory, the beliefs, attitudes, and behaviors of social actors (for example, individuals, groups, and organizations) are consistent with those of other actors (for example, family and peers for individuals; customers, suppliers, partners, and competitors for companies). This is because social actors tend to share similar notions with the actors surrounding them and further develop direct social networks (Burt, 1987).
When facing pressures, especially, social actors examine whether their notions, attitudes, and behaviors are compatible with those of other actors and conform accordingly (Burt, 1987). Three social pressures (coercive, normative, and mimetic) originate from the institutional theory, which attends to the deeper and more resilient aspects of social structure (DiMaggio and Powell, 1983). The institutional theory posits that the various networks and interactions built up in institutions shape the beliefs, attitudes, and behaviors of social actors, and it also holds that social ties (for example, networks and interactions) play a pivotal role in explaining social actors' attitudes and behaviors toward innovation adoption (Scott, 2005).
A number of studies have addressed the institutional theory at the organizational level, but relatively little research has contributed to the individual level (Shi et al., 2008). In essence, Cooley (1909) argues that early institutional theory and analyses in the economics field were applied at the individual level, because "the individual is always cause as well as effect of the institution" and "in the individual the institutions exist as habit of mind and of action" (Cooley, 1909: 314). Research on technology acceptance also suggests that the institutional theory can explain and predict consumer intentions towards technology usage (Shi et al., 2008).
Coercive pressures
Coercive pressures are defined as formal or informal pressures to make social actors comply with the requested attitudes, behaviors, and practices, due to feeling pressured to do so by other more powerful actors in their social environment (DiMaggio and Powell, 1983).
On one hand, coercive pressures at the organizational level, stemming from resource-dominant organizations, regulatory bodies, and parent corporations, are categorized into competition and regulation (Shi et al., 2008). Competitive pressures result from the threat of losing competitive advantage, whereas regulatory pressures arise from government agencies and professional regulatory bodies (Shi et al., 2008). Evidence illustrates the positive impact of coercive pressures from organizations on technology adoption (Mohamad and Ismail, 2009).
On the other hand, the impact of coercive pressures at the individual level on individual technology usage is less obvious, because individuals in marketing settings are not forced to use a technology by competitors, suppliers, government agencies, or professional regulatory bodies. However, consumers in marketing settings may still face coercive pressures arising from the service provision and operating strategies of companies (for example, minimizing costs and maximizing service quality) that push them to adopt SSTs. For example, banks ask their customers to complete some financial transactions (for example, mortgages and loans) through internet services (Shi et al., 2008).
Based on previous studies, therefore, we hypothesize: H6: Greater coercive pressures will positively influence consumers' intentions towards SST adoption.
Normative pressures
Unlike coercive pressures, normative pressures are defined as pressures that make social actors voluntarily, but not consciously, copy or imitate attitudes, behaviors, and practices representing the only way to do things (DiMaggio and Powell, 1983; Scott, 2005). Previous studies also suggest that social actors often unconsciously copy an action taken by a large number of other actors, because an action taken by most actors for a long time becomes taken for granted and legitimized (Liao et al., 2007; Shi et al., 2008). To be identified with their social context, individuals come to believe that the action represents the only way to do things. This imitation is not coerced by any powerful actors (Shi et al., 2008).
In the context of SST adoption, empirical studies suggest that greater normative pressures lead to greater intended usage of a SST (Liao et al., 2007; Shi et al., 2008). To avoid dissonance and to comply with expectations, normative pressures may lead individuals without SST experience to accept SSTs when most people important to them think they should do so (Shi et al., 2008). Therefore, we hypothesize: H7: Greater normative pressures will positively influence consumers' intentions towards SST adoption.
Mimetic pressures
Mimetic pressures occur when social actors believe that following or imitating actions taken by successful and high-status actors (for example, celebrities, politicians, and entrepreneurs) will yield positive outcomes (for example, reductions in research and experimentation costs, and avoidance of the risks inherent in being first-movers) (Shi et al., 2008). Moreover, individuals in an institutional environment are apt to seek out the behavioral patterns of successful and high-status people and then voluntarily and consciously copy the same actions, because they think this imitation will lead to better performance (DiMaggio and Powell, 1983; Shi et al., 2008).
In the SST adoption context, findings of an empirical study by Shi et al. (2008) indicate that mimetic pressures have no impact on internet banking adoption. However, evidence of mimetic change in many studies examining the adoption of new technology-based products and services illustrates that most consumers, especially teenagers, adopt products or services endorsed by celebrities (Hawkins et al., 2007). This is because individuals may imitate the attitudes and behaviors of actors whom they adore. Based on the previous discussion, therefore, we hypothesize: H8: Greater mimetic pressures will positively influence consumers' intentions towards SST adoption.
METHODOLOGY
Based on previous studies, SSTs involve a variety of self-services. As a consequence, the study narrows its scope to internet shopping, online transactions, and self-scanning checkouts at grocery or discount stores in order to validate the conceptual model (Figure 1). A 62-item questionnaire is employed to measure the constructs. Of the 62 items, eight items by Davis (1989) and Davis et al. (1989) are slightly reworded to measure perceived ease of use and perceived usefulness, whereas four items by Broekhuizen and Huizingh (2009) are slightly adapted to measure perceived risk. The full 36-item TRI scales by Parasuraman (2000) are employed to measure the four sub-dimensions of TR (that is, 10 items for optimism, 7 items for innovativeness, 10 items for discomfort, and 9 items for insecurity). Nine items by Shi et al. (2008) are adapted to measure the three social pressures (that is, 3 items each for normative, coercive, and mimetic). Five items for intention to use SSTs are adapted from Davis (1989). Furthermore, the 36 items for TRI are measured on 5-point Likert scales, while the other 26 items are measured on 7-point Likert scales. All items originally in English are translated into Chinese and back-translated into English to ensure equivalent meaning (Brislin, 1980). The questionnaire is also pilot-tested using undergraduate business students with SST experience. The feedback from the pilot test is used to improve the readability of the questionnaire.
Data collection and sample characteristics
In this study, senior undergraduate and graduate students majoring in business are chosen as survey subjects for several reasons. First, Im et al. (2003) point out that younger people are more receptive to new technologies. Second, although the respondents are students, they are considered reasonable representatives of online shoppers because they have business knowledge and are regular Web users (Gefen et al., 2003). Third, the use of a student-based sample has been proven useful, because its greater homogeneity leads to greater control over extraneous variables (Peterson, 2001).
Data collection is via a paper-based methodology. Before mailing 1000 questionnaires to the business colleges of universities in the middle part of Taiwan, invitation letters are mailed to faculty and students in the business colleges to explain the purpose of the study and solicit their cooperation. After one and a half months of data collection, 300 questionnaires are returned. After excluding 78 incomplete questionnaires, the final number of usable questionnaires is 218, for a response rate of 21.8%. Of the 218 participants, 129 (59.2%) are female and 89 (40.8%) are male. The average age and monthly income of the 218 participants are 28.9 years and about NT$24,037, respectively.
DATA ANALYSIS AND RESULTS
Reliability of the instrument was assessed with Cronbach's alpha. Results showed alpha coefficients of 0.86 for PEOU, 0.89 for PU, 0.74 for PR, 0.84 for optimism, 0.70 for innovativeness, 0.74 for discomfort, 0.82 for insecurity, 0.70 for coercive pressures, 0.86 for normative pressures, 0.78 for mimetic pressures, and 0.92 for intention to use. That is, the internal consistency and stability of the instrument were acceptable (Nunnally, 1978). All subsequent data analyses were conducted with AMOS version 18.
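For readers who wish to reproduce this kind of reliability check, the sketch below computes Cronbach's alpha from an item-score matrix. The synthetic 7-point Likert data are purely illustrative and are not the study's data; only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative check on synthetic 7-point Likert responses (not the study data)
rng = np.random.default_rng(0)
latent = rng.normal(size=(218, 1))
peou = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(218, 4))), 1, 7)
print(f"alpha = {cronbach_alpha(peou):.2f}")
```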
To establish construct validity, convergent and discriminant validity were assessed through confirmatory factor analysis (CFA) before examining the conceptual model. Results indicated an adequate model fit (χ2/df = 2.003, p = 0.85, GFI = 0.98, AGFI = 0.92, RMSEA = 0.032). Convergent validity assesses the extent to which items designed to measure the same construct are related, while discriminant validity assesses the degree to which items designed to measure different constructs are related (Hair et al., 2006). The standardized factor loadings of all items measuring the same constructs were over 0.70 and significant (p < 0.001), whereas the correlations between items measuring different constructs were low, ranging from 0.00 to 0.62. Therefore, convergent and discriminant validity were established (Hair et al., 2006).
Next, the conceptual model was assessed by examining the path coefficients (the β weights in Table 1), which indicate the strength of the relationships between independent and dependent variables, whereas the R2 values indicate the amount of variance predicted by the combination of exogenous variables (Hair et al., 2006). All path coefficients and t-statistics for the hypothesized relationships were estimated via maximum likelihood in AMOS. The results of hypothesis testing are presented in Figure 2.
All path coefficients in Figure 2 were statistically significant. On further examination, the β weight from PR to intended usage of SSTs (β = -0.13, p < 0.05) provided support for H1. H2 and H4 were supported by the significant paths from PEOU to PU and from PU to intention (β = 0.87 and 0.45, respectively; p < 0.001). H3 was also supported by the significant direct path from PEOU to intention (β = 0.03, p < 0.05). The total effect of PEOU on intention was 0.42.
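The reported total effect can be checked by simple path-tracing arithmetic: the indirect effect of PEOU on intention is the product of the PEOU-to-PU and PU-to-intention coefficients, and the total effect adds the direct path. A minimal sketch, using only the coefficients reported above:

```python
# Reproducing the reported total effect of PEOU on intention from the
# path coefficients above (direct PEOU->intention = 0.03,
# PEOU->PU = 0.87, PU->intention = 0.45).
direct = 0.03
indirect = 0.87 * 0.45              # effect routed through PU, ~0.39
total = direct + indirect
print(f"indirect = {indirect:.2f}, total = {total:.2f}")  # 0.39, 0.42
```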
As shown in Figure 2, TR and the three social pressures (coercive, normative, and mimetic) each had a significant positive impact on intention to use SSTs (β = 0.11, p < 0.05; β = 0.28, p < 0.001; β = 0.14, p < 0.05; and β = 0.14, p < 0.05, respectively). These findings support H5 to H8. Moreover, the R2 values indicate that the model explains 76.0% of the variance in PU and 63.0% of the variance in intention to use SSTs, providing further evidence in support of the conceptual model.
DISCUSSION AND CONCLUSIONS
This study examines the impact of three sets of determinants (SST characteristics, consumer propensity, and social pressures) on SST adoption by applying TAM, perceived risk theory, the TRI, and institutional theory. The results support several conclusions. First, H1 was statistically supported (β = -0.13, p < 0.05), confirming the hypothesized negative impact of PR on intended usage of SSTs; that is, high PR can depress consumer evaluations and usage of an SST.
Consumers who perceive greater risk in SSTs psychologically resist SST acceptance and adoption, because their risk perceptions of SSTs exceed their risk perceptions of traditional services. This result is consistent with the studies by Featherman et al. (2010) and Roy et al. (2001). Second, the support for H2 to H4 reconfirms PU as a critical determinant of intention to use SSTs, with PEOU exerting both a direct effect (β = 0.03, p < 0.05) and an indirect effect through PU (β = 0.39, p < 0.001) on intention to use SSTs. Notably, the indirect effect of PEOU through PU is much stronger than its direct effect, perhaps because consumers focus not only on the ease of use of an SST but even more on its usefulness or value (potential benefits). This finding validates TAM as a relevant research model in the context of SST adoption and confirms the study by Lin et al. (2007). Third, the support for H5 (β = 0.11, p < 0.05) illustrates that consumers with higher TR are more predisposed to SST adoption than those with lower TR.
In the study by Lin et al. (2007), by contrast, TR had no direct impact on SST usage but did have an indirect effect through PEOU and PU, so the influence of TR on SST usage is still recognized. Fourth, the support for H6 to H8 shows that the three social pressures are key determinants of SST adoption, even though mimetic pressures had no impact on intended usage of online banking services in Shi et al. (2008). These findings further indicate that individuals in a social environment are consistently influenced by other social actors (peers, friends, family, and successful, high-status persons). Finally, given that the model explains 63.0% of the variance in intended usage of SSTs, these determinants capture much of what drives consumer assessments of SST adoption.
Practical implications
SSTs, as one class of technologies in service provision, can reduce costs and improve service quality for companies while affording convenience and time savings for consumers. However, overcoming resistance to SST adoption in transactions between service providers and consumers remains a great challenge (Cunningham et al., 2009; Gerrard et al., 2006).
Analysis of the data in this study also provides practical implications for service providers. First, Featherman et al. (2010) suggest that enhancing the perceived corporate credibility and image of SST providers can reduce consumers' risk perceptions of SST usage, because consumers believe that a reputable firm will make greater efforts to deliver what consumers need and want. Second, the TAM validated in this study identifies PEOU and PU as critical determinants of SST usage. Service providers should therefore simplify technological interfaces and provide clearer, more readable instructions for SST usage. Moreover, because the indirect effect of PEOU through PU on intention to use SSTs is far stronger than the direct effect of PEOU, SST providers should make greater efforts to help consumers understand the potential benefits (perceived usefulness and value) of SST adoption, for example through advertising or training activities.
Third, as shown in Figure 2, TR can be considered a critical psychological element of consumer assessments of SST adoption. SST providers should therefore place more emphasis on individual differences by building "the psychographic profile" of their consumers (Lin et al., 2007: 652). By combining consumer readiness with system characteristics (PEOU and PU), SST providers can segment their target consumers more effectively and efficiently and communicate with them directly.
Fourth, SST providers can take advantage of social pressures to make potential consumers jump onto the SST bandwagon (Shi et al., 2008); this also helps explain why individuals with low TR may still adopt an innovation under social pressure. The significant impact of coercive pressures on SST usage suggests that SST providers could offer certain services or incentives (for example, promotions, coupons, and discounts) exclusively through the internet or other technological interfaces.
Regarding normative pressures, SST providers can build a database of SST users and use it to create normative expectations (Shi et al., 2008). Specifically, research on subjective norms suggests that word-of-mouth among peers, family, and friends has a significant effect on consumer intentions towards SST adoption in the pre-consumption stage (Liao et al., 2007). SST providers can also foster the loyalty of current consumers and attract new ones through the word-of-mouth of existing users. Given the positive effect of mimetic pressures on SST adoption, high-profile SST users can influence the usage of others with lower profiles. Through the endorsement of successful, high-status actors (e.g., celebrities, politicians, and entrepreneurs), SST providers can therefore both retain current consumers and entice potential consumers to jump onto the SST bandwagon.
Finally, as shown in Figure 2, the effect of coercive pressures on SST usage is stronger than the effects of normative and mimetic pressures, suggesting that exerting coercive pressures is more efficient than exerting the other two.
LIMITATIONS AND DIRECTION OF FUTURE RESEARCH
The present study contributes rich insights into perceived risk, TAM, and TR in service provision, and applies institutional theory at the individual level, by combining SST characteristics, consumer technology readiness, and institutional pressures to predict and explain consumer intentions towards SST adoption. However, the study has several limitations.
First, the response rate is low and the sample consists only of university students; although students are accepted survey respondents in academic research, the findings and conclusions may not generalize to other user groups. That is, the external validity of the study is limited.
Second, with 62 items in the questionnaire and considerable similarity in content between items, respondents may have become confused or lost patience. Moreover, because SSTs involve a variety of technological interfaces (internet-based and non-internet-based), respondents may not have been able to fully reflect their technology readiness.
These limitations suggest directions for future research. First, to validate the generalizability of the conceptual model, future studies may survey other user groups in different regions and attempt to increase the response rate. Second, because individuals exhibit different technology readiness towards different technology-based products and services, future studies may focus on a single type of SST to gain an in-depth understanding of consumer readiness.
Finally, prior studies illustrate that the determinants leading consumers to adopt SSTs in the pre-consumption stage may not significantly affect consumer assessments of SST adoption in the post-consumption stage (Hawkins et al., 2007; Liao et al., 2007). To enhance the robustness of this line of research, future studies may therefore explore a richer set of variables to predict and explain consumer intended usage of SSTs in the post-consumption stage.
"Business",
"Computer Science"
] |
An Efficient Pipeline to Obtain 3D Model for HBIM and Structural Analysis Purposes from 3D Point Clouds
The aim of this work is to identify an efficient pipeline to build HBIM (heritage building information modelling) and create digital models to be used in structural analysis. To build accurate 3D models, it is first necessary to perform a geomatics survey, i.e., a survey with active or passive sensors followed by adequate post-processing of the data; in this way, it is possible to obtain a 3D point cloud of the structure under investigation. For the next step, known as "scan-to-BIM (building information modelling)", an appropriate methodology was developed involving the use of Rhinoceros software and a few tools developed within this environment. Once the 3D model is obtained, the last step is the implementation of the structure in FEM (finite element method) and/or HBIM software. In this paper, two case studies involving structures belonging to the cultural heritage (CH) environment are analysed: a historical church and a masonry bridge. For both case studies, the different phases are described, involving the construction of the point cloud and, subsequently, the construction of a 3D model. This model is suitable both for structural analysis and for the parameterization of the rheological and geometric information of each single element of the structure.
Introduction
Modern surveying technologies offer new application perspectives in cultural heritage (CH), both for the acquisition of metric data and for the representation and analysis of objects of historical and artistic interest [1]. In this way, it is possible to obtain a digital representation of objects or structures belonging to the CH environment in terms of position, shape, geometry and description of each element. Geomatics surveys are the primary step in the process of conservation, enhancement and management of CH. A geomatics survey can be performed using image-based 3D modelling (IBM) or range-based modelling (RBM).
IBM methods use measurements from 2D images (generated by passive sensors) to obtain 3D models. In the last few years, a very successful approach to the construction of 3D models has been based on the structure from motion (SfM) and multi-view stereo (MVS) algorithms. Using these approaches, a 3D model or 2D orthophotos can be obtained rapidly and automatically with photogrammetric software. In general, the processing steps that lead to the construction of the model are: (i) alignment of the images; (ii) building a dense point cloud (PC); (iii) building the mesh; and (iv) building an orthomosaic. Furthermore, the passive sensors used in the IBM method may be mounted on mobile platforms (such as cranes, unmanned aerial vehicles (UAVs), hot-air balloons, etc.). In this way, it is possible to acquire data even for big, complex and inaccessible structures, such as the upper parts of buildings, aqueducts, bridges, etc.
Range-based modelling is based on active sensors, which provide a highly detailed and accurate representation of a 3D object or structure. An example of an active sensor is the terrestrial laser scanner (TLS). TLS is a ground-based method that rapidly acquires accurate, dense 3D point clouds of a scene through laser range-finding [2].
While in the past these two techniques were often treated as separate methodologies, compared in terms of accuracy, cost and flexibility [3], only recently have they started to be considered complementary [4]. The benefit of integrating the two technologies is to combine the TLS capability of directly acquiring a dense coloured cloud with the flexibility of photogrammetry, which can operate even in exceptional conditions.
By an adequate post-processing of geomatics survey data, it is possible to obtain a georeferenced point cloud of the structure (or object) under investigation.
Next, it is necessary to transform the point cloud into objects for BIM (building information modelling) and FEM (finite element method) analysis. Recently, many studies have focused on the possibility of managing point clouds within BIM or structural analysis software and/or on identifying a suitable pipeline to obtain 3D models for these purposes [5]. While current BIM and structural analysis software allows such data to be imported, it does not provide flexible, manageable procedures for transforming them into models suitable for subsequent processing. Indeed, this is the main challenge of modelling, as it is necessary to develop simple methods to obtain BIM or HBIM (historic building information modelling) models that still guarantee accuracy, precision and a quality of representation consistent with the acquired data. In addition, the model must be enriched with data and information that are not strictly geometric, such as historical information, analyses of degradation or deformation, and levels of detail not granted by the complete model.
Related Works
HBIM, as the integration of contemporary technology and the BIM approach in the field of CH documentation, was introduced by Murphy et al., 2009 [6]. The purpose of that research was to identify a new methodology for creating full engineering models from laser scan and image survey data for historic structures. The identification of a suitable procedure to obtain a BIM model from the survey is therefore key, especially in the management of structures of particular historical-architectural interest. A comprehensive review of the several BIM software types for CH is reported in López et al., 2018 [7], where information such as functionality, tools, object structure, interoperability and links is addressed. Fregonese et al., 2015 [8] developed a procedure to obtain a 3D model for BIM purposes: once the model from the 3D survey was obtained, the solid model was recreated directly in Autodesk Revit, where each single element was modelled using a system family or "Model in Place". This BIM software allowed historical and complex elements to be modelled parametrically and connected to a database; however, due to the limitations of commercial BIM software, the authors developed their own software for the management and planning of restoration operations. Barazzetti et al., 2015 [9] showed a procedure for BIM generation from point clouds via BIM parameterization of NURBS (non-uniform rational B-spline) curves and surfaces using Revit software. In their case study, the authors suggest a procedure that provides BIM objects of complex elements by turning NURBS surfaces into specific BIM families. Using this approach, some problems were found in the modelling of complex objects and in the layer-based reconstruction from the intrados to the extrados. Eigenraam et al., 2016 [10] present a method to obtain free-form shell structures, from point cloud to finite element model. In that paper, special attention is given to geometric accuracy, considering that shape and force interact; the method was applied to Heinz Isler's models for reverse engineering purposes. Furno et al., 2017 [11] compared two different modelling methods, one based on the use of NURBS and a parametric one based on BIM objects, using Rhinoceros and Revit software. The "direct" modelling in Rhinoceros made it possible to process the survey data and obtain a model divided into blocks, with the possibility of modifying the intrinsic parameters of the individual elements using the Grasshopper plug-in (included in Rhinoceros). However, the model obtained in this way does not add information of any kind to the elements; for this reason, the modelling of the same structure, Milan Cathedral in Italy, was also performed with the Revit software.
León-Robles et al., 2019 [12] discussed HBIM applied to a masonry bridge using the commercial BIM software Revit, but encountered great difficulties because only a few library families are dedicated to the modelling of complex civil constructions such as bridges. Moreover, in this case study, an analysis of the deformations between the designed model of the bridge and the surveyed one was carried out.
Bassier et al., 2019 [13] suggest a fast and accurate procedure to capture the spatial information required for FEM. The workflow involves two parallel methods: the former converts the point cloud into a complex FEM mesh (through a series of semi-automated procedures), while the latter extracts crack information and enhances the FEM mesh to incorporate the crack geometry.
Organization of the Article
This paper is organized as follows. The first part describes the approaches used to reconstruct the surface of an object from a point cloud generated through geomatics surveys. Subsequently, after describing the method that allows a 3D model for HBIM and FEM to be obtained from a 3D point cloud, two case studies are discussed. In particular, the method developed is applied to a historical church featuring a rather simple shape, and to an old masonry bridge with a complex structure. Conclusions are summarized at the end of the paper.
Three-Dimensional (3D) Surface
To generate a surface model from a point cloud, the reconstruction techniques implemented in dedicated software conventionally use tessellation, 3D reconstruction based on Delaunay triangulation, or NURBS surfaces. Triangulation may show poor accuracy near edges or where the surface normal changes suddenly. Furthermore, the representation of such surfaces may require numerous pieces and, consequently, greater computational capabilities, while decimating the triangulation may lose information on the geometry of the structure under examination. This is why many commercial software products do not rely on mesh models but use precise analytical models in which surfaces are represented mathematically [14]. In the following sections, the triangular irregular network (TIN) and NURBS are described in detail.
Triangular Irregular Network (TIN)
TIN generation is one way to obtain a surface reconstruction. Triangulation may be performed in two or three dimensions, in accordance with the geometry of the input data. A TIN uses the original sample points to create many non-overlapping triangles that cover the entire region according to a set of rules, and the surface is described (approximately) by these triangles [15]. The computer graphics community tends to call this polygonal model a "mesh". A mesh contains vertices, edges and faces, and its simplest representation is a single face. For triangular meshes, an indexed face list consists of an array of vertices, each with three coordinates, and an array of faces, each holding three indices into the vertex array [16]. A triangulation criterion is used to construct the non-overlapping triangles from the discrete sample points; Delaunay is the most common triangulation algorithm.
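As a concrete illustration of the indexed face list described above, the following sketch builds a 2.5D TIN from a point cloud with SciPy's Delaunay triangulation; the random cloud stands in for real survey data:

```python
import numpy as np
from scipy.spatial import Delaunay

# points: N x 3 array (x, y, z); random data as a stand-in for a survey cloud
points = np.random.rand(1000, 3)

# 2.5D TIN: triangulate the planimetric (x, y) coordinates; each triangle
# then inherits the z values of its three vertices.
tri = Delaunay(points[:, :2])

# Indexed face list: an array of vertices plus an array of faces, each face
# holding three indices into the vertex array.
vertices = points            # (N, 3) float coordinates
faces = tri.simplices        # (M, 3) integer indices
print(vertices.shape, faces.shape)
```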
Non-Uniform Rational B-Spline (NURBS)
NURBS are mathematical representations of 3D geometry that accurately define a generic geometric entity, from simple to more complex shapes. A NURBS curve is mathematically defined by the following equation:

$$C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}$$

where the $w_i$ are the weights, the $P_i$ are the control points, and the $N_{i,p}(u)$ are the normalized B-spline basis functions of degree $p$, defined recursively as [17,18]:

$$N_{i,0}(u) = \begin{cases} 1 & \text{if } u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}, \qquad N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\, N_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\, N_{i+1,p-1}(u)$$

where the $u_i$ are the knots forming the knot vector $U = \{u_0, u_1, \ldots, u_m\}$. Therefore, a NURBS curve is defined by four characteristics: the degree, the control vertices, the knot vector and the weights. The degree of the NURBS (a positive integer) defines the piecewise polynomial blending functions; the higher the degree of the polynomial, the more flexible the curve or surface. The control vertices are a row of at least (degree + 1) points. The knot vector defines how the polynomial pieces are blended together with the proper smoothness. Generally, there are two kinds of knot vector: uniform (i.e., with constant spacing between the knots) and non-uniform (i.e., with varying spacing between the knots).
A weight is associated with each control point and expresses its ability to attract the curve. With some exceptions, weights are positive numbers. When the control points of a curve all have the same weight (usually 1), the curve is called "non-rational" and the NURBS curve reduces to a B-spline curve; otherwise, it is called "rational". For this reason, the letter R in the acronym NURBS stands for "rational" and indicates that a NURBS curve can be rational.
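The recursive definition above translates almost directly into code. The sketch below evaluates a point on a NURBS curve via the Cox-de Boor recursion; the degree-2 curve with four control points is an arbitrary illustrative example, and the evaluation is restricted to the half-open interval [0, 1) because of the half-open test in the degree-0 case:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Normalized B-spline basis N_{i,p}(u) via the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    den1 = knots[i + p] - knots[i]
    den2 = knots[i + p + 1] - knots[i + 1]
    if den1 > 0:
        left = (u - knots[i]) / den1 * bspline_basis(i, p - 1, u, knots)
    if den2 > 0:
        right = (knots[i + p + 1] - u) / den2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    """Point on a NURBS curve: rational (weighted) combination of control points."""
    basis = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wb = basis * weights
    return (wb[:, None] * ctrl).sum(axis=0) / wb.sum()

# Degree-2 curve, 4 control points, clamped non-uniform knot vector (7 knots)
ctrl = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], float)
weights = np.array([1.0, 1.0, 2.0, 1.0])   # weight > 1 pulls the curve inward
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], float)
print(nurbs_point(0.5, ctrl, weights, knots, p=2))
```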
Method
The creation of surfaces suitable for modelling objects or structures from a 3D "dense point cloud" (obtained through geomatics surveys) can take place in different ways. Several pipelines were examined [1,19]; the most efficient of these (in terms of linearity of the method, accuracy and processing times) can be schematized as in Figure 1. If model generation took place in Revit, it would require the generation of families matching the geometric characteristics of the object. Generation of the same model in Rhinoceros is "semi-automatic" because it requires the adaptation of any complex surface to the point cloud, a task that can be carried out using the different plug-ins within the Rhinoceros software. The processing times for model generation in Revit are considerably longer than those required in Rhinoceros, primarily because complex surfaces do not always have adaptive counterparts in BIM, whereas in Rhinoceros surfaces can be generated to fit the point cloud. As shown in the pipeline (Figure 1), the first step after performing the geomatics surveys is to import the point cloud into the Rhinoceros software. Through the Arena4D plug-in, implemented in Rhinoceros, it is possible to manage the point cloud optimally: the plug-in provides a series of filters on the point cloud, such as the elimination of outliers.
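Arena4D's filtering tools are proprietary, but the kind of outlier elimination mentioned here can be sketched with a standard statistical filter: points whose mean distance to their k nearest neighbours is far above the cloud-wide average are discarded. A minimal version, assuming an N x 3 NumPy array of coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours is far above the cloud-wide average."""
    tree = cKDTree(points)
    # query k+1 neighbours because the closest neighbour is the point itself
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

cloud = np.random.rand(5000, 3)     # stand-in for a survey point cloud
print(remove_outliers(cloud).shape)
```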
In Rhinoceros software, it was also possible to create detailed profiles in the specific part of the structure and, consequently, to build complex and irregular shapes according to NURBS-type geometries. In this way, it is possible to differentiate the several elements of a structure, such as that of a bridge (geometry of the pylons, vaults, retaining walls, etc.). The characterization of each structural element allows each of them to be assigned a specific material.
If the structure shows irregular geometries, it is possible to use an additional plug-in developed in Rhinoceros, called "EvoluteTools PRO", which is able to generate highly complex and sophisticated NURBS surfaces.
Subsequently, the surfaces can be imported into the software of HBIM or structural analysis. In the latter case, NURBS surfaces cannot be imported directly into the software, but it is necessary to build solids. As a result, each NURBS surface can be transformed into a solid through modelling in Rhinoceros. Once solid geometric objects are exported into Midas GTS NX software, the structural mesh can be built.
The transformation from NURBS into solids is performed through solid generation commands such as "offset surface", "loft", "revolution" and "extrusion", together with Boolean operations. Obviously, this phase requires knowing the thicknesses of the structural elements, which were detected and identified through the use of multi-sections on the structure. Consequently, structural objects can be constrained and subjected to (permanent and accidental) loads; in this way, it is possible to analyse the stresses and deformations of the structure under consideration. Depending on the structure under investigation, it is possible to use the Grasshopper plug-in, implemented within the Rhinoceros software. This plug-in overcomes the problem of the repeatability of similar objects, i.e., the case-by-case "parameterization" of structural elements with similar geometric characteristics. Programming in Grasshopper starts from the insertion of the (surveyed) surfaces generated from the point cloud and adapted in Rhinoceros, which then allows the geometric parameterization. The latter allows any geometric parameter of the object (length, height, thickness, etc.) to be defined. These geometric elements can be modified and managed according to their use in space and time (duplication of the object, comparison with temporal deformations, cracking) using commands such as "number slider" or "Nurbs Curve" (insertable and manageable within the "canvas"). A further advantage of using the Grasshopper plug-in is the possibility of parameterizing any type of surface. This is particularly useful in 4D monitoring, since the parameterized model can be updated according to the deformations detected at different epochs. The different structural elements generated in this way can then be imported into HBIM software or used in structural analysis, as previously described.
Furthermore, all plug-ins and software used in this paper require (commercial) user licenses and support interchange formats. In the Revit environment, the processing requires more manual intervention on the part of the operator than the semi-automatic workflow provided by Rhinoceros. Moreover, if the goal of the process is the complete geometric parameterization of the object (up to foreseeing temporal modifications or similarities between objects), more programming knowledge (Grasshopper) and, consequently, greater manual intervention on the part of the operator are necessary.
San Cono Bridge
San Cono bridge spans the Bianco river located in the municipality of Buccino, in southern Italy (Figure 3a,b). As reported by the inscription on the bridge, the construction of San Cono bridge can be dated to the Augustan age (Figure 3c).
Originally, the bridge had a pronounced donkey-back profile with two abutments, a steep slope at the ends and a pylon with a triangular rostrum [20]. The current shape of the bridge is incorporated into a new bridge which, in 1872, levelled the road and widened the deck (taking it from 3.20 m to 6.45 m), covering the old structure so as to leave only the original arches visible below the new ones. In this way, the intervention represented an exceptional example of respect for the ancient monument. As for the bridge architecture, it has two spans of unequal width, for a total length of 40 m. Part of the ancient arches can still be seen below the nineteenth-century one, which changes their profile. The central round arch has a clear span of 17.3 m and, at its base, five projecting brackets with three more at a higher level to complete the support of the rib; the minor arch has a clear span of 5.9 m with three brackets.
The original facings of the tympana were in squared stonework; today they are set within the new 19th-century facings, with an upper parapet that modifies the original donkey-back profile [20] (Figure 3d).
Three-Dimensional Survey of the Church
The survey of the church was carried out through the use and integration of active and passive sensors, both terrestrial and aerial. In particular, the external façades were surveyed using a TLS, the interior using a digital single-lens reflex (DSLR) camera with a fisheye lens, and the upper part of the building (i.e., the roof and other architectural elements not visible from a terrestrial survey) using a camera mounted on a UAV platform.
Before performing the photogrammetric and laser scanner surveys, a survey with a total station was performed using a TS30 by Leica Geosystems. This total station acquires discrete points with an angular precision of 0.5" (0.15 mgon) and measures distances with a prism (precision of 0.6 mm + 1 ppm) and without a prism (2 mm + 2 ppm).
In this case study, the survey was carried out from two base stations. In this way, it was possible to obtain horizontal and vertical angular observations of the ground control points (GCPs). The GCPs, inside and outside the building, were chosen so as to be easily recognizable in the images (Figure 4). The post-processing of the data was carried out in LGO (Leica Geo Office), developed by Leica Geosystems.
Survey of the Terrestrial Laser Scanner of the External Part of the Structure
Regarding the generation of the model of the external part of the church, the survey was carried out with a terrestrial laser scanner. In this case study, the FARO FocusS 350 instrument was used because it is specially designed for outdoor applications. HDR imaging and HD photo resolution (overlay up to 165-megapixel colour) ensure true-to-detail scan results with high data quality (distance accuracy up to ±1 mm). The main features of this scanner are summarized in Table 1. In order to cover the entire external surface of the church, three scan stations were set up. The post-processing of the TLS scans was performed in Autodesk Recap software. This software, whose name stands for Reality Capture, allows fully automatic registration of the scans. If the procedure is only partially successful, the software allows manual identification of targets and natural homologous points to reduce the distance among contiguous scans, improving their alignment using the iterative closest point (ICP) algorithm [21].
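Recap's registration internals are not public, but the ICP step it relies on can be illustrated. Below is a minimal point-to-point ICP sketch that solves each rigid-alignment step with the SVD-based Kabsch method; the convergence tests and outlier rejection of production implementations are omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Point-to-point ICP: iteratively match nearest neighbours and solve
    the best rigid transform (rotation R, translation t) via SVD/Kabsch."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                       # fixed reference scan
    for _ in range(iters):
        _, idx = tree.query(src)              # nearest-neighbour matches
        matched = dst[idx]
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d) # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                   # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```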
Unmanned Aerial Vehicle (UAV) Photogrammetry to Obtain the 3D Point Cloud of the Upper Part of the Church
The aerial survey was carried out using a Parrot Anafi, a UAS (unmanned aerial system) quadcopter equipped with a Sony® 1/2.4" 21 MP (5344 × 4016) CMOS (complementary metal-oxide semiconductor) sensor, which, thanks also to a 3-axis stabilizer, produces clear and detailed images (Figure 5a). The distance between the UAV and the building was kept very short due to the many obstacles in the old town where the church is located; consequently, the images were acquired with high geometric resolution (Figure 5b). The photogrammetric survey was carried out with a high degree of overlap between the images and, by varying the tilt angle of the camera, it was possible to acquire images of every part of the building. In this way, it was possible to build a network of 97 images with a high degree of overlap and a convergent image configuration (Figure 5c). Taking into account 5 GCPs, the root mean square error (RMSE) of the spatial coordinates, evaluated on the cameras used in this dataset, was 0.009 m; this RMSE refers to the georeferencing of the images and not to the resolution of the model. In addition, the roof was modelled taking into account its self-weight and accidental loads.
Photogrammetry of the Internal Part of the Structure Using a Fisheye Lens
For the interior of the church, since there are also frescoes of great historical and cultural value and considering the rather restricted environment, a photogrammetric survey was carried out using a Nikon D5000 DSLR camera with a calibrated fisheye lens (focal length 10 mm). The fisheye is a wide-angle photographic lens that allows a wide scene to be observed. This type of lens has been used successfully in the photogrammetry field, as shown in Kannala and Brandt, 2006 [22], especially in narrow spaces. In self-calibration mode, the dataset of the 22 images was processed in Agisoft Metashape software. The total error, i.e., standard deviation evaluated on 6 GCPs, was 0.003 m.
Considering the high value of the frescoes and the architecture of the small altars inside the structure, orthophotos of each façade and of the floor were produced. To carry out this task, it was necessary to build a mesh of the interior of the structure. Subsequently, by identifying the plane of each façade, orthophotos of the interior of the church were built with a geometric resolution of 0.1 mm (Figure 6).
Merging of the Datasets (Point Clouds)
Through the survey activity and the post-processing of the data obtained with both IBM and RBM methods, it was possible to obtain three datasets, as shown in Table 2. The point clouds were merged into a single point cloud on the basis of common points. This task was carried out in the 3DF Zephyr environment, a commercial photogrammetry package developed and marketed by the Italian software house 3DFLOW. A representation of the whole structure as a point cloud is shown in Figure 7.
Three-Dimensional Point Cloud of San Cono Bridge
In order to build the 3D model of the bridge, the photogrammetric survey was divided into an aerial and a terrestrial part. Taking into account the scale of representation (SR) and the aim of the project, a ground sample distance (GSD) of 1 cm was chosen as the reference for the survey. The terrestrial survey covered the lower part of the bridge using a Canon EOS 100D DSLR camera (charge-coupled device (CCD) pixel size = 4.29 μm) with a focal length of 18 mm; a total of 400 terrestrial images were acquired. The aerial survey was carried out using a UAS Xiaomi Mi 4K, a multi-copter rotary wing weighing less than 1.5 kg with a declared maximum speed of 18 m per second (about 65 km/h), developed and produced by Flymi, a company of the Mi Ecosystem. The photogrammetric features of the camera mounted on the UAV platform were: CCD pixel size = 4.29 μm and focal length of 3.5 mm. The aerial survey was designed using Mission Planner, software developed by Oborne for the open-source APM autopilot project. The flight plan was designed with the following characteristics [23]: 80% longitudinal overlap (endlap) and 60% transversal overlap (sidelap). In addition, flight lines (FLs) inclined at 30° and 45° were designed in the direction longitudinal to the bridge in order to increase the rigidity of the aerial photogrammetric block and, at the same time, the redundancy of information with respect to the terrestrial survey. In total, 285 images were taken during the aerial survey.
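The flight design can be sanity-checked with the usual pinhole relation GSD = pixel size × flying height / focal length. Using the sensor values quoted above, the implied flying height for a 1 cm GSD is (illustrative arithmetic only, not a figure reported by the survey):

```python
# Flying height implied by the 1 cm target GSD, via GSD = p * H / f
pixel_size = 4.29e-6   # m (CCD pixel pitch of the UAV camera)
focal = 3.5e-3         # m (UAV camera focal length)
gsd = 0.01             # m (target ground sample distance)
H = gsd * focal / pixel_size
print(f"flying height ~= {H:.1f} m")   # ~8.2 m above the bridge
```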
The post-processing of terrestrial and aerial images was carried out using Agisoft Metashape software. In this case study, two separate chunks were built: one for the aerial (UAV) survey and one for the terrestrial survey. To evaluate the quality of the image matching (alignment step), the number of projections and the error achieved on each chunk were taken into account. Table 3 shows the high quality of the image matching and, consequently, the correctness of the acquisition phase for both the aerial and terrestrial surveys. Following the photogrammetric pipeline, a dense point cloud was built for each dataset. To obtain the model of the bridge under investigation, the two datasets were then integrated on the basis of common points; the final 3D point cloud consisted of approximately 8 million points (Figure 8). Subsequently, the model was scaled using 12 ground control points obtained through a traditional topographic survey.
Three-Dimensional Reconstruction of the Models
The point cloud obtained from the geomatics surveys must be segmented into the objects of which the structure under examination consists. The processes needed to perform this task must take into consideration several parameters, such as noise, occlusions, the association between faces of neighbouring objects, etc. We carried out this task in Rhinoceros because it offers more tools and plug-ins for 3D modelling. The key point of this software is the possibility of generating a profile of the structure and, especially, of building a surface that can be adapted to the point cloud obtained from the geomatics surveys. Once the point cloud was imported into Rhinoceros, it could be re-analysed using the Arena4D plug-in. In this way, the density of the points was decreased and, consequently, it was possible to assess whether there were any holes in the 3D model. Within Rhinoceros, the tools available to users are quality, point size and the visual analysis tools (render, ratio, opacity), which allowed the point cloud of the structure to be edited. Subsequently, the point cloud was sectioned along several planes in space. This operation allowed sections to be taken at strategic points of the structure, such as the arches of the bridge (Figure 9). The plug-in allowed the sections to be saved in a specific layer; as a result, the sections were displayed as "construction plans" (see Figure 10). Sections transverse and longitudinal to the structure were used to create NURBS. Using the EvoluteTools PRO plug-in, it was possible to generate NURBS surfaces (Figure 11a). This plug-in shapes NURBS surfaces on the objects of the structure, exploiting both the sections and the point cloud through an appropriate algorithm developed within the plug-in. For example, the bridge pillar was modelled using an adaptive NURBS (Figure 11b). Of course, the time required for this clustering task was related to the complexity of the structure. In this way, it was possible to create surfaces that represent the elements of the structure (vault, pier, retaining walls and superstructure of the bridge), as shown in Figure 12a. Using the same procedure described for the masonry bridge, it was possible to build a 3D model of the San Nicola in Montedoro church as well (Figure 12b). Lastly, thanks to the Grasshopper plug-in, it was possible to model similar structural elements (or parts of them) in 3D and to parameterize them both geometrically and in terms of the type of material. For example, Figure 13 shows the parameterization of the arch of the bridge using the tools developed in Grasshopper.
Building Information Modelling (BIM)
Many commercial BIM software products are available on the market. One of the most efficient is Autodesk Revit. The original software was developed by Charles River Software, founded in 1997, renamed Revit Technology Corporation in 2000, and acquired by Autodesk in 2002. Autodesk Revit allows users to design a building or structure and its components in 3D, annotate the model with 2D drafting elements and access building information from the building model's database. Modelling in the BIM environment for the two case studies was carried out using Autodesk Revit software.
In both case studies, the resulting mesh surface obtained in Rhinoceros in 3D ACIS Modeler (ACIS) format (*.sat) was imported into the BIM Revit software. In this way, the surface created can be quickly opened by the BIM software and easily manipulated with rotations and translations. The high detail of the polysurface allowed the precise determination of the levels for the creation of BIM objects. Screenshots of the modelling and of the management of the information in Revit, for both the masonry bridge and the church, are shown below (Figure 14a,b).
Structural Analysis
The 3D model obtained in Rhinoceros was used in structure analysis software based on the FEM method. The finite element method is the most widely used method for solving problems of engineering and mathematical models, such as structural analysis, heat transfer, fluid flow, etc.
In this paper, an FEM model was used for structural analysis. In particular, Midas GTS NX software, developed by MIDAS Information Technology Co, was used for the several structural analyses. Midas GTS NX is a comprehensive finite element analysis software package that is equipped to handle the entire range of structural design applications.
The procedure for generating structural information from the 3D model is quite simple. Once the surface is imported into the Midas GTS NX structural software, structural meshes can be created. Subsequently, the external and internal constraint conditions, the self-weight of the structural elements, and the accidental loads are assigned to the structure.
As for the materials, customized information for each of them can be assigned within Rhinoceros through the "VisualARQ" plug-in. Object styles with such customization can be exported in IFC (Industry Foundation Classes) format to Revit. These objects, recognized in Revit according to their style and custom information (material properties, costs per unit and custom metric information associated with any object in the model), are further enriched through the Revit libraries with the material characteristics needed for volumetric, thermal and maintenance computations. In order to use advanced structural constitutive relations, it is necessary to use FEM calculation software.
The object created in Rhinoceros was imported into Midas GTS NX in "step" and "parasolid" formats. The imported object is congruent and all its structural parts are correctly connected; however, it represents a single solid of a single material. Through specific Boolean operations, such as "divide solid", it is possible to divide and auto-connect the different surfaces, and each of the structural parts thus generated can be given the appropriate structural material. The materials are characterized by the appropriate constitutive relations (Mohr-Coulomb, Drucker-Prager, Von Mises); the elastic modulus, friction angle, Poisson coefficient, etc. are specified in the software. Once the correct materials are assigned, and given the congruence of the structural elements, linear and non-linear seismic analyses can be performed.
For example, Figure 15 shows a view of the results in terms of deformation of the San Nicola in Montedoro church. The same approach, with a different treatment of the loads and constraints, was used for the masonry bridge; specifically, the Mohr-Coulomb constitutive relation was used to assign the materials to the masonry bridge. This constitutive relation allows for linear and non-linear seismic analyses, carried out within the structural software (see Figure 16). As a result, it was possible to analyse the deformation state of the masonry bridge. It is, however, necessary to clarify that the analyses performed on the structures represent a test of the correctness of the structural model within the software. To define a deformation model closer to reality, it would be necessary to consider further investigations of the dynamic effects, the geotechnical-geological characteristics of the soil, the hydraulic effects (in the case of the bridge), etc. These aspects, however, go beyond the scope of this paper, whose goal was to identify a procedure suitable for moving from a 3D point cloud representation (obtained through a geomatics survey) to a 3D model manageable in BIM and FEM environments.
Conclusions
This paper reports an effective procedure to obtain 3D models for HBIM and FEM environments. Using the procedure described herein, it was also possible to model structures (as shown in the case study of the masonry bridge) that were partly covered by thick vegetation. However, the procedure required several manual steps and the use of multiple software packages: at present, no single software package allows this process to be tackled directly, from the geomatics survey to modelling and the subsequent transformation into an object usable in BIM or FEM.
In the construction of 3D models, a key role is played by the geomatics survey. In fact, the higher the quality with which a model is built (in terms of precision and structural detail), the more suitable it will be for implementation within BIM and FEM software.
Lastly, parametric modelling with the Grasshopper tool (implemented in the Rhinoceros software) allowed us to efficiently parameterize the elements of the analysed structures. A further potential of this tool is the possible updating of the static condition of the structure: Grasshopper allows suitable models to be built for structural verification over time, i.e., in 4D. In addition, this tool allows surfaces to be created that represent existing structures; therefore, once a model is obtained, it is possible to design structural reinforcements to be applied to the structure.
"Computer Science"
] |
Improved POLSAR Image Classification by the Use of Multi-Feature Combination
Polarimetric SAR (POLSAR) provides a rich set of information about objects on land surfaces. However, not all of this information is useful for land surface classification. This study proposes a new, integrated algorithm for optimal urban classification using POLSAR data. Both polarimetric decomposition and time-frequency (TF) decomposition were used to mine the hidden information of objects in POLSAR data, which was then applied in the C5.0 decision tree algorithm for optimal feature selection and classification. Using a NASA/JPL AIRSAR POLSAR scene as an example, the overall accuracy and kappa coefficient of the proposed method reached 91.17% and 0.90 in the L-band, much higher than those achieved by the commonly applied Wishart supervised classification (45.65% and 0.41). Meanwhile, the overall accuracy of the proposed method was also high in both the C- and P-bands. Both polarimetric decomposition and TF decomposition proved useful in the process. TF information played a major role in delineating urban/built-up areas from vegetation. Three polarimetric features (entropy, Shannon entropy, and the T11 coherency matrix element) and one TF feature (HH coherence intensity) were found most helpful in urban area classification. This study indicates that the integrated use of polarimetric decomposition and TF decomposition of POLSAR data may provide improved feature extraction in heterogeneous urban areas.
Introduction
Terrain and land-use classification is an important component of synthetic aperture radar (SAR) image application. SAR data in early years were often collected at a single frequency and a pre-determined polarization (H or V), which precluded the separation and mapping of terrain classes due to the limited information obtained by these systems [1]. Polarimetric SAR (POLSAR) transmits and receives fully polarized radar signals, containing more information on land surfaces than conventional single- or dual-polarization SAR systems [2]. Past studies report that terrain surfaces can be classified more accurately from POLSAR data [3][4][5][6]. POLSAR image classification has become an important research topic since POLSAR images from ENVISAT ASAR, ALOS PALSAR, TerraSAR-X, COSMO-SkyMed and RADARSAT-2 became publicly available.
A group of methods have been proposed for classifying POLSAR imagery, which can be divided into three schemes. The first classification scheme is based on polarimetric decomposition theory [2]. The decomposed polarimetric parameters are related to physical properties of natural media and thus help in identifying terrain classes. Example classifiers in this scheme include the Entropy/Anisotropy/Alpha [7], Freeman 3-component decomposition [8], and Yamaguchi 4-component decomposition [9]. The second classification scheme incorporates statistical data such as the polarimetric covariance matrix and the distance between an unknown pixel and a clustering center in feature space [10,11]. These statistical measures have been commonly applied in regular supervised or unsupervised (e.g., ISODATA) classification. The third classification scheme adopts the so-called integrated approach, which combines the abovementioned polarimetric decomposition and statistical classification. A representative example is the Entropy/Alpha-Wishart classifier [12]. In this approach, the polarimetric data are first initialized by the entropy/alpha decomposition, and the maximum likelihood classification is applied to extract the best-fit complex Wishart distribution [13] of the training samples. Besides the polarimetric decomposition information, this classification scheme can be improved by introducing additional features such as polarimetric interferometric SAR (PolInSAR) [14] and multi-polarization textural information [15][16][17].
Classifiers can be broadly divided into two categories: statistical clustering [18] and machine learning [19]. A well-recognized example of a statistical classifier is the complex Wishart classifier [11], a pixel-based maximum likelihood classifier based on a complex Wishart distribution of the polarimetric coherency matrix [20]. It requires that the distribution of ground features follow a normal probability distribution function. The complex distribution of ground features, especially for those in high-resolution POLSAR data, often violates this premise and leads to poor classification results [21]. Example machine learning classifiers include support vector machine (SVM), the C5.0 decision tree algorithm, neural network algorithms and ensemble learning methods [19,22], each with distinctive characteristics. Among these, however, the most effective method for classifying POLSAR data is not clear. Another concern in POLSAR image classification is feature selection. Whether using statistical clustering or machine learning, feature selection is a critical issue. Numerous features can be extracted from POLSAR data, some of which have been widely applied, such as radiometric information and full-polarization decomposition features. Recently, new polarimetric features such as time-frequency (TF) decomposition [23] have been extracted but have yet to be applied in classification. Whether these newly-identified features are useful in classifying POLSAR data is uncertain.
In this study, we explored various processes of feature and classifier selection and proposed a new method for classifying POLSAR data by integrating polarimetric decomposition and TF decomposition. By evaluating the input features, the C5.0 decision tree algorithm [24] efficiently selects the most important features and determines the splits for final tree construction. The effectiveness and stability of these algorithms were demonstrated in experiments on an example C-, L- and P-band NASA/JPL AIRSAR dataset.
Study Site and Dataset
The study area is located in San Francisco, CA, USA. As shown in the Pauli color-coded L-band polarimetric image (Figure 1), it covers both natural targets and urban areas with differently oriented buildings. Common ground covers include sea surfaces, forests, buildings, grass fields, bare grounds, parking lots, and sand surfaces. In the Pauli color-coded scheme, red, green and blue encode |HH - VV|, |HV|, and |HH + VV|, respectively. In this composition, predominantly surface-scattering objects have bluish tones, double-bounce reflections appear in red and volume scatterers in green. The POLSAR data were the Airborne Synthetic Aperture Radar (AIRSAR) fully polarimetric C-, L-, and P-band images downloaded from the NASA Jet Propulsion Laboratory (JPL) [25]. The images were acquired on 15 July 1994. The look angle ranges from 21.5° at near range to 71.4° at far range. The ground spatial resolution is about 6.6 m in the range direction and 9.3 m in the azimuth direction. Before image analysis, this POLSAR dataset was filtered using the 5 × 5 refined Lee POLSAR speckle filter [26]. It effectively preserves polarimetric information and retains subtle details while reducing the speckle effect in homogeneous areas.
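The refined Lee filter applied here aligns its window with local edges; a simplified, boxcar-window Lee filter conveys the core idea (the adaptive gain shrinks towards the local mean where the signal is homogeneous). The sketch below assumes an intensity image and a nominal number of looks, and the gamma-distributed test image is synthetic:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, looks=4):
    """Simplified (boxcar) Lee speckle filter on an intensity image.
    The refined Lee filter used in the study adds edge-aligned windows."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    var = mean_sq - mean ** 2
    noise_var = (mean ** 2) / looks                 # multiplicative speckle model
    k = var / np.maximum(var + noise_var, 1e-12)    # adaptive gain in [0, 1]
    return mean + k * (img - mean)

span = np.random.gamma(shape=4, scale=0.25, size=(512, 512))  # synthetic speckle
filtered = lee_filter(span)
```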
A set of 12 classes was selected to represent land covers in the image: ocean at far range (FO), ocean at near range (NO), ocean centralized between far and near range (MO), lake (LK), dense forest (DF), trees (TS), grass (GS), bare land (BL), road (RD), orthogonal building (OB), non-orthogonal building (NB) and shadow (SD). Ocean surfaces were divided into far, central and near ocean areas according to their locations along the range direction, because radar backscattering from ocean surfaces is affected by incidence angles. In addition, the classification accuracy of buildings is affected by the orientation of the building relative to the radar line of sight; thus, buildings were divided into orthogonal and non-orthogonal classes.
By visually interpreting these polarimetric data and referring to Google Earth images, we randomly extracted polygons of the 12 classes (31,929 pixels) across the study area. To illustrate the polygons clearly, the distribution of the samples is shown on the span image in Figure 2. These pixels were then randomly divided into training and validation samples (Table 1). These samples were used for the training and accuracy assessment of the POLSAR classification.
Methodology
This study developed a new classification approach integrating polarimetric information and time-frequency (TF) decomposition in a C5.0 decision tree classifier. The framework of the classification scheme is shown in Figure 3. The main steps are described below; details of each process are provided in the corresponding sub-sections.
Polarimetric Information
The greatest advantage of POLSAR data over conventional single- or multi-polarization SAR is its inclusion of polarimetric information on ground features. It therefore offers a powerful means of detecting objects based on their unique electromagnetic radiation characteristics and the scattering mechanisms captured in the image. The polarimetric decomposition technique is an effective method that divides a received radar signal into the scattering responses of several simpler objects. It simplifies the physical interpretation of objects, allowing the extraction of corresponding target types from POLSAR data.
A variety of polarimetric decomposition methods have been developed to extract polarimetric information. We explored the following: Barnes, Huynen, Holm, Cloude, Freeman two-component, Freeman three-component, Van Zyl three-component, Yamaguchi three-component, Yamaguchi four-component, Neumann two-component, Krogager, Touzi, and H/A/Alpha. Please refer to [2] for the detailed calculation and physical interpretation of these polarimetric parameters. Moreover, derived polarimetric features, such as the conformity coefficient [27], scattering predominance [28], scattering diversity [29], degree of purity [30], and depolarization index [31], were also extracted to promote an optimal classification. A total of 68 polarimetric features were obtained using PolSARPro_v4.2 (Table 2).
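As an illustration of one of these decompositions, the sketch below computes the Cloude-Pottier H/A/Alpha parameters from a single 3 × 3 coherency matrix; in practice, tools such as PolSARPro evaluate this per pixel over locally averaged matrices. This is a minimal sketch of the standard eigendecomposition definitions, not the exact implementation used in the paper.

```python
import numpy as np

def h_a_alpha(T3):
    """Cloude-Pottier H/A/Alpha from a (locally averaged) 3x3 coherency matrix."""
    eigvals, eigvecs = np.linalg.eigh(T3)             # ascending eigenvalues
    eigvals = np.clip(eigvals.real, 0.0, None)[::-1]  # descending, non-negative
    eigvecs = eigvecs[:, ::-1]
    p = eigvals / max(eigvals.sum(), 1e-12)           # pseudo-probabilities
    entropy = -sum(pi * np.log(pi) for pi in p if pi > 0) / np.log(3)
    anisotropy = (p[1] - p[2]) / max(p[1] + p[2], 1e-12)
    # Alpha angle of each mechanism from the first eigenvector component
    alphas = np.arccos(np.clip(np.abs(eigvecs[0, :]), 0.0, 1.0))
    mean_alpha = np.degrees(float(np.sum(p * alphas)))
    return entropy, anisotropy, mean_alpha
```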
Time-Frequency Decomposition
Through the TF technique, a POLSAR image can be decomposed into several sub-aperture images, each containing the scattering characteristics of a target viewed from a different azimuthal look angle [23]. One advantage of this technique is its full use of the "hidden" information in a single-shot POLSAR image: when repeat-pass PolInSAR data are unavailable, the TF technique can partially compensate for the missing interferometric information.
The TF analysis in the azimuth direction is introduced as follows. The radar observation at a single pixel is the result of an area observation over a certain range of angles limited by the azimuth antenna pattern [2]. TF decomposition in the azimuth direction yields a set of images containing different parts of the SAR Doppler spectrum at reduced resolution, each corresponding to a different azimuth look angle. These sub-aperture images can be used to detect objects with anisotropic behaviors, for example scatterers with complex geometrical structures [7].
The TF decomposition can also be performed in the range direction [32], decomposing the POLSAR image into a set of sub-band images with different observation frequencies, from which objects with frequency-sensitive responses, for example resonating spherical and periodic structures, can be detected [23]. Urban areas are composed of buildings with distinct structures and orientations, so the radar look direction is usually more important than these frequency effects in urban land classification. For this reason, we applied only the azimuthal TF decomposition, converting the POLSAR data into two sub-aperture images. The frequency-related TF decomposition in the range direction is not examined here; instead, the effect of frequency on building extraction is evaluated from the backscattering intensities of the C-, L- and P-band POLSAR images.
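A minimal sketch of the azimuthal sub-aperture decomposition is shown below: the azimuth (Doppler) spectrum of a complex single-look channel is split into contiguous sub-bands, each transformed back to form a reduced-resolution sub-look. Operational processors also handle antenna-pattern deweighting and Doppler centroid estimation, which are omitted here; the function name and parameters are illustrative.

```python
import numpy as np

def azimuth_subapertures(slc, n_sub=2, axis=0):
    """Split a complex SLC channel into n_sub azimuthal sub-aperture looks.

    slc : 2-D complex array (azimuth x range) for one polarimetric channel.
    Each sub-look keeps a contiguous part of the azimuth spectrum and thus
    corresponds to a different azimuth look angle at reduced resolution.
    """
    spec = np.fft.fftshift(np.fft.fft(slc, axis=axis), axes=axis)
    n = spec.shape[axis]
    bounds = np.linspace(0, n, n_sub + 1, dtype=int)
    subs = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = np.zeros(n)
        mask[lo:hi] = 1.0
        shape = [1, 1]
        shape[axis] = n                      # broadcast mask along azimuth
        masked = spec * mask.reshape(shape)
        subs.append(np.fft.ifft(np.fft.ifftshift(masked, axes=axis), axis=axis))
    return subs

# Usage: sub1, sub2 = azimuth_subapertures(hh_channel, n_sub=2)
```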
The polarimetric differences and interferometric information between the two sub-aperture images were also explored. Both sub-aperture images were processed with polarimetric decomposition, and the same set of decomposition components was extracted to compute their difference between the two images. Three common polarimetric decomposition methods, including Cloude-Pottier [33], were applied in this step. Common interferometric information includes complex interferogram intensity, coherence and phase diversity [35-37]; this information was extracted using the interferometry modules in RAT_v0.21 [38]. The 29 TF features extracted from the decomposition are listed in Table 3.
Table 3. Features obtained by sub-aperture analysis.

Interferometric info. (19):
- Intensity, amplitude and phase of complex interferograms on HH, HV, VV
- Intensity, amplitude and phase of coherence estimation on HH, HV, VV
- Phase diversity
C5.0 Decision Tree
The decision tree is a classification algorithm favored for its high speed, high accuracy, simple generation and applicability to large datasets. Because it does not require an assumed data distribution, the algorithm is widely used in data mining for complicated, non-linear mapping. Furthermore, it possesses an innate feature-selection ability [26,39,40]. Here we used the C5.0 decision tree [24] to construct the classification rules for POLSAR image classification. C5.0 evolved from the C4.5 decision tree, which in turn descends from the earlier ID3 system. Compared with C4.5, C5.0 can automatically winnow the attributes before a classifier is constructed, discarding those that appear to be only marginally relevant. Overall, the features of C5.0 are: (1) robustness to missing data and large numbers of input fields; (2) generation of intuitive rules, enhancing user understanding of the model; (3) fast operation and efficient memory use; and (4) boosting and cost-sensitive tree building to improve classification accuracy [23].
The 68 polarimetric features (Table 2) and the 29 TF parameters (Table 3) were combined into a multichannel image, forming a 97-element feature vector for each pixel. All features were initially compared in the C5.0 decision tree with the following process. First, the pruning severity and the minimum number of records per child branch in the C5.0 decision tree were set to 75% and 2, respectively. Then, the information gain ratios of the features [41] were calculated; the feature with the highest ratio was selected as the root node of the tree. The remaining features were hierarchically assigned to branches by recalculating the gain ratios and placing the highest-ratio feature at each branch node. The iteration continued until a pre-defined threshold was satisfied, and the tree was finally pruned to prevent overfitting. With this decision tree, the optimal features were determined and then used to perform the POLSAR classification.
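The sketch below illustrates this training-and-selection loop. Since the commercial C5.0 algorithm (gain-ratio splits, winnowing, boosting) is not available in common open-source libraries, an entropy-criterion CART from scikit-learn is used as an approximate stand-in, and the synthetic arrays are placeholders for the labeled pixels of Table 1.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: each pixel carries a 97-element vector
# (68 polarimetric + 29 TF features) and one of 12 class labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 97))
y_train = rng.integers(0, 12, size=5000)

# Entropy-criterion CART as an approximate stand-in for C5.0; the
# pruning-severity/min-records settings are mimicked only loosely.
tree = DecisionTreeClassifier(criterion="entropy",
                              min_samples_leaf=2,   # min records per child
                              ccp_alpha=1e-4,       # post-pruning strength
                              random_state=0).fit(X_train, y_train)

# Importance ranking plays the role of C5.0's predictor importance scores.
ranked = np.argsort(tree.feature_importances_)[::-1]
print("Top-10 feature indices:", ranked[:10])
```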
Comparison between the Proposed Method and the Wishart Supervised Classification
Classification results of the proposed method on the L-band image are shown in Figure 4a. The study area is a highly urbanized city (San Francisco, CA, USA). Urban structures, including buildings in different orientations and roads, are well identified, and green covers in urban lands (e.g., parks) are clearly delineated. Ocean surfaces also show clear tonal differences from far range to near range. As a comparison, the commonly applied Wishart supervised classification [11] was also performed on the L-band image. The Wishart result (Figure 4b) is noticeably more greenish than that of the proposed method, revealing an apparent overestimation of green covers; urban structures are correspondingly severely underestimated. The near ocean is misclassified as bare land (pink area in the upper right), while the far ocean is confused with lake and near ocean on the left and with grass near the bridge in the upper left corner. Comparing Figure 4a,b, our proposed method yields overall distributions of land surfaces that are more consistent with the original image.
Using the validation points in Table 1, the accuracies of the two classifications in Figure 4 were also compared with a confusion-matrix approach (Tables 4 and 5). The overall accuracy (OA) of the proposed method was 91.17%, much higher than that of the Wishart supervised classification (45.65%). The kappa value of the proposed method was 0.90, likewise much higher than the 0.41 of the Wishart supervised classification. Furthermore, the producer's (PA) and user's (UA) accuracies were higher than those of the Wishart supervised classification for all classes. For example, the UA and PA of bare land (BL) under the Wishart supervised classifier were 1.29% and 1.23%, respectively; as indicated by the confusion matrix, bare land was frequently confused with near ocean, grass and road. The proposed method greatly alleviated this confusion, improving the UA and PA to 91.22% and 84.88%, respectively. For non-orthogonal buildings (NB), the Wishart supervised classifier dramatically confused the class with dense forest (DF) and trees (TS), yielding a UA and PA of 41.95% and 41.92%, respectively; the proposed method largely remedied this confusion and increased the UA and PA to 82.34% and 88.89%. Similar results were obtained for the classifications with C- and P-band data. The results indicate a substantial improvement in urban land classification with the proposed method.
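For reference, the accuracy metrics quoted above can be computed from a confusion matrix as in the following sketch (standard definitions, assuming rows index the reference classes and columns the predictions).

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, kappa, producer's and user's accuracies from a confusion matrix.

    cm[i, j] = number of validation pixels of reference class i
               assigned to predicted class j.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    oa = diag.sum() / n                               # overall accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / (n * n)  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    pa = diag / cm.sum(axis=1)   # producer's accuracy, per reference class
    ua = diag / cm.sum(axis=0)   # user's accuracy, per predicted class
    return oa, kappa, pa, ua
```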
Contribution of Polarimetric and TF Features
The contribution of each feature type was assessed by performing the C5.0 decision tree classification with only one type of features (polarimetric or TF) at a time. The resulting overall accuracies and kappa values are compared with those of the all-feature classification proposed in this study (Table 6).
Classification with the full feature set reached the highest accuracies. Using only polarimetric features (POL-only), the overall accuracy for each band was about 3-5% lower than the full-feature classification, and the kappa coefficients also decreased. Using TF information alone (TF-only), the overall accuracies dropped dramatically: by approximately 14% in the C-band, 13% in the L-band and 17% in the P-band, with correspondingly large decreases in the kappa coefficients. Polarimetric features therefore contributed more to POLSAR image classification than TF features. To investigate the contribution of TF and polarimetric features to the accuracies of individual classes, their producer's (PA) and user's (UA) accuracies on the L-band image are listed in Table 7.
In comparison with the classification using full features, the PAs and UAs of the ground objects decreased when POL-only or TF-only information was used, indicating that both TF and polarimetric information are important in the proposed method. The POL-only method significantly reduced the PA and UA of dense forest (DF), trees (TS) and lake (LK) (>5%), indicating that TF information is required for accurately classifying these ground objects. The TF-only method also considerably decreased the PA and UA of ground objects; the decline for bare land and lake exceeded 20%. Therefore, polarimetric information is important for accurately classifying bare land, lake and central ocean areas. Figure 5 shows the results of the POL-only and TF-only classifications on the L-band data. In the absence of TF information (Figure 5a), more misclassification was observed than in the full-feature classification of Figure 4a; for example, near the bridge in the upper left corner, the far ocean was misclassified as bare land. In the absence of polarimetric information (Figure 5b), some green areas in urban lands were misclassified as buildings. Two subsets of the image (marked as the red and blue squares in Figure 5) were selected to show the effects of polarimetric and TF information in more detail. In these subsets, the original image and the three classification results are visually compared (Figure 6). As displayed in Figure 6a, the red-squared subset is a typical dense residential area with regularly oriented, densely packed buildings. Compared with the full-feature classification (Figure 6d), removing TF information (Figure 6b) resulted in buildings being misclassified as dense forest; the importance of TF information in delineating dense forest from non-orthogonal buildings has also been reported in previous studies [42]. According to Google Earth, the blue-squared subset is a newly developed commercial and light-industrial area with a mixed cover of buildings, parking lots and open spaces with dense road networks (e.g., highways) (Figure 6e). For road classification, the TF-only classification produces coarse clusters (Figure 6g), while the POL-only classification (Figure 6f) is noisy. It is the combination of TF and polarimetric features that produces the reasonable classification result in Figure 6h, consistent with the road-classification accuracies in Table 7.
Contribution of C5.0 Decision Tree Algorithm
To evaluate the contribution of the C5.0 decision tree algorithm to the proposed method, the algorithm was replaced by several alternative classifiers [19] on the L-band data: a neural network (NN), and SVMs with radial basis function (SVM-RBF) and polynomial (SVM-POLY) kernels [19]. The OA and kappa values of the classification results are listed in Table 8.
The table shows that the highest accuracies and kappa coefficients in each band were obtained by the proposed method, indicating that the C5.0 decision tree classifier is more effective than the other tested classifiers. Moreover, the Wishart supervised classifier yielded the lowest classification accuracy, while every classifier given the full multi-feature input achieved relatively high accuracy, revealing that accurate classification requires the integration of multiple features. Finally, regardless of classifier, the P-band data were classified with the lowest accuracy. This behavior may be caused by the long wavelength of the P-band: ground features in most urban areas are difficult to distinguish because of the complex scattering mechanisms of long-wavelength signals. The QUEST decision tree is designed to reduce the processing time required for large decision tree analyses; compared with QUEST, the rules of the C5.0 decision tree are more complex, but C5.0 allows repeated splits into more than two subgroups. SVMs are computationally expensive. Neural networks have a strong capacity for nonlinear fitting but cannot easily provide clear classification rules. The C5.0 decision tree performs better in feature-space optimization and feature selection, especially when the feature set is large [24].
Contribution of Multi-Frequency Dataset
Radar signals at different wavelengths exhibit different sensitivities to ground features [43,44], so combining multiple bands can be helpful for ground mapping. Here, POLSAR data at the three frequencies were combined and input to the C5.0 decision tree. The results of this test are shown in Figure 7 and Table 9. Compared with the single-band results, the simultaneous use of C-, L- and P-band data further reduces the number of confused pixels between classes. For example, misclassification is diminished near the bridge in Figure 7, and the distribution of vegetation and buildings is more comparable to the high-resolution imagery on Google Earth. In Table 9, combining any two bands markedly increased the accuracies compared with any single-frequency classification, and using all of the C-, L- and P-band data reached the highest OA (96.39%) and kappa coefficient (0.96). To examine the effects of single bands and band combinations on different ground objects more clearly, the PA and UA of typical classes are provided in Figure 8. As shown in Figure 8a, the PA of trees in the C-band was higher than in the L-band, while the PA of orthogonal buildings in the C-band was lower. Comparing the scattering mechanisms at different frequencies, the C-band return is primarily volume scattering from the vegetation canopy, whereas L-band scattering is stronger from the ground and from double bounce in urban areas; the L-band classification therefore distinguishes better among forest, trees and buildings. At higher frequencies, POLSAR data are less sensitive to azimuth slope variations because short-wavelength electromagnetic waves interact more strongly with small scatterers and penetrate less. This may explain the poorest performance of the P-band classification.
Classification with the multi-frequency dataset performed better than with single bands. For instance, using the combination of C- and L-band data, the PA of each class increased compared with either single band; the PA and UA of trees, grass, and non-orthogonal buildings were enhanced to a large degree. Because waves of different wavelengths are sensitive to different scatterers, methods that combine bands exploit this complementarity and improve classification precision. Overall, C- and L-band POLSAR data are the more suitable choices for single-band classification, and multi-band classification performs much better than any single band.
Stable Features in POLSAR Image Classification
When all POLSAR features are included, the proposed method reaches high classification accuracy. In practice, however, it is time-consuming and inefficient to collect such a large set of features from POLSAR imagery. With reduced feature sets, the complexity of the C5.0 decision tree can be effectively decreased and its applicability improved. To test the feasibility of feature reduction, all features (100%) used in the proposed method were sorted by their predictor importance (calculated by the C5.0 decision tree algorithm). The top-ranking 50%, 40%, 30%, 20% and 10% feature groups were then selected and classified with the C5.0 approach; the accuracies are compared in Table 10. For the images at all three frequencies, the overall accuracies were similar when using 100%, the top 50%, 40% or 30% of the features, and changed only slightly when the share dropped to 20%. When only 10% of the features were used, however, there was a relatively large decrease in accuracy. Therefore, the top-ranking 20% of features form a reasonable input set for classification. Table 11 lists the top 20% of features used in the proposed method for the C-, L- and P-bands in descending order of predictor importance. A different set of features was selected for each frequency, but four features were always present: three polarimetric features, namely the H/A/Alpha decomposition entropy, the Shannon entropy, and the T11 coherency-matrix element that describes single (odd-bounce) scattering from flat surfaces, and one TF feature, the intensity of the HH coherence. These four features are highlighted in bold in Table 11.
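The reduced-feature experiment can be expressed as in the sketch below: rank the features by the tree's importance scores, keep the top fraction, and retrain. This reuses the scikit-learn stand-in for C5.0 introduced earlier; `fraction` and the data shapes are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def retrain_with_top_fraction(X_tr, y_tr, X_va, y_va, fraction=0.2):
    """Rank features by tree importance, keep the top fraction, retrain.

    Returns the kept feature indices and the validation accuracy of the
    reduced model, mirroring the 50/40/30/20/10% experiment of Table 10.
    """
    base = DecisionTreeClassifier(criterion="entropy", random_state=0)
    base.fit(X_tr, y_tr)
    k = max(1, int(round(fraction * X_tr.shape[1])))
    keep = np.argsort(base.feature_importances_)[::-1][:k]
    reduced = DecisionTreeClassifier(criterion="entropy", random_state=0)
    reduced.fit(X_tr[:, keep], y_tr)
    return keep, reduced.score(X_va[:, keep], y_va)
```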
Using these four features as inputs, the accuracies of the proposed method and the Wishart supervised classification method are compared in Table 12.
For all frequencies, the overall accuracies of the proposed method were around 30% higher than those of the Wishart supervised method. For the C-band image, the accuracy was even higher than with the top 10% of features listed in Table 10. Interestingly, with only four features, classification of the C-band image reached its highest accuracy, while the L-band image produced its best results when more features were used (as shown in Table 10). The P-band image had the lowest accuracies for all feature combinations, which could be related to noise introduced by the more complex interaction between long-wavelength signals and heterogeneous urban surfaces. The four features in bold are the stable features present in the top 20% of features for each of the C-, L- and P-bands; "(TF)" marks a TF feature, the others being polarimetric features.
Discussion
The proposed method mines the information inherent in POLSAR images and achieves relatively high classification accuracies without support from other data. For example, repeat-pass interferometry improves the classification of ground features such as buildings [40], but a polarimetric interferometric dataset is difficult to obtain and incurs high costs. In the absence of a repeat-pass interferometric dataset, the proposed method obtains interferometric information between different sub-aperture images using the TF technique.
The benefits of the proposed method are revealed in several ways. First, the inputs are processed images, without the need for the complex pre-processing required for raw data. Second, the model adopts the well-established TF and polarimetric decomposition techniques and the C5.0 decision tree algorithm, which can be easily implemented and integrated. Third, the proposed method is compatible with different POLSAR features and classifiers, so the procedure is adaptable to new features or classifiers. For example, the QUEST algorithm [45] is less accurate than C5.0, but its tree depth can be controlled to reduce the complexity of the classification rules; C5.0 could therefore be replaced by QUEST if a simple decision tree is sufficient. Finally, the classical Wishart supervised classification assumes a Gaussian distribution of ground features. This assumption is suitable for natural environments with relatively homogeneous land covers but is not viable in urban areas, which is why the Wishart supervised classification yields low accuracy in the present study. In contrast, the proposed method is decision-tree-based, requires no hypothesized statistical distribution, and is applicable to various land covers. Unlike black-box algorithms such as neural networks, the proposed method is a white box: the classification rule in each branch reveals the ground objects associated with specific POLSAR features, so the method yields a clear physical explanation.
Among the rich set of POLSAR features, three polarimetric features (H/A/Alpha entropy, Shannon entropy, T11) and one TF feature (HH coherence intensity) were found to consistently hold high importance in urban classification of the test site. T11 represents single (odd-bounce) scattering from flat surfaces. Entropy measures the degree of randomness of the scattering process: entropy → 0 corresponds to a pure target, whereas entropy → 1 means the target is a distributed one. Shannon entropy [46] quantifies the disorder of random variables; for PolSAR data it is the sum of two contributions related to intensity and polarimetry, and it can thus determine what fraction of the disorder quantified by the entropy comes from intensity fluctuations and what fraction from depolarization and incoherence. Strongly fluctuating random variables have high Shannon entropy, while quasi-deterministic random variables have relatively low values. The intensity of the HH coherence is generated by the PolInSAR technique applied to the two sub-aperture images derived from the full-resolution POLSAR data. These features played different roles in urban classification. For example, TF information (HH coherence intensity) is very helpful in distinguishing dense forest from obliquely oriented buildings. Generally, buildings show the typical characteristics of double-bounce scattering, and dense forest those of volume scattering. However, some buildings have orientations not aligned with the azimuth direction or have complex structures, which can cause significant depolarization and produce high cross-polar levels that appear as volume scattering. Consequently, those buildings were classified as a volume class and then misinterpreted as dense forest (Figure 6b). In the two sub-aperture images, however, buildings, unlike dense forest, are high-coherence targets, so TF information can separate buildings from dense forest. The selection of POLSAR features is related to the physical properties of ground objects and their distributions; a better understanding of these features is therefore important in advancing POLSAR applications.
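The intensity/polarimetry split of the Shannon entropy can be made concrete as below, following the standard definition for a zero-mean circular complex Gaussian scattering vector with coherency matrix T3; this is a sketch of the textbook formulas, not the PolSARPro implementation, and numerical values depend on normalization conventions.

```python
import numpy as np

def shannon_entropy_pol(T3):
    """Shannon entropy of PolSAR data split into intensity and polarimetry.

    For a zero-mean circular complex Gaussian scattering vector with
    coherency matrix T3: SE = log(pi^3 e^3 |T3|) = SE_I + SE_P, where
    SE_I depends only on the span and SE_P on the degree of polarization.
    """
    span = np.trace(T3).real
    det = np.linalg.det(T3).real
    se_i = 3.0 * np.log(np.pi * np.e * span / 3.0)  # intensity contribution
    se_p = np.log(27.0 * det / span ** 3)           # polarimetric contribution
    return se_i + se_p, se_i, se_p
```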
As demonstrated in this study, the accuracy of POLSAR image classification also varies with the acquisition frequency. Notably, the C- and L-band data achieve higher accuracies than the P-band (Table 8). A possible reason is that the shorter wavelengths (C, L) capture more spatial detail than the longer P-band in a high-density urban area. Multi-frequency information is, however, strongly complementary: the long P-band wavelength supplies electromagnetic scattering information that is unobservable in the C- or L-band, but reveals less spatial detail. By combining the P-band data with the C- and L-band data, the electromagnetic and spatial details can be fully utilized to enhance the delineation of ground objects. Additionally, some studies have shown that other features, such as object-oriented spatial information, are also useful in POLSAR image classification [40]. More experiments will be conducted in the future to investigate the contribution of these new features to urban mapping.
Conclusions
This study integrates time-frequency information, polarimetric information and the C5.0 decision tree into a novel approach to POLSAR image classification in an urban area. The integrated results achieved an overall classification accuracy of around 90% on the C- and L-band data and 85% on the P-band data, much higher than the Wishart supervised classification. Polarimetric information better distinguished among bare land, lake and ocean, while TF information reduced the confusion between urban/built-up areas and vegetation. Four stable features (entropy, Shannon entropy, T11 and HH coherence intensity) were found to be more useful than other POLSAR features in urban classification. The approach provides a superior way of classifying urban areas from multi-band POLSAR imagery.
Figure 2. The distribution of the samples shown on the span image.
Figure 3. Flowchart of the classification method.
Figure 4. Classification results of the proposed method and the Wishart supervised method on L-band data: (a) proposed method; (b) Wishart supervised method.
Figure 7. Classification results of adding C- and P-band data to L-band data.
Figure 8. PA and UA histograms for the multi-frequency dataset.
Table 1. Number of pixels allocated to training and validation samples in image classification.
Table 4. Confusion matrix of the proposed method (L-band).
Table 5. Confusion matrix of the Wishart supervised classification (L-band).
Table 6. Accuracies of classification with full features (proposed), polarimetric features (POL-only) and TF features (TF-only) for the three images.
Table 7. PA and UA of the POL-only and TF-only methods on L-band.
Table 8. Classification accuracy of different classifiers.
Table 9. Accuracy of the multi-frequency dataset.
Table 10. Overall accuracies of classification with reduced features.
Table 11. Top 20% of features in the proposed method for C-, L- and P-band.
Table 12. Overall accuracy of the Wishart supervised method and the proposed method using only 4 features.
"Computer Science",
"Engineering",
"Environmental Science"
] |
50 Years of Zweifel Olefination: A Transition-Metal-Free Coupling
The Zweifel olefination is a powerful method for the stereoselective synthesis of alkenes. The reaction proceeds in the absence of a transition-metal catalyst, instead taking place by iodination of vinyl boronate complexes. Pioneering studies into this reaction were reported in 1967 and this short review summarizes developments in the field over the past 50 years. An account of how the Zweifel olefination was modified to enable the coupling of robust and air-stable boronic esters is presented, followed by a summary of current state-of-the-art developments in the field, including stereodivergent olefination and alkynylation. Finally, selected applications of the Zweifel olefination in target-oriented synthesis are reviewed.
1 Introduction
2.1 Zweifel Olefination of Vinyl Boranes
2.2 Zweifel Olefination of Vinyl Borinic Esters
2.3 Extension to Boronic Esters
3.1 Introduction of an Unsubstituted Vinyl Group
3.2 Coupling of α-Substituted Vinyl Partners
3.3 Syn Elimination
4 Zweifel Olefination in Natural Product Synthesis
5 Conclusions and Outlook
Introduction
The stereocontrolled synthesis of alkenes is a topic that has attracted a great deal of attention owing to the prevalence of this motif in natural products, pharmaceutical agents and materials. 1 Of the many olefination methods that exist, the Suzuki-Miyaura coupling represents a highly convergent method to assemble alkenes (Scheme 1, a). 2 However, although the coupling of vinyl halides with primary and sp² boronates takes place effectively, the coupling of secondary and tertiary (chiral) boronates remains problematic. 3 Furthermore, the high cost and toxicity of the palladium complexes required to catalyze these processes also detract from the appeal of this methodology. 4 The Zweifel olefination represents a powerful alternative to the Suzuki-Miyaura reaction, enabling the coupling of vinyl metals with enantioenriched secondary and tertiary boronic esters with complete enantiospecificity (Scheme 1, b). 5 The reaction is mediated by iodine and base and proceeds with no requirement for a transition-metal catalyst.
This process is based upon pioneering studies reported in 1967 by Zweifel and co-workers on the iodination of vinyl boranes. This short review summarizes the key contributions made over the last 50 years that have enabled this transformation to evolve into an efficient and atom-economical method for the coupling of boronic esters. Recent contributions to the field are described including the development of Grignard-based vinylation, stereodivergent olefination and alkynylation processes. Finally, selected examples of Zweifel olefination in target-oriented synthesis are reviewed to highlight the utility of this methodology.
Zweifel Olefination of Vinyl Boranes
In 1967, Zweifel and co-workers reported that vinyl boranes 1, obtained by hydroboration of the corresponding alkynes, could be treated with sodium hydroxide and iodine, resulting in the formation of alkene products 2 (Scheme 2). 6 Intriguingly, although the intermediate vinyl boranes were formed with high E-selectivity, after addition of iodine, Z-alkenes were produced. A reaction with a diastereomerically pure secondary borane afforded the coupled product, 2d, as a single anti diastereoisomer, indicating that the process proceeds with retention of configuration. 7 Mechanistically, this reaction is thought to proceed by activation of the π bond with iodine along with complexation of sodium hydroxide to form a zwitterionic iodonium intermediate 3. This species is poised to undergo a stereospecific 1,2-metalate rearrangement resulting in the formation of a β-iodoborinic acid 4. In the presence of sodium hydroxide, this intermediate then undergoes anti elimination to afford the resulting Z-alkene product. 8 Because vinyl borane intermediates could only be accessed by hydroboration of alkynes, the iodine-mediated Zweifel coupling was initially limited to the synthesis of Z-alkenes. 9 However, Zweifel and co-workers subsequently reported an elegant strategy for the complementary synthesis of E-alkenes (Scheme 3). 10 This transformation was achieved by reacting dialkyl vinyl borane 5 with cyanogen bromide under base-free conditions. Following stereospecific bromination, a borane-carbonitrile intermediate 8 was formed, a species that was sufficiently electrophilic to undergo syn elimination. A variety of boranes underwent this transformation, forming alkenes 6a-c in high yields and with very high levels of E-selectivity. Chiral non-racemic boranes could be transformed with complete stereospecificity.
Scheme 3 Synthesis of E-alkenes using cyanogen bromide
A related syn elimination process was reported by Levy and co-workers (Scheme 4). 11 In this case, a vinyl lithium reagent was prepared by lithium-halogen exchange and then combined with a symmetrical trialkylborane resulting in formation of boronate complex 9. Treatment of this intermediate with iodine resulted in stereospecific iodination to produce β-iodoborane 10. The enhanced electrophilicity of this species (compared to β-iodoborinic acids such as 4) enabled a syn elimination to occur, generating the corresponding trisubstituted alkene 11 with high levels of stereocontrol. Although the substrate scope of the process is wide, the method was limited to the use of symmetrical trialkyl boranes.
Brown and co-workers demonstrated that the Zweifel olefination can also be applied to the synthesis of alkynes (Scheme 5). 12 In this case, monosubstituted alkynes were deprotonated to form lithium acetylides, which were reacted with trialkylboranes to form alkynylboronate complexes 12. Addition of iodine triggered a 1,2-metallate rearrangement to generate β-iodoboranes 13, which spontaneously underwent elimination to form alkyne products. This process represents a convenient alternative to the alkylation of lithium acetylides with alkyl halides and has been successfully employed in total synthesis. 13
Zweifel Olefination of Vinyl Borinic Esters
The transformations described in the previous section suffer from an inherent limitation in that only one of the alkyl groups present in the borane starting materials is incorporated into the alkene product. This is particularly wasteful when the borane is challenging to access or expensive. One solution to this problem would be to employ a mixed borane in which one (or two) of the boron-bound groups demonstrates a low migratory aptitude (e.g., thexyl). 14 However, in practice, determining which group will migrate has proved to be non-trivial and highly substrate-dependent. For example, Zweifel and co-workers showed that treatment of divinylalkylborane 15 (obtained by double hydroboration of 1-hexyne with thexylborane) with iodine resulted in competitive migration of both the sp² and thexyl groups, leading to a mixture of the desired product 16 along with 17 (Scheme 6). 15 They overcame this problem by treating the intermediate divinylalkylborane 15 with trimethylamine oxide, resulting in selective oxidation of the B-C thexyl bond to afford borinic ester 18. Owing to the low migratory aptitude of an alkoxy ligand on boron, 16 addition of iodine and sodium hydroxide now led to selective formation of Z,E-diene 16. Although this allowed control over which group migrated, the method was limited to the synthesis of symmetrical dienes.
A more general approach to the iodination of vinyl borinic esters was later reported by Brown and co-workers (Scheme 7). 17 In this case, non-symmetrical vinyl borinic esters 20 were obtained by hydroboration of alkynes with alkylbromoboranes followed by methanolysis of the resulting bromoborane intermediates 19. Addition of sodium methoxide and iodine led to alkene products 21a-d in good yields and very high levels of Z-selectivity.
Extension to Boronic Esters
Although the use of borinic esters significantly expanded the potential of the Zweifel olefination, there were still significant problems with this approach, most notably the high air sensitivity of the borane starting materials. In contrast to boranes, boronic esters are air- and moisture-stable materials that can be readily prepared via a wide range of methods. 18 Evans and Matteson independently recognized the potential of boronic esters as substrates for Zweifel olefination, communicating their studies almost simultaneously. 19,20 Matteson's coupling process began with the synthesis of a vinyl boronate complex 23 by addition of an organolithium reagent to a vinyl boronic ester 22 (Scheme 8). 19 This intermediate was treated with iodine and sodium hydroxide, resulting in iodination followed by 1,2-metallate rearrangement to form a β-iodoboronic ester, which underwent anti elimination to form the corresponding Z-alkene. This reaction could be carried out with alkyl or aryl lithium reagents and the coupled products 24a and 24b were formed in moderate to good yields.
Scheme 8 Zweifel olefination of vinyl boronic esters
Evans and co-workers' strategy also began with formation of a vinyl boronate complex (Scheme 9). 20 In contrast to Matteson's approach, this intermediate was accessed by reacting E-vinyl lithium reagent 26a (prepared by lithium-halogen exchange) with secondary alkyl boronic ester 25. Treatment of the resulting vinyl boronate complex 27a with iodine and sodium methoxide resulted in formation of alkene 28a in 75 % yield (>96:4 Z/E). When a Z-vinyl lithium precursor 26b was employed, alkene 28b was obtained in 58 % yield with very high E-selectivity. The flexibility derived from the ability to form identical vinyl boronate complexes by either reacting a vinyl boronic ester with an organolithium or a vinyl lithium with a boronic ester is a particularly appealing feature of the Zweifel olefination.
Brown and co-workers subsequently extended this methodology to enable the synthesis of trisubstituted alkenes (Scheme 10). 17c,21 By reacting various trisubstituted vinyl boronic esters (29) with organolithium nucleophiles, a range of products was prepared in good to excellent yields. Notably, heteroaromatic groups could be introduced (in 31b) and alkyl Grignard reagents could be used in place of organolithium reagents (in 31d).
The methods shown in Schemes 8-10 represented a significant advance upon the early work on the Zweifel olefination of boranes and borinic esters. However, at the time the potential of the method was not fully realized owing to the paucity of methods available for the preparation of boronic esters. Consequently, only a handful of studies involving Zweifel olefination were published over the following three decades. 22 In recent years, the huge increase in methods available for the enantioselective synthesis of boronic esters has led to a renaissance in chemistry based upon the Zweifel olefination. Several new studies into the process have been reported along with elegant reports employing Zweifel olefination in total synthesis. These results are described in the following sections.
Introduction of an Unsubstituted Vinyl Group
The introduction of a vinyl group into a target molecule is commonly required in synthesis owing to the prevalence of this motif in natural products and as a valuable handle for further functionalization. The first report describing the introduction of an unsubstituted vinyl group by Zweifel olefination was published by Aggarwal and co-workers in their stereocontrolled synthesis of (+)-faranal (Scheme 11). 23 In this process, vinyl lithium was prepared in situ from tetravinyltin by tin-lithium exchange and was then reacted with enantioenriched secondary boronic ester 32. The resulting vinyl boronate complex was treated with iodine and sodium methoxide, thus promoting 1,2-metallate rearrangement and elimination affording alkene 33. This key intermediate was directly subjected to hydroboration and oxidation to provide alcohol 34 in 69 % yield with very high diastereoselectivity. Oxidation with PCC completed the synthesis of (+)-faranal in 76 % yield.
It was subsequently shown that the vinyl lithium approach could also be applied to the enantiospecific coupling of trialkyl tertiary boronic esters (Scheme 12, a) 24 and benzylic tertiary boronic esters 25 (Scheme 12, b). It is noteworthy that in these cases, despite the sterically congested nature of the boronic ester starting materials, the coupled products were obtained in excellent yields. The double vinylation of primary-tertiary 1,2-bis(boronic esters) has also been achieved using this approach (Scheme 12, c). 26 Using four equivalents of vinyl lithium, diene 37 was obtained in 77 % yield.
Vinylation under Zweifel conditions represents a powerful strategy for the synthesis of alkenes. However, the need to prepare vinyl lithium in situ from the corresponding toxic stannane or volatile vinyl bromide detracts from the appeal of the process. In contrast, stable THF solutions of vinylmagnesium chloride or bromide are commercially available. 27 Aggarwal and co-workers have studied the Zweifel olefination of tertiary boronic ester 38 with vinylmagnesium bromide in THF. 25 Monitoring the reaction by 11B NMR spectroscopy revealed that with one equivalent of vinylmagnesium bromide, the expected vinyl boronate complex 39 was not observed; instead a mixture of unreacted boronic ester 38 and trivinyl boronate complex 40 was formed (Scheme 13). The latter species originates from over-addition of vinylmagnesium bromide, promoted by the high Lewis acidity of the Mg 2+ counterion. Upon addition of an excess of vinylmagnesium bromide (4 eq.), trivinyl boronate complex 40 was obtained exclusively, and after addition of I 2 followed by NaOMe, the coupled product 41a was obtained in good yield. These conditions were successfully applied to the synthesis of a series of benzylic tertiary substrates 41a-d. The reaction is ineffective at forming very hindered alkenes such as 36, although this product could be synthesized efficiently with vinyl lithium.
Very recently, an improved procedure for coupling unhindered boronic esters with vinylmagnesium chloride has been reported (Scheme 14). 28 As with tertiary boronic esters, it was observed that addition of vinylmagnesium chloride to a THF solution of secondary boronic ester 42 resulted in over-addition to form trivinyl boronate complex 44. However, if the reaction was carried out in a 1:1 THF/DMSO mixture, 29 over-addition was completely suppressed and only the mono-vinyl boronate complex 43 was obtained. After addition of iodine and sodium methoxide, the coupled product 45a was obtained in 89 % yield. This process proceeds effectively with a range of primary, secondary and aromatic boronic esters. Notably, the use of the mild Grignard reagent allows chemoselective coupling to occur in the presence of reactive functional groups such as carbamates (in 45b) and ethyl esters (in 45d). Although good yields of product were obtained with unhindered tertiary boronic esters (in 45e), in general the Zweifel vinylation of tertiary boronic esters is best achieved either with four equivalents of vinylmagnesium halide in THF or with vinyl lithium.
In summary, there are currently three methods available to introduce an unsubstituted vinyl group by Zweifel olefination (Scheme 15). For aromatic, primary and unhindered secondary boronic esters, the desired boronate complex can be formed efficiently using 1.2 equivalents of vinylmagnesium halide in 1:1 THF/DMSO. For the majority of tertiary boronic esters it is recommended to employ four equivalents of vinylmagnesium halide in THF (to form the trivinyl boronate complex), although with extremely hindered tertiary boronic esters, the best results are obtained with vinyl lithium.
Coupling of α-Substituted Vinyl Partners
In addition to the synthesis of alkyl-substituted alkenes, the Zweifel olefination has also been applied to the coupling of vinyl partners α-substituted with a heteroatom. The coupling of lithiated ethyl vinyl ether 46 (readily prepared by deprotonation of ethoxyethene with tBuLi) with a tertiary boronic ester proceeded smoothly to provide enol ether 47, which was hydrolyzed under mild conditions to form 48 (Scheme 16, a). 25,30 This process represents a novel method for the conversion of boronic esters into ketones. This methodology has also been extended to the enantiospecific synthesis of vinyl sulfides (Scheme 16, b). 28 A related strategy for the alkynylation of boronic esters has recently been reported by Aggarwal and co-workers (Scheme 17). 31 In contrast to the successful alkynylation reactions of trialkyl boranes discussed previously (Scheme 5), 12,13 boronic esters undergo reversible boronate complex formation with lithium acetylides. This means that addition of electrophiles does not result in coupling, but instead leads to direct trapping of the acetylide and recovery of the boronic ester. A solution to this problem was developed in which vinyl bromides or carbamates were lithiated at the α-position with LDA and then reacted with boronic esters in a Zweifel olefination. Treatment of the resulting vinyl bromides or carbamates with base (TBAF for bromides and tBuLi or LDA for carbamates) triggered elimination to form the corresponding alkynes 50. Coupling of a wide range of secondary and tertiary boronic esters was achieved in excellent yields with complete enantiospecificity.
In 2014, an interesting intramolecular variant of the Zweifel olefination for the construction of four-membered ring products was reported (Scheme 18). 32 In this process, 51, which possesses both a boronic ester and a vinyl bromide, was treated with tert-butyllithium resulting in chemoselective lithium-halogen exchange followed by spontaneous cyclization to form cyclic vinyl boronate complex 52. Upon treatment with iodine and methanol this species underwent stereospecific ring contraction to provide β-iodoboronic ester 53. Elimination of this intermediate gave exocyclic alkene 54 in 63 % yield. It is particularly noteworthy that this challenging Zweifel olefination occurs in good yield despite the highly strained nature of the exomethylene cyclobutene product.
Syn Elimination
Aggarwal and co-workers have reported a method for the synthesis of allylsilanes through a lithiation-borylation-Zweifel olefination strategy (Scheme 19). 33 In this process, silaboronate 56 was homologated with configurationally stable lithium carbenoids 55 to provide α-silylboronic esters 57, which were then subjected to Zweifel olefination to obtain allylsilane products 58. Notably, it was necessary to carry out the Zweifel olefination without sodium methoxide owing to the instability of the allylsilane products under basic conditions. The substrate scope of the process was wide and a range of allylsilanes was prepared in high yields and with excellent levels of enantioselectivity. Interestingly, with a hindered α-silylboronic ester, E-crotylsilane 58d was obtained as a single geometrical isomer, but Z-crotylsilane 58c was formed with slightly reduced selectivity (95:5 Z/E).
To rationalize the reduced selectivity observed in the formation of Z-crotylsilane 58c, it was postulated that as the boronic ester becomes more hindered, the transition state for anti elimination becomes disfavored due to a steric clash between the bulky R¹ and R² substituents (Scheme 20). This allows the usually less favorable syn elimination pathway to compete, resulting in the formation of small amounts of the E-isomer.
Similar behavior has been observed in the Zweifel olefination of hindered secondary boronic esters with alkenyllithiums (Scheme 21). 34 As the boronic ester became more sterically encumbered (for example, benzylic or β-branched), increasing formation of the E-isomer was observed, up to 90:10 Z/E in the case of menthol-derived alkene 60c.
Scheme 21 Reduced Z/E selectivity with bulky boronic esters
In these cases, Aggarwal and co-workers have shown that iodine can be replaced with PhSeCl, resulting in the formation of β-selenoboronic esters (Scheme 22). 35 Because the selenide is a poorer leaving group than the corresponding iodide, treatment of these intermediates with sodium methoxide led exclusively to anti elimination, providing the coupled products 60a-c as a single Z-isomer in all cases. 34 It was also demonstrated that β-selenoboronic esters (obtained by selenation of vinyl boronate complexes) could be treated directly with m-CPBA, resulting in chemoselective oxidation of the selenide to give the corresponding selenoxide (Scheme 23, a), 34 whose subsequent syn elimination provides alkenes with high selectivity. In conjunction with the Zweifel olefination (or its PhSeCl-mediated analogue) this represents a stereodivergent method where either isomer of a coupled product can be obtained from a single isomer of vinyl bromide starting material (Scheme 23, b). The substrate scope of both processes is broad and a range of di- and trisubstituted alkenes was prepared, including 61c, which represents the C9-C17 fragment of the natural product discodermolide.
In some cases, the ability to carry out syn elimination of β-iodoboronic esters is also desirable. For example, very recently Aggarwal and co-workers reported a coupling of cyclic vinyl lithium reagents with boronic esters (Scheme 24). 28 In this case, the cyclic β-iodoboronic ester intermediates 63 cannot undergo bond rotation and therefore must undergo a challenging syn elimination. It was found that this elimination could be promoted by adding an excess of sodium methoxide (up to 20 eq.). Using this methodology, a range of five- and six-membered cycloalkene products 64 was prepared in high yields and with complete stereospecificity, including glycal 64b and abiraterone derivatives such as 64c.
Since the pioneering studies on Zweifel olefination reported by Evans and Matteson, the method has been significantly developed such that a wide range of functionalized alkene products can now be obtained. The final section of this short review showcases selected examples where Zweifel olefination has been used in complex molecule synthesis. 36
Zweifel Olefination in Natural Product Synthesis
Aggarwal and co-workers recently reported an 11-step total synthesis of the alkaloid (-)-stemaphylline employing a tandem lithiation-borylation-Zweifel olefination strategy (Scheme 25). 37 Pyrrolidine-derived boronic ester 65 was homologated with a lithium carbenoid to afford boronic ester 66 in 58 % yield and 96:4 d.r. A subsequent Zweifel olefination with vinyl lithium (synthesized in situ from tetravinyltin) gave alkene 67 in 71 % yield. Notably, these two steps could be combined into a one-pot operation, directly providing 67 in 70 % yield. The alkene was later employed in a ring-closing-metathesis-reduction sequence to form the core 5-7 ring system of (-)-stemaphylline.
A recent formal synthesis of the complex terpenoid natural product solanoeclepin A has been reported by Hiemstra and co-workers (Scheme 26). 38 A key step in this synthesis was the vinylation of the bridgehead tertiary boronic ester in 68. Formation of the trivinyl boronate complex with excess vinylmagnesium bromide in THF, followed by addition of iodine and sodium methoxide, produced alkene 69, which was employed without purification in a subsequent sequence of oxidative cleavage and Horner-Wadsworth-Emmons olefination to form 70 in a yield of 67 % over four steps.
Morken and Blaisdell have reported an elegant stereoselective synthesis of debromohamigeran E that employs a Zweifel coupling of an α-substituted vinyl lithium (Scheme 27). 39 Cyclopentyl boronic ester 72 was prepared from 1,2-bis(boronic ester) 71 in 42 % yield by a highly selective hydroxy-directed Suzuki-Miyaura coupling. This intermediate was then subjected to Zweifel coupling with isopropenyllithium (synthesized by Li-Br exchange) to form 73 in 93 % yield. Completion of the synthesis of debromohamigeran E required four further steps, including hydrogenation of the alkene to an isopropyl group.
A short enantioselective total synthesis of tatanan A was reported by Aggarwal and co-workers, which employs a stereospecific alkynylation reaction (Scheme 28). 40
76 in 97 % yield with complete diastereospecificity. This alkyne was converted into the trisubstituted alkene of tatanan A in two further steps.
A collaborative study on the synthesis of ladderane natural products was recently published by the groups of Boxer, Gonzalez-Martinez and Burns (Scheme 29). 41 A key intermediate in these studies was the unusual lipid tail [5]-ladderanoic acid. This compound was prepared from meso-alkene 77 by a sequence involving copper-catalyzed desymmetrizing hydroboration (95 % yield, 90 % ee) followed by Zweifel olefination with vinyl lithium reagent 79 (3:1 E/Z). It was found that carrying out the Zweifel olefination with N-bromosuccinimide rather than iodine was critical to achieving efficient coupling. Following silyl deprotection, the coupled product 80 was obtained in 88 % yield as an inconsequential mixture of Z/E isomers. Hydrogenation of the alkene followed by Jones oxidation of the primary alcohol completed the first catalytic enantioselective synthesis of [5]-ladderanoic acid. Negishi and co-workers have employed a Zweifel olefination in the synthesis of the side chain of (+)-scyphostatin (Scheme 30). 42 In this case, a boronate complex was formed between vinyl boronic ester 81 (prepared in 7 steps from allyl alcohol) and methyllithium. After addition of iodine and NaOH followed by silyl deprotection, trisubstituted alkene 82 was obtained in 76 % yield. The very high stereoselectivity obtained in this reaction (>98:2 E/Z) is particularly noteworthy and represents a significant improvement upon previous synthetic approaches toward this fragment.
Hoveyda and co-workers have employed a similar strategy to synthesize the antitumor agent herboxidiene (Scheme 31). 43 In this case, Z-vinyl boronic ester 83 was prepared as a single stereoisomer by a Cu-catalyzed borylation-allylic substitution reaction. Boronic ester 83 was then converted into trisubstituted alkene 84 in a Zweifel olefination with methyllithium. The resulting alkene was obtained as a single E-isomer in 70 % yield and could be converted into herboxidiene in five steps.
A stereocontrolled synthesis of (-)-filiformin has been reported by Aggarwal and co-workers involving an intramolecular Zweifel olefination (Scheme 32). 32 Intermediate 85 (synthesized in high stereoselectivity by lithiation-borylation) was converted into cyclic boronate complex 86 by in situ lithium-halogen exchange. Addition of iodine and methanol brought about the desired ring contraction to provide exocyclic alkene 87 in 97 % yield. Deprotection of the phenolic ether followed by acid-promoted cyclization and bromination completed the synthesis of (-)-filiformin.
Conclusions and Outlook
Fifty years have passed since the first report by Zweifel and co-workers on the iodine-mediated olefination of vinyl boranes. Since then, this process has evolved into a robust and practical method for the enantiospecific coupling of boronic esters with vinyl metals. Recent contributions have significantly expanded the generality of the process, enabling the efficient coupling of a wide range of different alkenyl partners and allowing increasingly precise control over the stereochemical outcome of the transformation. Rapid progress in enantioselective boronic ester synthesis combined with the extensive applications of chiral alkenes bode well for the continued development and application of the Zweifel olefination in synthesis.
Funding Information
We thank EPSRC (EP/I038071/1) and the European Research Council
"Chemistry"
] |
STUDY OF THE HUMAN BRAIN POTENTIALS VARIABILITY EFFECTS IN P300 BASED BRAIN–COMPUTER INTERFACE
The P300-based brain–computer interfaces (P300 BCI) allow the user to select commands by focusing on them. The technology involves electroencephalographic (EEG) representation of the event-related potentials (ERP) that arise in response to repetitive external stimulation. Conventional procedures for ERP extraction and analysis imply that identical stimuli produce identical responses. However, the floating onset of EEG reactions is a known neurophysiological phenomenon. A failure to account for this source of variability may considerably skew the output and undermine the overall accuracy of the interface. This study aimed to analyze the effects of ERP variability in EEG reactions in order to minimize their influence on P300 BCI command classification accuracy. Healthy subjects aged 21–22 years (n = 12) were presented with a modified P300 BCI matrix moving with specified parameters within the working area. The results strongly support the inherent significance of ERP variability in P300 BCI environments. The correction of peak latencies in single EEG reactions provided a 1.5–2-fold increase in ERP amplitude with a concomitant enhancement of classification accuracy (from 71–78% to 92–95%, p < 0.0005). These effects were particularly pronounced in attention-demanding tasks with the highest matrix velocities. The findings underscore the importance of accounting for ERP variability in advanced BCI systems.
Brain-computer interfaces (BCI) enable the use of executive devices without the mediation of peripheral nerves and muscles. The technology involves recording and transforming the electrical activity of the brain, most commonly by means of electroencephalography (EEG) [1]. The conventional scope of applications for BCI includes neurorehabilitation and the replacement of speech and locomotion output in patients with severe motor impairments [2]. Other applications of BCI include their use as accessory means of instrumental diagnostics, e.g. in autism [3] or anorexia nervosa [4], as well as in cognitive training devices [5].
BCI systems based on exposure to external stimuli and detection of event-related potentials (ERP) by EEG are considered the most efficient in terms of communication and control [6]. A pioneering interface for text typing, termed P300 BCI, was first published in 1988 [7]. The user is presented with a letter matrix and receives stimuli in the form of sequential highlighting of the letters. The mental response to the highlighting of target letters is accompanied by enhancement of certain ERP components, notably the P300 wave. Based on ERP analysis, the interface identifies the letter on which the user's attention is focused at the moment (the target stimulus) [8].
The general classification principle in BCI (subdivision of EEG reactions into target and non-target classes) rests on the fundamental technique of ERP extraction and analysis. The technique accumulates epochs corresponding to identical repetitive stimuli as the substrate for ERP extraction. Averaging these 'identical' epochs reveals a coherent ERP signal against the background noise, which is incoherent with the moment of stimulation [6].
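For readers unfamiliar with the averaging step, the following minimal Python sketch (not the authors' code; the array shape is an assumption for illustration) shows how a coherent ERP emerges from averaging stimulus-locked epochs:

```python
import numpy as np

def extract_erp(epochs: np.ndarray) -> np.ndarray:
    """Average stimulus-locked EEG epochs to extract an ERP.

    epochs: array of shape (n_epochs, n_samples), each row an EEG
    segment aligned to stimulus onset. Averaging reinforces activity
    that is phase-locked to the stimulus; background EEG, being
    incoherent with stimulus onset, cancels out roughly in proportion
    to 1/sqrt(n_epochs).
    """
    return epochs.mean(axis=0)
```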
The variability of the latency of individual EEG reactions relative to the moment of stimulation is a well-known neurophysiological phenomenon [9]. A failure to account for this source of variance can substantially distort the extracted ERP components [10]. This effect involves both early and late components of EEG reactions [11, 12], with the averaged P300 wave being particularly vulnerable [13]: the amplitude of the component decreases and its width increases [14]. Beyond its fundamental interest, the temporal variability of ERP should be regarded as a major hindrance to P300 BCI classification accuracy.
The latency of ERP components, notably P300, is known to correlate with age, the cognitive status of the subject, and other parameters [15, 16]. Deviations in the characteristics of isolated responses to external stimuli can be observed in divided-attention tasks; the variability correlates positively with the complexity of the second task (i.e. its competition for perceptual resources) [16]. The process of achieving a final goal with BCI (text typing) and the execution of immediate instructions (reacting to stimuli) may themselves be competing tasks. In addition, the practical use of BCI technology in real-world settings is usually accompanied by collateral tasks and events that promote continuous variations in attention and perception [17]. An additional source of multidirectional destabilization of ERP characteristics, including their variability, is the stimulation parameters themselves: in BCI, the presentation rate is usually high, up to 4–5 stimuli per second [8], whereas the majority of standard protocols for ERP acquisition present one stimulus every 1–2 seconds [18].
From a neurophysiological perspective, variations in brain output are rooted in the hierarchical complexity of nervous system organization, so that such variations are generally considered inherent to the brain [19]. However, elevated overall levels of such variation have been associated with certain pathologies. Abnormally high levels of neuronal noise and plasticity may interfere with the integrity of external stimulus processing and the production of adequate behavioral responses, e.g. in autism [19, 20]. Increased variability of ERP has also been demonstrated in patients with attention deficit hyperactivity disorder, especially under conditions of cognitive challenge [19, 21].
Cognitive fatigue of the user, a major cause of variability in EEG reactions [22], may negatively affect neurocontrol efficacy in healthy users and even more so in patients. People with locomotion and speech impairments often have reduced attention capacities, possibly accompanied by cognitive deficits. Such users tend to tire quickly and may experience difficulties in sustaining control in BCI [23, 24].
Therefore, the effects of ERP variability in P300 BCI deserve careful consideration. On the one hand, a proper understanding of the variability patterns will allow the stimulus environment to be enhanced and optimized for efficiency; on the other hand, it will mitigate the undesirable effects of variability and so facilitate mastery of this technology by healthy users and, notably, by patients with neurocognitive impairments. This study aimed to analyze the effects of ERP variability in EEG reactions in order to minimize their influence on P300 BCI command classification accuracy.
METHODS
The study used EEG data obtained earlier with a modified version of P300 BCI in which the stimulus matrix moves freely within the visual field. The details of this modification and some results obtained with it have been described by us previously [25]. The current study deals with the identification and evaluation of the ERP variability effects that users of such interfaces may encounter.
The recordings were carried out at the Faculty of Biology, Lomonosov Moscow State University, and enrolled 12 participants (four men and eight women) aged 21–22 years. Inclusion criteria: healthy volunteers of both sexes, aged 18–35 years. Exclusion criteria: diagnosed neurological and/or mental conditions, a history of convulsive seizure episodes, or diagnosed status epilepticus. The study initially intended to test the feasibility of ERP-based monitoring of the subject's attention to continuously moving target stimuli [25].
The participants (subjects) were presented with a 3 × 3 icon matrix of angular dimensions 7.4° × 7.4°, with a single stimulus size of 2.2° × 2.2°. Stimulation was performed by highlighting the rows and columns of the matrix in random order (125 ms in every 500 ms).
The subjects were tasked with focusing their attention on a target stimulus within the matrix, carefully following this stimulus, and mentally counting the number of highlights encompassing it.
The study used various modes and velocities of matrix motion within the screen limits. The matrix moved at a constant speed in a straight line, the direction being inverted upon reaching the edge of the screen. A total of six modes were used:
- 'static matrix' (motionless, positioned at the center of the screen);
- 'horizontal movement' (at 5°/s);
- 'vertical movement' (at 5°/s);
- 'random movement' (at 5°/s; the direction could change at random moments in time);
- 'velocity 10°/s' (horizontal movement);
- 'velocity 20°/s' (horizontal movement).
Each participant was exposed to all modes in random order. Each mode encompassed presentations of 120 target and 240 non-target stimuli.
The EEG recordings were carried out with six scalp electrodes (Cz, Pz, PO7, PO8, O1 and O2) and a common reference electrode attached to the ear lobule, using an NVX 24 electroencephalograph (Medical Computer Systems; Zelenograd, Russia) at a 250 Hz sampling rate. We used the CONAN-NVX software for the recording and original software written in Python 2.6 for stimulus presentation. Synchronization of the EEG recording with the highlightings involved a photodiode sensor. The signal processing, including ERP extraction and analysis, was carried out in MATLAB 9.11 (R2021b) (MathWorks; USA). The EEG signal was band-pass filtered within the 0.5–20 Hz range (0.5–10 Hz for working with single epochs and calculating classification) using a fourth-order Butterworth filter and split into epochs from −400 to 1200 ms time-locked to stimulus onset. Artifact epochs in which the signal amplitude exceeded ±50 µV in any of the channels were excluded; the percentage of excluded epochs was usually within 10%.
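As a rough illustration of this preprocessing chain (band-pass filtering, epoching, amplitude-based artifact rejection), the following Python/SciPy sketch mirrors the steps described above. The actual analysis was performed in MATLAB, so the function choices here, including zero-phase filtering via filtfilt, are assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate, Hz, as in the recordings described above

def preprocess(eeg, stim_onsets, band=(0.5, 20.0), reject_uv=50.0):
    """Band-pass filter continuous EEG, cut stimulus-locked epochs,
    and drop epochs exceeding the amplitude criterion.

    eeg:         (n_channels, n_samples) continuous signal, microvolts
    stim_onsets: sample indices of stimulus onsets
    """
    # fourth-order Butterworth band-pass, applied zero-phase
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)

    pre, post = int(0.4 * FS), int(1.2 * FS)   # -400 to 1200 ms window
    epochs = []
    for onset in stim_onsets:
        ep = filtered[:, onset - pre : onset + post]
        if np.abs(ep).max() <= reject_uv:       # artifact rejection
            epochs.append(ep)
    return np.stack(epochs)                     # (n_kept, n_channels, n_samples)
```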
The epochs were classified into target and non-target and averaged within each class, subject and mode. The procedure yielded target and non-target ERPs in a reduced −200 to 800 ms window. The amplitude of P300 was determined as the maximum signal value in the Pz lead within a 300–600 ms window. The amplitude of the N1 component was determined as the minimum signal value in the PO7, PO8, O1 and O2 leads within a 100–300 ms window. Peak latencies were measured from stimulus onset.
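A window-based peak measurement of this kind can be sketched as follows (illustrative Python, not the authors' MATLAB code; the −200 ms pre-stimulus offset of the ERP window follows the text):

```python
import numpy as np

FS = 250
T0 = int(0.2 * FS)  # samples before stimulus in the -200..800 ms ERP window

def peak_in_window(erp, t_lo_ms, t_hi_ms, polarity):
    """Peak amplitude and latency of an averaged ERP within a window.

    polarity = +1 for P300 (maximum), -1 for N1 (minimum).
    Returns (amplitude, latency in ms from stimulus onset).
    """
    lo = T0 + int(t_lo_ms / 1000 * FS)
    hi = T0 + int(t_hi_ms / 1000 * FS)
    seg = erp[lo:hi] * polarity          # flip sign so argmax finds the peak
    i = int(np.argmax(seg))
    return erp[lo + i], (lo + i - T0) / FS * 1000

# e.g. P300 in Pz: peak_in_window(erp_pz, 300, 600, +1)
#      N1 in PO8:  peak_in_window(erp_po8, 100, 300, -1)
```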
To analyze the component variability, P300 and N1 peak latencies were calculated similarly in the same channels and windows, albeit using single, non-averaged epochs. The epochs within each lead, mode and subject were sorted (ordered) by these latencies. To quantify the variability of latencies, the median absolute deviation (MAD) was calculated within each mode for each subject individually. To analyze the effect of component variability on the calculated ERP, all epochs were centered on the peak time prior to averaging: each epoch was shifted forward or backward along the time axis by the difference between the peak latency in the averaged ERP and its own specific latency, after which the epochs were averaged conventionally in a −200 to 800 ms window. In the Cz and Pz leads the epochs were corrected by P300, and in the occipital leads by N1. The peak amplitudes were subsequently calculated for the averaged corrected ERP. For the group analysis of N1 amplitudes, the values were calculated using curves averaged over the four occipital leads (PO7, PO8, O1 and O2).
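The latency-correction step, the core of the method, can be sketched in Python as follows. Note that np.roll wraps samples around the epoch edges; a production pipeline would pad or crop instead, so this is a simplification:

```python
import numpy as np

def latency_corrected_erp(epochs, single_latencies, avg_latency):
    """Shift each epoch so its own peak aligns with the peak of the
    averaged ERP, then average.

    epochs:           (n_epochs, n_samples)
    single_latencies: per-epoch peak latency, in samples
    avg_latency:      peak latency of the conventional averaged ERP, samples
    """
    aligned = [np.roll(ep, avg_latency - lat)   # shift along the time axis
               for ep, lat in zip(epochs, single_latencies)]
    return np.mean(aligned, axis=0)

def mad(latencies):
    """Median absolute deviation of single-epoch peak latencies,
    used here as the variability index."""
    med = np.median(latencies)
    return np.median(np.abs(latencies - med))
```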
To assess the variability of the ERP components on a per-subject basis, the amplitudes were calculated for individual EEG epochs (raw and latency-corrected); in each epoch, the average signal value was calculated in a 52 ms window centered on the peak latency for the particular lead and mode.
To identify the effects of ERP variability on command classification accuracy in P300 BCI, offline classification scores were calculated for all subjects in each mode, separately for the initial averaged ERP and for the latency-corrected epochs. The feature vectors for linear Fisher discriminant analysis in each mode were built from the signal amplitudes in all EEG channels spanning 600 ms post-stimulation (one point per 50 ms). The classification accuracy was assessed by leave-one-out cross-validation, sequentially testing each epoch with a classifier trained on all other epochs of the same mode. The procedure was repeated for all epochs, and the classification accuracy was assessed as the percentage of correctly identified epochs (two classes: target and non-target). To obtain an unbiased accuracy estimate, the numbers of target and non-target epochs were equalized before classification by randomly deleting a subset of non-target epochs. To exclude sampling-related variations, this classification process was repeated 100 times with random elimination of non-target epochs, and the accuracy values obtained over the 100 iterations were averaged.
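A hedged re-implementation of this evaluation scheme is sketched below, with scikit-learn's LDA standing in for the Fisher discriminant and feature extraction assumed to be done upstream; it is illustrative, not the authors' code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def balanced_loo_accuracy(X_target, X_nontarget, n_repeats=100, seed=0):
    """Offline accuracy with class balancing and leave-one-out CV.

    X_target / X_nontarget: (n_epochs, n_features) amplitude feature
    vectors (all channels, one point per 50 ms over 600 ms post-stimulus).
    """
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_repeats):
        # equalize class sizes by randomly subsampling non-target epochs
        idx = rng.choice(len(X_nontarget), size=len(X_target), replace=False)
        X = np.vstack([X_target, X_nontarget[idx]])
        y = np.r_[np.ones(len(X_target)), np.zeros(len(X_target))]
        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                                 cv=LeaveOneOut())
        accs.append(scores.mean())
    return float(np.mean(accs))   # average over the random subsamplings
```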
All quantitative data (amplitudes, latencies and classification accuracy values) were analyzed using the STATISTICA 7.0 package. One- or two-way analysis of variance (ANOVA) was used for group analysis. Tukey's or Benjamini-Hochberg's post hoc tests were applied in cases of significant main effects in pairwise comparisons. The analysis of component amplitudes within subjects involved a normality check by the χ2 (chi-square) test followed by the paired Student's t-test.
RESULTS
To visualize accumulations of single EEG epochs (before ERP averaging), time was plotted horizontally, the epochs were plotted vertically one by one, and the amplitude values were color-coded [11]. This method allows representation of different grouping options for individual epochs and accentuates the effects of their variability. Fig. 1 shows an example of such a representation of target epochs for a single participant, acquired in Pz and PO8 in the 'static matrix' mode. In Fig. 1A, the arrangement of the epochs from top to bottom corresponds to their actual chronological order. In Fig. 1B, the epochs are sorted by latency, so that epochs with earlier peaks of P300 (in Pz) or N1 (in PO8) are located at the top. Fig. 1C shows the latency-corrected epochs, i.e. adjusted using the averaged latency value for the particular mode and channel. For better clarity, we applied vertical smoothing by a moving average over series of 10 epochs. Fig. 2 shows ERPs obtained by averaging raw and latency-corrected epochs for the same participant (subject). The latency-corrected amplitudes of both components significantly exceeded the initial values obtained by raw averaging without correction for the peak latency. These differences were significant for all subjects in all modes (p < 0.001, paired Student's t-test).
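A sketch of this visualization (illustrative only; the published figures were presumably produced in MATLAB, and all names here are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve

def epoch_image(epochs, order, fs=250, smooth=10):
    """Color-map view of single epochs (one row per epoch), in the given
    order (chronological or sorted by peak latency), with vertical
    moving-average smoothing over `smooth` epochs, as in Fig. 1."""
    img = epochs[order]                                  # (n_epochs, n_samples)
    img = convolve(img, np.ones((smooth, 1)) / smooth, mode="nearest")
    t_ms = (np.arange(img.shape[1]) / fs - 0.4) * 1000   # -400 ms pre-stimulus
    plt.imshow(img, aspect="auto",
               extent=[t_ms[0], t_ms[-1], img.shape[0], 0])
    plt.axvline(0, ls="--", color="k")                   # stimulus onset
    plt.xlabel("time, ms")
    plt.ylabel("epoch #")
```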
We further analyzed the influence of the latency correction procedure on the calculated amplitudes of the P300 and N1 components at the group level; the results are presented in Fig. 3. The amplitudes of P300 and N1 obtained with the latency-corrected epochs were significantly higher than those calculated by the conventional method. Two-way ANOVA ('latency correction' factor, two levels; 'motion type' factor, four levels including the 'static matrix' mode) revealed a significant effect of latency correction on the amplitudes of P300: F(1,11) = 95.7, λ = 0.10, p = 0.000001, and N1: F(1,11) = 58.1, λ = 0.16, p = 0.00001. The 'motion type' factor had a significant effect on P300: F(3,9) = 7.5, λ = 0.29, p = 0.008 (a lower amplitude for horizontal movement), but not on N1.
The analysis of the 'latency correction' and 'velocity' factors (two and three levels, respectively) revealed a significant influence of the 'latency correction' factor on the amplitudes of P300: F(1,11) = 88.5, λ = 0.11, p = 0.000001, and N1: F(1,11) = 46.6, λ = 0.19, p = 0.00003, despite the lack of a significant influence of 'velocity'. There was also a significant interaction between the two factors for the N1 component: F(2,10) = 10.4, λ = 0.32, p = 0.0036 (a tendency towards lower amplitude at the highest velocity when conventional averaging of the epochs was used).
The group analysis of ERP variability using the MAD index of peak latency revealed several statistically significant effects. The movement velocity factor (three levels) significantly affected the N1 component in the PO8, O1 and O2 leads: F(2,22) = 4.4, p = 0.024; F(2,22) = 3.8, p = 0.037; and F(2,22) = 4.9, p = 0.017, respectively. Post hoc analysis revealed higher variability (expressed through MAD) in the highest-velocity mode compared with slower movement. The effects observed in the O1 and O2 leads were significant (p < 0.05), whereas in PO8 the differences amounted to a trend (p < 0.1). The analysis revealed no significant effects of the 'motion type' factor on the N1 component, nor of the 'motion type' and 'velocity' factors on the P300 component. Fig. 4 compares the offline classification accuracy for the standard ERP extraction algorithm with that obtained using P300 and N1 peak latency-corrected data. Two-way ANOVA ('latency correction' factor, two levels; 'motion type' factor, four levels) revealed a significant effect of the latency correction procedure on classification accuracy: F(1,11) = 102.7, λ = 0.09, p = 0.00001. Post hoc analysis revealed higher classification accuracies when using latency-corrected epochs in all four modes (94.7, 92.2, 93.2 and 94.8%) compared with the conventional ERP extraction procedure (respectively, 78.3, 78.1, 78.3 and 76.1%; p < 0.0001 for all modes). Two-way ANOVA ('latency correction' factor, two levels; 'velocity' factor, three levels) revealed significant effects of both the calculation method and the matrix movement velocity: F(1,11) = 110.0, λ = 0.09, p < 0.00001; F(2,10) = 6.0, λ = 0.46, p = 0.0196, as well as a significant interaction between these factors: F(2,10) = 11.5, λ = 0.30, p = 0.0026. Post hoc analysis revealed higher classification accuracy when using latency-corrected epochs in all three modes (92.2, 93.7 and 94.0%) compared with the conventional ERP extraction procedure (respectively, 78.1, 76.7 and 71.2%; p < 0.0005 for all modes). Of note, in the highest-velocity mode the accuracy was significantly lower than in the two other modes (71.2% vs 78.1 and 76.7%; p < 0.05) unless the latency correction was applied.
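For readers wishing to reproduce this style of analysis outside STATISTICA, a two-way repeated-measures ANOVA can be run with statsmodels; the sketch below uses synthetic amplitudes and assumed column names, not the study's data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic, balanced dataset: 12 subjects x 2 correction levels x 4 motion
# types, with a built-in amplitude boost for the corrected condition.
rng = np.random.default_rng(0)
rows = [{"subject": s, "correction": c, "motion": m,
         "amplitude": rng.normal(5 + 3 * (c == "corrected"))}
        for s in range(12)
        for c in ("raw", "corrected")
        for m in ("static", "horizontal", "vertical", "random")]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with both factors as within-subject effects
print(AnovaRM(df, depvar="amplitude", subject="subject",
              within=["correction", "motion"]).fit())
```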
DISCUSSION
Overall, the obtained results confirm that ERP variability is inherent to P300 BCI and constitutes a major influence on the shapes of ERP components, the exact impact depending on the degree of attention involvement. Correcting such variability at the level of single EEG reactions can substantially improve command interpretation accuracy.
Although the ERP approach relies on averaging multiple EEG reactions to a stimulus, at certain signal-processing parameters the detection of individual reaction peaks is quite feasible. This is well illustrated by our data (Fig. 1) along with other studies [9, 10, 12]. Importantly, the analysis of single epochs allows correction for the variable peak latency in individual realizations of the response, ultimately affording better extraction quality and enhanced amplitude for the components of interest (Fig. 2). Despite the well-established phenomenon of variable latency, predicting its specific impact in P300 BCI is nontrivial, as conventional BCI stimulation rates are considerably higher than those typically used in psychophysiological studies. On one hand, this difference can mitigate the variability and stabilize the temporal heterogeneity of the reactions; on the other hand, it might also augment the heterogeneity and complicate correct interpretation of the stimulus at higher rates of presentation, as suboptimal conditions for attention and concentration may promote a concomitant increase in ERP variability [20]. The variable latency has been attributed to the inherent variance of the time required for perception and categorization of the stimulus in every single presentation event [26]. We show that accounting for the variability effects allows significant enhancement of the amplitude of components that represent attention to target stimuli at both the individual and group levels (Figs. 2 and 3).
Despite the lack of significant effects of the different matrix motion modes on the amplitudes, the MAD index showed increased variability of the N1 component at higher matrix velocities. The decrease in visual acuity with increasing speed of tracked objects [27] has been associated with a concomitant increase in attention costs. Given the profound association of ERP components with specific features of oculomotor function [28], the observed increase in N1 component variability at higher velocities can be explained by the inherent variance of attention combined with the pressing demand of tracking the matrix cells. The observation is also consistent with the decreased amplitude of the N1 component in the difference (target − non-target) waveforms reported by us previously [25]. Notably, the effect is characteristic of this earlier component, which is sensitive to target events at the eye fixation point [29], but not of the later P300 component.
Apart from its fundamental relevance, the developed correction procedure is of clear applied interest. Accounting for the latency of ERP components significantly rescued the accuracy of target stimulus classification in the modified P300 BCI environment used in this study (Fig. 4). The temporal variability of ERP has already been put forward as a putative cause of poor individual performance in BCI [17]. However, those authors suggested improving the accuracy by using the amplitude values of the classifier output instead of the raw peaks in EEG epochs, which may be less consistent and efficient, as the classifier output is more prone to external influences (noise, etc.). Another study did succeed in increasing the accuracy of P300 BCI output by means of latency correction for P300, albeit in a narrow window; overall, that method provided no classification benefit over the conventional approach [26]. Besides, the authors themselves emphasize that their algorithm is unsuitable for online use, since it requires explicit target/non-target labeling of each epoch. By contrast, our approach significantly enhances the classification accuracy through correction of the latency of both the P300 and N1 components, which are known to provide comparable contributions [30], and is equally suitable for the online mode, as the correction is applied to both target and non-target epochs. It should be noted that the initial accuracy at the highest velocity was lower (71%) than in the other modes (76–78%) and that introducing the latency correction rendered the accuracy uniformly high (92–95%) in all modes. This result underscores the utility of the developed correction approach in similar and even more attention-demanding BCI operation modes.
CONCLUSIONS
Dedicated analysis at the level of single EEG epochs enables overall correction for variable latencies in a modified P300 BCI environment. Correction for this major source of variability refines the target ERP components with a concomitant improvement in command classification accuracy. Using a movable stimulus matrix, we demonstrate the particular relevance of the developed correction procedure under conditions of increased cognitive demand, modeled here by higher movement velocity. Taken together, our findings underscore the importance of accounting for ERP variability in the development of P300 BCI environments and provide a basis for the creation of advanced ERP-based neurocontrol systems, particularly those intended for people with reduced attention capacities.
Fig. 1. Color maps of single target EEG epochs for subject #1, 'static matrix' mode, leads Pz and PO8. The horizontal axis represents time, ms; the vertical axis represents individual epochs, numbered and sorted from top to bottom, with moving-average vertical smoothing applied in series of 10. A. The epochs are sorted in chronological order (as recorded). B. The epochs are sorted by peak latency for P300 (Pz) or N1 (PO8). In charts A and B the epochs are synchronized by the moment of stimulus presentation (vertical dashed lines). C. Peak latency-corrected epochs, with dashed lines indicating the moment of stimulus presentation
Fig. 2. An example of averaged target ERP (subject #1) acquired in the 'static matrix' mode. Gray curves correspond to the standard method of ERP averaging (no latency correction applied); black curves correspond to the use of peak latency-corrected epochs for P300 (Pz) and N1 (PO8). Vertical dashed lines (0 ms) indicate the moment of stimulus presentation; red lines indicate the latency of the particular component in a given lead
Fig. 4. The offline classification accuracy for different modes, calculated by the standard method (no latency correction applied) as opposed to the use of peak latency-corrected epochs for P300 and N1. Heights and error bars correspond to means and standard errors of the mean, respectively (n = 12)
"Computer Science",
"Engineering",
"Medicine"
] |
Human Calmodulin Methyltransferase: Expression, Activity on Calmodulin, and Hsp90 Dependence
Deletion of the first exon of calmodulin-lysine N-methyltransferase (CaM KMT, previously C2orf34) has been reported in two multigene deletion syndromes, but additional studies of the gene have not been reported. Here we show that in cells from 2p21 deletion patients the loss of CaM KMT expression results in accumulation of hypomethylated calmodulin compared with normal controls, suggesting that CaM KMT is essential for calmodulin methylation and that there are no compensatory mechanisms for CaM methylation in humans. We have further studied the expression of this gene at the transcript and protein levels. We identified two additional transcripts in cells of 2p21 deletion syndrome patients that start from alternative exons positioned outside the deletion region: one starts in the 2nd known exon, the other in a novel exon. The transcript starting from the novel exon was also identified in a variety of tissues from normal individuals. These new transcripts are not expected to produce proteins. Immunofluorescent localization of tagged CaM KMT in HeLa cells indicates that it is present in both the cytoplasm and nucleus, whereas the short isoform is localized to the Golgi apparatus. Using Western blot analysis we show that the CaM KMT protein is broadly expressed in mouse tissues. Finally, we demonstrate that CaM KMT interacts with the middle portion of the Hsp90 molecular chaperone and is probably a client protein, since it is degraded upon treatment of cells with the Hsp90 inhibitor geldanamycin. These findings suggest that CaM KMT is the major, possibly the sole, methyltransferase of calmodulin in human cells, with a wide tissue distribution, and is a novel Hsp90 client protein. Our data thus provide basic information on a gene potentially contributing to the patient phenotype of two contiguous gene deletion syndromes.
Introduction
CaM KMT (previously C2orf34) has been reported to lie within the deletion region of two autosomal recessive syndromes. The first reported, the 2p21 deletion syndrome, is caused by a homozygous deletion of 179,311 bp on chromosome 2p21, which includes the type I cystinuria gene (SLC3A1), the protein phosphatase 2Cβ gene (PPMB1), the prolyl endopeptidase-like (PREPL) gene, and the first exon of the CaM KMT gene. Patients homozygous for this deletion present with cystinuria, neonatal seizures, hypotonia, severe mental and growth retardation, facial dysmorphism, and reduced activity of all mitochondrially encoded respiratory chain enzymatic complexes except the second [1,2]. The second disorder, atypical hypotonia-cystinuria syndrome (HCS), is caused by a smaller deletion of 77.4 kb on chromosome 2p21 that encompasses SLC3A1, PREPL and, as in the first report, the first exon of CaM KMT. Atypical HCS patients present with a phenotype partly similar to the 2p21 deletion syndrome, including severe hypotonia at birth, poor feeding, facial dysmorphism, growth retardation and cystinuria, but with a growth hormone deficiency not observed in the 2p21 deletion syndrome patients [3]. In addition, they show mild to moderate mental retardation and cytochrome c oxidase deficiency (mitochondrial complex IV) [4].
We have previously shown that CaM KMT is transcribed in a wide range of human tissues whereas the loss of the first exon abolished the transcription in the cells of 2p21 deletion patients [2]. Recently, we identified CaM KMT as a class I, non-SET domain calmodulin-lysine N-methyltransferase that catalyzes the formation of a trimethyllysyl residue at position 115 in calmodulin [5].
Calmodulin (CaM) is a ubiquitous calcium-binding protein that regulates a multitude of different protein targets. It is a major transducer of calcium signaling that sets the free Ca2+ level by binding calcium ions more rapidly than other Ca2+-binding proteins [6]. CaM is frequently trimethylated at Lys-115, and its methylation status changes at different developmental stages as well as in tissue-specific manners that potentially modulate its actions [5].
Aiming to better characterize CaM KMT and its potential contribution to the 2p21 deletion syndrome, we demonstrate alternative transcription from CaM KMT exons outside the 2p21 deletion region, although these transcripts are unlikely to be translated, and the accumulation of hypomethylated calmodulin in patients compared with normal controls, suggesting that CaM KMT plays a pivotal role in calmodulin methylation and that there are no compensatory mechanisms for CaM methylation in humans. Furthermore, cellular localization studies revealed that full-length CaM KMT localizes to the cytoplasm and nucleus, in accordance with a similar subcellular distribution of calmodulin, whereas the short variant of CaM KMT (CaM KMTsh), which lacks the methyltransferase domain, is localized to the Golgi complex. Western blot analysis showed expression of the CaM KMT variant in various mouse tissues. We further show that Hsp90, a highly conserved molecular chaperone [7] required for the late-stage folding of a number of classes of proteins referred to as client proteins, interacts through its middle domain with CaM KMT. Client proteins depend on Hsp90 function for correct folding and maturation and become deregulated upon limited Hsp90 activity [8]. CaM KMT relies on Hsp90 chaperone activity for its stability, since geldanamycin [9] treatment of HeLa cells transfected with CaM KMT resulted in a geldanamycin concentration-dependent decrease in CaM KMT. These data indicate that CaM KMT is a novel client protein of Hsp90 and provide a new connection between Hsp90 chaperone function and CaM methylation.
Cell Culture
HeLa, COS-7 and HEK293 cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal calf serum, 2 mM L-glutamine, 100 U/ml penicillin, 0.1 mg/ml streptomycin and 12.5 µg/ml nystatin, at 37 °C in a humidified atmosphere of 5% CO2. Lymphoblastoid cell lines from 2p21 deletion patients and normal individuals (approved by the Soroka Medical Center IRB; participants provided written informed consent) were maintained at a logarithmic growth phase in RPMI 1640 supplemented with 10% fetal calf serum, 2 mM glutamine, 100 U/ml penicillin, 0.1 mg/ml streptomycin, 12.5 µg/ml nystatin and 2.5 µg/ml Amphotericin B at 37 °C in a humidified atmosphere of 5% CO2.
Constructs
Preparation of mammalian expression vectors for CaM KMT with a C-terminal GFP tag: the GFP-CaM KMTsh mammalian expression vector was constructed by PCR amplification using pDNR-LIB-CaM KMTsh as a template and the following forward and reverse primers: 5′-CGGAATTCAAATGGAGTCGCGAGTCG-3′; 5′-ACGCGTCGACCATTTTCCCCTGGTTGCTT-3′, subcloned into the EcoRI and SalI sites of the pEGFP-N1 plasmid (Clontech). The GFP-CaM KMT full-length variant was subcloned into the XhoI and EcoRI sites of the pEGFP-N1 plasmid (Clontech) by PCR amplification using the pGEX-5X-1-CaM KMT construct as a template and the primers: forward, 5′-CTCGAGATGGAGTCGCGAGTCG-3′; reverse, 5′-GAATTCGCTTTCCATGTTTGGTC-3′. Preparation of a mammalian expression vector for CaM KMT with an N-terminal Myc tag: CaM KMT was amplified by PCR using the FLAG-CaM KMT construct as a template, with the primers: forward, 5′-TTAGAATTCATGGAGTCGCGAGTCGCG-3′; reverse, 5′-TTACTCGAGCTATCCATGTTTGGTCAAAAT-3′. The PCR product was subcloned into the EcoRI and XhoI sites of the pCan expression plasmid. Cloning of CaM KMT into the bacterial expression vector pGEX-5X-1 with an N-terminal GST tag: GST-CaM KMT was restriction digested out of pFLAG-CMV5a-CaM KMT with EcoRI and ligated into the EcoRI site of the pGEX-5X-1 plasmid. GST-CaM KMTsh was subcloned into the EcoRI and XhoI sites of the pGEX-5X-1 vector by PCR amplification using GFP-CaM KMTsh as a template and the primers: forward, 5′-GAATTCATGGAGTCGCGAGTCG-3′; reverse, 5′-CTCGAGTCATTTTCCCCTGGTTGC-3′. All constructs were verified by DNA sequencing on an ABI PRISM 3100 DNA Analyzer with the BigDye Terminator v. 1.1 Cycle Sequencing Kit according to the manufacturer's protocol (Applied Biosystems, CA, USA). The GST-Hsp90 C-, M- and N-terminal domains were a kind gift from Professor F. Ulrich Hartl, Max Planck Institute of Biochemistry, Munich, Germany.
Transient Transfection
All transfections were performed using TransIT-LT1 reagent (Mirus). For Western blot and immunoprecipitation experiments, cells were plated at densities of 2 × 10⁶ cells per 100 mm plate and 1 × 10⁵ cells per well of a 6-well plate, respectively, 24 hours prior to transfection. Cells were harvested 24-48 hours after transfection.
Antibodies
The anti-Myc monoclonal, anti-FLAG monoclonal (M2) and anti-GFP antibodies were purchased from Sigma-Aldrich. The anti-Hsp90 alpha/beta (F-8) and anti-GAPDH antibodies were obtained from Santa Cruz Biotechnology. Peroxidase (HRP)-conjugated whole IgG secondary antibodies and Cy3- and Cy2-conjugated secondary immunofluorescence antibodies were from Jackson ImmunoResearch Laboratories. Anti-CaM KMT polyclonal antibodies were raised in rabbits by immunization with GST-CaM KMT fusion proteins followed by affinity purification. The anti-CaM antibody raised in mouse was purchased from Invitrogen. The AP-conjugated mouse-specific IgG secondary antibody was from Bio-Rad.
CaM KMT Polyclonal Antibody Production
CaM KMT polyclonal antibodies were generated by immunizing two New Zealand White rabbits with the GST-CaM KMT fusion protein. For the initial immunization, 1 ml of antigen solution containing approximately 100 µg of the purified recombinant protein was injected subcutaneously with Freund's complete adjuvant. The antigens were injected as native and denatured proteins to produce antibodies that would be useful for Western blot and immunoprecipitation experiments. Rabbit serum was collected before immunization as a negative control. Three boosts were given at intervals of 3 to 5 weeks, using Freund's incomplete adjuvant. Serum was collected 1 week after the last injection. All blood samples were refrigerated for 16 h and centrifuged (450 × g; 10 min) at room temperature. The serum was subjected to further purification [10] and stored at −80 °C.
RNA Isolation and cDNA Preparation
Total RNA was isolated from lymphoblastoid cells using the EZ-RNA Total RNA Isolation Kit (Biological Industries). Total human RNA from different tissues was purchased from Clontech. Reverse transcription was done using the Reverse-iT 1st Strand Synthesis kit (ABgene). The quality of the resulting cDNA was tested by amplification of the tubulin chaperone E gene with the primers: forward, 5′-AAAACGTCCATGTTCCCATC-3′; reverse, 5′-CCCCAGACACGATAAGCAGT-3′.
RACE-PCR
The SMART 5′ RACE cDNA amplification kit (Clontech) was used to amplify potential CaM KMT transcripts from lymphoblastoid cells of patients and normal controls. For the first-strand synthesis we used the universal primer mix (UPM) as the forward primer; the reverse primer was designed at the border of the 5th and 6th exons of the long variant, to avoid priming on residual DNA that may be retained in the RNA preparation: 5′-GCACATTTCTGATGGCCTTTTCATTCC-3′. The product was then amplified in a nested PCR reaction with UPM and a reverse primer located within the 4th exon of the long variant (presented in the RT-PCR paragraph). The RACE products were cloned into the pGEM-T Easy vector (Promega) and transformed into the DH5α strain of E. coli.
Fluorescence Imaging
Cells were grown on coverslips, fixed with freshly prepared 4% paraformaldehyde/PBS for 15 minutes, washed extensively in PBS, and mounted on microscope slides with ProLong Gold antifade reagent containing DAPI (Invitrogen). The samples were visualized on a Leica DMR compound microscope equipped for immunofluorescence and photographed with a Spot RT digital camera (Diagnostic Instruments). Confocal fluorescence images were obtained with a Zeiss LSM Axiovert 100 laser scanning microscope.
Whole Cell Protein Extraction
Cells were collected by scraping, pelleted by centrifugation and washed three times with cold PBS before lysis. To stabilize transient and weak protein-protein interactions, the cells used for immunoprecipitation were treated with 1% formaldehyde (Sigma) for 15 minutes and quenched with 1.25 M glycine/PBS prior to collection [11]. The cells (1 × 10⁸ cells) were lysed in 1 ml of modified RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% NP-40, 1 mM EDTA, protease inhibitors (Sigma)) for 20 min on ice, followed by centrifugation in an Eppendorf microfuge for 20 minutes at 13,000 rpm at 4 °C to remove insoluble debris. The supernatant was either used directly or stored at −80 °C.
Extraction of Mouse Tissues
Tissues from ICR mice were frozen at −80 °C and the lysates were prepared immediately before the Western blot experiment. The tissues were homogenized in RIPA buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% NP-40, 0.5% sodium deoxycholate, 0.1% SDS, 1 mM EDTA, protease inhibitor (Sigma)) using a Polytron PT-2100 homogenizer. Tissue and cell debris were removed by centrifugation at 4 °C for 20 minutes at 12,000 rpm. Protein concentration was determined with the Bio-Rad protein assay. The lysates were boiled for 5 min in 1× SDS sample buffer (50 mM Tris-HCl pH 6.8, 12.5% glycerol, 1% SDS, 0.01% bromophenol blue, 5% β-mercaptoethanol) and 100 µg of protein was loaded per lane. The purified antibodies were diluted 1:200 in 2.5% milk in TTBS, preimmune serum at 1:50, and anti-Hsp90 at 1:200 in 2.5% milk in TTBS. Samples with formaldehyde crosslinking were boiled for 40 minutes before separation by SDS-PAGE.
Immunoprecipitation, Western Blot Analysis and Coomassie Staining of SDS-PAGE
For immunoprecipitation, samples of 0.5-1 mg protein from formaldehyde-crosslinked lysates were incubated with an appropriate amount of antibodies, or with an unrelated IgG antibody as a negative control, for 1 hour at 4 °C. Then 20 µl of protein A/G agarose beads (Santa Cruz) were added to each sample and incubated for 1 hour at 4 °C. The beads were precipitated and washed for 10 minutes with the modified RIPA lysis buffer; washing was repeated four times. All steps were performed with mild agitation. SDS sample buffer was added to the beads after the last wash, and the samples were then boiled, separated by SDS-PAGE and either immunoblotted with the appropriate antibodies or stained with a sensitive Coomassie blue-based stain (Imperial protein stain, Pierce) according to the manufacturer's instructions. For mass spectrometric analysis, protein bands were excised from the stained gel and delivered to the Biological Mass Spectrometry Facility at the Weizmann Institute of Science or to the University of Kentucky proteomics core facility.
For Western blot analysis, protein samples were separated on 12% SDS-polyacrylamide gels and transferred to nitrocellulose membranes (BioTrace NT, Pall Inc.). The efficiency of transfer was monitored by Ponceau S (Sigma) staining. The membranes were blocked for 1 h at RT with 5% milk (Sigma) in TTBS. Incubation with the primary antibodies was for 1 h at RT or overnight at 4 °C. The membranes were washed three times with TTBS for 5 minutes each, then incubated with secondary antibody for 1 hour at room temperature and subsequently washed with TTBS four times for 5 minutes. Blots were exposed and developed using the EZ-ECL blot detection reagent (Biological Industries) and a ChemiDoc XRS+ digital camera with Image Lab software. Western blot analyses for CaM in cell lysates were performed similarly, except that PVDF membranes were used (with 20 µg of protein) and the membrane was developed using NBT/BCIP (Sigma).
Expression and Purification of GST-Fusion Proteins
All GST-fusion proteins were produced in Rosetta E. coli cells grown in 2×YT medium, by induction with 0.1 mM IPTG for three hours. Cells were lysed in the presence of 100 µM PMSF with seven 20-s sonicator pulses at 50% duty cycle on ice. The resulting lysate was centrifuged for 40 min at 12,000 rpm at 4 °C. The proteins were then purified from the lysate by binding to glutathione-Sepharose 4B beads (Amersham Biosciences) according to the manufacturer's instructions; the GST-fusion proteins were eluted with 30 mM glutathione, 50 mM Tris-HCl, pH 7.5, and 120 mM NaCl.
Pull Down Assays
Lysates from HeLa cells transfected with Myc-CaM KMT or the Myc plasmid, containing 3 mg of protein, were incubated overnight at 4 °C, with mild agitation, with 20 µl of glutathione-sepharose beads conjugated to 15 µg of purified GST-Hsp90 N-, M- or C-terminal fragments, or to GST as a negative control. The beads were precipitated and washed four times for 10 minutes with the modified RIPA lysis buffer. Western blotting was performed using anti-Myc antibody.
CaM Methylation Assays
Cell lysates from lymphoblastoid cells (harvested as described above) were obtained by sonication in 50 mM Tris pH 7.5, 150 mM NaCl, 5 mM DTT, 0.01% Triton X-100, 1 mM PMSF (eight 5-second pulses at 60% power on ice). The lysates were then clarified by centrifugation at 16,000 × g at 4 °C for 10 min. The assays, in a final volume of 100 µl, contained 100 mM bicine pH 8, 150 mM KCl, 2 mM MgCl2, 2.5 mM MnCl2, 0.01% Triton X-100, 100 mM CaCl2, 2 mM DTT, 10 µCi [3H-methyl]AdoMet (70-80 Ci mmol−1, from PerkinElmer), 5 µg of human CaM KMT (HsCaM KMT), expressed using a SUMO vector and purified according to [5], and 100 µg of total protein from cell lysates. All reactions were performed at 37 °C for 2 hours and terminated by protein precipitation with 25 volumes of 10% (v/v) trichloroacetic acid. The precipitated protein pellet was dissolved in 150 µl of 0.1 N NaOH and precipitated again with the same volume of trichloroacetic acid before being dissolved in SDS-PAGE loading buffer. The samples were electrophoresed on 12.5% SDS-PAGE gels and transferred to a PVDF membrane prior to phosphorimage analysis.
CaM KMT is Alternatively Transcribed in 2p21 Deletion Syndrome Patients
The CaM KMT gene has two splicing variants that share the first three exons (Fig. 1A). CaM KMTsh, the short variant, has a 4th exon, whereas the long variant has eight additional exons; we previously demonstrated that the long variant has calmodulin-lysine N-methyltransferase activity [5]. We previously reported that, in accordance with the deletion of all of the 5′ sequence, including the promoter region, the first exon and an additional 300 bp of the first intron of CaM KMT in the 2p21 deletion syndrome, the gene is not expressed in lymphoblastoid cells from the patients when tested with primers from the first exon. We have also demonstrated that in normal individuals both splice variants of CaM KMT have a broad transcription profile, including tissues that are affected in the 2p21 deletion syndrome: muscle, brain, testis and kidney [2]. To determine whether transcription of CaM KMT may be salvaged in the patients by the use of alternative exons outside the deletion region, and more specifically in the interval of 313.9 kb between the 3rd and 4th exons of the long isoform, we performed 5′ RACE PCR on cDNA derived from patients' lymphoblastoid cells using a primer positioned at the border of the 5th and 6th exons and a nested primer in the 4th exon of the long CaM KMT isoform. RACE-PCR products were subcloned into pGEM-T and sequenced. The results revealed two new CaM KMT splice variants derived from the patients' cells (Fig. 1A). The first exon of the CaM KMT-1 variant lies in the genomic interval between the 3rd and 4th exons (position chr2:44776694-44776867 on hg19); its size is 174 bp and it connects to the known 4th exon (Fig. 1C). Alignment of this exon with the genomic sequence displays the AG/GT splice-site consensus at the intron-exon boundaries. To assess whether the expression of this novel isoform CaM KMT-1 is exclusive to the patients, we tested its production in lymphoblastoid cells of a normal control and in several human tissues. We performed RT-PCR with a 5′ primer in the new exon and a 3′ primer in the 4th exon of the long CaM KMT. As shown in Fig. 1B, the new variant is also expressed in the normal control lymphoblastoid cells and in brain, testis and muscle. The second new variant, termed CaM KMT-2, starts exactly at the beginning of the 2nd exon of CaM KMT and continues to the last exon of the long isoform. To verify whether these new transcripts code for proteins, we searched for novel open reading frames (ORFs) in the newly identified isoforms and found none. Both variants retain the ORF known for CaM KMT; the first methionine is in the known 4th exon, which would yield a protein of 167 amino acids (Fig. 1C). However, this initiation codon is not in a good Kozak consensus sequence, lacking both of the most important nucleotides that determine the efficiency of mRNA translation: the G immediately after the methionine codon and the A three nucleotides before it [12] (Fig. 1C). These results suggest that no additional CaM KMT protein is expected to be produced.
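The Kozak-context argument can be made concrete with a small, hypothetical helper (not from the paper) that scores the −3 and +4 positions around an ATG:

```python
def kozak_strength(seq: str, atg_index: int) -> str:
    """Rough Kozak-context score for an initiation codon.

    seq:       mRNA sequence, uppercase
    atg_index: index of the 'A' of the ATG codon

    Strong initiation favors a purine (classically A) at position -3
    and G at position +4, i.e. immediately after the ATG.
    """
    minus3 = seq[atg_index - 3] if atg_index >= 3 else "?"
    plus4 = seq[atg_index + 3] if atg_index + 3 < len(seq) else "?"
    if minus3 in "AG" and plus4 == "G":
        return "strong"
    if minus3 in "AG" or plus4 == "G":
        return "moderate"
    return "weak"  # e.g. the internal ATG in exon 4 discussed above
```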
The Absence of CaM KMT Causes Accumulation of Hypomethylated Calmodulin in 2p21 Deletion Syndrome Patients
It has been reported that the methylation state of CaM changes in developmental and tissue-dependent manners, potentially affecting the interaction of CaM with target proteins and thus influencing various cellular processes [5, 13-15]. Since the 2p21 deletion syndrome patients do not express CaM KMT, we evaluated the methylation status of CaM in lymphoblastoid cells from two 2p21 deletion syndrome patients. We performed an in vitro methylation assay using lysates from lymphoblastoid cells of patients and normal controls as the source of CaM substrate. The lysates were incubated with purified SUMO-HsCaM KMT and [3H-methyl]AdoMet as the methyl donor. A protein of the molecular size of CaM was radioactively labeled in patient cell lysates, while this labeling was absent in normal controls (Fig. 2A). We confirmed that the methylation occurred on CaM and not on another cellular protein of similar molecular mass by depletion of the radiolabeled band by chromatography on phenyl-sepharose, which binds CaM [16] (Fig. 2B); by immunoblotting analysis for CaM that demonstrated comparable quantities of CaM in patient and control cells (Fig. 2C); and by a reduced amount of CaM after phenyl-sepharose depletion, still comparable between patient and normal individual (Fig. 2D). MS/MS analysis of a non-radiolabeled immunoreactive band from a duplicate experiment, showing 60% coverage of the polypeptide sequence of CaM including unmethylated Lys-115 from the patients' cells, is reported in Fig. 2F. Finally, to prove that CaM from patient cells could still be methylated by SUMO-HsCaM KMT in vitro, we purified CaM from patient cells on phenyl-sepharose and then incubated it with HsCaM KMT and [3H-methyl]AdoMet; strong radiolabel incorporation was detected (Fig. 2E). An additional analysis of the methylation status of CaM in patient and normal cells was conducted by mass spectrometry on CaM after phenyl-sepharose purification. A mass of 1349 Da was detected in the patient cells (Fig. S1A), corresponding to peptide L116-R126, evidently a product of tryptic digestion at K115, together with another peptide of 2359 Da corresponding to H106-R126 without methyl groups on K115. The absence of methyl groups was also confirmed by the absence of any mass corresponding to peptide H106-R126 containing trimethyllysine. CaM from a normal individual (Fig. S1B) was demonstrated to be fully methylated, presenting peptides corresponding to sequence H106-R126 containing a fully methylated K115 and different levels of oxidation on methionines (peptides of 2417 Da and 2433 Da). No peptides containing unmethylated K115 were visible (Figs. S1B and S1C). These results show that the deletion of CaM KMT in patients promotes accumulation of hypomethylated CaM that can be methylated in vitro by HsCaM KMT, and further demonstrate the absence of any compensatory cellular mechanism for methylation of Lys-115 in CaM.
When CaM KMT was added to cell lysates in the presence of [3H-methyl]AdoMet, we observed radiolabel incorporation into HsCaM KMT (Fig. 2B, arrow). This may be self-methylation, since incubation of GST-CaM KMT fusion protein purified from bacteria with [3H-methyl]AdoMet resulted in labeling of GST-CaM KMT (see Fig. S2).
The Subcellular Localization of the CaM KMT Proteins
To determine the subcellular localization of CaM KMT, we subcloned it into the pEGFP-N1 expression vector, which produces CaM KMT fused to a C-terminal GFP tag, and studied the cellular localization by confocal microscopy. Transfection of CaM KMT-GFP into HeLa cells showed both cytoplasmic and nuclear localization, distinct from the diffuse cellular localization of the GFP control construct (Fig. 3A, B). We concluded that CaM KMT has a nuclear and cytoplasmic distribution. We also determined the subcellular localization of the short CaM KMT variant, which encodes a protein of 132 amino acids. This variant contains the same three 5′ exons as CaM KMT and an additional 4th exon, and lacks the methyltransferase domain. COS-7 cells were transfected with the GFP-CaM KMTsh construct and analyzed by fluorescence microscopy. GFP-CaM KMTsh overexpression revealed a discrete localization near the nucleus, similar to that of the Golgi apparatus. To verify whether CaM KMTsh localizes to the Golgi, COS-7 cells transfected with GFP-CaM KMTsh were immunostained with the Golgi marker anti-58K antibody. The fluorescent signals from the two proteins overlapped considerably, indicating that GFP-CaM KMTsh localizes to the Golgi (Fig. 3D). These results suggest that the short CaM KMT variant has a subcellular localization distinct from that of the full-length variant.
Using the affinity-purified polyclonal anti-CaM KMT antibody, we examined endogenous CaM KMT expression in different mouse tissues (Fig. 3C). Protein bands with the expected molecular mass of CaM KMT (36 kDa) were detected in most of the tissues examined, with the highest expression in brain and muscle. The short variant could not be detected. These data support the conclusion that CaM KMT is a ubiquitously expressed protein, with high expression in the tissues affected in the 2p21 deletion syndrome.
CaM KMT Interacts with the Hsp90 Molecular Chaperone
To search for cellular proteins that specifically interact with CaM KMT, lysates of HEK293 cells expressing FLAG-CaM KMT were immunoprecipitated with anti-FLAG antibody. The immunoprecipitates were Coomassie stained, and one predominant protein band of about 90 kDa appeared to specifically co-purify with FLAG-CaM KMT; other, less intense bands of ~70 kDa were shown to be nonspecific in additional experiments (Fig. 4A). The 90 kDa band was excised from the Coomassie-stained gel, subjected to mass spectrometry analysis, and identified as the alpha and beta isoforms of the molecular chaperone Hsp90. The sequenced peptides represent 26% coverage of the amino acid sequences and allow the α and β isoforms of Hsp90 to be differentiated (Fig. 4B), suggesting that both of them interact with CaM KMT. The human Hsp90α and Hsp90β homologs show approximately 85% identity to each other, with molecular masses of 84 and 83 kDa, respectively. These homologs participate similarly in multi-chaperone complexes and interact with the same substrates under normal conditions [17]. To ascertain the association between CaM KMT and Hsp90, we transiently transfected HEK293 cells with Myc-CaM KMT and performed immunoprecipitation with a monoclonal anti-Myc antibody. The immunoprecipitates were subjected to SDS-PAGE followed by immunoblotting with anti-Hsp90 α/β antibody. In agreement with the mass spectrometry results, CaM KMT was found to bind Hsp90 (Fig. 4C, left). Conversely, the transfected cell lysates were precipitated with anti-Hsp90 antibody and then probed with anti-Myc (Fig. 4C, right). Thus, CaM KMT and Hsp90 are suggested to reside in a protein complex.
CaM KMT Binds to the Middle Domain of Hsp90
Sequence alignments and proteolytic digests of Hsp90 have shown a modular structure of three domains: the N-terminal domain binds ATP; the C-terminal domain mediates dimerization of the chaperone; and the middle domain acts as a discriminator between different types of client and co-chaperone proteins [18, 19]. Therefore, we next asked whether the interaction between CaM KMT and Hsp90 is mediated by a specific Hsp90 domain.
Hsp90 is Required to Stabilize CaM KMT Protein
The Hsp90 chaperone machinery comprises numerous partner proteins: scaffold proteins for the Hsp90 complexes and co-chaperones influence the affinity of the chaperone for substrates by regulating the ATPase cycle, recruit the chaperone to specific proteins, or assist in protein folding directly. Another group of Hsp90-interacting proteins comprises substrates, or client proteins, whose folding, stability and conformational maturation are affected by the chaperone activity. It has been reported that several client proteins, such as Akt1, Aha1, Hch1 and Src, bind to the middle domain of Hsp90, whereas co-chaperones bind mostly to the C-terminal domain [20-23].
Therefore, the fact that CaM KMT physically associates with the middle domain of Hsp90 encouraged us to ask whether it is a new client protein. For this purpose, we inhibited the ATPase-dependent chaperone activity of Hsp90 with geldanamycin (GA) and tested the stability of CaM KMT. Geldanamycin is a specific antagonist of Hsp90 that binds to the N-terminal ATP-binding site of Hsp90, destabilizing the association between Hsp90 and its client proteins and resulting in degradation of the client proteins via the proteasome pathway [24, 25]. We transiently transfected HeLa cells with Myc-CaM KMT for 24 h. At the time of transfection we added increasing concentrations of geldanamycin for 24 h. Total cell extracts were then analyzed by Western blotting with anti-Myc. As a control for equal protein loading, we probed the membrane with anti-GAPDH antibody. As shown in Fig. 6A, GA induced a significant, dose-dependent decline in CaM KMT protein levels in comparison with untreated cells. To verify that the sensitivity of CaM KMT to degradation is not tag dependent, HeLa cells were transiently transfected with CaM KMT tagged at the C-terminus with the FLAG tag and subsequently exposed to GA. The result demonstrated that the tag has no effect on the GA-induced loss of CaM KMT protein (Fig. 6B). The effect of GA was specific, since it had no effect on GAPDH protein levels. These results suggest that CaM KMT is a novel client protein, as it depends on Hsp90 chaperone activity for its stability and is downregulated by Hsp90 inhibition.
Discussion
We have previously reported an autosomal recessive 2p21 deletion syndrome in which three genes (SLC3A1, PREPL, PP2Cb) and the first exon of CaM KMT are deleted. We demonstrated that the deletion abolished the CaM KMT transcript in the 2p21 deletion syndrome patients, while the gene is ubiquitously transcribed in normal human tissues such as brain, liver, colon, muscle and lung. The broad transcription profile of the CaM KMT gene includes the tissues affected in the 2p21 deletion syndrome, such as muscle, brain, testis and kidney, suggesting a role for the absence of CaM KMT in the clinical manifestations of the patients.
Here we identified two alternatively transcribed isoforms by 5′ RACE-PCR experiments. These transcripts start outside the deletion borders and are thus expressed in the patients' cells as well as in several normal human tissues. These new transcripts are not predicted to produce truncated CaM KMT proteins, since they do not possess an initiator methionine codon within a good Kozak consensus sequence. However, we cannot rule out the possibility that these transcripts could be translated, since translational initiation has been shown for other proteins lacking the canonical motifs in their initiation codons [26].
We show here for the first time that loss of CaM KMT gene expression in 2p21 deletion syndrome patients results in an accumulation of hypomethylated CaM. This result suggests that CaM KMT is the major methyltransferase of CaM and that there are no compensatory mechanisms for this activity in the patients. The absence of CaM KMT activity can thus contribute to the mental retardation and mitochondrial defect observed in the 2p21 deletion patients but not in the hypotonia-cystinuria patients, in whom only SLC3A1 and PREPL are deleted. The results suggest that the methylation status of CaM may play a role in affecting CaM-dependent signaling pathways; proteins with domains capable of reading protein methylation status have been described [27].
The importance of the methylation status of CaM has been ambiguous. The absence of methylation has been reported not to affect cell growth and viability in a chicken cell line [28]. However, the methylation status of CaM can vary in a developmentally specific manner [13, 29]. While the activities of some enzymes, such as plant NAD kinase, are directly affected by the methylation status of CaM [30], others, like myosin light chain kinase, are not [30, 31]. Considering the relatively high number of proteins known to interact with CaM (over 300), there are likely many proteins that interact differentially with methylated versus non-methylated forms of CaM. We also noted an apparent automethylation of CaM KMT, but do not know the site of methylation or whether it carries any biological significance. This type of autocatalytic activity has been shown for several enzymes, and it can affect different protein functions. For instance, inhibition of enzymatic activity by automethylation was identified in the DNA-(cytosine-5)-methyltransferase (m5C-MTase) M.BspRI [32], as was repression for Metnase, a human SET- and transposase-domain protein that methylates histone H3 and promotes DNA double-strand break repair [33]. A different effect of automethylation is seen for the histone H3 methyltransferase G9a: autocatalytic G9a methylation was found to be important for protein-protein interactions, creating a binding site that mediates the in vivo interaction with the epigenetic regulator heterochromatin protein 1 (HP1) [34, 35]. The significance of the automethylation is not known; for Dnmt3a, it was suggested to be either a regulatory mechanism that could inactivate unused DNA methyltransferases in the cell, or simply an aberrant side reaction caused by the high methyl-group-transfer potential of AdoMet [36].

Fig. 3. … nuclei by DAPI (blue), and the merged image. (C) Cell lysates (100 µg of protein/lane) from mouse muscle, heart, liver, kidney, brain and spleen were resolved by SDS-PAGE, transferred to a nitrocellulose membrane, and blotted with (1) immune and (2) pre-immune affinity-purified polyclonal anti-CaM KMT serum. Anti-Hsp90 antibody served as the protein loading control; 100 µg of protein/lane were analyzed. The positions of CaM KMT and Hsp90 are indicated by arrows. (D) GFP-CaM KMTsh is localized to the Golgi. COS-7 cells were transfected with the GFP-CaM KMT short variant and immunostained with primary antibodies against the Golgi 58K protein. GFP-CaM KMTsh was detected directly by fluorescence microscopy (green) and the 58K Golgi protein was visualized with Cy3-labeled secondary antibodies (red). Cell nuclei were stained with DAPI (blue). Shown is the merged image presenting colocalization (in yellow) of the GFP-CaM KMT short protein with the Golgi apparatus. doi:10.1371/journal.pone.0052425.g003
Our analysis of the subcellular localization of CaM KMT showed both cytoplasmic and nuclear localization. Taken together, these observations suggest that CaM KMT activity probably takes place in both compartments. The distribution of CaM KMT between the nucleus and the cytoplasm seems equal in all cells, suggesting that the shuttling is not a cell cycle dependent event. However, the purpose and the mechanism of the shuttling into the nucleus remain to be further investigated. The intracellular distribution of calmodulin was also found to be both nuclear and cytoplasmic. Little is known about how the subcellular localization of calmodulin is regulated, a process that, by itself, could regulate calmodulin functions [37]. Calmodulin is the major calcium sensor in neurons when present in the cytoplasm [38], while in the nucleus calmodulin binds to some co-transcription factors, like BAF57, a member of a complex involved in the repression of neuronal specific genes [39]. The mental retardation in the patients lacking CaM KMT may suggest an important role for CaM KMT in neuronal functions. Since we previously reported reduced activity of all mitochondrial respiratory complexes except complex II in the 2p21 deletion syndrome patients [1], it was possible that CaM KMT would have a mitochondrial localization (we have tested the subcellular expression of all other genes deleted in the 2p21 deletion syndrome and none localizes to the mitochondria; data not shown). It could localize similarly to C20orf7, a predicted methyltransferase that is essential for complex I assembly or maintenance and may methylate NDUFB3, a complex I subunit; C20orf7 is peripherally associated with the matrix face of the mitochondrial inner membrane [40].
The CaM KMTsh-GFP fusion protein was found to localize to a perinuclear structure resembling the Golgi complex in COS-7 and HeLa cells. However, the precise function of this variant remains obscure; the fourth exon specific to this variant is not evolutionarily conserved.
Figure 4 (partial legend). Lysates of HEK293 cells transiently transfected with FLAG-CaM KMT or FLAG were immunoprecipitated with anti-FLAG antibody. The precipitated proteins were subjected to SDS-PAGE and then Coomassie stained. Molecular mass markers in kDa are indicated on the left. The band of approximately 90 kDa (shown with the asterisk) was excised from the gel and analyzed by mass spectrometry. The heavy chains of the antibodies (~50 kDa), two nonspecifically bound proteins of about 70 kDa, and the immunoprecipitated FLAG-CaM KMT protein were also observed. (B) Alignment of the protein sequences of Hsp90α and Hsp90β. The bold stretches of amino acids (26% of the protein sequence) represent peptide sequences identified by mass spectrometry in the NCBI data bank matching Hsp90α and Hsp90β. Amino acids that differ between Hsp90α and Hsp90β, present in the sequenced peptides, enable the isoforms to be distinguished (shown in red). (C) CaM KMT and Hsp90 proteins immunoprecipitate each other. HEK293 cells were transiently transfected with Myc-CaM KMT or an empty Myc vector and, 48 h after the transfection, equal protein amounts of whole cell lysates were immunoprecipitated using an anti-Myc (left), anti-Hsp90 (right) and mock IgG antibody (left) as a negative control. The immunoprecipitates were subjected to Western blot analysis using anti-Myc and anti-Hsp90 antibodies as indicated. Equal protein amounts in the immunoprecipitation assays were demonstrated by analysis of 1% input. These experiments were repeated three times with identical results. doi:10.1371/journal.pone.0052425.g004
Following the generation of anti-CaM KMT polyclonal antibodies, CaM KMT expression was confirmed in various mouse tissues: spleen, brain, kidney, liver, heart and muscle. These results support the suggestion that CaM KMT is a ubiquitously expressed protein, highly expressed in the tissues affected in the 2p21 deletion syndrome.
In this work we have shown that CaM KMT interacts with the Hsp90 protein. Hsp90 is a molecular chaperone and the most abundant heat shock protein under normal conditions. It exhibits ATPase activity which is essential for its chaperone function. Hsp90 binds to an array of client proteins that require its chaperone function for their folding, stabilization, ligand binding and activation. In addition, Hsp90 interacts with various types of cochaperones, which regulate the ATPase activity of Hsp90, mediate the folding and activation of the client proteins, and direct Hsp90 to interact with specific client proteins [7]. Our mass spectrometry results have shown that both isoforms, Hsp90α and Hsp90β, interact with CaM KMT. We mapped the interaction of CaM KMT with Hsp90 to the middle domain of Hsp90 and showed that the binding is direct, by performing pull-down experiments with purified Hsp90 fragments. The middle domain is known to act as a discriminator between different types of Hsp90-interacting proteins, mostly the client proteins [21]. To determine whether CaM KMT is a co-chaperone or a client protein, we inhibited the ATPase chaperone activity of Hsp90 with geldanamycin. Since the inhibition led to a significant decrease in CaM KMT protein levels, we concluded that CaM KMT is a novel Hsp90 client protein.
Another methyltransferase protein, Smyd3, was shown to interact with Hsp90, and this association was demonstrated to enhance the catalytic activity of Smyd3 [41]. Presumably, the biological activity of CaM KMT is also regulated by Hsp90 through a mechanism which requires the CaM KMT-Hsp90 interaction. We conclude that CaM KMT is a novel client protein that binds directly to the middle domain of Hsp90, and that Hsp90 plays a general role in maintaining its expression levels. In this study we demonstrated that the novel CaM Lys-115 methyltransferase CaM KMT has a cytoplasmic and nuclear localization, that loss of CaM KMT results in a hypomethylated status of CaM, and that CaM KMT is a novel Hsp90 client protein. This study provides a first step toward basic information on CaM KMT, which is deleted in patients with two contiguous gene deletion syndromes.
Figure S1. Mass spectrometric analyses of CaM purified from lymphoblastoid cell lines of 2p21 deletion patients and normal individuals. CaM purified using a phenyl sepharose resin was subjected to trypsin digestion and peptide analysis. CaM extracted from 2p21 patients (A) showed one peptide corresponding to the digested fragment L116-R126, an indication that K115 was not methylated, and one peptide corresponding to H107-R126, in which the trypsin cleavage site was missed but which also shows that K115 is not methylated. The missing peptides corresponding to the same sequence H107-R126, had trimethyllysine been present, are reported in parentheses in a smaller font. The expected region of the spectrum where their masses should be visible is indicated by the arched shape. The analysis of CaM from a normal individual (B) showed two peptides corresponding to the sequence H107-R126 containing one or two oxygens (indicated in the figure by ox), both trimethylated at position 115 (K3Me). The peptides seen in panel A (not containing the trimethyl group on K115) were not detected in the normal individual, as also demonstrated by panel C, where the peptide L116-R126 is not visible in the wild type CaM spectrum. (TIF)
[Truncated methods fragment] ... AdoMet (from PerkinElmer), in 100 mM sodium phosphate buffer, pH 7.4, at 37 °C for 1 hour. In the control reaction, unlabeled (cold) AdoMet (Sigma) was also added to a final concentration of 100 µM. After incubation, the reaction was terminated by the addition of SDS sample buffer, and the samples were subjected to 12% SDS-PAGE; the gel was stained with Coomassie blue (Imperial protein stain kit, Pierce). For fluorography, gels were treated with 2,5-diphenyloxazole (PPO) (Sigma), vacuum dried at 70 °C and exposed to X-ray scientific imaging film (Kodak, MS) at −80 °C for 7-14 days. (TIF)
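The mass spectrometric argument above rests on two simple facts: trimethylation of a lysine adds three CH₂ units to the peptide mass, and trypsin does not cleave after a modified lysine, which produces the missed-cleavage peptide H107-R126. A minimal Python sketch of the expected mass shift (monoisotopic values; the peptide mass M0 below is a hypothetical placeholder, not the measured CaM value):

# Monoisotopic mass added per methyl group is one CH2 unit
# (C = 12.000000, H = 1.007825), since each CH3 replaces an H.
CH2 = 12.000000 + 2 * 1.007825  # 14.015650 Da

def methylation_shift(n_methyl):
    """Mass added to a peptide carrying n methyl groups on a lysine."""
    return n_methyl * CH2

print(f"trimethyl (K3Me) shift: {methylation_shift(3):.4f} Da")  # 42.0470

# Hypothetical illustration: if the unmodified missed-cleavage peptide
# H107-R126 had monoisotopic mass M0, the trimethylated form expected in
# normal individuals would appear at M0 + 42.047 Da; its absence in the
# patients' spectra indicates an unmethylated K115.
M0 = 2300.000  # placeholder, NOT the real H107-R126 mass
print(f"expected K3Me peptide mass: {M0 + methylation_shift(3):.3f} Da")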
"Biology",
"Medicine"
] |
Analysis of the Nature and Level of Social Capital in Smallholder Grain Farmers Marketing Groups in Kenya
Bridging and bonding social capital have been known to widen the benefits of collective action. Data drawn from 100 smallholder grain farmers groups in the Mt. Kenya region were used to measure three dimensions of bonding social capital, namely relational, cognitive and structural. The results indicated that the groups' bonding social capital was relatively high and uniform, as shown by strong close connections, trust among members and a shared common vision. However, the groups varied significantly (p ≤ 0.1) in their level of bridging social capital: high-performing groups in collective grain marketing had the highest average score (0.88), followed by average groups (0.44), while low-performing groups had the least (0.35). This indicates that bridging social capital could have had a positive and significant influence on group grain marketing performance, and that strong bridging social capital embedded within a group with strong bonding social capital fosters more successful collective action.
Introduction
The availability of bridging social capital (linkages) to institutions and individuals has been known to improve their access to resources and opportunities, giving them an added advantage over those without such linkages (Lawal et al., 2009). Liang et al. (2015) add that the availability of trust, close connections, reciprocity and cooperation among market agents could also strengthen their gains from the services and goods produced.
Social capital has attracted diverse definitions, interpretations, forms and methods of measurement. Sander (2015) defined social capital as the total value of social networks (the people one knows) and the inclinations that arise from these networks to do things for others (norms and reciprocity). This creates value for the connected people in the networks and for bystanders (free riders) as well. Dill (2015) adds to Sander's definition that social capital comprises social resources, including networks for cooperation, support and mutual trust. The Organisation for Economic Co-operation and Development (OECD, 2001) offers a similar definition, in which social capital refers to networks and shared norms and values that enhance cooperation and understanding within or among groups. Common across all these definitions are networks, cooperation and close social cohesion. Additionally, they show that social capital is embedded in social networks and groups of people with close connections (Lewis and Chamlee-Wright, 2008).
There is a general consensus among economists that the traditional types of capital (human, financial, natural and physical) only partially determine the economic growth and performance of an individual or an institution (Lawal et al., 2009; Dill, 2015). These types of capital overlook the way economic actors interact and organize themselves to spur higher economic performance (Dill, 2015). This missing link is what Lawal et al. (2009) refer to as social capital. They also argue that, like any other type of capital, social capital can be accumulated over time. Nilsson, Svendsen and Svendsen (2012) add that some network resources like social capital, though not visible to the naked eye, can have an economic impact on the enterprises that are part of the networks. Fischer and Qaim (2014) further argue that linking smallholder farmer groups to emerging high-value chains, umbrella bodies and supporting organizations has been viewed as a pillar to strengthen their performance and enhance their sustainability. This not only provides opportunities for efficient information flows and capacity building but also offers a bridge for the groups to forge effective business relations in emerging markets (Fischer and Qaim, 2014; Ochieng, 2014).
Literature review
This section provides a review of the social capital literature. The history of social capital, its dimensions and its embeddedness are discussed, as well as the role of social capital in organizations. The review informs the methodologies applied to analyze social capital in this study.
Dimensions and Embeddedness of social capital
The conceptualization that economic behaviours are embedded in social capital was popularized by Granovetter (1985), who distinguished between structural and relational embeddedness of social capital. Nahapiet and Ghoshal (1998) define structural embeddedness as the presence of impersonal linkages or network ties among actors, either people or units. Relational embeddedness is defined as the personal relationships people have developed with each other over time, whose key facets include trust and feelings of closeness. This is what Lewis and Chamlee-Wright (2008) refer to as bonding social capital, found in relatively small and homogeneous groups of people with a shared common identity and norms of reciprocity.
Trust and shared goals are important in governing repeated face-to-face interactions among members of a group. Structural embeddedness gives rise to bridging social capital, which contributes to social change as people with different social structures cooperate and share resources (Lewis and Chamlee-Wright, 2008). Granovetter (1985, 1992) argues that transaction cost economics and rational choice theory are not sufficient to explain people's participation in markets, as they ignore their involvement in social networks which dissuade them from behaving opportunistically. On this view, Granovetter (1992) adds that there is a need for research that considers how people's economic actions are influenced by, and in turn influence, social networks. However, a key concern in the literature has been how to define, identify and measure social capital.
Role of social capital in organizations
Social relations tend to exist among people in an organization. Therefore, the influence of social relations on an organization's activities has been the main theme in most studies on social capital. How social capital influences an organization's conduct, structure and institutions has been one of the key questions of social theory. There is a broad consensus in the literature that social capital is a valuable asset which holds promise for explaining performance at various levels (Granovetter, 1992; Moran, 2005). However, social capital is not as separable from an organization as financial or physical capital, nor is it as mobile as human capital; rather, it is firmly bound within the firm's organization, strategy and development (Nahapiet and Ghoshal, 1998; Walker, 1998). As a result, social capital can be a firm's long-lasting source of competitive advantage (Adler and Kwon, 2002). Consequently, the influence of social capital on the performance of individuals, small groups, large organizations and nations has attracted wide scholarly attention over the years (Walker, 1998; Moran, 2005; Popp et al., 2013). Popp et al. (2013) concluded that social capital, especially trust, creates opportunities to be more innovative and to work collaboratively to solve complex issues for the mutual gain of the actors. When a group of people trust each other, it is easier for them to engage in collaborative activities for mutual gain and at lower transaction costs (Nilsson et al., 2012). Nilsson et al. (2012) also add that, though agricultural markets may exist in different parts of the world, there are always strong connections among collective action members with regard to the collection of agricultural products. Coleman (1990) noted that social capital's influence comes from closed networks of personal relations that foster collective action among individuals in a group, because such individuals are able to reinforce their norms of exchange, easily monitor each other, and enforce sanctions. This helps to create cohesion, constrain exploitative behavior, reduce uncertainty in exchange and promote cooperation.
Different types of capital in farmer groups and other organizations are said to influence performance. The literature shows that social capital is one of the important types of capital in an organization. However, social capital has been relatively overlooked, even though it could also explain the differences in performance among farmer groups. Additionally, very little is known about the level of bonding and bridging social capital within the farmer groups involved in collective grain marketing in Kenya. Therefore, to fill this knowledge gap, this paper provides an analysis of the levels of social capital among grain marketing farmer groups in the Mt. Kenya region of Kenya.
Methodology
Research design
Simple random sampling using a table of random numbers was used to select a sample of 100 groups from a population of 273 registered smallholder grain farmers groups in the Mt. Kenya region of Kenya as at December 2016.
Face-to-face interviews were used to collect quantitative and qualitative data from each farmer group. Respondents' feedback was recorded in a structured questionnaire partially adapted from the World Bank's Social Capital Assessment Tool (SOCAT) at the organization level (World Bank, 2011). Respondents representing each group included both leaders and members, with each group having between five and twelve participants. This helped reveal the consensus views of a group's members. Each farmer group interview was guided by a moderator and one observer who worked collaboratively. The moderator's main role was to facilitate the interview by asking the questions, probing key issues and systematically focusing the interview on the main issues of interest. The observer's main role was to record data into the questionnaire.
Indicators of social capital
Three indicators, covering the relational, cognitive and structural dimensions, were used to proxy bonding social capital for a farmer group. A Likert scale with five pre-coded items (1 if strongly disagree; 2 if disagree; 3 if neutral; 4 if agree; 5 if strongly agree), for ranking six statements, was used to measure the level of bonding social capital in each group in terms of the three indicators. Each group member was expected to give a view of what they believed was the status of the group in terms of the social capital statements. Each member wrote their view (a score based on the Likert items) on a card, and the consensus view was taken as the view of the majority of the members. There were two statements to measure the level of trust between members and their leaders, while an extra statement captured changes in the level of trust in each group over the three years preceding the survey. After explaining what a vision was, the members were asked to rate how far members shared the vision of the group. Finally, coming from the same locality (village) and having close relatives within the same group were used as indicators of close connections. The statements are listed in the section that follows; the consensus step can be sketched as follows.
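As a concrete illustration of the consensus step just described, the view of a group can be taken as the most frequent score among the members' cards. A minimal sketch (the example cards are hypothetical):

from collections import Counter

def consensus_score(member_scores):
    """Return the majority (modal) Likert score among the members' cards."""
    return Counter(member_scores).most_common(1)[0][0]

# Hypothetical cards from one group for the statement
# "Members trust the leaders with making decisions":
cards = [4, 5, 4, 3, 4, 4, 5]
print(consensus_score(cards))  # 4, i.e. "agree" is the group's consensus view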
a. Trust: Members in this group trust the leaders with making decisions that are for the members' benefit. Members in this group trust the leaders with the group's assets and members' money. Trust in the last three (3) years has improved.
b. Group vision: The majority of the group members understand what they would like to see the group achieve in the next 10 years.
c. Close connections: The majority of the group members are close relatives. The majority of the group members come from this village.
Different proxies were used to estimate the level of bridging social capital in smallholder grain farmer groups from the Mt. Kenya region of Kenya. The proxies were: linkage (ties) to Non-Governmental Organizations (NGOs), projects, government institutions, and membership or linkage with a farmers' Community Based Organization (CBO) or bulk buyer. These ties were direct or indirect, including membership ties, information relations, communication ties and business cooperation. The score was zero (0) for no linkage, 1 for direct linkage and 2 for indirect linkage. Indirect linkage referred to cases where a group got assistance or resources from a given support actor only through another actor. On the other hand, direct linkage referred to cases where a group and a support actor worked together as a formal team, or informally and actively pursued opportunities of mutual gain through collaboration, partnership or membership. Direct linkage was therefore considered better than indirect linkage. The bridging index construction is sketched below.
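A bridging-capital score per group can then be assembled from the 0/1/2 linkage codes over the support actors. Since the paper treats direct linkage as better than indirect, the recoding below (direct = 1.0, indirect = 0.5) is our assumption about how the codes are aggregated into an index, and the actor list is illustrative:

# Linkage coding from the survey: 0 = none, 1 = direct, 2 = indirect.
# Assumed recode for an index where higher = stronger bridging capital.
RECODE = {0: 0.0, 1: 1.0, 2: 0.5}

def bridging_index(linkages):
    """Average recoded linkage strength over all support actors."""
    return sum(RECODE[v] for v in linkages.values()) / len(linkages)

# Hypothetical group: direct ties to the Ministry of Agriculture and a CBO,
# an indirect tie to an NGO, and no tie to a project or bulk buyer.
group = {"MoA": 1, "CBO": 1, "NGO": 2, "project": 0, "bulk_buyer": 0}
print(round(bridging_index(group), 2))  # 0.5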
Analytical framework
Based on the literature review and the conceptual framework, the selected variables were adequate to capture the key levels of social capital in the farmer groups. To measure the levels of social capital, separate indices for bonding and bridging social capital were generated using Principal Component Analysis (PCA). PCA is a multivariate statistical technique used to reduce the number of variables (the score for each statement) in a data set to a lower dimension, revealing the simplified structures that underlie it. That is, PCA creates uncorrelated indices or components from an initial set of n correlated variables. Following Wu (2012), each index component was a linear weighted combination of the initial variables, as demonstrated for a set of variables X1 to Xn in Equation 1. Model specification:
PC_m = b_m1 X_1 + b_m2 X_2 + … + b_mn X_n    (Equation 1)
where b_mn represents the weight (the coefficient of the PCA rotated component) for the m-th component and the n-th variable. First, SPSS (Statistical Package for the Social Sciences) software was used to measure sample adequacy using the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's Test of Sphericity. The KMO value should be greater than 0.5 for a satisfactory factor analysis to proceed, while a significant Bartlett's Test value indicates that there are some relationships between the variables included in the analysis (Field, 2005; Yong and Pearce, 2013). Additionally, communalities after extraction should preferably be above 0.5 (Field, 2005; Yong and Pearce, 2013). Field (2005) added that the average communality should be above 0.6 for sample sizes greater than 250; in this case the sample size was 400 observations. Table 1 indicates that the sample was adequate for a PCA multivariate statistical analysis, as shown by a KMO value of 0.570, while Bartlett's Test of Sphericity was also significant with a p-value = 0.000. The average communality (5.064/6 = 0.844) is greater than 0.6 (Table 2). For example, over 71% of the variance in "members trust leaders' decisions" is explained, while over 98% of the variance in "members understand the group vision" is explained. This further shows that all the variables were robust enough to be included in the analysis; PCA was therefore suitable for further analysis. Then, a pairwise correlation test was carried out to check whether the variables were correlated. All the variables had a correlation score below 0.5, showing weak correlation between the variables, as shown in Table 3. All six variables were then included in the PCA matrix, which was rotated using the orthogonal varimax (Kaiser off) technique to standardize the coefficients. Components with eigenvalues of more than one were selected, as they account for the most variance. The scores for the index were then predicted based on the rotated factors. Cronbach's alpha (α) was computed to check whether the selected items were related to one latent factor, using the scale reliability coefficient, which should preferably be above 0.5 for acceptance (Nelson, 2007; Wu, 2012). In this case it was 0.5211, hence all six bonding capital variables were accepted for further analysis, as shown in Table 4. A first-principles sketch of these adequacy checks and of the index construction follows.
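The adequacy checks and the index construction described above can be reproduced with standard linear algebra. The sketch below implements KMO, Bartlett's test of sphericity, Cronbach's alpha, and a PCA index with varimax rotation from first principles; random data with one induced correlation stand in for the six bonding-capital items, and the thresholds follow Field (2005):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))       # placeholder for 400 responses x 6 items
X[:, 1] += 0.6 * X[:, 0]            # induce some correlation between items

def bartlett_sphericity(X):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2.0
    return chi2, stats.chi2.sf(chi2, df)

def kmo(X):
    R = np.corrcoef(X, rowvar=False)
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / d                        # partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)
    r2, a2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + a2)

def cronbach_alpha(X):
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(0, ddof=1).sum() / X.sum(1).var(ddof=1))

def varimax(L, iters=50, tol=1e-6):
    p, k = L.shape
    Rm, d = np.eye(k), 0.0
    for _ in range(iters):
        Lr = L @ Rm
        u, s, vt = np.linalg.svd(L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(0)) / p))
        Rm, d_old, d = u @ vt, d, s.sum()
        if d_old and d / d_old < 1 + tol:
            break
    return L @ Rm

chi2, pval = bartlett_sphericity(X)
print(f"KMO = {kmo(X):.3f} (want > 0.5), Bartlett p = {pval:.4f}")
print(f"Cronbach alpha = {cronbach_alpha(X):.3f} (accept if > 0.5)")

# PCA on the correlation matrix: keep components with eigenvalue > 1,
# varimax-rotate the loadings, and score each observation on the factors.
Z = (X - X.mean(0)) / X.std(0, ddof=1)
evals, evecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(evals)[::-1]
keep = evals[order] > 1.0
loadings = evecs[:, order][:, keep] * np.sqrt(evals[order][keep])
scores = Z @ varimax(loadings)       # index values, one row per observation
print("components kept:", int(keep.sum()))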
Results and discussion
The results for the bonding and bridging social capital characteristics of farmer groups in the three clusters are shown in Table 6. Analysis of variance (ANOVA) F-test results were used to test whether there was a significant difference across the three clusters of farmer groups. Cluster means and standard deviations were also computed, and the F-test can be sketched as follows.
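The cluster comparison in Table 6 amounts to a one-way ANOVA of each index across the three performance clusters; a minimal sketch with hypothetical index values:

from scipy.stats import f_oneway

# Hypothetical bridging-capital indices for the three performance clusters.
high = [0.90, 0.85, 0.92, 0.88, 0.86]
average = [0.45, 0.40, 0.48, 0.44]
low = [0.33, 0.36, 0.35, 0.37]

F, p = f_oneway(high, average, low)
print(f"F = {F:.2f}, p = {p:.4f}")  # a small p rejects equal cluster means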
Bonding social capital
Analysis of variance (ANOVA) F-test results show that the cognitive (having a shared and well-understood group vision), structural (close connections in a farmer group, such as coming from the same village and having close family members in it) and relational (trust among members and leaders in the group) dimensions of bonding social capital were not significantly different across the three clusters. The three indicators of bonding social capital were further combined using PCA to show the overall level of bonding social capital among the farmer groups. Analysis of variance results for the combined bonding social capital index further confirm that there was no significant (F=0.69, p=0.50) difference across the three clusters, as shown in Table 6.
The level of a shared vision was relatively above average for the majority of the groups, with an overall mean of 3.65. Most of the groups disagreed that they had close connections in the group, the mean for all groups being 2.67. The level of trust in leaders' decisions, trust in leaders with group assets, and the rise in the level of trust in the group over the last three years were also not significantly different across the three clusters. This shows that farmer groups in the Mt. Kenya region of Kenya had relatively the same level of bonding social capital despite the variation in marketing performance. Bonding social capital therefore acts as a necessary condition before any meaningful collective action takes place. The findings concur with Pretty et al. (2011), who pointed out that success in collective agricultural activities stems from developing bonding social capital among farmers with a common interest.
Bridging social capital
Bridging social capital clearly distinguishes the three clusters of grain farmer groups in the Mt. Kenya region of Kenya, as it was significantly (F=10.49, p=0.00) different, as shown in Table 6. The average score for all the groups was 0.56. This was well below the score for high-performing groups (0.88), while the scores for average and low-performing groups were below the overall mean, at 0.44 and 0.35 respectively, as shown in Figure 1.
FIGURE 1: FARMER GROUPS' LEVEL OF BRIDGING SOCIAL CAPITAL
The high score for high-performing groups could be due to their relatively high direct and indirect linkages with NGOs, projects, government institutions, bulk buyers, and membership of farmers' CBOs. The majority of the high-performing farmer groups had a direct linkage with the Ministry of Agriculture (MoA), umbrella farmer group associations or farmer Community Based Organizations (CBOs), and select Non-Governmental Organizations (NGOs) working with farmer groups in the region. These organizations were said to foster linkages with bulk grain buyers and to enhance access to inputs and training. This shows that strong bonding social capital backed up by equally strong bridging social capital fosters more collective action, in relation to collective marketing of grains, than strong bonding capital with weak bridging social capital. This makes bridging social capital akin to a sufficient condition for fostering higher performance in collective marketing. The results concur with Pretty et al. (2011), who concluded that for people to gain the most from social capital there is a need for a balanced mixture of bonding, bridging and linking social capital. This also agrees with Van and Adekunle (2012), who concluded that bridging social capital can strengthen farmers' access to knowledge and resources and their adoption of agricultural innovations. However, the findings differ from Ruben and Heras (2012), who argue that if bridging social capital is stronger than bonding social capital, collective action in agriculture becomes more feasible.
Summary, conclusion and recommendations
Summary
To sum up, farmer groups in the Mt. Kenya region of Kenya have relatively strong bonding social capital. This is shown by relatively similar and high levels of the relational, cognitive and structural dimensions of bonding social capital, possibly because most of the groups were founded and bound by the principle of mutual trust and reciprocity. Bridging social capital, measured by external linkages with different actors, was statistically different across the three farmer group clusters: high-performing groups had the highest average score (0.88), followed by average groups (0.44), with low-performing groups scoring the least (0.35).
Conclusion
1. Bonding social capital is the foundation of any meaningful collective action. As a result, farmer groups were similar in terms of having relatively equal and strong bonding social capital regardless of their success in fostering collective action.
2. High levels of bridging social capital embedded within a group with strong bonding social capital foster higher performance in terms of collective marketing of grains. Groups can further strengthen their bonding social capital by focusing on building more internal cohesion in the form of trust among members and leaders, working as a team to achieve a shared vision, and abiding by the set group rules.
3. To strengthen bridging social capital across all the farmer groups, it is important for the groups to spread their tendrils and link with new actors along the value chain, especially those that link them to new, lucrative markets.
4. Future research can consider measuring the change in the level of bridging and bonding social capital in the farmer groups and compare its effects with the effects of other forms of capital in the groups, such as physical and financial capital.
"Economics"
] |
Galileon Radiation from a Spherical Collapsing Shell
Galileon radiation in the collapse of a thin spherical shell of matter is analyzed. In the framework of a cubic Galileon theory, we compute the field profile produced at large distances by a short collapse, finding that the radiated field has two peaks traveling ahead of light fronts. The total energy radiated during the collapse follows a power law scaling with the shell's physical width and results from two competing effects: a Vainshtein suppression of the emission and an enhancement due to the thinness of the shell.
Introduction
The discovery of the accelerated expansion of the universe [1] triggered a variety of attempts to modify the infrared sector of gravity in order to avoid the introduction of a cosmological constant, whose value cannot be explained naturally in the framework of quantum field theory [2]. Among the earlier proposals was the five-dimensional Dvali-Gabadadze-Porrati (DGP) model [3], in which the gravitational dynamics is governed by an action containing Einstein-Hilbert terms for both the ambient metric and its pullback on our four-dimensional braneworld. The hierarchy between the five- and four-dimensional Planck masses results in an effective mass term for the four-dimensional graviton. Its phenomenological prospects have been widely studied (see, e.g., [4] and references therein).
One consequence of the formulation of the DGP model was a renewed interest in massive gravity, which has turned out to be one of the most interesting large-scale modifications of gravity studied in the last decade (see [5,6] for reviews). The graviton mass breaks general covariance, which allows in principle for a degravitation of the cosmological constant term [7]. Invariance under coordinate transformations can nevertheless be restored through the introduction of Stückelberg fields [8].
Massive gravity, at least in its most naive formulation, is not free from potential problems.
One of them is that the additional graviton polarizations do not decouple in the limit of zero mass, so General Relativity (GR) is not recovered. This van Dam-Veltman-Zakharov (vDVZ) discontinuity is overcome through the Vainshtein mechanism [9], in that the strong nonlinearities of the massive gravity Lagrangian screen the matter coupling of the massive graviton's scalar mode at distances below some characteristic Vainshtein radius. A second problem is the emergence of classical Ostrogradsky instabilities [10] or ghost states at the quantum level, in particular the so-called Boulware-Deser ghost [11]. This dangerous mode is avoided through a resummation of nonlinear terms [12], leading to a ghost-free theory of massive gravity at all orders.
In both the DGP model and massive gravity, there is a limit in which the massive graviton's scalar mode π(x) decouples from the transverse components h_µν(x), resulting in a scalar field theory invariant under Galilean transformations, π(x) → π(x) + a + b_µ x^µ, and characterized by the strong-coupling scale
Λ = (M_Pl m²)^{1/3},    (1.1)
with m the graviton mass. These Galileon theories [13] have some interesting properties as field theories: the scale Λ is stable under quantum corrections, and there is a regime in which non-Galileon interactions remain unimportant [14,13]. Galileon field theory has been extensively studied in a number of physical setups [6,15].
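The numerical value quoted later for this scale is easy to verify: for a graviton mass of order the present Hubble rate, Eq. (1.1) gives Λ⁻¹ of roughly a thousand kilometers. A quick arithmetic sketch in natural units (reduced Planck mass, H₀ ≈ 70 km/s/Mpc; the rounding of the constants is ours):

# Natural units: masses in GeV, lengths via hbar*c = 1.9733e-16 GeV*m.
M_PL = 2.435e18            # reduced Planck mass [GeV]
H0_S = 2.27e-18            # Hubble rate for 70 km/s/Mpc [1/s]
HBAR = 6.582e-25           # [GeV*s]
GEV_INV_TO_M = 1.9733e-16  # [m per GeV^-1]

m = H0_S * HBAR                        # graviton mass ~ H0, in GeV
Lam = (M_PL * m ** 2) ** (1.0 / 3.0)   # Eq. (1.1)
print(f"Lambda^-1 = {GEV_INV_TO_M / Lam / 1e3:.0f} km")  # ~1100 km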
Gravitational collapse is a powerful testbench in gravitational physics. Analyzing the problem of a collapsing sphere of dust, Oppenheimer and Snyder [16] were able to glimpse the nonsingular character of the horizon, decades before a mathematical solution to the issue was available. As a ubiquitous process in astrophysics, it is the source of many observational signals in the Universe [17].
There are several reasons justifying the study of gravitational collapse in the context of massive gravity and Galileon theories. In GR, Birkhoff's theorem prevents the emission of gravitational radiation from spherical collapse. Gravitational theories with scalar degrees of freedom, on the other hand, allow the radiation of energy even when spherical symmetry is preserved [18]. The opening of new channels for the radiation of energy can be relevant in a number of astrophysical processes and might be used to put the theory to the test. In the case of massive generalizations of GR, and particularly in Galileon theories, the very special features of these scalar modes might lead to distinct observational signals.
So far, gravitational collapse in Galileon theories has been studied mostly in the context of structure formation [19] and the Vaidya solution [20]. In this paper we analyze the problem of Galileon emission at the onset of the gravitational collapse of a spherical thin shell of matter.
Our model consists of a delta-function shell that starts collapsing with or without initial velocity, stopping the collapse after a short time. Due to its coupling through the trace of the energy-momentum tensor, this collapsing matter introduces a time-dependent perturbation acting as a source for a radiating Galileon field.
One of the consequences of considering the ideal situation of a delta-function shell is that we have field gradients above the Galileon scale Λ, leading to a breakdown of the effective field theory.
In addition, the total energy radiated during the process diverges due to the contribution of arbitrarily high frequencies. In order to avoid these problems we carry out our calculations using a physical cutoff in frequencies, whose value is determined by the physical width of the shell, which we take to be much larger than the cutoff scale Λ⁻¹. What we find is that the profile of the Galileon field detected at large distances exhibits two pulses propagating ahead of light fronts. As for the total energy radiated, we obtain a very simple scaling with the shell's physical width.
The plan of the paper is as follows. In the next section we present the model to be studied, an imploding delta-function spherical shell collapsing under its own gravity. In Section 3 we detail the perturbative approach to be used and solve for the profile of the Galileon field at large distances from the source. After this, the total energy radiated is computed in Section 4, whereas Section 5 is devoted to the analysis of the next-to-leading order correction, and in particular to the case in which the collapse starts from rest. Finally, in Section 6 we comment on some possible directions for future work.
The model
We work in the context of a cubic Galileon theory with Lagrangian (2.1) [6], where Λ is the Galileon energy scale, M_Pl is the Planck mass, and T is the trace of the matter energy-momentum tensor. To address the problem of Galileon emission in the gravitational collapse of a spherical source we consider an energy-momentum tensor of the form (2.2). The time evolution of ρ(t, r) is determined by the equations of gravity. In our calculation we also follow the strategy of [21] and treat time evolution as a perturbation on a static background. In other words, we treat the problem perturbatively and split the energy-momentum tensor into a static background and a dynamic perturbation, ρ(t, r) = ρ₀(r) + δρ(t, r), while the Galileon field is split accordingly as π(t, r) = π₀(r) + φ(t, r), (2.4), where π₀(r) is a static, spherically symmetric solution to the Galileon field equations [22].
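For orientation, since the display equations were lost here: the cubic Galileon Lagrangian coupled to matter is usually written in the schematic form
ℒ = −(1/2)(∂π)² − (1/Λ³)(∂π)²□π + (g/M_Pl) π T,
with g a dimensionless coupling of order one. The exact signs and numerical prefactors of the paper's Eq. (2.1) are not recoverable from the extracted text, so this should be read as the standard form rather than the paper's precise normalization.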
To be more specific, let us focus on the Galileon equations sourced by the energy-momentum tensor associated with a static spherical shell located at the position r = R₀, where σ₀ is the surface density of the shell. In choosing a spherical shell instead of a ball we simplify the analysis, in that Galileons are emitted only at the surface of the collapsing body and not from the interior, as would be the case if δT ≠ 0 for r < R. This can be seen as a rough model of the collapse of an outer layer of an astrophysical object over its core.
The equations of motion for the background helicity-0 mode in the cubic Galileon theory follow from (2.1). To solve them, we look for solutions outside and inside the shell and match them across r = R₀, using the conditions derived from integrating Eq. (2.6), which gives the matching condition (2.8). With this result, we integrate Eq. (2.6) over a ball of radius r > R₀. This gives a quadratic equation for π₀′/r, and the condition that the Galileon vanishes at infinity selects the − branch of its solutions. Notice that this is the same solution as for the case of a pointlike particle with mass m = 4πσ₀R₀². For the shell interior, we just integrate the homogeneous equation. Plugging Eqs. (2.9) and (2.12) into the matching condition (2.8), we fix the value of the integration constant C. Equation (2.12) therefore has two solutions: a trivial one, π₀ = 0, together with a nontrivial branch. To find the right background solution for the cubic Galileon we have to take into account that for σ₀ → 0 we should recover a continuous "vacuum" solution π₀ = 0. Thus, we are forced to choose the trivial solution for the interior of the shell and write the full background in terms of the Heaviside step function θ(x). We read the value of the Vainshtein radius r* off this expression. The solution for the background Galileon field π₀(r) obtained by integrating Eq. (2.15) is continuous; the discontinuity in its first radial derivative at r = R₀ is a consequence of the field being sourced by an infinitely thin distribution of matter.
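The branch selection and the two regimes of the background solution are easy to check numerically. In units Λ = 1, and with the source normalized so that the exterior equation reads u + 2u² = r*³/(8r³) for u = π₀′/r (a normalization consistent with the coefficient e₃(r) quoted in Section 3, although the paper's exact prefactors are not recoverable from the text), the branch that vanishes at infinity gives π₀′ ∝ 1/r² far outside the Vainshtein radius and π₀′ ∝ r^(−1/2) deep inside it:

import numpy as np

r_star = 1.0

def u(r):
    # Root of 2u^2 + u - r*^3/(8 r^3) = 0 that vanishes as r -> infinity.
    return (-1.0 + np.sqrt(1.0 + (r_star / r) ** 3)) / 4.0

def slope(r):
    # Local logarithmic slope d ln(pi0') / d ln(r), with pi0'(r) = r u(r).
    h = 1e-4 * r
    return (np.log((r + h) * u(r + h)) - np.log(r * u(r))) / np.log(1 + h / r)

print(f"slope deep inside  (r = 1e-3 r*): {slope(1e-3):+.2f}")  # -> -0.50
print(f"slope far outside  (r = 1e+3 r*): {slope(1e+3):+.2f}")  # -> -2.00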
Thus, our problem has two natural length scales: the radius of the shell R₀ and the Vainshtein radius r*. Let us assume first that the Vainshtein radius is (much) smaller than the radius of the shell. To see whether this approximation is physically relevant, we rewrite the condition r* ≪ R₀ in terms of V₀, the volume enclosed by the shell. Defining the equivalent density of the shell, we get a bound on it. We take the usual value [5] for the cutoff scale, Λ ≈ (1000 km)⁻¹, which is obtained from (1.1) by assuming a graviton mass of the order of the Hubble scale, m ∼ H₀. This value is around the current bounds for the graviton mass [25]. With this we arrive at a bound of the order of the present energy density of the universe. This energy density is completely negligible in an astrophysical setup, so in order to have a physically meaningful model we exclude the case in which the Vainshtein radius is much smaller than the radius of the shell. In the following we work in the case where the radius of the shell lies well inside the Vainshtein radius, R₀ ≪ r*.
Perturbative analysis
Inserting the decomposition (2.4) into the Lagrangian for the cubic Galileon theory (2.1) and keeping terms quadratic in the perturbed quantities leads to a quadratic Lagrangian for the perturbation of the Galileon field φ(x), with an effective metric Z_µν [6]. In terms of this, the equations of motion take the form of Eq. (3.3). The radiating Galileon field is sourced by the perturbation in the trace of the energy-momentum tensor. In the case of a collapsing shell, this perturbation is controlled by R(τ), where τ is the proper time for an observer falling with the shell and R(τ) is given by the solution to the shell equation of motion [23]. Here, M is the mass of the shell as seen by a distant observer and R₀, σ₀ are the initial values of the radius and surface energy density, respectively. The first term on the right-hand side of this equation can be interpreted as the kinetic energy of the shell, whereas the second one is its gravitational binding energy. Once R(τ) is found, the (exterior) time coordinate at the location of the shell is given in terms of proper time by the solution of a further first-order equation. Finally, the time evolution of the surface density is given in terms of R(τ) by Eq. (3.7). Let us consider a physical situation in which the shell is stable for negative times and at τ = 0 implodes with initial velocity Ṙ₀ during a short proper time δτ. The corresponding perturbation δT of the static energy-momentum tensor (2.2) induced by time evolution vanishes for τ < 0 and τ > δτ. Using the equation for the time evolution of the surface energy density (3.7) we can eliminate σ̇(τ) and write δT in terms of our expansion parameter. Once δT is known, the corresponding perturbation in the Galileon field φ(t, r) can be computed as in Eq. (3.12), where G(x, x′) is the retarded Green function of the Laplacian operator defined in Eq. (3.3).
This object has been studied in [24]. In spherical coordinates, it is explicitly given by the solution to Eq. (3.13), where L² denotes the Laplacian over the transverse two-dimensional unit sphere and the functions e_i(r) are given in (3.14); in particular, the angular coefficient reads
e₃(r) = (4r³ + r*³) / (4 √(r³(r³ + r*³))).    (3.14)
Using the fact that the coefficients are time independent, the Green function can be expanded in frequency and multipole modes [24], where the radial part g_ℓ of the Green function satisfies an ordinary differential equation in the radial variable. Due to the spherical symmetry of the gravitational collapse under study, the multipole expansion of the Green function gets truncated to the monopole term ℓ = 0. This means that Eq.
(3.12) reads as a radial integral over the source, where in the second line we have exploited the fact that the perturbation of the energy-momentum tensor is spherically symmetric, so the integration over angles is trivial, and that the integrand vanishes outside the region 0 < τ < δτ. Notice as well that we have changed from the global time coordinate to proper time τ, which accounts for the Jacobian factor f(τ) defined there. Substituting the expression for the perturbation given in Eq. (3.10), we can carry out the integration over the radial coordinate to find an expression involving ∂₂g₀, where ∂₂ indicates the derivative with respect to the second argument of the function.
As explained above, we have to assume that the radius of the shell lies well below the Vainshtein radius (R₀ ≪ r*), whereas we are interested in the radiation reaching an observer located far away from the source (r ≫ r*). We are therefore in the so-called radiation limit, ξ′ ≪ ξ with ξ = ωr, where the function g₀(ξ, ξ′) takes the form given in [24] in terms of a function h and the constant I∞ ≈ 0.253.
In order to compute the derivative in the integrand of Eq. (3.19), we use the Bessel function expansion. In addition, the integral admits a further simplification in the case of a nonrelativistic collapse: taking the radius of the shell much larger than its Schwarzschild radius we can set f(τ) ≈ 1, whereas assuming its velocity during the collapse to be much smaller than the speed of light we have Ṙ(τ) ≪ 1. For a very short implosion, the integral over τ can be linearized. On general grounds we can assume that the matching between the low and high frequency regimes takes place at a frequency ω₀ with ω₀r* ∼ 1, and for the first integral we work in the limit R₀ ≪ r*. The expression of the field given in (3.25) shows that both the low and high frequency contributions to the integral come multiplied by an overall factor R₀^{9/4}. The low frequency contribution shown in Eq. (3.27) is suppressed by a factor (R₀/r*)³, whereas the prefactor of the high frequency modes is just (R₀/r*)^{3/4}. As a consequence, due to the relative suppression of the low with respect to the high frequency modes, we can neglect the former and keep only the high frequency contribution to φ(t, r), given in Eq. (3.29). The dominance of arbitrarily high frequencies is, however, a problem from the point of view that we are dealing with an effective field theory valid below some energy scale Λ. A physical way to avoid this is to consider a finite-size source, in such a way that gradients in the Galileon field are kept below Λ²; this, however, makes the analysis much more involved. Here we use an alternative procedure, consisting in introducing a physical cutoff function in the integral that suppresses high frequencies. The scale of the cutoff is determined by the characteristic width ∆ of the collapsing shell, which is in turn bounded by the cutoff scale, ∆ ≫ Λ⁻¹. In the following, we use an exponential damping factor e^{-εx}, where ε will be taken to be of the order of the shell width. As will be seen later, other choices of the cutoff function lead to modifications of our result by factors of order one. Thus, our analysis is valid in the regime Λ⁻¹ ≪ ∆ ≪ R₀; this in particular means that the radius of the collapsing shell should satisfy R₀ ≫ Λ⁻¹.
The imaginary part of the integral in (3.29) can be computed numerically. To optimize the calculation we use an adaptive mesh in which, after a first sample, the density of points is increased in those regions with finer structure; a generic sketch of this strategy is given below. The results are shown in Fig. 1. We see that a distant observer located at r > r* (the regime of validity of our analysis) observes two consecutive pulses in the profile of the Galileon field, centered at the positions given in Eq. (3.32), where the time difference between the two flashes only depends on the radius of the shell. For R₀ ≫ 2G_N M, a light ray emitted from the surface of the collapsing shell at t = 0 propagates along t − r ≈ 0. Since we are assuming that R₀ ≪ r*, we find that both Galileon pulses arrive before the light ray by a time interval of order r*I∞, and the time difference between the pulses is very small compared to the time of arrival. As a consequence, we find that the Galileon field pulses travel ahead of the light fronts.
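The adaptive sampling can be sketched generically: start from a coarse uniform grid, estimate the local variation of the sampled profile, and bisect the intervals where it is largest. The toy two-pulse integrand below merely stands in for the actual integrand of (3.29), whose exact form is not recoverable here:

import numpy as np

def refine(f, x, n_new):
    """One refinement pass: bisect the n_new intervals where the sampled
    function varies the most (i.e., where the structure is finest)."""
    y = f(x)
    jump = np.abs(np.diff(y))                # variation over each interval
    worst = np.argsort(jump)[-n_new:]        # intervals to subdivide
    mids = 0.5 * (x[worst] + x[worst + 1])   # bisect those intervals
    return np.sort(np.concatenate([x, mids]))

# Toy profile with two narrow pulses of opposite sign near u = +-sqrt(3)/2,
# mimicking the double-peak structure of the radiated field.
f = lambda u: np.exp(-((u - 0.87) / 0.05) ** 2) - np.exp(-((u + 0.87) / 0.05) ** 2)

x = np.linspace(-3.0, 3.0, 41)               # coarse first sample
for _ in range(5):
    x = refine(f, x, 20)
print(f"{x.size} points; finest spacing {np.diff(x).min():.4f} "
      f"vs coarse {6.0 / 40:.4f}")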
Energy radiation
Next we evaluate the energy radiated during the implosion. Computing the energy-momentum tensor for the Galileon perturbation gives the energy radiated per unit solid angle [24]. To evaluate the integrand in this expression, we notice that in the solution given in Eq. (3.29) all dependence on t and r comes through the combination t − r, apart from the overall 1/r factor. This leads to a simple relation between the time and radial derivatives. At large distances we can neglect the r⁻² corrections and write the flux in terms of u = (t − r + r*I∞)/R₀, with the right-hand side independent of r. As expected from the symmetry of the problem, the energy emission is isotropic.
The numerical solution for the Galileon field shown in Fig. 1 indicates that the integrand is strongly damped for large values of |u|, which guarantees the convergence of the total energy radiated during the collapse, the contribution to the integral being peaked around the two pulses at u = ±√3/2. The result can be fitted by the power law f(x_uv) = 0.371 x_uv^{3.005}. (4.6) The right panel of Fig. 2 shows a logarithmic plot of the numerical results in this case, together with this fit function.
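A scaling of the form (4.6) is what a least-squares fit of log E against log x_uv returns. A minimal sketch with synthetic data (the 0.371 and 3.005 values are seeded into the fake data purely to show the recovery):

import numpy as np

rng = np.random.default_rng(1)
x = np.logspace(0.5, 2.5, 20)                          # synthetic x_uv values
E = 0.371 * x ** 3.005 * rng.lognormal(0.0, 0.02, 20)  # noisy power law

# Fit log E = log A + n log x by linear least squares on the log-log plot.
n, logA = np.polyfit(np.log(x), np.log(E), 1)
print(f"f(x) ~ {np.exp(logA):.3f} * x^{n:.3f}")        # ~ 0.371 * x^3.005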
From this we infer the expression (4.7) for the total energy emitted by the collapsing shell, where C is a numerical constant of order 1 depending upon the details of the collapsing object.
We see how the overall size of the total energy radiated results from the competition of two effects: a Vainshtein suppression by a factor (R₀/r*)^{3/2} and an enhancement due to finite-width effects scaling as (R₀/∆)³. This contrasts with what is found for the Galileon radiation from a binary system, where the suppression factor is determined not by the characteristic size of the system but by its frequency [21].
It is important to stress that the dependence on ∆ in Eq. (4.7) cannot be considered a spurious effect. On physical grounds, it is expected that quantities such as the radiated energy depend on the details of the shell, in particular its effective width. In our approach this width is introduced as a physical scale cutting off the contributions of high frequencies. Since it is a physical cutoff, there is every reason to expect that the final result keeps a memory of it.
Moreover, we have seen that the scaling with ∆ is robust with respect to different mathematical implementations of the cutoff scale.
Our previous analysis was made under the assumption that the implosion of the shell occurs with nonzero initial velocity, Ṙ₀ ≠ 0. In order to consider collapse from rest rather than an implosion, we need to compute the perturbation of the trace of the energy-momentum tensor at second order in δτ.
In order to preserve the perturbative expansion, we impose "slow roll" conditions on the shell dynamics. Adding the next-to-leading order correction to the source in Eq. (3.3) allows the Galileon perturbation to be written as a sum of two terms, where [φ(x)]₁ is the solution found in Eq. (3.29).
With these expressions we can calculate the next-to-leading order corrections to the Galileon radiation process studied in the previous sections. It can be seen that this correction has a structure similar to the leading term: again we find two pulses located at the positions given in Eq. (3.32). Here, however, we will be interested instead in the case of a matter shell at the onset of gravitational collapse from rest, for which the leading order contribution vanishes, [φ(x)]₁ = 0.
Setting Ṙ₀ = 0 and following the same steps and approximations as in Sec. 3, we arrive at an expression for the perturbation of the Galileon field whose only difference with respect to the result found in Eq. (3.23) is a different prefactor, depending on R̈₀ rather than Ṙ₀. Physically, we find the same profile for the Galileon field depicted in Fig. 1: two successive pulses travelling ahead of the light front. As for the total energy radiated, we find a result of the same form, where again C is a numerical constant of order one.
Closing remarks
Apart from their intrinsic interest in classical and quantum field theory, Galileons emerge in theories of massive gravity and therefore provide a window to test alternative theories of gravity based on deformations of the Einstein-Hilbert action by relevant operators. In particular, astrophysics may provide a number of physical scenarios where Galileon theories could be put to the test. Here we have presented a tentative study of the problem of Galileon radiation in spherical gravitational collapse. Choosing spherical symmetry has two consequences: it simplifies the problem from a technical point of view, and it eliminates the GR background radiation, leaving a distinct Galileon signal. Although quite simplified, our model can be seen as a first approximation to the problem, displaying a number of features expected to be present in more realistic descriptions of astrophysical gravitational collapse.
Our results indicate the emission of two pulses in the Galileon field traveling at superluminal speed. This is not an unusual feature in modified theories of gravity in general [26], or in massive gravity and Galileons in particular [13,27], where nonlinearities may lead to superluminal propagation. In the cubic Galileon theory this can be seen from the effective metric (3.2), whose light-cone structure shows that the phase and group velocities of radial perturbations exceed the speed of light.
Galileon theories are known to modify observable effects such as weak lensing [28]. An interesting issue worth considering is the feasibility of direct Galileon field detection. In the theory studied here, the Galileon field perturbation couples to the trace of the energy-momentum tensor, unlike ordinary gravitational waves, which couple to the transverse-traceless part of the energy-momentum tensor. In both cases, however, the coupling to matter has the same suppression by the Planck scale. Given its superluminal propagation, the Galileon signal should predate the electromagnetic observation of the astrophysical phenomenon sourcing it. Despite the additional Vainshtein suppression, the recent success in the direct detection of gravitational waves [29] opens up the possibility of designing experiments sensitive to these extra modes in a maybe not-too-distant future.
There are various other directions for future work, considering more realistic models of gravitational collapse and leaving behind the approximations used in this paper. One would be using a top-hat window function for the density of the collapsing object, i.e. studying the collapse of a homogeneous dust ball instead of the shell considered here. At early stages, the Galileon radiation coming from the surface of the object is expected to behave similarly to the one produced by the collapsing shell, including the superluminal behavior found above.
The radiation coming from inner layers, however, would presumably smooth the pulses out into a band profile. A full analysis valid for late times would require relaxing some of the approximations used in our analysis.
Within the context of the cubic Galileon theory, it would be interesting to explore the possibility of going beyond the perturbative approach used here. This requires solving the full Galileon field equation in the curved background produced by a spherical source. Due to the nature of the field equations, this would require the application of more powerful numerical techniques. Finding such solutions would allow one to study the issue of superluminal propagation in a more general fashion. These and other problems will be addressed elsewhere.
"Physics"
] |
Sedimentological characteristics and their relationship with landsliding in the Bhilangana Basin, Garhwal Himalaya
Every year during the Indian Summer Monsoon, a large number of landslides occur in the Lesser and the Greater Himalayan rock formations, triggered by intense rainfall episodes coupled with physiography and anthropogenic activities. The present study investigates the relationship between the slope failure mechanism and slope material composition. Sediment samples from 25 landslides were collected along road corridors in the Lesser and Greater Himalayan ranges and rock formations. The sediment was collected from active landslides to understand particle size, clay content, moisture content, mineral composition, crystallographic structure, and the influence of geomorphic processes on landslide failure processes. The samples were analyzed using sieving, X-ray Diffractometry (XRD), and Scanning Electron Microscopy (SEM). The analysis indicates that the Lesser Himalayan meta-sedimentary rock formations have a high proportion of fine and medium-size particles, lower quartz content with calcite present, and highly crushed and fractured grains with conchoidal-fracture-type morphological features. Micrographs obtained from schist and phyllite rocks of Lesser Himalayan origin show highly sheared and crushed grains and crystal overgrowth; these rocks, in turn, have a higher susceptibility to landslides. The relationship between slope materials and instability shows a definite pattern in the study area. Debris flows and slumps have a comparatively higher percentage of clay and silt than debris falls, debris slides, and rockfalls. The particle size composition of sediment collected from the slip zone is significantly related to the type of landslide. The present study is helpful in understanding the relationship between sediment composition and the slope failure mechanism.
Introduction
Landsliding is the dominant mass-wasting process in mountainous areas (Burbank et al., 1996; Hovius et al., 2000; Meunier et al., 2008; Brunetti et al., 2014), where young streams continuously undercut the slopes. A significant challenge in landslide investigation is to identify the slope failure mechanism under different physiographic conditions and its relationship with slope material composition. A critical relationship has been found between the characteristics of slope materials, the composition of clay-silt sediment on the sliding surface, and the slope failure mechanism (Summa et al., 2010; Azañón et al., 2010; Brunet et al., 2016; Summa et al., 2018). Geomorphological investigation of slope instability is crucial to understanding the causes of landslides in geodynamically active terrain like the Garhwal Himalaya. The factors which induce landslides in the area include seismo-tectonic movements (Brunsden et al., 1981; Owen, 1991; Owen et al., 1995, 1996; Owen and Sharma, 1998; Barnard et al., 2001); heavy and prolonged spells of rainfall, leading to saturation and increasing the incumbent load on the slope, which substantially increases shear stress (Bartarya and Sah, 1995; Paul et al., 2000; Srivastava et al., 2013; Owen et al., 1996; Shroder and Bishop, 1998); and anthropogenic activity (Barnard et al., 2001; Begueria, 2006; Glade, 2003; Remondo et al., 2005).
The reduction of shear strength is the principal cause of slope instability and often triggers landslides. Therefore, it is essential to analyze the material characteristics of the landslide slip zone to understand the fundamental mechanisms of slope instability and to delineate susceptible areas (Baoping and Haiyang, 2007). Analysis of the sedimentary depositional environment can provide crucial insight into the displacement mechanism of materials (Dufresne et al., 2016; Xiaomin et al., 2019). Shear strength reduction at the slip zone can result from the mineral composition, the grain size, and the processes operating on the slope. Therefore, a comprehensive investigation of slope material is required to analyze the mineral composition, lithology, and grain morphology under which landslides occur (Wiemer et al., 2015).
The mineral and chemical composition of landslide slip-zone material, generally that of slide bodies and bedrock, can provide insight into landslide characteristics. It is considered one of the effective ways to examine the causes of instability in the landslide slip zone and the mechanism of slope failure (Dill, 1998). Although X-ray diffraction analysis of the mineralogical composition of slope material does not bear a direct relationship with landsliding, it has been used indirectly, for example through site-specific sediment mineralogy, to understand the mechanism of the landslide slip zone (Cafaro and Cotecchia, 2001; Bogaard et al., 2007; Summa et al., 2010; Xie et al., 2022). Slip-zone material rich in Fe2O3 minerals becomes saturated by rain-water infiltration and groundwater, leading to the hydrolysis of quartz and the formation of clay minerals, particularly kaolinite (Summa et al., 2010). Hence, the increase in clay mineral concentration in the slope material, particularly at the slip zone, acts as a lubricant, reduces shear strength, and activates landslides (Summa et al., 2010; Wang and Sassa, 2003).

Figure 1. Geological map of the Garhwal Himalaya (modified after Srivastava and Ahmad, 1979).
Ascertaining links between the slope failure mechanism and the micro-surface features present on slope materials can provide insight into slope failure. Systematic studies of quartz grains offer comprehensive information on sedimentary histories such as erosion, deposition, and the tectonic environment (Krinsley and Doornkamp, 1973; Mahaney et al., 2001). Diagenetic changes recorded in micro-surface textures can be analyzed to understand the alteration and cementation of the original sediments and the nature of the pore fluid that moved through the sedimentary sequence (Tucker, 1988; Mahaney, 2002; Moral Cardona et al., 2005; Pandey, 2011). Sedimentological properties have been found to reflect the interaction between source material, energy, and climatic regimes in high mountainous areas (Sharma, 1996). Therefore, particle size, micro-fabric, and quartz grain texture analyses are very significant for identifying the processes and intensity of slope instability (Pandey, 2011; Vos et al., 2014). The term slope failure, which includes all types of mass movement, is used as a synonym for landslide in the present study. To understand the surface processes that cause slope instability, sediment samples were collected from landslides and processed using sieve analysis, X-ray powder diffraction (XRD), and scanning electron microscopy (SEM) techniques.
Geological setting of the area
The Bhilangana basin covers both the Lesser and Greater Himalayan ranges, separated by the Main Central Thrust (MCT). The area is seismically very active, and a large number of earthquakes occur along the MCT (https://earthquake.usgs.gov/earthquakes/map/), indicating continual neo-tectonic activity along the thrust plane (Fig. 1). Lithological formations of the area include the Higher Himalayan Crystalline (HHC), composed of granite and gneiss of the Vaikrita and Munsiari groups, to the north of the MCT. The Lesser Himalayan rock group includes two types of formations: the Lesser Himalayan Crystalline (LHC) of the Ramgarh group and the Lesser Himalayan Sedimentaries (LHS), including the Subathu, Tejam, Damtha, and Chandpur formations (Valdiya, 1980; Pandey, 2011).
The geotectonic setup and lithology of the Garhwal Himalaya represent Quaternary to Proterozoic formations (Fig. 1) with three distinct geotectonic units, two of which belong to the Central Himalayan Crystalline Group of rocks, while the third is the Deoban/Garhwal tectonic unit. The crystalline and sedimentary rocks are separated from each other by the Thayeli thrust (Rao and Pati, 1982; Valdiya, 1980), a low-angle thrust separating these two tectonic units and given the site-specific name Thayeli. A major north-south fault, known as the Balganga fault, offsets the Thayeli thrust over a distance of 50 km in the Balganga valley (Sakliani, 1989).
Materials and methods
During fieldwork in September 2009, we surveyed 135 landslides, of which 25 sites were selected for sedimentological analysis to understand the failure mechanism (Table 2). The sampled landslides occurred in different lithological formations and were well distributed across the study area (Fig. 2). As sediment samples were collected from failed slope materials, it was necessary to collect a mixture of material from across the landslide body to understand its instability process. Samples were taken from three positions at each site: the slide head, intact material, and dislodged material. In addition to measuring landslide geometry, in-situ rock strength was assessed using a PROCEQ DIGI-SCHMIDT 2000 (Concrete Test Hammer Model ND/LD). The Schmidt hammer uses the impact energy released by a spring-controlled plunger. Ten impact points were taken on the slope material across each slide body, and the mean of these measurements was used as the rock mass strength (MPa). Grain-size distribution analyses of the sediment were carried out using a motorized sieve shaker with sieve mesh sizes varying from 63 µm to 2 mm, capable of differentiating grain-size intervals from 0.2 to 200 µm. Sediment samples collected from the sites typically contain moisture and were therefore air-dried at room temperature for three weeks. On average, 500 g of sample was used for the grain-size distribution; since these samples contain coarse grains, an optimum weight of 500-1,000 g of sediment is usually preferred for this analysis. The duration of mechanical shaking was kept at 15 minutes for each sample; the standard time used in most laboratories is 15-20 minutes, but 10-15 minutes is recommended for samples consisting of coarse grains (Pye, 2007).
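To make the sieve workflow concrete, the sketch below shows how fractional and cumulative percentages might be reduced from sieve masses; it is a minimal illustration, and the apertures and masses shown are hypothetical, not data from this study.

```python
# Minimal sketch of a sieve-analysis reduction; apertures and masses are
# hypothetical, not measurements from this study.

# Mass (g) retained on each sieve, keyed by aperture in micrometres,
# ordered coarse to fine; key 0 is the pan (everything finer than 63 um).
retained = {2000: 210.0, 500: 140.0, 250: 80.0, 125: 40.0, 63: 20.0, 0: 10.0}

total = sum(retained.values())

# Percentage retained on each sieve and cumulative percentage passing.
cumulative_passing = 100.0
for aperture, mass in retained.items():
    pct = 100.0 * mass / total
    cumulative_passing -= pct
    print(f"{aperture:>5} um: {pct:5.1f}% retained, "
          f"{cumulative_passing:5.1f}% passing")

# Broad textural classes discussed in the text:
gravel = 100.0 * retained[2000] / total                      # > 2 mm
sand = 100.0 * sum(retained[a] for a in (500, 250, 125, 63)) / total
fines = 100.0 * retained[0] / total                          # silt + clay
print(f"gravel {gravel:.1f}%, sand {sand:.1f}%, fines {fines:.1f}%")
```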
The mineralogical analysis was carried out by X-ray diffraction (XRD), using a PANalytical PW3050 X'PERT-PRO powder diffractometer (Cu Kα radiation, secondary monochromator, and sample spinner). The procedure of Barahona Fernandez (1974) was applied to obtain mineralogical data for the bulk rock. This technique depends on the integrated intensities of measurements of all the crystalline phases; the results obtained were cross-checked against mineralogical library data. The scan range was 5.03 to 89.99 °2θ, with a continuous scan type and a logging interval of one second. The <2 µm sediment fraction was powdered and mounted in a sample holder. The analyses of mixed-layer clay properties were performed following Moore and Reynolds (1989), and semi-quantitative estimates followed Biscaye (1965), with small amendments.
Further, SEM analysis and powder diffraction (XRD) techniques were used in this study to analyze the crystallographic structure, crystallite (grain) size, and preferred orientation in polycrystalline powdered bedrock sediments. Diffraction peaks of unknown substances were compared with the diffraction database maintained by the International Centre for Diffraction Data to identify the mineralogical composition of the sediments. Powder diffraction is also a standard method for determining lattice strains in crystalline materials.
SEM analyses of quartz grains were carried out to identify micro-surface textures, morphological features, and processes. The quartz grains were separated from the sediment samples and dipped for 20 minutes in hydrochloric acid (HCl) to remove impurities. Because quartz grains are poorly conductive, they were coated with gold (Au) for better conductivity. A Carl Zeiss EVO 40 instrument operated at 20 kV was used for the SEM analysis. Quartz sand grains were mounted on 10 mm diameter aluminium stubs using double-sided carbon adhesive discs. It was considered sufficient to photograph and measure grains in two orthogonal orientations, with the grains mounted in a linear array. To study elongation, flatness, and sphericity, measurements and photomicrographs were acquired in three orientations with grains mounted in a triangular array.
The proposition is that surface textural features and their assemblages record information about the origin, transport history, and subsequent weathering or diagenetic processes of the materials investigated. Different types of features were recorded either as presence/absence or as relative abundance: absent or rare (<5% of grains), scant (5-25%), common (25-75%), and abundant (>75%). However, the analysis shows that the surface textural features present on the grains investigated do not relate to unique environmental processes or conditions. It is therefore important to understand the procedures used to identify and classify surface features in different environments (Tickell, 1965). Analyses of the surface textural features of grains, combined with mineralogical characteristics such as precipitates, coatings, adhering particles, and surface contaminants, are very significant in analyzing micro-environmental processes (Pye, 2007).
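The relative-abundance classes above map directly onto a simple threshold function. The following is a minimal sketch of that bookkeeping; the handling of the exact class boundaries (values falling on 5%, 25%, or 75%) is our assumption, since the text leaves them open.

```python
def abundance_class(pct_of_grains: float) -> str:
    """Map the percentage of grains showing a surface feature to the
    relative-abundance classes used in the text. Boundary values are
    assigned to the lower class (an assumption; the text is silent)."""
    if pct_of_grains < 5:
        return "absent or rare"
    elif pct_of_grains <= 25:
        return "scant"
    elif pct_of_grains <= 75:
        return "common"
    else:
        return "abundant"

# Example: a feature seen on 38 of 104 grains (~36.5%).
print(abundance_class(100 * 38 / 104))  # -> "common"
```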
In the present study, 104 grains from the sediments of 25 landslides were analyzed using SEM to understand the process of slope instability and its relationship with mass movement, type of failure, and lithology.
Grain-size analysis
Grain-size distribution analysis shows that the sediments of landslides that occurred in quartzite rocks, mainly rock slides, debris slides, and debris falls, have a high proportion of gravel and coarse sand and a low percentage of silt. The clay concentration in these sediments was almost insignificant. Grain-size composition is significant for analyzing slope material because it largely determines infiltration capacity, soil moisture conditions, water pressure, and cohesive strength. The composition and texture of the sediment show a direct relationship between the proportions of gravel, sand, and silt and the mechanism of slope failure. Higher percentages of gravel and coarse sand are present mainly in debris falls and in slumps, which are rotational slides (Fig. 3). Rockfalls and debris falls, vertical collapses that occurred in quartzite and slate, also have a high proportion of gravel.
The grain-size composition of the debris falls shows a more or less similar pattern to the rockfalls, with a moderately high concentration of gravel. The investigated sediment samples, except DFW120, have higher concentrations of sand and silt. On further analysis with respect to lithology, conglomerate rock shows a similar grain-size composition, and landslide sediments from quartzite lithology have a composition similar to that of debris falls. The grain-size compositions of all the gravity-driven failures, such as debris falls, slumps, and rockfalls, are similar, with moderate to high concentrations of gravel and a smaller amount of fine particles. The sediment of debris falls consists of medium to large gravel and coarse-grained sand, with the least amount of silt, i.e., less than 9% of the material. However, a few debris fall sites have a greater concentration of sand grains and a smaller proportion of gravel and silt.
Examination of the rockfall sediments, which contain a large proportion of gravel, showed that in-situ weathering of slope material dominates the failure process, with little run-out distance. The large proportion of gravel and the low concentration of fine sediment suggest gravity-controlled vertical failures with little run-out. Similarly, debris falls along the road corridors showed different patterns with lithology. The average concentration of gravel in the sediments is 41% for slumps, 40.9% for debris falls, 38.5% for rockfalls, and 25.3% for debris flows, indicating a comparatively high concentration of fine-grained sediment in debris flows. Conglomerate rocks have a higher proportion of sand and silt than quartzites. The sediments of debris flows have average percentages of sand and silt but a comparatively higher concentration of clay. The detailed analyses show consistency between the grain-size composition of slope materials and the type of failure; except for debris falls and debris flows, the other failed bedrock types have a similar pattern of particle-size composition.
The sediment composition of the rockfalls investigated in the area follows a predictable pattern, with similar grain-size compositions. The gravel and sand compositions of sediments collected from quartzite, slate, and schist-phyllite bedrock are similar in rockfalls and slumps. Each type of landslide nevertheless has a distinct grain-size composition and texture. Analysis of the failed slope material shows a definite pattern of grain-size composition associated with the type of landslide as well as the lithology.
Rockfalls, debris falls, and slumps that occurred in similar lithologies, such as quartzite, have similar grain-size compositions, with few exceptions. Gravity-driven failure therefore reveals lithological controls on the sliding mechanism. The analysis concludes that the volume of debris displaced by landslides has a significant relationship with grain size and lithology in the region.
Mineralogical composition
The mineral composition and crystallographic structure of the slope material were examined in powder form using the X-ray diffraction technique. As given in Supplementary Table 1, quartz is abundant in all the sediment samples collected from the landslide sites, although its concentration is lower mainly in the conglomerate rocks. Silicate, carbonate (calcite >> dolomite), and feldspar minerals are also found in lesser amounts in a few samples, whereas traces of gypsum and hematite (Fe2O3) are present in the failed slope materials. Notably, the hematite concentrations found in slope material from active slide surfaces and discontinuities are probably associated with reactivated landslides.
The higher silica (SiO2) concentrations are found in the quartzite rocks of the LHC rock formations, particularly the Pratapnagar brown quartzite, which has a 93% SiO2 concentration. The diffraction data reveal that high, thin SiO2 peaks, particularly in the quartzite rocks, indicate a well-crystalline structure and lower lattice strain. The width of the peaks of a particular phase indicates the average crystallite size: large crystallites give rise to sharp peaks, while peak width increases as crystallite size decreases. Peak broadening also occurs as a result of variation in d-spacing caused by microstrain in the rock, and a broad peak base indicates large d-spacing between molecules, often referred to as void space between phases. Peak-to-peak comparison of mineral reflections provides details of the structural deformation process and lattice strain in the diffraction data. In particular, broader peaks occur in the dolomite rock, whereas the quartzite rocks have narrow, high peaks, suggesting a greater extent of lattice strain in the dolomite. The differences between these peaks do not arise from a single phase, because the relative intensities vary considerably among different slope materials.
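The inverse relationship between peak width and crystallite size described here is commonly quantified with the Scherrer equation; the paper does not state that it applied this formula, so the following is only an illustrative sketch, using the Cu Kα wavelength mentioned in the Methods and a hypothetical peak.

```python
import math

def scherrer_crystallite_size(fwhm_deg: float, two_theta_deg: float,
                              wavelength_nm: float = 0.15406,
                              k: float = 0.9) -> float:
    """Estimate mean crystallite size (nm) from XRD peak broadening.

    Scherrer equation: tau = K * lambda / (beta * cos(theta)), where beta
    is the peak full width at half maximum (radians) and theta the Bragg
    angle. Default wavelength is Cu K-alpha (0.15406 nm), matching the
    radiation used in this study; K ~ 0.9 is a typical shape factor.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical quartz (101) reflection near 26.64 degrees 2-theta with a
# 0.15 degree FWHM: a sharp peak implies large crystallites (~54 nm here).
print(f"{scherrer_crystallite_size(0.15, 26.64):.0f} nm")
```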
Comparison of the rock mass strength measured at the landslide sites with the mineral composition shows that highly fractured and sheared bedrock has weak strength, and vice versa. Figure 4b shows a regression value (R²) of 0.34, indicating a reasonable relationship with mineral composition. The SiO2 content of slope materials and rock strength are strongly correlated, though the correlation varies among lithologies. Quartzite rocks in particular have high strength in comparison to dolomite, slate, and schist-phyllite, and comparatively high rock strength is found in the rockfall and debris fall slope materials. Bedrock with large cracks and fractures shows weak strength. Slope materials with alternating bands of quartzite and dolomite strata are highly weathered and have poor rock strength (14.3 MPa). This demonstrates a direct relationship between lithology and mineral-controlled weathering of material in influencing slope failures. The volume of displaced material varies with rock strength and slope inclination: failed sites with relatively low rock mass strength correspond to debris flows and debris falls, while sites with high rock mass strength are prone to rockfalls and rock slides.
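As an illustration of how the reported R² values relate SiO2 content to rock mass strength, the following minimal sketch fits an ordinary least-squares line and computes R²; the data points are synthetic stand-ins, since the measured pairs are not tabulated here.

```python
import numpy as np

# Hypothetical (SiO2 %, rock mass strength MPa) pairs; illustrative only,
# not the values measured in this study.
sio2 = np.array([93.0, 88.0, 74.0, 65.0, 55.0, 48.0])
strength = np.array([42.0, 35.0, 28.0, 22.0, 16.0, 14.3])

# Ordinary least-squares fit and coefficient of determination (R^2).
slope, intercept = np.polyfit(sio2, strength, 1)
predicted = slope * sio2 + intercept
ss_res = np.sum((strength - predicted) ** 2)
ss_tot = np.sum((strength - strength.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"strength = {slope:.2f} * SiO2 + {intercept:.2f}, R^2 = {r_squared:.2f}")
```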
The relative proportions of quartz minerals were subsequently assessed using lithology and SEM analysis. The peaks of each sediment sample show large variations in the concentration of minerals and phases (Fig. 4). Rocks with a greater proportion of quartz minerals show thin, high peaks, and some sediment samples show more than 2,000 counts for particular phases. Rockfall sediment shows proportions of quartz minerals as high as 91-94%. Lithologies with a high concentration of quartz minerals have greater rock strength than those with a lower concentration of SiO2. Figure 5a illustrates the sediment analysis of the schist-phyllite rocks of a debris fall (DFL132) and the composition of minerals: 74.3% quartz (SiO2), 14.4% magnetite (Fe2O3), and 10.4% dilanthanum tris(molybdate). Figure 5b presents the composition of a debris flow (DFW120) in quartzite lithology.
SEM analysis
Meticulous examination of quartz grain micro-surface features using SEM showed that the features are often ambiguous and that distinguishing between morphological processes is challenging. The majority of the grain outlines are angular and sub-angular and may be of glacial and glacio-fluvial origin; the slope material samples were collected at altitudes between 1,600 and 2,700 m above sea level. Grain outline describes the roundness or angularity of a grain and is subdivided into angular grains with sharp edges, sub-angular grains with slightly blunt edges, and rounded grains with smooth edges (Fig. 6). Grain outline mainly relates to the mode, distance, and duration of transport and, to a certain extent, to particle size, though it is equally a function of the original grain shape and size inherited from the source rock (Goudie and Watson, 1981; Kleesment, 2009; Costa et al., 2013; Vos et al., 2014). Sub-angular to rounded grains are produced in upper flow regimes, as severe abrasion is required to round particle edges (Mahaney, 2002); this indicates that flow regimes at this altitude in the Bhilangana River were many times higher than at present. Angular grain outlines occur in glacially dominated landscapes, where grains are crushed, abraded, and plucked in high-energy subaqueous environments. These settings involve limited sediment transport distances, causing grain breakage without rounding of edges (Helland and Holmes, 1997), indicating that glaciers extended downstream to much lower altitudes in the past and left these signatures on the quartz grains. The glacial process is dominated mainly by grinding and abrasion during ice transport and glacio-fluvial activity. Diagenetic alteration is defined very broadly, comprising all processes, mostly chemical, that alter the grain surface after deposition and before metamorphism of the source rock (Vos et al., 2014). Morphological features that occupy a large proportion of the grain surface include conchoidal fractures, multi-stepped striations, grooves, dish-shaped depressions, large pits, ridges, and crystalline overgrowths with or without planar crystalline facies (Supplementary Table 2). Small-scale features that were identified and measured include small pits, various types of grooves, and mineral precipitates. A grain surface characterized by a vast number of closely spaced pits and dominant small projections is often described as having a frosted appearance, whereas surfaces with fewer irregularities are referred to as pitted. Grain outline may also be altered by chemical interactions, such as dissolution and precipitation of minerals. Conchoidal fractures are typically curved, shell-like breakage patterns on grains lacking clear cleavage directions; these micro-surface features result from powerful impact or pressure on the grain surface. The crystal lattice develops a ribbed appearance, cracks, and fractures under the dominating impact of pressure waves in the grains. Small pits (<1 μm) are often observed on these fracture planes as mineral insertions, which weaken the crystal lattice; hence fractures and cracks develop on the grains (Vos et al., 2014).
Conchoidal fracture planes are occupied by arcuate and straight steps (Cardona et al., 1997). The depths of these steps usually vary up to several micrometres, with an irregular spacing of successive steps of up to 5 μm. The steps formed under the immense pressures of impact where the conchoidal fracture plane intersects the cleavage planes of the quartz crystal; these features are inherently associated with conchoidal fracture planes. Meandering ridges are also observed on the grain surface where cleavage planes intersect slightly curved conchoidal fractures.
The sediment collected from the landslides is highly crushed, deformed, and metamorphosed as a result of the orogenic evolution of the Himalayas, and the small surface features vary with the lithological formations. Grains collected from quartzite rocks are highly weathered by mechanical weathering processes. The micrograph of quartz grains collected from a rockfall in the Pratapnagar brown quartzite (Fig. 6d) shows fractures, crystal overgrowths, adhering particles, and arcuate steps. Large cracks and fractures, crystal overgrowths, and minor striations with conchoidal shapes and grooves mark the grain surfaces, indicating that slope failure occurred due to weak rock mass strength, highly disintegrated minerals, cracks, and fractures (Fig. 6).
Micrographs of quartz grains from slump sites in quartzite rock of the Ghansali formation show angular facies, crystal overgrowths, medium-sized grooves with minor fractures, disintegrated granules, and foliated structures (Fig. 7). Dolomite rock sites where debris falls occurred show less disintegration and angular flakes, but the presence of pits and medium-sized grooves on the surface indicates active chemical weathering (Fig. 8). Sediments of dolomite and limestone rocks have small to medium pits, grooves, and carbonate precipitates in their grain texture; angular and sub-angular facies, disintegrated flakes, striations, and crystal overgrowths suggest glacio-fluvial transport of the sediment. Micrographs of sediment collected from slate and schist-phyllite rocks show disintegrated angular flakes, fractured grains, and irregular grain shapes and textures. Debris fall and rockfall types of failure occur due to the weak lithology and highly weathered surfaces of these rock groups (Fig. 9). The presence of large cracks, medium relief, and crystal overgrowths might also suggest sedimentation strain due to seismicity in the region. These grains have rounded to sub-rounded shapes, which indicate transport of unconsolidated material and deposition below an elevation of 1,400 m in the Lesser Himalayan meta-sedimentary rock formations. The results of the analysis are compared in frequency histograms (Fig. 10).
Discussion and Conclusion
The grain-size distribution, mineral composition, and micro-surface texture analyses indicate a strong relationship with the slope failure mechanism across diverse lithologies. However, the search for a link between slope instability and compositional parameters, such as textural, mineralogical, and micro-surface features, is still a developing field of research (Summa et al., 2013). In the present study, the compositional variables and the heterogeneous nature of the data make the analysis rather complicated.
The grain-size distributions at the landslide sites show a definite pattern of material composition and its relationship with the type of landslide. Sediments with a large proportion of granules and coarse sand and less silt are mainly found in rockfalls, debris falls, and slumps. Medium to fine silt and sand, with a sufficient proportion of clay, influence debris flows, mudflows, and soil creep and are associated with slumps and debris falls in the area. The mineralogical analysis of the fine sediment fraction shows that variations in concentration are a cause of landslides (Fig. 11). High SiO2 content is significantly related to rock mass strength (MPa): sediment samples with more than 80% quartz minerals have rock strengths above 30 MPa (R² = 0.43). Further, rock strength was correlated with the size and volume of landslides and was found to be an important control on the slope failures that occurred in the Lesser Himalayan meta-sedimentary rock formations. The fine sediment composition of slope materials, the chemical, mineralogical, and textural compositions of the sediment, and the study of the process interactions of the material involved are crucial in influencing the physical-mechanical characteristics of a landslide.
Fe2O3 and calcite develop a suitable orientation of particles, thus providing some degree of lubrication, whereas mixed layers are prone to particle aggregation in the sub-humid environment where the materials are deposited. Diffraction data on clay mineralogy provide clear insight into the mineral composition of the sediments. The analysis shows that minerals with large d-spacing between molecular arrangements are associated with weak rock strength and reach the critical limit of tensile strength, causing failure. The results of the mineralogical analysis and rock mass strength show a strong relationship and, to a certain extent, support understanding of slope failure processes. The quartzite rocks exposed to weathering are highly disintegrated, reducing rock strength and causing rockfalls and slumps in the area. The textural changes induced by water chemistry depend to a large extent on the mineralogical composition.
The analysis of quartz grain surface textures shows that precipitates of silica and carbonates indicate the influence of the regional climate. These areas have undergone various phases of climatic change and tectonic activity in recent geological history; hence, glacial and deglacial processes have played a significant role in supplying the sediments deposited in the river valleys. The debris flow sediment samples collected near Ghuttu village show angular patterns, striated surfaces, and grooves indicative of a glacial environment, leading to the conclusion that this slope material was deposited during glacial advances. During the monsoon season, debris flows, debris falls, and debris avalanches frequently occur in these unconsolidated lithofacies due to lubrication. Grain textures in these areas have been subjected to many geodynamic processes during recent geological history through neo-tectonics.
SEM analysis has brought many advantages to geomorphology, including determining the origin of depositional landforms, the source of sediments, the energy of the environment, and the processes of diagenesis, weathering, and development through time. It provides details of the origin of fine silt and clay particles in the geological column, fracture-abrasion mechanisms in the field and laboratory, and the analysis of grain modifications under different weathering regimes. The microscopic study of quartz grains is used here to understand the texture, pattern, internal morphology, and weathering regime prevalent in the area, and their correlation with slope instability; it shows a close relationship with rock strength and the volume of displaced material. The study of microtextures is therefore a useful technique for reconstructing the sedimentary history of displaced materials and should be used judiciously and more often in studies of clastic sediments.
The present study is an experiment that analyzes the significance of sediment characteristics in slope instability processes. It also provides new stimuli for further site-specific investigations of grain-size distribution, mineralogical and micro-surface texture, and their relationships. The mineral composition is closely related to the development of sliding surfaces and discontinuities in the slope materials. The present study finds that the slope instability process is controlled by the site-specific material composition, the minerals present, and the depositional environment. Geotechnical parameters such as the liquid limit, plastic limit, and angle of shearing resistance on the sliding surface may require similar analysis to determine their relationship with the failure mechanism. However, distinguishing between cause and effect remains difficult, even though some risk factors inducing slope failures were identified. The present study will be very useful for site-specific slope instability analysis and the selection of appropriate stabilization measures, and in-depth analyses of these parameters will contribute to knowledge of the geotechnical characteristics relevant to landslide investigations.
Figure 3. Cumulative particle size (%) of the slope material collected from landslides: (a) debris fall sediment, with few fine particles and a high percentage of coarse material; (b) debris flow sediment, with a low percentage of coarse material and a moderate concentration of fine particles; (c) rockfall sediment, with a high proportion of coarse particles; and (d) slump sediment, similar in composition to debris fall, with a moderate to high percentage of coarse particles.
Figure 4. X-ray diffraction pattern of a quartzite rock specimen from the Lesser Himalayan crystalline rocks, acquired using a Cu Kα radiation anode; the sharp, high SiO2 peaks indicate a well-defined crystalline structure and low lattice strain.
Figure 5. X-ray diffractograms of clay minerals: (A) diffraction peaks of dolomite rock, showing ordered reflections and broad bases, with less SiO2; and (B) well-defined peaks of red quartzite rock, with a high concentration of quartz and a mixture of trace minerals.
Figure 6. Micrographs showing the surface texture of quartzite rock specimens collected from a debris fall, Lesser Himalaya: (A) grain with conchoidal shape and fractures partly smoothed by abrasion, probably a grain of glacial origin deposited in a glacio-fluvial environment; (B) abundance of arcuate steps and quartz crystal overgrowth; (C) solution pits and calcite overgrowth; (D) fracture plane and adhering mica particles; (E) highly fractured angular facies of glacio-fluvial sediments with carbonate evaporites; and (F) grooves and conchoidal flakes of schist-phyllite, with angular flakes and medium cracks.
Figure 7. Micrographs of debris flow sediment: (A) sub-angular quartz grain with an abundance of arcuate steps and a smooth, abraded surface; (B) small irregular pits and medium conchoidal fractures; (C) fine-edged fractured amphibole textures with minor cracks; (D) angular outline of a rockfall grain specimen with fine-edged fractured amphibole textures and minor cracks; (E) large conchoidal fractures with minor cracks in schist-phyllite platy structures, with crystal overgrowth, minor fractures, and rounded grooves; and (F) fractures and crystal overgrowth, with small pits of carbonate evaporites indicating a strongly chemical weathering environment, and straight steps with high relief.
Figure 8. Micrographs of slump sediment: (A) meandering ridges with curved scratches and high relief; (B) fractured plates and siliceous precipitates with angular foliated structures of glacio-fluvial origin; (C) large grooves, sharp edges, and highly weathered crystalline structures in disintegrated material of quartzite rocks; and (D) large cracks and grooves in amphiboles with small crystal overgrowths. The minerals show the characteristics of physical weathering, with disintegrated mica-schist minerals, minor fractures, and persistent small crystal growth.
Figure 9. Micrographs of sediment from slate and schist-phyllite rocks.
Figure 10. Debris fall (DFL132) in the Bhilangana basin: (a) photograph of the debris fall showing the active slip zone and slide head in the displaced materials; (b) highly fractured brown quartzite rock of the Pratapnagar formation, exposed on the slide head where the slip zone initiated; (c) diffraction peaks of the clay mineralogy of sediment collected from the site, showing a sharp, thin SiO2 peak; the mineral composition of the slope material is 74.3% quartz, 14.4% magnetite, 10.4% dilanthanum tris(molybdate), and 0.9% sodium cobalt oxide hydrate; due to the presence of Fe2O3, chemical dissolution of the slope material by rainfall infiltration induced an active slip zone at the slide head; (d) micrographs showing the fractures present on the quartz grains seen in (b), with small pits, adhering particles, and sharp flakes; and (e) frequency histograms of the micro-surface features analyzed from the sediment.
Figure 11. Analysis of landslide pattern and mineral composition: (a) relationship between SiO2 content and rock mass strength; and (b) rock mass strength significantly influences the area and volume of failures.
Table 2. Sediment samples collected from slope failures in the Bhilangana basin during September-October 2009.
"Geology"
] |
The phylogeographic structure of Hydrilla verticillata (Hydrocharitaceae) in China and its implications for the biogeographic history of this worldwide-distributed submerged macrophyte
Background Aquatic vascular plants are a distinctive group, differing from terrestrial plants in their growth forms and habitats. Among the various aquatic plant life forms, the evolutionary processes of freshwater submerged species are most likely distinct due to their exclusive occurrence in the discrete and patchy aquatic habitats. Using the chloroplast trnL-F region sequence data, we investigated the phylogeographic structure of a submerged macrophyte, Hydrilla verticillata, the single species in the genus Hydrilla, throughout China, in addition to combined sample data from other countries to reveal the colonisation and diversification processes of this species throughout the world. Results We sequenced 681 individuals from 123 sampling locations throughout China and identified a significant phylogeographic structure (NST > GST, p < 0.01), in which four distinct lineages occurred in different areas. A high level of genetic differentiation among populations (global FST = 0.820) was detected. The divergence of Hydrilla was estimated to have occurred in the late Miocene, and the diversification of various clades was dated to the Pleistocene epoch. Biogeographic analyses suggested an East Asian origin of Hydrilla and its subsequent dispersal throughout the world. Conclusions The presence of all four clades in China indicates that China is most likely the centre of Hydrilla genetic diversity. The worldwide distribution of Hydrilla is due to recent vicariance and dispersal events that occurred in different clades during the Pleistocene. Our findings also provide useful information for the management of invasive Hydrilla in North America. Electronic supplementary material The online version of this article (doi:10.1186/s12862-015-0381-6) contains supplementary material, which is available to authorized users.
Background
Aquatic vascular plants are a distinctive group, differing from terrestrial plants in their growth forms and habitats. They have multiple evolutionary origins from terrestrial environments and show a complex evolutionary history [1,2]. Many genera and species of aquatic plants are distributed worldwide [3,4]. Recently, the historical biogeographic scenarios of some genera, including their areas of origin and dispersal routes, have been inferred in the context of phylogenetics based on molecular evidence (e.g., [5-9]). However, the biogeographic history of cosmopolitan species of aquatic plants is seldom studied and needs to be explored through phylogeographic studies based on a broader sampling scheme.
An exponential growth of plant phylogeographic studies has been observed in Europe and North America in the past two decades, and a similar trend has recently been found in China and adjacent regions [10,11]. Common genetic discontinuities, locations of refuges, and routes of colonisation have been revealed in some regions by comparing phylogeographic structures among species [11-19]. In these plant phylogeographic studies, the majority of surveys were conducted on tree species and terrestrial plants, whereas studies on aquatic plants have been relatively scarce [10]. These studies on aquatic plants focused primarily on two groups: seagrasses (e.g., [20-22]) and emergent macrophytes in freshwater environments (e.g., [23-27]). Few studies have focused on freshwater submerged species (but see [28,29]), whose evolutionary processes are most likely distinct from those of emergent species due to their occurrence in exclusively aquatic habitats [2,30], and from those of seagrasses due to their lower population connectivity in discrete and patchy habitats [31]. Therefore, phylogeographic studies on freshwater submerged macrophytes will provide new insights to increase our understanding of plant evolution.
Here, we focus on the submerged plant genus Hydrilla, a monotypic genus of the family Hydrocharitaceae, which is distributed worldwide. The single Hydrilla species, H. verticillata (L.f.) Royle (hydrilla), is found on all continents except Antarctica [32,33]. This species is native to Asia, but it is uncertain whether it is truly native to Europe, Australia and Africa. Hydrilla was first recorded in North America in 1960 and in South America in 2005 [4,32,34]. Similar to most aquatic plants, hydrilla possesses a variety of reproductive strategies to ensure its growth and establishment, including reproduction through seeds, fragmentation, turions on leaf axils and tubers (subterranean turions) [35]. Hydrilla grows in various types of aquatic habitats (such as lakes, rivers and ponds) from tropical to temperate regions, and apparent morphological and karyological variations have been observed in different populations worldwide [32,36-40]. Monoecious and dioecious strains and diploid, triploid and tetraploid plants have also been reported in hydrilla [32,37]. Furthermore, high levels of genetic differentiation have been revealed among worldwide samples of hydrilla based on isoenzyme patterns [37,38], random amplified polymorphic DNA (RAPD) profiling [41,42], and DNA sequences [40,43]. However, previous investigations did not include a sufficient number of samples to explore the evolutionary processes of this submerged species worldwide.
In this study, we first examined the phylogeographic structure of an extensive sample population of hydrilla from China using sequences of the chloroplast trnL-F region. We then inferred the biogeographic history of hydrilla by combining the trnL-F sequences from previous studies conducted worldwide. Our objectives were (1) to examine the genealogical patterns of hydrilla in China, and (2) to infer the original area and dispersal route of hydrilla. This study will provide a good example for us to understand the evolutionary processes that occur in submerged macrophytes.
Genetic variation and phylogeographic structure
A total of 681 sequences were obtained, with lengths ranging from 1,066 to 1,105 bp. The length of the aligned sequences was 1,130 bp, and 32 polymorphic sites were observed, including 18 indels and 14 base substitutions. The sequences were collapsed into 9 haplotypes (A1, A2, B1-B4, C1, C2, and D1). The two most common haplotypes were C1 (occurring 359 times; 52.7 %) and B1 (occurring 227 times; 33.3 %), which were present in 78 populations and 49 populations, respectively. Haplotypes D1, B4, A1, and B3 were detected in 11, 6, 5 and 2 populations, respectively. The remaining three haplotypes, A2, B2, and C2, were each present in a single population. Of the 123 populations we surveyed, 93 populations were monomorphic and consisted of a single haplotype. In the nine groups we defined based on river basins, the highest diversity was present in the Yangtze River Basin (Hd = 0.625, Pi = 0.0038) and the river basins in Southeast China (Hd = 0.730, Pi = 0.0037) (Additional file 1). However, no polymorphism was detected in the three river basins located in Northeast China and North China. The Hd and Pi values for all of the populations surveyed were 0.608 and 0.0038, respectively (Additional file 1).
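For readers unfamiliar with the indices used here, haplotype diversity is Nei's gene diversity, Hd = n/(n-1) (1 - Σ p_i²). The minimal sketch below computes it; the counts of the two common haplotypes are taken from the text, while the split of the remaining 95 individuals across the other seven haplotypes is assumed for illustration. The result lands close to the reported overall Hd of 0.608.

```python
def haplotype_diversity(counts):
    """Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum p_i^2)."""
    n = sum(counts)
    freq_sq = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1.0 - freq_sq)

# C1 (359) and B1 (227) are reported in the text; the split of the
# remaining 95 individuals across the other seven haplotypes is assumed.
counts = [359, 227, 40, 25, 15, 8, 4, 2, 1]  # sums to 681
print(f"Hd = {haplotype_diversity(counts):.3f}")  # ~0.61
```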
A haplotype network with four distinct groups was constructed based on all haplotypes from worldwide samples (see below for the definition of haplotypes) (Fig. 1b). The first group included four haplotypes (C1, C2, H8 and H9); the two Chinese haplotypes (C1 and C2) occurred in all of the basins except RB8 in South China, although their frequency of occurrence was low in the three basins (RB6, RB7, and RB9) located in the southern part of China. The second group included only one haplotype, D1, which was detected in two basins (RB4 and RB5); in RB5, this group was restricted to the middle and lower reaches of the Yangtze River. The third group consisted of six haplotypes (B1-B4, H3 and H4), of which the four Chinese haplotypes (B1-B4) occurred mostly south of the Yangtze River in five basins (RB5-RB9). The fourth group comprised haplotypes A1, A2 and H7, of which the two Chinese haplotypes (A1 and A2) were found in only six populations located in south and southeast China in four basins (RB5-RB8) (Fig. 1a). A permutation test showed that NST (0.842) was significantly greater than GST (0.799, P < 0.01), indicating that closely related haplotypes tended to occur in the same area. An AMOVA revealed that 17.97 % of the total variation occurred within populations and 82.03 % among populations. The global FST value (0.820) indicated a significant genetic structure among the 123 hydrilla populations. When we grouped the populations into basins, the AMOVA showed that large amounts of variation occurred both among basins (40.09 %) and among populations within basins (43.14 %), with 16.77 % of the variation occurring within populations. The mismatch distribution for the overall populations was multimodal (not shown), and a sudden expansion model for hydrilla was therefore rejected.
Phylogenetic relationships
The accessions collected worldwide by Madeira et al. [43] were collapsed into 9 haplotypes: H1 (including samples from China, north Vietnam, Nepal, Pakistan, and India, and dioecious US plants), H2 (Burundi), H3 (New Zealand), H4 (Australia), H5 (Korea and monoecious US plants), H6 (Thailand, Vietnam, and Taiwan), H7 (Indonesia, Malaysia, and Vietnam), H8 (Japan), and H9 (Poland). Four of these haplotypes were identical to haplotypes identified in our samples: H1 = B1, H2 = B2, H5 = B4, and H6 = A1. Thus, a total of 14 haplotypes were obtained and employed for phylogenetic reconstruction using three outgroups. ML analysis and Bayesian inference produced similar topologies (Fig. 2). The monophyly of hydrilla was strongly supported by both analyses (bootstrap support (BS) = 100 %, posterior probability (PP) = 1.00). Four distinct clades with robust support were revealed among the 14 haplotypes of hydrilla, consistent with the four groups of the haplotype network. Haplotypes of the A lineage (A1/H6, A2, and H7) were located in southern China and Southeast Asia.
Divergence time estimates
The stem and crown ages of hydrilla were estimated to be 36.19 Ma (95 % HPD: 33.74-40.93 Ma) and 6.71 Ma (0.12-22.42 Ma), respectively, based on the combined data (Fig. 3). Because there was no bootstrap support for the internal nodes of hydrilla in the multigene tree, owing to low polymorphism (Additional file 2), the trnL-F sequences were used to estimate the interior divergence times. The crown age of clade A + B was estimated to be 4.57 Ma (95 % HPD = 2.06-7.15 Ma) based on the trnL-F sequence data (Fig. 2). The crown node ages of clades A, B and C were all dated to the Pleistocene epoch; the crown age of clade A was estimated at 1.31 Ma.
Genetic variation in hydrilla
At the population level, more than three-fourths of the populations are composed of only a single haplotype. Although this percentage may be overestimated due to the limited samples per population and the single cpDNA fragment used, a lack of intra-population variation appears frequent. This is most likely attributable to the strong ability of hydrilla to reproduce asexually. Hydrilla populations can expand rapidly via various vegetative propagules, including plant fragments, turions and tubers [44-46]. As vegetative reproduction is common in aquatic plants, modest variation within populations has also been observed in other species, e.g., Ranunculus bungei [29], Podostemum ceratophyllum [47], and Hippuris vulgaris [48]. Moreover, certain studies based on nuclear markers suggested that founder effects played an important role in the establishment of populations in aquatic plants [26,49,50]. To explore the role of founder effects in shaping the population structure of hydrilla, further studies using nuclear markers are needed.
This study suggests that China is most likely the central area of genetic diversity for hydrilla. Both Madeira et al. [43] and Benoit [40] identified three clades from worldwide samples of hydrilla based on chloroplast trnL-F sequences, corresponding closely to those identified by Madeira et al. [41,42] using RAPD. As China is the geographic centre of the distribution range of hydrilla, the small number of accessions from China (fewer than 5 individuals) included in those studies is insufficient. Combining the trnL-F sequences of samples obtained worldwide, our results revealed four clades in hydrilla, and China was the only area in which haplotypes from all four clades occurred (Fig. 2). Furthermore, a low level of genetic variation was observed in hydrilla samples from other areas: in Europe, nearly identical isoenzyme patterns were observed in plants from Ireland and Poland [51], and the same trnL-F sequences were present in plants from Ireland and Latvia [40]; in Africa, similar isoenzyme patterns or genetic types were revealed in plants from Uganda, Rwanda and Burundi [38,52]; in South Asia, samples from Nepal, Pakistan and India grouped into the same cluster in random amplified polymorphic DNA (RAPD) analysis [41] and exhibited the same trnL-F sequences [43]; in Southeast Asia, individuals from Vietnam, Thailand, Malaysia and Indonesia grouped into the same cluster in RAPD analysis [42] and included two clades of trnL-F sequences ([43], Fig. 2); and in Australia, samples from five localities included two clades of trnL-F sequences [40]. Based on 109 samples from various areas in Asia and the Indo-Pacific region, the highest genetic diversity was found in China with microsatellite markers [52]. Therefore, the highest genetic diversity of hydrilla is most likely found in populations from China.

Figure 2. The pie charts at each node were obtained using DEC analysis, and the smaller pie charts above and below each node were obtained through S-DIVA and BBM analysis, respectively. The colours correspond to possible ancestral areas; black with an asterisk represents other ancestral ranges; and white with "mix" indicates too many possible ancestral areas to determine. Lowercase letters represent different regions: a) Europe (Poland); b) East Asia (China/Korea/Japan); c) Africa (Burundi); d) South Asia (India/Nepal); e) Southeast Asia (Vietnam/Thailand/Malaysia/Indonesia); f) Oceania (Australia/New Zealand).
Phylogeographic structure of hydrilla
An important characteristic of hydrilla is its high global FST value (0.820), indicating that most genetic variation is found among populations. The results of the AMOVA suggested that half of the detected genetic variation should be ascribed to genetic differentiation among basins. This finding was supported by the significant phylogeographic structure revealed, in which the haplotypes of lineage C occurred mostly in the northern part of China, lineage B occurred in the southern part of China, the lineage D haplotype was restricted to the Huai River and the middle and lower reaches of the Yangtze River, and lineage A was restricted to the southeast corner of China (Fig. 1). According to the distribution of haplotypes among basins, the isolation of individual basins appears to constitute a barrier to inter-basin gene flow. High genetic differentiation among basins has also been reported in some species of aquatic plants, e.g., Batrachium bungei [49], Podostemum irgangii [53], and Podostemum ceratophyllum [47]. However, unlike these species, hydrilla can form tubers, which may survive for several days after removal from water [54] and remain viable even after ingestion and regurgitation by waterfowl [55]. It is therefore possible that waterfowl migration could transport viable tubers across water basins, and the high genetic differentiation of hydrilla populations observed among basins may instead be attributed to other factors associated with the process of colonisation. Two phylogeographic studies on aquatic plants with extensive sampling have been conducted in China (Zizania latifolia [26] and Sagittaria trifolia [25]). In both species, no significant phylogeographic structure was revealed, and the highest diversity was reported from Northeast China, a finding that differs from hydrilla's pattern. Because the two species are emergent and their evolutionary processes are likely to differ significantly from those of submerged groups [2], it is necessary to conduct comparative phylogeographic studies focusing on submerged macrophytes.
Biogeographic history of hydrilla
The divergence time estimates for hydrilla show a remarkably long branch between the stem node of the genus in the late Eocene and its crown node in the late Miocene (Figs. 2 and 3), suggesting long-term stasis or extinct lineages before diversification, similar to what has been found in several other genera of Hydrocharitaceae, such as Najas, Ottelia, and Blyxa [9]. The East Asian origin of hydrilla inferred through ancestral area reconstruction is supported by the fact that China is most likely the centre of genetic diversity for the genus. The first diversification of hydrilla into three lineages (clade A + B, clade C and clade D) was dated to the late Miocene (Fig. 3) and may have been triggered by warm-cool alternations and a cold, dry climate resulting from strengthened East Asian winter monsoons during the late Miocene and Pliocene [56-59]. Clade A + B dispersed from East Asia to other areas and then diverged into clades A and B during the early Pliocene (Fig. 2). The diversification events for clades A, B and C were all dated to the Pleistocene epoch, associated with Quaternary glacial/interglacial cycles. Because the interior divergence times in hydrilla were estimated from the trnL-F sequences alone, chosen for their higher polymorphism (Additional file 2), their accuracy needs to be further tested with more sequence data.
All three analyses (S-DIVA, BBM and DEC) inferred vicariance at nodes 17, 22 and 25, of which node 17 is the crown node of clade A and node 22 is the crown node of clade B, both with robust support (Fig. 2), indicating vicariant events during the diversification of clades A and B. The crown ages of both clades were dated to the Pleistocene epoch, in which cool glacial periods and warm interglacial periods alternated. Climate change could be responsible for these vicariant events (e.g., [27]). For example, in clade A, East Asian populations could have been isolated from Southeast Asian populations during more than one glacial period owing to the emergence or disappearance of large land areas between these two regions caused by substantial sea level fluctuations [60]. The S-DIVA, BBM and DEC analyses inferred dispersal events at different nodes, with the exception of node 16. Most dispersal events were relatively recent, suggesting that dispersal by waterfowl is likely given the transoceanic distribution. Waterfowl are considered the most significant dispersal agents for aquatic plants [61,62]. Although the ability of seeds to remain viable after passing through the gut has been reported in some groups of submerged macrophytes, such as Potamogeton and Najas [63,64], whether hydrilla seeds can survive digestion by waterfowl has not yet been tested. Some dispersal events, e.g., from East Asia to Southeast Asia in clade A and from East Asia to South Asia in clade B, coincide with two major flyways of Anatidae in Asia [65], indicating the role of waterfowl in hydrilla distribution.
Implications for the invasion of hydrilla
Introducing the natural enemies of weeds from their native range is an effective way to control invasive weeds; thus, it is important to pinpoint the likely origin of invasive weeds. Two biotypes of hydrilla (dioecious and monoecious) have been recognised in the United States [32] and are thought to have been introduced separately [35,66]. The dioecious plants were reported to have been introduced from Sri Lanka to Florida [67], and the South Asian geographic origin of the US dioecious hydrilla has been confirmed in genetic studies [40,41,43]. The occurrence of the common haplotype B1/H1 in China (Figs. 1a and 2) indicates that the southern part of East Asia is also a possible source region for the US dioecious hydrilla. The monoecious plants found in the US were possibly introduced from Korea, based on genetic similarity [41,43]. The occurrence of the common haplotype B4/H5 in eastern China (Figs. 1a and 2) suggests eastern China as another possible source area for the US monoecious hydrilla. Given the independent origins of the two biotypes of hydrilla, the search for natural enemies needs to be conducted in each region of origin, especially in the area common to both in China.
The northernmost monoecious hydrilla population occurs in the Lucerne/Pipe Lakes complex in Washington, at 47.37° north latitude [66]. The northernmost dioecious hydrilla population was found in Idaho; it lacks detailed location information but is most likely located at approximately 42° north latitude [40]. Both biotypes belong to clade B in the phylogenetic tree (Fig. 2). In their native range, the hydrilla populations belonging to clade C occur farther north than those belonging to clade B (Fig. 1a). The northernmost population we collected occurs at 49.12° north latitude, similar to that of the US-Canada border. Based on their latitudinal distribution, hydrilla plants from clade C could easily establish in the border area between the US and Canada. Thus, importing hydrilla from the northern part of East Asia and from Europe should be forbidden to avoid a new invasion of hydrilla into North America.
Conclusions
Our study reveals that China is most likely the centre of genetic diversity in Hydrilla, and our findings point to an East Asian origin of Hydrilla. The study provides empirical evidence, based on a phylogeographic analysis, that reveals the complex biogeographic history of diversification and colonisation in worldwide species of submerged macrophytes. Our results will be more persuasive once more extensive samples from other countries are included. Comparative studies on other submerged macrophytes that are distributed worldwide would be valuable in better understanding the diversification and colonisation of this distinct group of plants.
Plant materials
A total of 681 individuals of hydrilla were collected at 123 sites throughout its distribution range in China, from the northeast to the southwest (Fig. 1a, Additional file 1). Three to 12 shoots per population were randomly sampled from different individuals at intervals of at least 10 m. Young, healthy plant fragments of approximately 10 cm in length were collected and dried with silica gel for subsequent DNA extraction. Voucher specimens from each population were deposited in the herbarium of Wuhan University (WH).
DNA extraction, amplification and sequencing
Total genomic DNA was extracted from silica-dried plant fragments using the DNA Secure Plant Kit (Tiangen Biotech, Beijing, China). Primers "c" and "f" reported by Taberlet et al. [68] were used to amplify and sequence the chloroplast trnL-F region. Polymerase chain reaction (PCR) was performed using 10-30 ng of genomic DNA, 0.1 μM each primer, 0.2 mM each dNTP, 2 mM MgCl2, and 0.6 U of ExTaq DNA polymerase (TaKaRa) in a volume of 25 μL under the following conditions: 3 min at 95 °C, followed by 35 cycles of 30 s at 95 °C, 30 s at 55 °C, and 90 s at 72 °C, and then a final 5 min extension at 72 °C. Amplifications were conducted in a Veriti 96-Well Thermal Cycler (Applied Biosystems, Foster City, USA). The PCR products were purified and sequenced in both directions by the Beijing Genomic Institute in Wuhan, China. All sequences of different haplotypes were deposited in GenBank (Accession Nos. KM982392-KM982400).
Phylogeographic analyses
Sequences were aligned using the program Mafft 6.7 [69], and manual adjustment was performed in Se-Al 2.0 [70]. The number of haplotypes (H) and polymorphic sites (S), haplotype diversity (Hd), and nucleotide diversity (Pi) were calculated using DNASP 5.10 [71]. To interpret the genealogical relationships among sequences, a median-joining network [72] based on haplotypes was generated from the cpDNA sequence data using NETWORK 4.5.1.6 (http://www.fluxus-engineering.com). In the network analysis, gaps of two or more base pairs were coded as single mutation events. When overlapping indels occurred, the overlapping portion was considered a single event [73].
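As a rough illustration of the two diversity statistics, the following Python sketch computes Hd and Pi from a toy alignment; it is not part of the original DnaSP workflow, and the sequences are placeholders.

```python
# Illustrative sketch of haplotype diversity (Hd) and nucleotide diversity (Pi)
# for an alignment of equal-length sequences; toy data, not from the study.
from itertools import combinations
from collections import Counter

def haplotype_diversity(seqs):
    """Hd = n/(n-1) * (1 - sum(p_i^2)), with p_i the frequency of haplotype i."""
    n = len(seqs)
    freqs = [c / n for c in Counter(seqs).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

def nucleotide_diversity(seqs):
    """Pi = average pairwise proportion of differing sites."""
    n, length = len(seqs), len(seqs[0])
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / (len(diffs) * length)

seqs = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACCTACGA"]
print(haplotype_diversity(seqs), nucleotide_diversity(seqs))
```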
We defined groups of the 123 populations of hydrilla based on basin boundaries. Nine groups were defined from the northeast to the southwest, corresponding to nine main basins: the Amur-Heilong River Basin (RB1), the Liao River and Hai River Basin (RB2), the Yellow River Basin (RB3), the Huai River Basin (RB4), the Yangtze River Basin (RB5), river basins in Southeast China (RB6), the Pearl River Basin (RB7), river basins in South China (RB8), and river basins in Southwest China (RB9) (Fig. 1a). An analysis of molecular variance (AMOVA) was used to partition genetic variation among and within groups, as implemented in ARLEQUIN 3.1 [74]. The occurrence of significant phylogeographic structure was tested by comparing two measures of population differentiation, G_ST and N_ST, based on 1,000 permutations in PERMUT.
We examined pairwise mismatch distributions to detect historical demographic expansions using DNASP. Populations at demographic equilibrium should present a multimodal, ragged distribution of pairwise differences, whereas populations that experienced a sudden demographic expansion are expected to display a unimodal, smooth distribution [75,76].
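A minimal sketch of the mismatch-distribution computation is given below, assuming aligned, equal-length sequences; the toy data are illustrative only.

```python
# Minimal sketch of a pairwise mismatch distribution; a sudden expansion
# yields a unimodal, smooth curve, equilibrium a multimodal, ragged one.
from itertools import combinations
from collections import Counter

def mismatch_distribution(seqs):
    """Normalized histogram of pairwise nucleotide differences over all pairs."""
    counts = Counter(sum(a != b for a, b in zip(s1, s2))
                     for s1, s2 in combinations(seqs, 2))
    total = sum(counts.values())
    return {k: counts[k] / total for k in sorted(counts)}

print(mismatch_distribution(["ACGT", "ACGA", "ACCA", "ACGT"]))
```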
Biogeographic analyses
We combined the sequences of 30 samples collected worldwide [43] into our dataset for phylogenetic analyses. Identical sequences were collapsed into a single haplotype. Based on phylogenetic studies of Hydrocharitaceae [9,77,78], three closely related species, Vallisneria natans, V. spinulosa and Najas marina, were included as outgroups. We conducted maximum likelihood (ML) analysis in the program GARLI [79], beginning with random trees and using 10,000,000 generations per search. Bootstrap support was estimated from 1,000 bootstrap replicates in GARLI. Bayesian inference was implemented in MrBayes 3.1.2 [80]. Two independent Markov Chain Monte Carlo (MCMC) runs were conducted simultaneously, each beginning with a random tree and including four chains (one cold and three heated). Two million generations were run, sampling every 1,000 generations. Tracer 1.4 [81] was employed to check whether the chains had converged, and the first 25 % of samples were discarded as burn-in. The best-fit model of nucleotide substitution for the ML and Bayesian analyses was identified under the Akaike information criterion (AIC) implemented in Modeltest 3.7 [82].
The divergence time between clades in hydrilla was estimated in two steps. First, we estimated the age of the stem and crown nodes of hydrilla based on combined 18S + rbcL + matK + trnK 5' intron + rpoB + rpoC1 + cob + atp1 sequence data from 12 genera within Hydrocharitaceae and three outgroups from Chen et al. [9]. We amplified and sequenced these eight fragments in four hydrilla individuals with haplotypes A2, B1, C1, and D1, representing four distinct lineages in the phylogenetic analysis (see results). All of the obtained sequences were deposited in GenBank (Accession Nos. KM982360-KM982391) and combined into the dataset. Datasets for cpDNA and 18S sequences were found to be combinable according to the incongruence length difference test [83] (p > 0.05). The divergence time estimation was conducted in BEAST 1.7.4 [84] using the dataset including 16 taxa. The parameter set and calibration points were the same as those used by Chen et al. [9]. Because the combined dataset did not provide a well-supported topology within hydrilla (see Fig. 3), and no single fragment did either (results not shown), we used the trnL-F sequences to infer the divergence times of internal nodes in hydrilla. Second, we employed the ages of the stem and crown nodes of hydrilla to estimate interior divergence times in hydrilla based on the trnL-F sequences. We applied the GTR model of nucleotide substitution with six gamma categories under an uncorrelated lognormal relaxed clock model [85]. MCMC analyses of 300,000,000 generations were implemented, sampling every 1,000 generations. The first 10 % of generations were discarded as burn-in, and the parameters were checked using the program Tracer.
Based on the Bayesian framework, three analyses were used to reconstruct the possible ancestral ranges of hydrilla. A statistical dispersal-vicariance analysis (S-DIVA) and a Bayesian binary MCMC (BBM) analysis were implemented in the program RASP (Reconstruct Ancestral State in Phylogenies, [86]). Another event-based method, dispersal-extinction-cladogenesis (DEC), was implemented in the program LAGRANGE 2.0.1 [87,88]. Six areas were defined according to the distribution range of hydrilla: a) Europe (Poland); b) East Asia (China/Korea/Japan); c) Africa (Burundi); d) South Asia (India/Nepal/Pakistan); e) Southeast Asia (Vietnam/Thailand/Malaysia/Indonesia); and f) Oceania (Australia/New Zealand). Only the most closely related genus, Vallisneria, was chosen as outgroup, and its ancestral area was restricted to East Asia and Southeast Asia according to the results of Chen et al. [9]. The invaded region of North America was excluded from these analyses due to the human introduction of hydrilla to the continent. The maximum number of areas was set to values from two to six. In the BBM analysis, a fixed JC + G (Jukes-Cantor + Gamma) model was chosen with a null root distribution. The MCMC chains were run for 5,000,000 generations, sampling every 100 generations. In the DEC analysis, the dispersal probability between areas was set to the same value.
"Biology",
"Environmental Science"
] |
Analysis of Spectral Sensing Using Angle-Time Cyclostationarity
This work presents a novel spectral sensing method for the detection of signals presenting nonlinear phase variation over time. The introduced method is based on angle-time cyclostationarity theory, which applies transformations to the signal to be sensed in order to mitigate the effects of nonlinear phase variation. The architecture is employed for sensing binary phase shift keying (BPSK) signals and is compared with time cyclostationarity. The obtained simulation results clearly demonstrate the efficiency of the proposed approach, with the detection performance for primary users improved by about 8 dB.
Introduction
The development and widespread use of wireless communication devices have motivated several studies concerning the electromagnetic spectrum [1]. Research has shown that access to the electromagnetic spectrum depends on more than its limited availability alone [2]. From this point of view, dynamic spectrum access (DSA) has been proposed as a novel spectrum allocation policy [3]. In the specific case of radio systems, the transceiver should be able to detect free portions of the electromagnetic spectrum [4] by employing spectrum sensing techniques [5].
In general, three classic methods are defined in the literature for spectrum sensing: energy detection, analysis of cyclostationary characteristics, and matched filtering [6]. In spectrum sensing by cyclostationary feature analysis, detecting the free portions of the spectrum is based on statistical moments of the received signal [7]. This method is regarded as the most prominent one in scenarios characterized by low signal-to-noise ratios (SNR), because there is no need for previous knowledge of the signals to be sensed [2].
Recently, some studies have analyzed sensing performance through the use of cyclostationarity in distinct communication scenarios. The authors in [8] proposed a cyclostationary detector based on the softmax regression model, with the objective of improving detection performance under low SNRs in additive white Gaussian noise (AWGN) channels. The work developed in [9] assessed the gains obtained through cooperative spectrum sensing in real conditions using mobile sensors randomly deployed in an environment, while using cyclostationary analysis; the goal was to achieve better results in the presence of degenerative effects on the communication channels. The author in [10] investigated the performance of a multiple-input, multiple-output, orthogonal frequency-division multiplexing (MIMO-OFDM) radio system where the cognitive radio equipment senses the communication channels continuously through compressive sensing with cyclostationary detection. The use of cyclostationary analysis for sensing signals whose sampling rate is lower than the Nyquist rate in AWGN channels is suggested in [11]. Besides, the work in [12] developed a cyclostationary detector for binary offset carrier (BOC) signals, which are widely used in current and next-generation global navigation satellite systems; that detection technique is also assessed in AWGN channels.
In most spectrum sensing architectures based on cyclostationary analysis, the signal to be sensed has a phase angle that varies linearly over time [8,11,12]. However, in communication systems, the received signal can have a nonlinear phase variation caused by a time-variant Doppler effect, which distorts the cyclostationary features of the analyzed signals [13,14]. The obtained results thus become inaccurate, justifying the development of spectrum sensing techniques that remain effective in this scenario. In the literature, in scenarios where Doppler deviations are considered, the solutions to mitigate their effect consist of using multiple receiving antennas [10,15] or employing cooperative spectrum sensing [9,16,17], which implies a higher cost for the system implementation.
This work aims at sensing signals with nonlinear phase behavior accurately, using angle-time cyclostationary (ATCS) analysis. The introduced technique consists of a novel feature extractor that provides a generalized representation of the conventional cyclostationarity concept, which is hereafter referred to as time cyclostationarity.
The use of ATCS processes has been quite successful in the field of mechanical engineering, mainly for extracting signal features in the assessment of variables related to the rotational movement of engines, where speed varies over time [13,[18][19][20]. In particular, this work addresses the use of ATCS theory associated with a detection architecture in order to decide whether a communication signal exists or not in a given range of the spectrum. The improved performance of the proposed approach is also validated through a proper comparison with cyclostationary sensing methods by means of simulation results. The method does not increase the computational complexity compared to time cyclostationarity and is highly parallelizable, analogously to cyclostationary detection [21,22]. In addition, the method is highly flexible, with the possibility of using it in a cooperative sensing system or with multiple antennas to provide even greater robustness to the effects of Doppler deviations.
The remainder of this work is organized as follows. The theory of angle-time cyclostationary processes, which can be used for spectral sensing, is described in Section 2. Section 3 addresses the proposed technique. Section 4 presents the simulation results, which are discussed in detail. Finally, the main conclusions are given in Section 5.
Angle-Time Cyclostationary Procedures
Random signal processing generally adopts a model in which the signals are wide-sense stationary (WSS) [2]. However, in signals found in wireless communication systems, statistical parameters vary over time. A more effective method for modeling the statistical behavior of these signals is to assume that they are cyclostationary [23]. In this case, some statistical moments can vary over time, but periodically. However, in scenarios characterized by nonlinear phase signals, the cyclostationary characteristics are obscured [18], and the use of angle-time cyclostationary analysis can be more appropriate.
Initially, this section introduces the fundamental concepts of cyclostationary and angle-time cyclostationary analysis. It is then shown how these analyses can be used for spectral sensing. Finally, we present a mathematical demonstration that cyclostationary processes are a special class of angle-time cyclostationary processes.
Cyclostationary Analysis
A given signal x(t) is said to be second-order cyclostationary if its autocorrelation function is periodic in time [24]:

$R_x(t + T, \tau) = R_x(t, \tau),$

where T is the cyclostationary period, τ is the time delay, and $R_x(t, \tau)$ is the autocorrelation function defined by [25]:

$R_x(t, \tau) = E\{x(t)\, x^*(t - \tau)\},$

where $x^*(t - \tau)$ is the complex conjugate of $x(t - \tau)$ and E{·} denotes the expected value operator. When the theory regarding second-order cyclostationary processes is applied to spectral sensing, the main function used is the spectral correlation density (SCD), represented by $S_x^{\alpha}(f)$, which is defined as the double Fourier transform of the autocorrelation function of a cyclostationary process [26]:

$S_x^{\alpha}(f) = \int\!\!\int R_x(t, \tau)\, e^{-j2\pi(\alpha t + f\tau)}\, dt\, d\tau,$

where α is the cyclic frequency and f is the frequency, both measured in Hz. In this case, the first Fourier transform maps time to the cyclic frequency α, while the second one maps the time delay τ (in seconds) to the frequency f. It is demonstrated in [24] that the SCD can be calculated as the correlation between the spectral components f and f + α of a signal x(t), that is:

$S_x^{\alpha}(f) = \lim_{W \to \infty} \frac{1}{W}\, E\{X_W(f + \alpha)\, X_W^*(f)\},$

where $X_W(f)$ denotes the Fourier transform of x(t) over a finite observation window of length W. The evaluation of the SCD as expressed by Equations (3) and (4) generates a surface on the plane (f, α), which is symmetric both in terms of f and α [27]. Considering such symmetry, a projection of the SCD onto a plane orthogonal to f can be obtained for α ≥ 0, which is called the alpha profile [2]:

$P_x(\alpha) = \max_f |S_x^{\alpha}(f)|.$
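For illustration, the following Python sketch estimates the SCD with an averaged cyclic periodogram, correlating the spectral components f and f + α over windowed segments; the window length, overlap, and bin-wise treatment of α are assumptions, not parameters taken from this paper.

```python
# Rough averaged-cyclic-periodogram sketch of the SCD; alpha and f are
# expressed in FFT-bin units, and the windowing choices are assumptions.
import numpy as np

def scd_estimate(x, nw=256, step=128):
    segs = [x[i:i + nw] * np.hanning(nw)
            for i in range(0, len(x) - nw + 1, step)]
    X = np.fft.fft(segs, axis=1)            # one spectrum per segment
    S = np.zeros((nw, nw), dtype=complex)   # S[alpha_bin, f_bin]
    for a in range(nw):
        # correlate X(f + alpha) with X*(f), averaged over segments
        S[a] = np.mean(np.roll(X, -a, axis=1) * np.conj(X), axis=0) / nw
    return S

rng = np.random.default_rng(0)
S = scd_estimate(rng.standard_normal(4096))
alpha_profile = np.abs(S).max(axis=1)       # noise peaks only near alpha = 0
print(alpha_profile[:4])
```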
Angle-Time Cyclostationary Analysis
Analogously to the case of cyclostationary processes, a given signal x(t) is said to be second-order angle-time cyclostationary if its respective angle-time autocorrelation function is periodic with respect to the angle, that is [13]:

$R_x(\theta + \Theta, \tau) = R_x(\theta, \tau),$

where Θ is the angular period and $R_x(\theta, \tau)$ is the angle-time autocorrelation function defined by [19]:

$R_x(\theta, \tau) = E\{x(t(\theta))\, x^*(t(\theta) - \tau)\},$

and t(θ) is the time instant that corresponds to a given angle θ. When angle-time cyclostationary processes are used, the most important analysis tool is the order-frequency spectral correlation (OFSC) function, which is defined as the double Fourier transform applied to the angle-time autocorrelation function, resulting in [13]:

$SC_x(\alpha_\theta, f) = \int\!\!\int R_x(\theta, \tau)\, e^{-j(\alpha_\theta \theta + 2\pi f \tau)}\, d\theta\, d\tau.$

In this case, the first Fourier transform maps the phase θ (in radians) to the cyclic angular frequency $\alpha_\theta$ (dimensionless), whereas the second Fourier transform maps the time delay τ (in seconds) to the frequency f (in Hz). According to [13], Equation (8) can be rewritten as:

$SC_x(\alpha_\theta, f) = \lim_{W \to \infty} \frac{1}{\Phi(W)}\, E\{F_W[x(t)]\, F_W^*[x_{\alpha_\theta}(t)]\},$

where $F_W[\cdot]$ is the Fourier transform over a finite time window W and $x_{\alpha_\theta}(t)$ is a transformed representation of the signal x(t), which is calculated as [19]:

$x_{\alpha_\theta}(t) = \dot{\theta}(t)\, x(t)\, e^{-j \alpha_\theta \theta(t)}.$

Besides, the instantaneous angular speed, in rad/s, is given by:

$\dot{\theta}(t) = \frac{d\theta(t)}{dt}.$

From $\dot{\theta}(t)$, it is possible to obtain the angular sector spanned during the time interval W in the form:

$\Phi(W) = \int_W \dot{\theta}(t)\, dt.$

The OFSC definition presented in Equation (9) is similar to the SCD one given in Equation (4). Unlike the SCD, however, the OFSC corresponds to the statistical correlation between the signal x(t) and its respective transformed version $x_{\alpha_\theta}(t)$, which is calculated from Equation (10).
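The sketch below illustrates, under an assumed synthetic phase law, how the transformed signal $x_{\alpha_\theta}(t)$ and the spanned angular sector Φ(W) can be computed numerically; all names and numerical values are placeholders.

```python
# Hedged sketch of the angle-time transform x_alpha(t) = theta_dot(t) * x(t)
# * exp(-j * alpha_theta * theta(t)); the phase law below is synthetic.
import numpy as np

fs = 8192.0
t = np.arange(4096) / fs
theta = 2 * np.pi * 100 * t + 0.5 * np.sin(2 * np.pi * 0.5 * t)  # nonlinear phase
x = np.cos(theta)

theta_dot = np.gradient(theta, t)        # instantaneous angular speed (rad/s)
alpha_theta = 2.0                        # cyclic angular frequency (dimensionless)
x_alpha = theta_dot * x * np.exp(-1j * alpha_theta * theta)

Phi = np.trapz(theta_dot, t)             # angular sector spanned over the window
print(Phi, theta[-1] - theta[0])         # both measure the swept angle
```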
Analogously to the cyclostationary case and considering the symmetries that exist in the OFSC, the alpha-angle profile can be defined as the projection of the OFSC onto a plane orthogonal to f for $\alpha_\theta \geq 0$, i.e.:

$P_x(\alpha_\theta) = \max_f |SC_x(\alpha_\theta, f)|.$

Spectral sensing based on cyclostationary or angle-time cyclostationary analysis relies on the principle that stationary noise has a spectral line only at the cyclic frequency α = 0, as shown in Figure 1 [23], which presents the alpha-angle profile calculated for a zero-mean Gaussian noise with unit variance. On the other hand, in the case of modulated signals, the alpha-angle profile always presents spectral lines for at least one value α ≠ 0, which can be used for the sensing task.
Angle-Time Cyclostationary Analysis for Communication Signals
This work proposes the use of angle-time cyclostationary analysis for spectrum sensing. In particular, the OFSC and the alpha-angle profile are adopted to determine the presence or absence of communication signals. Thus, it is demonstrated in this subsection that the OFSC is a generalization of the SCD.
Let us consider a typical passband communication signal x(t). It is assumed that the phase of this signal, represented by θ(t), can be written in terms of a quantity that varies linearly with time plus a differentiable nonlinear variation, that is:

$\theta(t) = \omega_o t + \wp(t),$

where $\omega_o$ corresponds to the angular frequency of the carrier signal in rad/s, and ℘(t) is any nonlinear variation with respect to time, caused by the distortions of the communication channel. After some manipulation, the transformed version of the signal x(t), corresponding to $x_{\alpha_\theta}(t)$ as defined in Equation (10), can be calculated as:

$x_{\alpha_\theta}(t) = \nu(t)\, e^{-j \alpha_\theta \omega_o t},$

where:

$\nu(t) = [\omega_o + \dot{\wp}(t)]\, x(t)\, e^{-j \alpha_\theta \wp(t)}.$

Thus, it is possible to obtain the OFSC as:

$SC_x(\alpha_\theta, f) = \lim_{W \to \infty} \frac{1}{\Phi(W)}\, E\{F_W[x(t)]\, F_W^*[\nu(t)\, e^{-j \alpha_\theta \omega_o t}]\}.$

Assuming $F_W[\nu(t)] = V(\omega)$, applying the frequency-shifting property of the Fourier transform to the term $F_W[\nu(t)\, e^{-j \alpha_\theta \omega_o t}]$, and substituting $\beta = \alpha_\theta\, \omega_o$, it is possible to define the OFSC as:

$SC_x(\alpha_\theta, \omega) = \lim_{W \to \infty} \frac{1}{\Phi(W)}\, E\{X_W(\omega)\, V^*(\omega + \beta)\}.$

Equation (18) corresponds to the calculation of a spectral cross-correlation density between the signals x(t) and ν(t). A practical way to check this similarity is obtained when ℘(t) = 0, i.e., when the channel does not cause interference in the phase of the received signal. In this case, $\nu(t) = \omega_o\, x(t)$ and $\Phi(W) = \omega_o W$, making the OFSC identical to the SCD.
In this sense, the OFSC can be seen as a generalization of the SCD function. For cases where the phase of signal x(t) is linear over time, both metrics are equivalent. However, if the phase of x(t) contains nonlinear components, applying the OFSC will lead to other results than those provided by the SCD, also reinforcing the periodic characteristics of x(t), which are lost when a conventional cyclostationary analysis is employed [13].
To illustrate such concepts, the calculation of the SCD and OFSC is presented in Figure 2a,b, respectively, for an amplitude modulation, double sideband full carrier (AM-DSB-FC) modulated signal with coherent nonlinear phase variation. In this case, the spectral lines of the alpha profile are attenuated, which would impair the detection of this signal during spectrum sensing. On the other hand, these same spectral lines still exist in the alpha-angle profile, thus denoting the robustness of this metric in scenarios characterized by nonlinear phase over time.
Proposed Sensing Architecture
The spectrum sensing architecture using the angle-time cyclostationarity analysis proposed in this work relies on the detection of amplitude peaks for $\alpha_\theta > 0$ in modulated signals, since noise exhibits no amplitude peaks in the alpha-angle profile for $\alpha_\theta > 0$.
In this context, a detection approach is introduced based on a decision metric, called the sensing metric, calculated from the alpha-angle profile as the largest normalized amplitude observed for $\alpha_\theta > 0$:

$\varepsilon = \max_{\alpha_\theta > 0} P_x(\alpha_\theta).$

Assuming the existence of a communication signal in the sensed spectrum range, the value of the sensing metric ε will tend to unity, because the alpha-angle profile is normalized in terms of the maximum value of the OFSC. However, if a given spectral band is not occupied, the observed signal will only be composed of white Gaussian noise, and the value assumed by the sensing metric will consequently tend to lower values.
Thus, assuming a suboptimal threshold ξ, the decision on the occupation of a particular spectrum band consists of the following binary hypothesis test:
• If ε < ξ, then the spectral band under analysis is free, and transmission may occur;
• If ε ≥ ξ, then the analyzed spectral band is occupied.
In this paper, the suboptimal decision threshold ξ is obtained through a curve that relates decision thresholds to the false alarm probabilities of the sensing architecture. This curve is obtained by making the analyzed signal consist of AWGN noise alone, with zero mean and unit variance. From the desired false alarm probability and using this curve, the threshold to be used is chosen, as sketched below.
Estimation of the OFSC
In this work, the OFSC calculation is carried out through estimation from a discrete sequence of finite size. It can be demonstrated that the OFSC estimator for a discrete sequence $\{x(n)\}_{n=0}^{L-1}$ with L samples can be determined from the Welch periodogram, by averaging windowed cross-spectra of x(n) and its transformed version [13], where:

$x_{\theta w}(n) = w_s(n)\, x(n)\, \dot{\theta}(n)\, e^{-j \alpha_\theta \theta(n)},$

Φ is the angular sector spanned by the sequence, and $w_s(n)$ corresponds to a version of a window $\{w(n)\}_{n=0}^{N_w - 1}$ shifted by a multiple of R samples, i.e., $w_s(n) = w(n - sR)$, with the number of segments S given by:

$S = \left\lfloor \frac{L - N_w}{R} \right\rfloor + 1,$

where ⌊·⌋ denotes the floor operator, DTFT is the discrete-time Fourier transform, and $\|w\|^2$ is the energy associated with the adopted window, which normalizes the estimate.
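The following sketch mirrors this Welch-style segmentation; the normalization shown is an assumption consistent with the definitions above, not a verbatim transcription of the paper's Equation (20).

```python
# Sketch of a Welch-style OFSC estimator: S windowed, overlapped segments of
# x(n) and the transformed x_theta_w(n); normalization is an assumption.
import numpy as np

def ofsc_estimate(x, theta, theta_dot, alpha_theta, nw=256, r=128):
    w = np.hanning(nw)
    L = len(x)
    S = (L - nw) // r + 1                   # S = floor((L - Nw)/R) + 1 segments
    xt = theta_dot * x * np.exp(-1j * alpha_theta * theta)
    acc = np.zeros(nw, dtype=complex)
    for s in range(S):
        sl = slice(s * r, s * r + nw)
        acc += np.fft.fft(w * x[sl]) * np.conj(np.fft.fft(w * xt[sl]))
    Phi = theta[-1] - theta[0]              # spanned angular sector
    return acc / (S * Phi * np.sum(w ** 2))

# Example call with a synthetic constant-speed phase law:
t = np.arange(2048) / 8192.0
theta = 2 * np.pi * 100 * t
x = np.cos(theta)
print(np.abs(ofsc_estimate(x, theta, np.gradient(theta, t), alpha_theta=0.0))[:3])
```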
Simulation Results
This section presents simulation results comparing the performance of angle-time cyclostationarity and time cyclostationarity when sensing communication signals. For this purpose, a signal x(t) obtained from BPSK (binary phase shift keying) modulation and subjected to white Gaussian noise is employed, whose phase at the receiver terminal can be written as:

$\theta(t) = 2\pi f_c t + \wp(t),$

where $f_c$ is the carrier frequency and ℘(t) is the instantaneous phase variation of the signal x(t), caused by a time-variant Doppler effect and represented by:

$\wp(t) = A_{d,\max}\, \sin(2\pi f_{ad}\, t).$

In this case, the term $\Delta f_c = 2\pi f_{ad} A_{d,\max}$ corresponds to the maximum Doppler deviation with respect to $f_c$, and $f_{ad}$ is the frequency at which the deviation occurs. The sensing probability of the signal x(t) was determined considering a nonlinear variation of the signal phase with $\Delta f_c$ = 12 Hz and $f_{ad}$ = 0.5 Hz/s. In all cases, the carrier frequency is $f_c$ = 4096 Hz and the sampling frequency is $f_s$ = 32,768 Hz.
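A possible reconstruction of this test signal in Python is sketched below; the symbol rate, random seed, and noise scaling are assumptions not specified in the paper.

```python
# Hedged reconstruction of the test signal: BPSK at f_c = 4096 Hz sampled at
# f_s = 32768 Hz, with a sinusoidal phase perturbation whose maximum angular
# deviation is Delta_fc = 2*pi*f_ad*A_dmax. Symbol rate and SNR are assumed.
import numpy as np

fs, fc, f_ad = 32768, 4096, 0.5
A_dmax = 12 / (2 * np.pi * f_ad)            # so that 2*pi*f_ad*A_dmax = 12
t = np.arange(fs) / fs                      # 1 s of signal

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 256) * 2 - 1      # BPSK symbols at 256 baud (assumed)
baseband = np.repeat(bits, fs // 256)

phase = 2 * np.pi * fc * t + A_dmax * np.sin(2 * np.pi * f_ad * t)
sig = baseband * np.cos(phase)

snr_db = -5                                 # assumed operating point
noise_std = np.sqrt(np.mean(sig ** 2) * 10 ** (-snr_db / 10))
x = sig + rng.standard_normal(len(sig)) * noise_std
```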
The alpha profile was estimated through the cyclic periodogram detection (CPD) algorithm proposed in [28] with the parameters listed in Table 1. The alpha-angle profile was estimated through the Welch periodogram described in Section 3.1 with the parameters presented in Table 2.
Characterization of the Sensing Metric
To investigate the behavior of the sensing metric defined in Equation (19) as a function of channel conditions, simulation tests were carried out for an SNR range from −15 dB to 0 dB in steps of 1 dB. For comparison purposes, a reference curve was obtained by using the alpha profile, rather than the alpha-angle profile, in the calculation of the sensing metric. The curves in Figure 3 consider a total of 300 simulations for each SNR value, from which the average values were calculated. From Figure 3, it can be stated that the values of the sensing metric for both techniques tend to decrease as the noise increases, as expected, since the performance of the sensing architecture is affected when the SNR decreases. However, the value of the sensing metric when using angle-time cyclostationarity is superior to that associated with time cyclostationarity, emphasizing the robustness of this sensing technique in scenarios characterized by nonlinear variation of the received signal phase.
Probability of Detection and False Alarm
Considering the IEEE 802.22 standard [29], which discusses the requirements of cognitive radio systems, a constant false alarm probability of 10% is desired. This value can be obtained by adjusting the comparison threshold used in the decision making of the spectrum sensing architecture. The search for this threshold is performed from the curves plotted in Figure 4, which show the relationship between a given comparison threshold and the resulting false alarm probability for time cyclostationarity and angle-time cyclostationarity.
In Figure 4, the comparison thresholds $\xi_{CS}$ = 0.358 and $\xi_{AT-CS}$ = 0.464 were chosen for time cyclostationarity and angle-time cyclostationarity, respectively, while the performance of the sensing architectures was assessed in Figure 5. Each point of the curve in Figure 5 was obtained through 250 simulations, and the average detection rate was determined for each group. It is noted that angle-time cyclostationarity presents superior performance, which is an expected result given the behavior of the sensing metrics as a function of the SNR presented in Figure 3. Once again, this technique proves to be robust when dealing with received signals whose phase variation is nonlinear.
Conclusions
This paper presented a novel spectrum sensing technique based on the angle-time analysis of communication signals, where this approach is also compared with the conventional time cyclostationary analysis. From the simulation results, it was demonstrated that the angle-time cyclostationarity is highly effective in scenarios where the signal phase does not vary linearly. The main contributions of this work can be stated as follows: (i) application of the angle-time cyclostationarity to spectrum sensing in communication systems; (ii) proposal of a mathematical representation in order to demonstrate that the angle-time cyclostationarity is a generalization of the conventional cyclostationarity; (iii) introduction of a novel sensing metric for the angle-time cyclostationary analysis in terms of the alpha-angle profile; and (iv) development of a novel spectrum sensing technique based on the angle-time cyclostationarity.
Future work includes the extension of this method to sense other types of modulation, such as quadrature phase shift keying (QPSK), quadrature amplitude modulation (QAM), and minimum-shift keying (MSK), among others, and the development of an automatic modulation classification architecture based on angle-time cyclostationary analysis.
"Engineering",
"Computer Science"
] |
APPLICATION OF DIGITAL TERRESTRIAL PHOTOGRAMMETRY IN ARCHITECTURAL CONSERVATION: THE MOSQUE OF ABDULLAH IBN SALAM OF ORAN
Studies on the architectural heritage can now be supported by three-dimensional reconstruction of actual buildings. The 3D digital model can be an effective medium for documenting the current state of historic buildings, but it also creates a resource for researchers who conduct analyses of historical evolution. Architectural photogrammetry has its own specifications in relation to other photogrammetric applications, yet it meets these expectations. The traditional approach requires the use of metric cameras, but with the development of computational techniques this requirement has been overcome, opening the way for the use of non-metric cameras. Image acquisition is no longer restricted to the parallel bundle configuration: the images may be convergent, horizontal, or oblique. Combining several cameras, increasingly powerful in resolution and stability, has great scope, and the same workflow can be used in varied applications. ISPRS and ICOMOS created CIPA because they both believe that a monument can be restored and protected only when it has been fully measured and documented, and when its development has been documented several times, i.e., monitored, also with respect to its environment, and stored in proper heritage information and management systems. In this paper, the 3D modelling of an important cultural site using terrestrial photogrammetric techniques for architectural preservation is presented. The site is the mosque of Abdullah Ibn Salam, also known as the Great Synagogue of Oran; built from 1880 at the initiative of Simon Kanoui, it was inaugurated only in 1918. It was one of the largest and most beautiful synagogues in North Africa, built with stone imported from Jerusalem. This place of worship became in 1975 the mosque of Abdullah Ibn Salam, named after a rich Jew of Medina who converted to Islam. The structure is modelled using 321 oriented photos taken in five series of shots that cover the façade and the interior of the building, where more than 9,200 points are created. Orthophotos of the important elements are also produced and used as materials in the final stage, which is the editing in a 3D modelling software, from which a video virtual tour is generated.
Heritage in Algeria
Algeria is one of the countries that contain a large and diverse architectural heritage. It is inherited from the different civilizations that succeeded one another, from the Phoenician era to the Islamic civilization, as well as the ancient civilizations of the Sahara, which left their footprints in the Tassili and Ahaggar. The Algerian government attaches strategic attention to the architectural heritage, above all the ancient heritage of our ancestors. The preservation and preventive maintenance of the national architectural heritage is now a priority, or even a challenge, for the Ministry of Culture, which has drawn up a strategy for preserving this piece of our history and identity. The celebration of Heritage Month, held this year under the slogan "Heritage and Identity", reflects the willingness and commitment of officials from the Ministry of Culture to maintain and preserve the cultural and architectural heritage that forms an essential part of the collective memory of the nation; this is reflected in the field by the start of several restoration projects on sites and ancient monuments.
A promising methodology
The use of close-range photogrammetry for architectural heritage preservation requires know-how, hardware, and software (A.N. Andrés et al. 2012) that are not always sufficiently available in a developing country, which is the case in Algeria. This obliged us to opt for a low-cost solution that assures the standard accuracy and quality required for this kind of survey and can also be easily introduced into educational institutions (M. A. N. Andrés and Pozuelo 2009; Gomez-Lahoz and Gonzalez-Aguilera 2009; Colosi et al. 2009).
SITE DESCRIPTION
Oran (Wahran in Arabic) is the second largest city of Algeria and one of the largest in the Maghreb. It is a port city on the Mediterranean, in north-western Algeria, and the chief town of the wilaya of the same name, bordering the Gulf of Oran.
In 1877, the Jewish Consistory, at the initiative of Simon Kanaoui, a rich merchant, decided to build a synagogue in Oran.
It is located on the old Boulevard Joffre, renamed Boulevard Maata Mohamed El Habib. The land was given freely by the municipality, and in 1880 the first stone was laid.
It was one of the largest synagogues in North Africa, built with stone imported from Jerusalem, and on May 12, 1918, the synagogue was inaugurated (Figure 1) (Badia, 1997). In 1975 this site became the mosque of Abdellah Ben Salem (Figure 2), named after a rich Jew of Medina who converted to Islam and remained faithful to his new faith to the end of his life.
Viewed from outside, the building is imposing. The facade, where rose-colored stained-glass windows illuminate the interior, is flanked on each side by a twenty-meter-high tower, and two wings topped by domes complete the harmonious whole. Inside, three large stained-glass doors open onto the nave (Badia, 1997). The nave is separated from the aisles by arches decorated with arabesques and supported by columns of red marble (Figure 3).
PRELIMINARY WORKS
Since this is our first experience of using close-range photogrammetry for cultural heritage in Algeria, some preliminary work was done to understand the optimal way to use this technique.
The preliminary work essentially covered the calibration and the positioning accuracy assessment of this technique; in addition, since the use of targets yields better accuracy, we investigated the optimal size of target to use (Sanz et al. 2010).
Calibration and accuracy assessment
Concerning camera calibration, the development of digital cameras and of digital image analysis techniques means that any digital camera can be calibrated and used for metric purposes, provided its distortions, focal length, and the exact position of the principal point are determined with sufficient accuracy (Sanz et al. 2010; Lerma et al. 2010; Barazzetti et al. 2011).
Figure 3. The interior of the mosque.
For the calibration of our cameras, the software offers two methods. The first uses the Calibration Grid, a pattern of dots specifically designed for the Camera Calibrator.
The second is the method called Self/Field Calibration, where an object is photographed under certain special conditions so that the software can extract the calibration parameters of the camera.
In our application, two cameras were used: the SONY DSC W200, with a resolution of 12.1 Megapixels and a focal length of 7.6 to 22.8 mm, and the Olympus SP 500 UZ, with a resolution of 6 Megapixels and a focal length of 6.3 to 63 mm. The grid calibration method gives a focal length of 7.7018 mm for the first and 6.4040 mm for the second. The other calibration parameters are also extracted, such as the format size, principal point, and lens distortions.
The field calibration gives slightly different results: the focal length of the Sony camera is 7.8035 mm and that of the Olympus is 6.3677 mm.
To check the accuracy of these results, a calibration polygon of 36 points was created using a LEICA TC 1101 total station (Figure 4). Table 1 (first calibration results using three points, in mm) presents the statistics on the residuals between the coordinates obtained from the photogrammetric measurement and the total station measurements, where Dx, Dy, and Dz are the residuals in the X, Y, and Z directions and D is the residual vector. The external orientation of the photogrammetric model was done using the minimum requirement of three points. Since the use of more points can increase the accuracy, several tests were done using additional points; we noticed that for five or more points the results are similar (Table 2: second calibration results using five points, in mm).
Therefore, the field calibration technique is recommended, since it gives a 1 mm RMS on manually measured points (without the use of the subpixel point-measuring option, which increases the accuracy further), and the use of five points for the absolute orientation is recommended.
Target size
The software used (PhotoModeler) proposes the creation of coded targets to provide accurate sub-pixel point marking. The calculation of the size of these targets is based on a criterion whereby the central dot of the target must have a diameter of about 10 pixels in the image (Barazzetti et al. 2011). So, if we use the Sony camera at a distance of 30 meters from the object, the target size will be 372.68 mm, which is very large and can hide some interesting detail.
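The rule of thumb can be turned into a small calculation, sketched below; the pixel pitch and the ring factor relating the central dot to the overall coded-target size are assumptions, so the result only approximates the 372.68 mm quoted above.

```python
# Hedged sketch of the target-size rule: the central dot must span about
# 10 pixels, so its ground size is dot_pixels * distance * pixel_pitch /
# focal_length; the ring_factor for the coded-target rings is an assumption.
def coded_target_size_mm(distance_m, pixel_pitch_um, focal_mm,
                         dot_pixels=10, ring_factor=4.0):
    dot_mm = dot_pixels * distance_m * 1000 * (pixel_pitch_um / 1000) / focal_mm
    return dot_mm * ring_factor

# Sony DSC W200 example: f = 7.8035 mm; the 2.0 um pixel pitch is assumed.
print(coded_target_size_mm(30, 2.0, 7.8035))   # same order as 372.68 mm
```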
For this reason, a test to determine the optimal target size was performed. In this test, four targets of different sizes (6, 15, 25, and 35 mm) were printed on an A4 sheet; 20 target sets were distributed on a façade, and all the target positions were surveyed with a total station. This façade was photographed from different distances with a fixed height/base ratio (Figure 5). The photos taken from the same distance are oriented together to get the 3D positions of the targets; the absolute orientation is done using five of the surveyed points. We conclude that the 6 mm targets give the best result but cannot be used at distances greater than 20 meters because they become barely visible; for distances of less than 40 m, the other targets give similar results, so the 15 mm target is preferred since it hides fewer details.
Figure 5. Targets distributed on the façade.
DATA ACQUISITION
The survey of the mosque of Abdullah Ibn Salam was carried out with a photogrammetric approach. The images were captured using the SONY DSC W-200 camera, with a resolution of 12 Megapixels (4000 × 3000) and a single zoom position, calibrated with the field calibration method; the focal length was 7.8035 mm with a field of view (FOV) of 51°.
The images were acquired following the 3×3 rules (Kasser and Egels 2001). In the field, a methodological approach must be followed to guarantee complete and correct coverage of the whole site.
For parallelepiped or cylindrical objects, the photos must be taken from all around the object (the ring method) (Figure 6).
For a façade, two series of shots were taken from a line parallel to the façade, the first with an angle of 45° and the second with −45° (Figure 7).
Figure 7. Photography positions for a façade.
For a room, two or more photos are taken from the centre of each wall to cover all the room corners (Figure 8).
Figure 8. Photography positions for a room.
In the case of a corridor, if it is sufficiently wide, it is treated like the room case, with multiple photographing stations along the long side (Figure 9). If the corridor is narrow, however, the horizontal separation between photograph stations would provide low intersection angles, which decrease the accuracy; to resolve this problem, we used a vertical separation, where two shots were taken from each position at different heights (Figure 10).
Figure 10. Photography positions for a narrow corridor.
According to these rules, 332 photos were taken to cover the mosque (façade, gate and entrance, prayer hall, 1st and 2nd floors). These photos were shot over five days, at a rate of 3 hours per day, when the mosque was unoccupied (avoiding prayer times).
ORIENTATION AND MODELLING
The first step, before the orientation process, is photo selection, where we eliminate the unusable photos (convergence problems, blur, etc.).
Due to the relatively high number of photos, the project was divided into eight parts with overlapping points, which simplifies the orientation and reduces the calculation time and the risk of orientation failure; at the end, the eight parts are merged to obtain a unique project covering all the mosque. Each part is processed using PhotoModeler through the same workflow. Once each part is finalized, we proceed to the merge step, where we obtain a unique model based on 312 photos containing about 9,200 points. The obtained model is scaled and oriented using the 3D scale and orientation tool (Figure 12).
Several limitations were encountered during this process:
• Texture problems due to the lighting conditions (Figure 13);
• The use of only one distance for the model scaling;
• The orientation being based on only 2 points for each direction;
• The lack of some modelling tools (essentially extrusion) that could reduce the photo number.
The editing
To overcome these problems, we exported the resulting model to Cinema 4D, one of the best-known 3D modelling and editing software packages.
In this software, we first proceed to the geometrical completion of the model and of all the missing surfaces. Then we create and apply textures for the essential elements, such as the red marble columns, white and yellow paint, and stones, and orthophotos for sculptures, frescos, arabesques, doors, and windows (Figure 15).
Orthophotos
Orthophotos of the most important elements, such as the façade, the frescos, and the stained glass, were produced (Figure 14).
CONCLUSION
This is the first experience of using close-range photogrammetry for cultural heritage preservation in Algeria. This experience allowed us to determine the potential of applying photogrammetry in this field and also to identify the limits of the technique. The field/self-calibration approach is preferred to assure better geometrical accuracy; the accuracy also depends on the camera resolution, the lens quality, and the camera positions.
In the general case of architectural modelling, where the maximum shot distance is about 40 meters, the best target size is 15 mm. Many products can be obtained using close-range photogrammetry, such as metric documents readable and interpretable by architects or archaeologists for archiving and restoration, orthophotos for safeguarding patterns and frescos, and photorealistic 3D models for virtual visits, e-learning, and tourism promotion. Of course, other solutions exist (Haala and Kada 2010), such as 3D laser scanning, but the low cost of close-range photogrammetry is a great advantage, especially for developing countries.
Figure 4. The calibration polygon. The coordinates obtained by the photogrammetric method using the different calibration results are compared to the coordinates obtained using the total station.
Figure 9. Photography positions for a wide corridor.
Figure 11. (a) A model after line marking (top); (b) a model after surface creation (bottom).
Figure 12. The result of the merged projects.
Figure 13. The misplaced texture and the lighting problem.
Figure 14. Example of the extracted orthophotos.
"Computer Science"
] |
CDK9 Inhibitor Induces the Apoptosis of B-Cell Acute Lymphocytic Leukemia by Inhibiting c-Myc-Mediated Glycolytic Metabolism
B-cell acute lymphocytic leukemia (B-ALL), a common blood cancer in children, leads to high mortality. Cyclin-dependent kinase 9 inhibitors (CDK9i) effectively attenuate acute myeloid leukemia and chronic lymphoblastic leukemia by inducing apoptosis and inhibiting cell proliferation. However, the effect of CDK9i on B-ALL cells and the underlying mechanisms remain unclear. In this study, we showed that CDK9i induced the apoptosis of B-ALL cells in vitro by activating the apoptotic pathways. In addition, CDK9i restrained the glycolytic metabolism of B-ALL cells, and CDK9i-induced apoptosis was enhanced by co-treatment with glycolysis inhibitors. Furthermore, CDK9i restrained the glycolysis of B-ALL cell lines by markedly downregulating the expression of glucose transporter type 1 (GLUT1) and the key rate-limiting enzymes of glycolysis, such as hexokinase 2 (HK2) and lactate dehydrogenase A (LDHA). Moreover, cell apoptosis was rescued in B-ALL cells over-expressing c-Myc after treatment with CDK9i, implicating c-Myc in the enhancement of glycolytic metabolism. In summary, our findings suggest that CDK9 inhibitors induce the apoptosis of B-ALL cells by inhibiting c-Myc-mediated glycolytic metabolism, thus providing a new strategy for the treatment of B-ALL.
INTRODUCTION
B-cell acute lymphoblastic leukemia (B-ALL) is one of the most frequently occurring malignancies in children, with a peak incidence between 1 and 4 years of age. Owing to improvements in multimodal chemotherapy regimens over the past few decades, the 5-year survival rate for pediatric B-ALL is now close to 90% (Malard and Mohty, 2020). However, a proportion of patients still show no response to existing therapeutic drugs and suffer from the side effects of long-term multi-drug treatment. In addition, existing therapeutic drugs cannot further improve the prognosis of refractory and relapsed B-ALL (Kuhlen et al., 2019). Therefore, new strategies for the treatment of B-ALL should be identified.
The inhibition of the cell cycle is one of the key mechanisms in the development of drugs for leukemia treatment (Ghelli Luserna di Rora et al., 2017). Accordingly, chemotherapeutic drugs mainly interfere with DNA synthesis and inhibit the cell cycle of leukemic cells. Cyclin-dependent kinases (CDKs) are a family of serine/threonine protein kinases that regulate cell cycle division and gene transcription. CDKs can be divided into two categories according to their function, namely, CDKs that regulate the cell cycle and CDKs that modulate gene transcription (Malumbres, 2014; Lemmens and Lindqvist, 2019). Cyclin-dependent kinase 9 (CDK9) belongs to the CDK family, which also includes CDK4, CDK6, and CDK7. CDK9 modulates the transcription elongation and mRNA maturation of genes but does not regulate the cell cycle (Asghar et al., 2015). CDK9 phosphorylates Ser-2 and Ser-5 of the carboxyl-terminal domain (CTD) of RNA polymerase II (RNA Pol II), which is involved in transcription elongation (Laitem et al., 2015; Gressel et al., 2017). CDK9 participates in the development and progression of many types of tumors by recruiting P-TEFb to the promoters of oncogenes in a BRD4-dependent manner (Franco et al., 2018). Therefore, CDK9 could serve as a potential therapeutic target in most malignant tumors (Sonawane et al., 2016). CDK9 inhibitors have a significant inhibitory effect on acute myeloid leukemia (AML) and chronic lymphocytic leukemia (CLL) (Yin et al., 2014; Boffo et al., 2018). SNS-032, a selective CDK9 inhibitor, has entered clinical trials for the treatment of AML, CLL, and multiple myeloma (Tong et al., 2010; Walsby et al., 2011). AZD4573, another highly selective inhibitor of CDK9, has been validated in hematological malignancies (Cidado et al., 2020). However, the effect of CDK9 inhibitors on B-ALL cells and the underlying mechanism remain unknown.
Tumor cells favor anaerobic glycolysis as an energy source even under sufficient oxygen conditions, which is known as the Warburg effect. As the initial step in glucose metabolism, glycolysis consists of several reactions involving key rate-limiting enzymes, such as hexokinase (HK), phosphofructokinase (PFK), and pyruvate kinase (PK) (Counihan et al., 2018). CDK6 links the cell cycle and cell metabolism of tumors by phosphorylating two key enzymes, 6-phosphofructokinase (PFK1) and pyruvate kinase M2 (PKM2), which leads to inhibition of the glycolytic pathway and fuels the pentose phosphate pathway (PPP) and serine pathways (Wang et al., 2017). CDK9 inhibition stops gene transcription and results in the downregulated expression of a large proportion of genes, such as c-Myc and Mcl-1. The oncogene c-Myc controls many aspects of cell biological processes, such as cell growth, proliferation, differentiation, and apoptosis (Garcia-Gutierrez et al., 2019). As a metabolic sensor, c-Myc stimulates glycolysis, mitochondrial biogenesis, and glutamine metabolism by directly modulating the expression of metabolism-related genes in tumor cells (Dejure and Eilers, 2017). However, whether CDK9 inhibitors induce cell apoptosis in leukemia by suppressing c-Myc-mediated glycolysis is largely unknown.
The oncogene c-Myc encodes a transcription factor c-Myc, which links altered cellular metabolism to tumorigenesis. c-Myc regulates genes involved in the biogenesis of ribosomes and mitochondria, and regulation of glucose and glutamine metabolism.
In this study, we discovered that CDK9 inhibitors induced the apoptosis of B-ALL cells by restraining glycolysis, and that this effect was enhanced by co-treatment with glycolysis inhibitors in vitro. Moreover, cell apoptosis was reversed in B-ALL cells over-expressing c-Myc after treatment with CDK9 inhibitors, implicating c-Myc-driven glycolytic metabolism. Therefore, these findings provide a potential treatment strategy for B-ALL in the clinic.
Clinical Samples
Bone marrow samples from patients with childhood B-ALL were collected in Shanghai Children's Medical Center (SCMC). Sample usage and protocols were approved and supervised by the SCMC Ethics Committee. All the samples were analyzed in a blind manner and stored in SCMC. B-ALL cells were seeded at a density of 10 6 cells/ml in STEMSPAN (Gibco) medium supplemented with 20 ng/ml recombinant human IL3 (rhIL3), 10 ng/ml rhIL7, 10 ng/ml rhIL6, 10 ng/ml rhIL2, 10 ng/ml rhIGF-1, 20 ng/ml rhFlt3L, and 10 ng/ml rhVcam1. B-ALL cells were treated with or without 1 µM of SNS-032 for 24 h, and the percentage of apoptosis was analyzed by flow cytometry.
Culture of Cell Lines
Human B-ALL cell lines SEM, RS4;11, NALM6, and REH were purchased from the American Type Culture Collection (Manassas, VA, United States) and cultured in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS, Gibco) and 1% penicillin-streptomycin (Gibco). Cell lines were routinely tested for mycoplasma contamination and were authenticated using short tandem repeat (STR) DNA profiling.
Drug Sensitivity Assay
A total of 12,000 cells per well were seeded in a 96-well plate and then treated with different concentrations of drugs (SNS-032 and AZD4573, obtained from Selleck, Houston, TX, United States) for 72 h. Cell viability was evaluated using CTG (Promega CellTiter-Glo™ Luminescent Cell Viability Assay Kit) according to the manufacturer's protocol. The optical density at 405 nm was recorded using a microplate reader (Synergy 2; BioTek Instruments, Winooski, VT, United States), and the half-maximal inhibitory concentration (IC50) was calculated using GraphPad Prism, as sketched below.
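A hedged sketch of the IC50 estimation is shown below: a four-parameter logistic dose-response fit of the kind commonly performed in GraphPad Prism; the dose series and viability values are placeholders.

```python
# Sketch of IC50 estimation: four-parameter logistic fit to viability data.
# The dose series and readings are placeholders, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Viability decreases from `top` toward `bottom` with increasing dose."""
    return bottom + (top - bottom) / (1 + (dose / ic50) ** hill)

dose = np.array([25, 50, 100, 200, 400, 800])           # nM (assumed series)
viab = np.array([0.97, 0.90, 0.72, 0.48, 0.22, 0.08])   # fraction of control
params, _ = curve_fit(four_pl, dose, viab, p0=[0.0, 1.0, 200.0, 1.0])
print("IC50 ~ %.0f nM" % params[2])
```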
RNA-Seq Analysis
Total cellular RNA was isolated using the TRIzol reagent. Briefly, mRNA was reverse-transcribed into cDNA to construct the library. The cDNA library was then measured by RNA sequencing. The raw reads were filtered, and clean reads were mapped using Bowtie2 and HISAT. Gene expression levels (FPKM) were calculated using RSEM, and the data were analyzed.
Apoptosis Analysis
Cell apoptosis was measured using the Annexin-V apoptosis detection kit (BD Biosciences, San Jose, CA, United States) according to the manufacturer's protocol. The percentage of Annexin-V-positive cells was detected by flow cytometry (BD Biosciences), and the data were analyzed using FlowJo Version 10.0 software.
Cell Proliferation Analysis
After the cells were treated with drugs for 24 h, EdU was added to the cells and incubated for 2 h. Cell proliferation was assessed using the Click-iT EdU flow cytometry assay kit (Beyotime) according to the manufacturer's protocol. The stained cells were then analyzed by flow cytometry.
Glucose Uptake Assay
The glucose uptake ability of the cells was detected by incubation with 2-NBDG (Invitrogen). Briefly, the cells were harvested and washed with PBS. Then, fluorescent 2-NBDG was added to the cells, which were incubated at 37 °C for 30 min in a 5% CO2 incubator. After centrifugation, the medium was removed, the cells were washed once with PBS, and the samples were analyzed using a flow cytometer.
Extracellular Acidification Rate (ECAR)
Metabolic flux analysis with an XF Glycolytic Stress Test Kit (#103017-100, Seahorse Bioscience) was performed using a Seahorse XF 96 instrument (Seahorse Bioscience). An equal number of REH cells was plated and treated with the inhibitor for 24 h. The cartridge was equilibrated overnight prior to the assay day. Exactly 5 × 10^5 cells were changed to base medium, with 10 mM glucose, 1 µM oligomycin, and 50 mM 2-DG injected during the assay. Results were analyzed using GraphPad Prism. The basal glycolytic rate and glycolytic capacity were calculated according to the manufacturer's instructions, as sketched below.
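For reference, the sketch below shows one common way such parameters are derived from a glycolysis stress test trace; the ECAR values are placeholders, and the exact definitions should follow the manufacturer's instructions.

```python
# Sketch of glycolysis stress test readouts (placeholder ECAR values, mpH/min);
# injections follow the order glucose -> oligomycin -> 2-DG.
ecar_baseline, ecar_glucose, ecar_oligo, ecar_2dg = 12.0, 45.0, 78.0, 10.0

basal_glycolysis    = ecar_glucose - ecar_2dg   # glucose-driven acidification
glycolytic_capacity = ecar_oligo - ecar_2dg     # maximum after ATP-synthase block
glycolytic_reserve  = glycolytic_capacity - basal_glycolysis
print(basal_glycolysis, glycolytic_capacity, glycolytic_reserve)
```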
Lactate Concentration Assay
Lactate was quantified using the Glycolysis Cell-Based Assay Kit (Cayman) according to the manufacturer's protocol. Briefly, 5 × 10^4 cells were cultured in RPMI-1640 medium supplemented with 0.25% fetal bovine serum (Gibco) for 24 h in 96-well plates and then treated with drugs for 24 h. Exactly 10 µl of culture supernatant was added to the lactate assay buffer. The reaction was incubated for 30 min at room temperature, and the absorbance at 490 nm was assessed using a microplate reader.
Measurement of Metabolic Indicators
Cell lines were seeded in RPMI-1640 complete medium with drugs. Cells were incubated with metabolic dyes, such as the mitochondrial membrane potential probe MitoTracker™ Orange (Life Technologies, M7511) and the total ROS probe DCFDA (Life Technologies, C369). The samples were analyzed by flow cytometry.
Metabolite Analysis
A total of 2 × 10^6 cells were harvested and washed with pre-cooled PBS. Cells were lysed in ice-cold 80% methanol and centrifuged at 1,500 rpm for 10 min to obtain the supernatant. Finally, the supernatant was analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS).
Construction of Overexpression Stable Cell Lines
The c-Myc overexpression and empty vector plasmids were gifts from Dr. Li T (SCMC, China). Virus packaging and concentration were conducted as previously reported (Wu et al., 2018). REH cells were cultured with the enriched viral medium for 48 h and then selected with puromycin for 48 h to construct the stable cell lines. The overexpression efficiency was verified by Western blot analysis.
Statistical Analysis
Statistical analysis was conducted using GraphPad Prism 7.0 (GraphPad Software). Data are presented as mean ± SD of three independent experiments. Differences between samples were analyzed using a two-tailed Student's t-test, as sketched below. Results with values of P < 0.05 were considered statistically significant.
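A minimal sketch of this test is given below; the triplicate values are placeholders.

```python
# Sketch of the stated analysis: two-tailed Student's t-test on triplicates
# (placeholder numbers, not data from the study).
from scipy import stats

dmso    = [12.1, 10.8, 11.5]   # % apoptosis, control
sns_032 = [38.4, 41.2, 36.9]   # % apoptosis, treated
t, p = stats.ttest_ind(dmso, sns_032)
print("P = %.4g, significant: %s" % (p, p < 0.05))
```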
CDK9 Inhibitor SNS-032 Induces the Apoptosis of B-ALL Cell Lines in vitro
To determine the cytotoxic effects of a CDK9 inhibitor (CDK9i) on B-ALL cells, we measured the viability of B-ALL cell lines after treatment with a concentration gradient of SNS-032 for 72 h. The IC50 values of NALM6, REH, SEM, and RS411 were 200, 200, 350, and 250 nM, respectively (Figure 1A). We then measured cell proliferation by EdU staining and found that EdU-positive B-ALL cells decreased dramatically after treatment with SNS-032 for 24 h (Figure 1B), consistent with a previous report in AML (Wang et al., 2019). Next, we examined the apoptosis of B-ALL cells after SNS-032 treatment for 24 and 48 h and found that the apoptotic rates significantly increased in all B-ALL cell lines (Figures 1C,D), indicating that SNS-032 induces B-ALL cell death. We also assessed apoptosis in samples from patients with B-ALL and confirmed that apoptotic rates increased in SNS-032-treated samples compared with DMSO-treated samples (Figures 1E,F and Supplementary Table 1). Additionally, the cell cycle and apoptosis of normal human peripheral blood mononuclear cells (PBMCs) were measured by flow cytometry; the results showed that SNS-032 induced apoptosis of PBMCs to a certain extent but did not affect their cell cycle (Supplementary Figures 1A,B), indicating that SNS-032 may cause side effects such as myelosuppression. Furthermore, we used qRT-PCR to examine the expression of cell proliferation- and apoptosis-related genes. SNS-032 remarkably downregulated the expression of cell proliferation genes, such as MCM4 and MCM7, and of anti-apoptosis genes, such as Bcl2 and BCL2L (Figures 1G,H). Moreover, Western blot was used to evaluate the protein expression of Bcl-2 and cleaved Caspase 3: Bcl-2 was downregulated and cleaved Caspase 3 was markedly upregulated in SNS-032-treated B-ALL cells (Figures 1I,J), indicating that SNS-032 activates the apoptotic signaling pathway in B-ALL. The cytotoxic effects of SNS-032 were accompanied by reduced CDK9 protein and inhibited phosphorylation of serines 2 and 5 in the CTD of RNA Pol II in B-ALL cells (Supplementary Figure 1C). Taken together, these data suggest that CDK9i induces apoptosis and suppresses proliferation of B-ALL cells in vitro.
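For readers who wish to reproduce the dose-response analysis, the following is a hedged sketch of how IC50 values such as those in Figure 1A are commonly estimated, by fitting a four-parameter logistic (Hill) curve to viability data; the concentrations and viabilities shown are hypothetical placeholders, and the exact fitting procedure in GraphPad Prism may differ.

```python
# Sketch: estimate IC50 by fitting a four-parameter logistic curve (placeholder data).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([10, 50, 100, 200, 400, 800, 1600])  # nM SNS-032
viab = np.array([98, 90, 72, 51, 30, 15, 8])         # % viable at 72 h

popt, _ = curve_fit(hill, conc, viab, p0=[100, 0, 200, 1])
print(f"Estimated IC50: {popt[2]:.0f} nM")
```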
SNS-032 Perturbs the Cellular Metabolic Pathways of B-ALL Cells in vitro
Tumor cells reprogram their metabolism from catabolism to anabolism to drive cell-cycle entry and fuel proliferation (Faubert et al., 2020). CDK9 inhibition promotes the switch of prostate cancer cells to fatty acid oxidation by inducing metabolic stress (Itkonen et al., 2019). However, the inhibitory effect of CDK9i on the energy metabolism of B-ALL cells remains unclear. To address this question, RNA sequencing (RNA-seq) was performed on B-ALL cells after treatment with SNS-032 or DMSO for 24 h. The SNS-032- and DMSO-treated cell populations were clustered by correlation analysis and principal component analysis (Figures 2A,B). The transcript profile of SNS-032-treated cells was globally changed: 1,294 genes were upregulated and 545 genes were downregulated compared with DMSO-treated cells (Figures 2C,D). The up- and downregulated genes were then analyzed by Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis. We found that SNS-032 significantly downregulated the mRNA expression of the p53 signaling pathway, the PI3K-Akt signaling pathway, and metabolic pathways (Figure 2E). Further analysis of the metabolic pathways showed that pyruvate metabolism, glycolysis/gluconeogenesis, purine/pyrimidine metabolism, and oxidative phosphorylation were greatly changed after treatment with SNS-032 (Figures 2F-H). Altogether, these findings indicate that CDK9i perturbs the glycolytic metabolism of B-ALL cells in vitro.
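The following is a minimal sketch of the kind of differential-expression filtering that yields gene counts like the 1,294 up- and 545 downregulated genes reported above; the column names, cutoffs (|log2FC| > 1, adjusted p < 0.05), and example rows are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch: threshold-based selection of up/downregulated genes (illustrative cutoffs).
import pandas as pd

deg = pd.DataFrame({
    "gene":   ["GLUT1", "HK2", "LDHA", "MYC", "ACTB"],
    "log2fc": [-2.1, -1.8, -1.5, -2.7, 0.1],
    "padj":   [1e-6, 3e-5, 2e-4, 1e-8, 0.9],
})

up   = deg[(deg.log2fc >  1) & (deg.padj < 0.05)]
down = deg[(deg.log2fc < -1) & (deg.padj < 0.05)]
print(f"{len(up)} upregulated, {len(down)} downregulated")
print(down.gene.tolist())
```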
SNS-032 Prompts the Apoptosis of B-ALL Cells by Inhibiting Glycolysis
To clarify the effects of CDK9i on the glycolytic metabolism of B-ALL cells, we measured glucose uptake by incubating cells with the fluorescently labeled 2-deoxyglucose analog 2-NBDG. SNS-032 treatment remarkably reduced glucose uptake in B-ALL cells (Figures 3A,B). In addition, the mitochondrial membrane potential (MMP), ATP content, total reactive oxygen species (ROS), and intracellular lactate concentration were markedly decreased in all four cell lines after SNS-032 treatment (Figures 3C-F). The decreases in glucose uptake, MMP, and total ROS were confirmed in SNS-032-treated primary B-ALL cells (Figures 3G-I). To directly determine the glycolytic capacity of B-ALL cells, the extracellular acidification rate (ECAR) was measured by Seahorse analysis, which revealed that SNS-032 treatment dramatically inhibited the glycolysis of B-ALL cells (Figure 3J). To further test whether SNS-032 suppresses glycolysis, we used SoNar, a metabolic sensor whose high-ratio (SoNar-high) cells preferentially rely on glycolysis (Zhao et al., 2015; Zou et al., 2018), to monitor dynamic metabolic changes; the ratio shift in SoNar-expressing B-ALL cells is readily measured by flow cytometry. The results showed that the proportion of SoNar-high cells notably decreased in SNS-032-treated cells (Figure 3K). Moreover, the glycolytic intermediates in SNS-032-treated B-ALL cells were measured by LC-MS/MS, which revealed that the levels of intermediates such as glucose-6-phosphate, glyceraldehyde-3-phosphate, pyruvate, and lactate dropped considerably (Figure 3L). Hence, SNS-032 restrains the glycolysis of B-ALL cells in vitro.
A metabolic shift accompanies the survival, invasion, and metastasis of cancer cells. Glycolysis, the main energy source of tumor cells, is inextricably coupled with cell proliferation and death (Buchakjian and Kornbluth, 2010; Kishton et al., 2016). To verify that SNS-032 causes B-ALL cell death by restraining glycolysis, we measured apoptosis by flow cytometry after co-treatment with the glycolysis inhibitor 2-deoxy-D-glucose (2-DG). We found that SNS-032-induced apoptosis was markedly enhanced in 2-DG co-treated cells (Figure 3M). Additionally, SNS-032-induced apoptosis was significantly enhanced in cells co-treated with the GLUT1 inhibitor WZB117 (Figure 3N). Overall, these results indicate that SNS-032 induces the apoptosis of B-ALL cells partly by inhibiting glycolysis.
CDK9 Inhibitor AZD4573 Facilitates the Apoptosis of B-ALL Cells by Inhibiting Glycolysis
To further confirm that CDK9i restrains the glycolytic metabolism of B-ALL cells in vitro, we used AZD4573, a highly selective CDK9 inhibitor, to evaluate the effects of CDK9i on B-ALL cell apoptosis. As shown in Figure 4A, the IC50 values of NALM6, REH, SEM, and RS411 were 5, 10, 10, and 1 nM, respectively. In addition, AZD4573 induced the apoptosis of REH cells in a dose-dependent manner (Figure 4B). Meanwhile, AZD4573-treated REH cells exhibited lower glucose uptake than DMSO-treated cells (Figures 4C,D). Furthermore, AZD4573 treatment decreased the levels of MMP and ROS and the ATP content in a dose-dependent manner (Figures 4E-G). Moreover, AZD4573 treatment reduced the proportion of SoNar-high cells in B-ALL cells (Figures 4H,I), indicating that AZD4573 restrains the glycolysis of B-ALL cells in vitro. More importantly, AZD4573-induced apoptosis was increased in cells co-treated with the glycolysis inhibitors 2-DG and WZB117 (Figures 4J,K). We also confirmed that AZD4573 inhibited the glycolysis of B-ALL cells while reducing CDK9 protein and the phosphorylation of serines 2 and 5 in the CTD of RNA Pol II (Supplementary Figure 2). Hence, CDK9 inhibitors induce cell apoptosis partly by suppressing the glycolysis of B-ALL cells in vitro.
CDK9i Curbs the Glycolysis of B-ALL Cells by Downregulating the Expression of Metabolic Enzymes
As the initial step in glucose metabolism, glycolysis consists of several reactions involving key rate-limiting enzymes, such as hexokinase (HK), phosphofructokinase (PFK), and pyruvate kinase (PK) (Faubert et al., 2020). The RNA-seq data were re-analyzed to determine whether CDK9i suppresses the glycolysis of B-ALL cells by downregulating the expression of metabolic enzymes. We found that SNS-032 remarkably downregulated key rate-limiting enzymes of glycolysis, such as GLUT1, HK2, and LDHA (Figure 5A). We performed qRT-PCR to validate the expression levels of glycolysis-related enzymes, and the results confirmed that SNS-032 dramatically downregulated the expression of GLUT1, HK2, and LDHA (Figures 5B-D). We then examined the protein levels of the rate-limiting enzymes of the glycolytic pathway and found that SNS-032 markedly downregulated GLUT1, HK2, and LDHA (Figure 5E). Moreover, AZD4573 likewise downregulated the expression of GLUT1, HK2, and LDHA (Figure 5F). These findings indicate that CDK9i restrains the glycolysis of B-ALL cells by reducing the expression of metabolic enzymes.
CDK9i Engenders the Cell Apoptosis of B-ALL by Suppressing c-Myc-Mediated Glycolysis
CDK9 inhibition prevents productive transcription and downregulates the expression of many genes, such as c-Myc and Mcl-1. c-Myc stimulates the anabolism of cancer cells by directly modulating the expression of several glycolysis genes, such as GLUT1, PKM2, and LDHA (Liang et al., 2016; Fang et al., 2019). We therefore deduced that CDK9i induces cell apoptosis by downregulating the expression of c-Myc-mediated glycolysis genes. To test this hypothesis, we first confirmed that SNS-032 suppressed the mRNA and protein expression of c-Myc in B-ALL cells (Figures 6A,B). To check whether the SNS-032-induced reduction of glycolysis in leukemia cells is mediated by c-Myc, we overexpressed c-Myc in REH cells by lentiviral infection. Overexpression of the c-Myc protein in REH cells was verified by Western blot (Figure 6C), and SNS-032 treatment did not affect this overexpression (Figure 6D). We also demonstrated that the downregulation of glycolytic enzymes upon SNS-032 treatment was reversed by overexpressing c-Myc (Figure 6D). Furthermore, the levels of glucose uptake, MMP, total ROS, and intracellular lactate were partially rescued in c-Myc-overexpressing B-ALL cells after treatment with SNS-032 (Figures 6E-I), implying that CDK9i blocks the glycolysis of B-ALL cells by reducing c-Myc expression. Additionally, EdU-positive proliferating cells were evidently restored in c-Myc-overexpressing B-ALL cells after SNS-032 treatment (Figures 6J,K). More importantly, SNS-032-induced apoptosis was abolished in c-Myc-overexpressing B-ALL cells (Figures 6L,M). These data suggest that CDK9i induces the apoptosis of B-ALL cells partly by inhibiting c-Myc-mediated glycolytic gene expression.
DISCUSSION
Leukemic cells infiltrate and destroy the bone marrow, disrupting normal hematopoiesis and ultimately leading to patient death. Through multi-modal combination chemotherapy or hematopoietic stem cell transplantation, the 5-year overall survival rate of patients with childhood B-ALL has reached over 80% (Malard and Mohty, 2020). However, a proportion of B-ALL patients are not sensitive to chemotherapy and still suffer relapse, leading to treatment failure (Teachey and Pui, 2019; Malard and Mohty, 2020). Therefore, new drugs should be developed to improve treatment rates and overcome drug resistance and B-ALL relapse. In preclinical studies, CDK9 inhibitors have demonstrated anti-tumor effects in many different types of tumor (Morales and Giordano, 2016). SNS-032, a selective and potent inhibitor of CDK2, 7, and 9, and AZD4573, a highly selective inhibitor of CDK9, have shown inhibitory effects on hematological malignancy cell lines in vitro and clinical therapeutic activity in patients with MM and CLL (Tong et al., 2010; Walsby et al., 2011). One study reported that CDK9 is overexpressed in B-ALL based on hub-gene analysis, and the RNA and protein expression levels of CDK9 were high in the MOLT4 and REH leukemic cell lines in the Human Protein Atlas database, indicating that CDK9 could serve as a potential biomarker and predictor of leukemogenesis in B-ALL (Jayaraman et al., 2015). In the present study, we found that SNS-032 and AZD4573 induced apoptosis of both B-ALL cell lines and patient samples in a dose- and time-dependent manner in vitro. Notably, the IC50 values of AZD4573 were lower than those of SNS-032, indicating that this highly selective CDK9 inhibitor has high potential in the treatment of B-ALL. Moreover, our data indicate that triggering programmed cell death, resulting in B-ALL cell apoptosis, is key to treatment with CDK9 inhibitors. Thus, CDK9 could serve as a novel target for B-ALL therapy.
Enhanced glycolysis is a prerequisite for the rapid proliferation of tumor cells (Faubert et al., 2020). CDKs affect the catalytic activity of rate-limiting metabolic enzymes and modulate the cell cycle arrest and apoptosis of tumor cells (Wang et al., 2017; Icard et al., 2019). However, the effect of CDK9 inhibitors on the cellular metabolism of B-ALL cells was unknown. In this study, we first observed that SNS-032 perturbs the cellular metabolic pathways of B-ALL cells, especially the glycolytic pathway. We therefore inferred that CDK9 inhibitors induce the apoptosis of B-ALL cells by suppressing glycolysis. Using Seahorse analysis and LC-MS/MS to profile the metabolism of drug-treated cells, we found that SNS-032 significantly restrains the glycolysis of B-ALL cells by repressing glucose metabolism, thereby reducing metabolic intermediates such as ATP and lactate, which are the energy sources and main building blocks for cellular anabolism (Ganapathy-Kanniappan, 2018; Abdel-Wahab et al., 2019). To further confirm these results, we used the SoNar probe to dynamically monitor metabolic changes and found that the proportion of SoNar-high cells significantly decreased upon treatment with SNS-032 and AZD4573, suggesting that CDK9 inhibitors suppress the glycolysis of B-ALL cells. The RNA-seq results indicated that SNS-032 restrains the glycolytic process by downregulating the expression of key enzymes such as HK2, PFK, and LDHA. Moreover, the glycolysis inhibitors WZB117 and 2-DG enhanced the apoptosis of B-ALL cells induced by SNS-032 and AZD4573, suggesting that CDK9 inhibitors induce B-ALL apoptosis partly by inhibiting glycolysis. CDK9 inhibition has been reported to promote the switch of prostate cancer cells to fatty acid oxidation by inducing metabolic stress (Itkonen et al., 2019); in the present study, however, fatty acid metabolism was not among the top SNS-032-inhibited pathways. Notably, SNS-032 affected not only glycolysis but also the purine/pyrimidine metabolism and oxidative phosphorylation of B-ALL cells, which requires further mechanistic exploration. As CDK9 inhibitors, SNS-032 and AZD4573 halt gene transcription and downregulate the expression of a large proportion of genes, such as c-Myc and Mcl-1. Based on the RNA-seq results, we found that SNS-032 dramatically reduced the expression of c-Myc, and the protein level of c-Myc decreased in B-ALL cells after SNS-032 and AZD4573 treatment. The metabolic reprogramming of tumor cells is attributed to c-Myc-mediated regulation of target gene expression (Dang et al., 2009; Hsieh et al., 2015). Thus, we infer that glycolysis is inhibited by CDK9 inhibitors because of the reduction of the c-Myc level. A rescue experiment was performed to examine whether the therapeutic effect of CDK9 inhibitors could be reversed by overexpressing c-Myc in B-ALL cells. Apoptosis was abolished in c-Myc-overexpressing B-ALL cells after treatment with CDK9 inhibitors, accompanied by relief of the glycolysis inhibition, suggesting that the inhibitory effect of CDK9 inhibitors on glycolysis is mediated by downregulating c-Myc. This finding is supported by flow cytometry with FITC Annexin-V and PI staining showing that c-Myc overexpression suppressed SNS-032-induced apoptosis in REH cells. Meanwhile, the overexpression of c-Myc in REH cells enhanced glucose utilization, lactate production, and cell proliferation and inhibited apoptosis.
Many glucose metabolism genes, such as GLUT1, HK2, PFKM, and LDHA, have been documented to be directly regulated by c-Myc (Miller et al., 2012). To further explore the potential mechanism by which c-Myc antagonizes SNS-032-induced apoptosis, we examined the protein expression of glycolysis-related genes. Our data demonstrated that the overexpression of c-Myc upregulated the mRNA and protein expression of GLUT1, HK2, and LDHA, thereby increasing glycolysis in REH cells.
In addition, the expression of glycolysis-related genes was inversely correlated with the c-Myc expression level in SNS-032-treated REH cells, suggesting that c-Myc exerts an antagonistic effect on SNS-032-induced apoptosis by regulating glycolysis-related protein expression. Whether c-Myc can reverse SNS-032-induced apoptosis by directly binding to the promoters of glycolytic genes requires further exploration.
CONCLUSION
Taken together, by examining the therapeutic effect of CDK9 inhibitors on B-ALL cell lines, we confirmed that CDK9 inhibitors induce the apoptosis of leukemic cells by inhibiting c-Myc-mediated glycolysis, revealing a mechanism of CDK9 inhibitors in the treatment of B-ALL (Figure 7). This study provides a new treatment strategy for B-ALL in clinical practice.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the GEO repository, accession number GSE166339.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the SCMC Ethics Committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
C-WD, GY, and SG designed the study, analyzed and interpreted the data, and wrote the manuscript. W-LH and TA performed the experiments and analyzed and interpreted the data. JX analyzed the RNA-seq data. HZ, W-WZ, NZ, R-YS, M-HL, J-MZ, and CJ performed the experiments. K-WL, KQ, and LC discussed the results and contributed to data interpretation. All authors read and approved the final manuscript.
"Biology",
"Chemistry"
] |
Deep-Learning-Based Acoustic Metamaterial Design for Attenuating Structure-Borne Noise in Auditory Frequency Bands
In engineering acoustics, the propagation of elastic flexural waves in plate and shell structures is a common transmission path of vibrations and structure-borne noises. Phononic metamaterials with a frequency band gap can effectively block elastic waves in certain frequency ranges, but often require a tedious trial-and-error design process. In recent years, deep neural networks (DNNs) have shown competence in solving various inverse problems. This study proposes a deep-learning-based workflow for phononic plate metamaterial design. The Mindlin plate formulation was used to expedite the forward calculations, and the neural network was trained for inverse design. We showed that, with only 360 sets of data for training and testing, the neural network attained a 2% error in achieving the target band gap, by optimizing five design parameters. The designed metamaterial plate showed a −1 dB/mm omnidirectional attenuation for flexural waves around 3 kHz.
Motivation and Relevant Works
Noises exist everywhere in our daily lives. To name a few, they could be from nearby traffic, the cooling fans of electronic devices, other people's conversations, or some operating machines. Depending on the sources, noises can have a wide variety of frequency spectra, and to human perception, not all noises in different frequency bands sound equally loud, even under the same pressure level measured by instruments. Figure 1a shows the equal loudness contours according to the ISO 226 standard.
Each contour indicates the same loudness in "phons" with respect to a 1 kHz pure tone. For example, 60 phons means the same loudness as a 60 dB, 1 kHz tone, and 0 phons indicates the minimum threshold of hearing. In general, human ears are less sensitive to lower frequencies. Engineers usually adopt frequency filters to account for such sensitivity variation. For example, the A-weighting curve is the most commonly used to assess the noise level [1]. The curve defined in the ANSI S1.42 standard is shown in Figure 1b, and the filtered level is denoted in dBA. Notice that dips occur in the contours around 3 kHz in Figure 1a (marked by the shaded interval), indicating increased hearing sensitivity, which is also reflected by the positive gain in the A-weighting curve (Figure 1b). This boost is ascribed to auditory canal resonance (inside our ears), and the frequency band is important for understanding spoken language (for us and for smart assistant devices), because consonants fall in the frequency range of 2-4 kHz and play an important role in speech intelligibility. Not only is speech easily masked in the presence of noise around 2-4 kHz, but noise in that frequency band is also what we are most sensitive to. Therefore, addressing noise in this particular band is crucial. As such, in this study, we focused on preventing structure-borne noises in this frequency range; such noises originate from either vibration sources attached to the structure or acoustic waves in the fluid domain coupled to the structure, and they can be mitigated by blocking elastic wave propagation in the structure. Conventional noise control approaches include large impedance mismatching, tuned mass dampers, or employing damping materials such as rubber [2,3] or foam [4-6]. The associated downsides are that either the total weight of the structure increases, which is unfavorable for lightweight structures, or the damping materials suffer from aging.
In the past couple of decades, periodic acoustic and elastic wave metamaterials, also known as phononic crystals, have emerged as capable of manipulating elastic waves from frequency bands as low as 10 Hz for seismic waves [7-10], through auditory bands [11,12] and ultrasonic bands [13-19], to GHz bands such as piezoelectric waves [20-23]. Not only have phononic crystals shown competence in vibration and noise control, as well as in signal filtering applications, they have also been reported to arrest crack propagation [24,25], enhancing the fracture resistance and durability of structures. Soft dielectric elastomers, along with other tunable mechanisms that can reconfigure the phononic structure in real time [15,26-28], have also been investigated. Particularly for structural plate wave control in auditory bands, a few studies based on locally resonant structures have been proposed [29-31] to synthesize band gaps for flexural waves in thin plates. However, these existing studies aimed at frequencies below 500 Hz for vehicle noise and vibration control purposes, and the resonant components largely increase the weight of the structure. Alternatively, sheet metal with a periodic stamped pattern could be a practical approach to attenuate acoustic noise in the frequency band of 2-4 kHz, and the design of such metamaterials was considered in this study.
Regarding the design methodology, the most widely adopted approaches are still physics-driven and combined with analytical and numerical software, usually through a trial-and-error process, to synthesize metamaterials achieving user-desired dynamic properties such as phase and group velocities, wave polarization, and band gaps [23,32-35]. These methods depend strongly on designers' insight into the physical system to search for the design parameters, and they become intractable when a large number of geometric and material parameters are involved. To explore the design space efficiently, optimization-based approaches have been proposed for material and structural design problems. Notable examples include the level set method [36,37], the (bidirectional) evolutionary structural optimization (ESO) method [38,39], and topology optimization with the solid isotropic material with penalization (SIMP) method [40,41]. However, these studies aimed to design materials satisfying structural performance under external loads, and hence the objective function was formulated with structural compliance in the physical space, not in a transformed space such as the frequency-wavenumber spectrum, e.g., the dispersion curves. For the design of acoustic metamaterials, common approaches such as the genetic algorithm (GA) [42-45] or topology optimization combined with conventional optimizers [46-51] have been used to determine the effective material properties, the refractive index, and the phononic frequency band gap. Although these GA or gradient-based optimizers work in a transformed space, i.e., the frequency domain, the GA-based approaches are computationally intensive due to their population-based search, while the gradient-based approaches suffer from the need for a good initial guess. Moreover, these methods require an iterative search each time the target objective is changed. As a result, there is an urgent need for novel design methodologies in the acoustic metamaterial community.
In the past couple of years, deep learning has started being used in phononic and photonic metamaterial design [52-58]. By inputting frequency-domain characteristics, the deep neural network (DNN) returns design parameters in the physical domain. Some studies used geometric dimensions as the design parameters [53], while others used direct images as the output [56]. For phononic band gap design, only bulk elastic waves in square lattices have been considered. So far, these studies have focused on the fundamental methodology, and there is still a gap between existing studies and realistic engineering scenarios. In particular, given that plates and sheet metals are the most commonly used structural components in fields such as the aerospace and automotive industries, no studies have yet investigated deep learning for the design of phononic band gap metamaterials for blocking plate waves. In this study, the Mindlin plate formulation [59] was employed to model the phononic elastic plate for AI-assisted metamaterial design. Compared with the Kirchhoff-Love plate theory, which is only valid for thin plates (with a plate thickness t much smaller than the wavelength Λ), the Mindlin model remains accurate for relatively thick plates, as the shear deformation along the thickness is taken into account by analytical profiles. It has been shown to accurately model relatively thick phononic plates with the plane wave expansion method [60]. For finite-element analysis, the Mindlin plate theory greatly reduces the computational cost compared with the full 3D elastodynamics formulation. This is particularly important for AI-based design, including deep learning, GA, and other algorithms, for which a large number of forward calculations is required. The efficient formulation makes AI-based design practical and competitive compared with conventional optimization methods.
Contribution and Scope
In this study, we present a deep-learning-based methodology to determine the optimal design of phononic metamaterial plates for attenuating structure-borne noise. By forming the band gap in the auditory frequencies, flexural waves in that frequency range generated by vibration sources are stopped from propagating in the metamaterial, further reducing the acoustic pressure disturbance caused by flexural waves and, ultimately, diminishing the environmental noise level. We show that, with appropriate training, the DNN can return the optimal design parameters for the phononic metamaterial, given the desired center frequency and bandwidth of the band gap as the input. The main contributions of this study are summarized as follows:
• To the best of the authors' knowledge, this is the first study using deep learning in the design of phononic plates for synthesizing the flexural wave band gap.
• To the best understanding of the authors, this is the first study to employ the Mindlin plate formulation in modeling the phononic elastic plate for AI-assisted metamaterial design.
• Although this study aimed at a specific engineering scheme, the proposed design framework can be easily adapted to different circumstances. By demonstrating the explicit procedures from the initial performance requirements to the final results, this study fills the gap between existing studies and the application end, expediting real applications in practice.
The remainder of the paper is organized as follows. Section 2 describes the inverse design problem discussed in this study, as well as the dataset generation details for deep learning. Section 3 presents the methodology and the DNN training details. Section 4 reports the training results and the related discussions. Section 5 evaluates the performance of the metamaterial design given by the DNN, via numerical experiments. The summary is given in Section 6.
Problem Statement
In this study, we propose a phononic plate wave metamaterial, as well as a design approach based on deep learning. The target was a frequency band gap centered at 3 kHz with a 60% relative bandwidth (normalized by the center frequency). The proposed metamaterial is made of sheet metal with a periodically machined slot pattern. To ensure the maximum isotropy of elastic waves propagating in the plate, we considered a pattern with honeycomb lattice periodicity and a three-fold rotationally (or cyclically, C3) symmetric unit cell. Since the conservative acoustic system possesses time-reversal symmetry, the frequency (f)-wavevector (k) dispersion f(k) is an even function in k-space, f(−k) = f(k). Together with the C3 spatial symmetry, the f-k dispersion then acquires C6 symmetry, i.e., identical wave properties repeat every 60° in the azimuth angle of the wavevector k, which is the highest possible rotational symmetry in 2D periodic materials, ensuring nearly equal attenuation in all directions. As a comparison, metamaterials with a square lattice pattern [29-31,56] can only have up to C4 symmetry.
The candidate unit cell patterns were composed of a morphable geometry controlled by five design parameters, as shown in Figure 2a. In Figure 2a, a indicates the lattice constant, i.e., the smallest pitch between adjacent repeated patterns, and w1, h1, w2, and h2 label the widths and heights of two distinct rectangular slots. The region to be removed was then obtained by the union of the C3 cyclic duplicates of the rectangular patterns. Figure 2b shows the machined pattern of the unit cell. The rectangular slot design was proposed for ease of manufacturing, as it can be readily machined with, for example, water-jet or laser cutting. Under different application scenarios, users can propose different shapes, such as ones including fillets to avoid stress concentration, as long as the pattern possesses C3 symmetry and has enough morphing latitude. With the candidate patterns decided, the remaining task was to determine the five design parameters for the desired band gap frequency range. A flowchart illustrating the procedures of this study is shown in Figure 3. The band gap of the metamaterial with a given set of design parameters can be extracted from the f-k dispersion relationship, i.e., the phononic band structure, which can be calculated using finite-element (FE) methods. These procedures, obtaining the band gap from the design parameters, constitute the forward calculations, as shown on the left of Figure 3. Next, a DNN was built to capture the relationship between the band gap and the design parameters. Batch forward calculations with combinations of design parameters and the resultant band gaps provided the dataset for DNN training and testing. The DNN was trained to return the design parameters as the output for a given band gap frequency range as the input; these procedures are shown in the middle of Figure 3. The final stage is the inverse design, as illustrated on the right of Figure 3: the user enters the desired band gap, and the DNN suggests a set of design parameters. To confirm the validity of the parameters, we conducted the forward calculation with the returned parameters and examined whether the band gap matched the input.
The remainder of this section focuses on the numerical model of the wave physics used in the forward calculations.
Theoretical Background
The dynamics of an isotropic, homogeneous, linearly elastic solid medium is described by the Navier equation:

$$(\lambda + \mu)\nabla(\nabla \cdot \mathbf{u}) + \mu \nabla^2 \mathbf{u} = \rho \frac{\partial^2 \mathbf{u}}{\partial t^2}, \tag{1}$$

where λ and µ are the elastic Lamé constants, u is the displacement vector, and ρ is the mass density. The metamaterial we considered was made of steel, with λ = 115 GPa, µ = 76.9 GPa, and ρ = 7850 kg/m³. It has been shown that the dynamical behavior of an elastic wave propagating in phononic plates can be very well described by the Mindlin plate theory together with Bloch's theorem [60]. As we only considered the flexural waves (i.e., the fundamental antisymmetric Lamb mode) with a wavelength greater than the plate thickness, employing Mindlin's plate formulation (assuming a parabolic shear strain profile through the plate thickness) can greatly reduce the computational cost in FE analysis without a loss of accuracy (compared with the full 3D elastodynamics model). Therefore, during the network training and design phase, where a large number of computations is required, we adopted the Mindlin formulation; for the final evaluation stage (Section 5), the full 3D formulation was used to validate the design. For waves in a periodic structure with lattice constant a, Bloch's theorem [61] states that the wavefunction is a lattice-periodic function times the plane wave function $e^{i\mathbf{k}\cdot\mathbf{x}}$. For a 2D periodic structure, there are two linearly independent lattice vectors a1 and a2, as shown in Figure 4a. For a wave with Bloch wavevector k propagating in the structure, one can focus on only the unit cell and apply the k-dependent periodic boundary conditions (also known as the Floquet periodic boundary conditions) on the three opposite pairs of boundaries:

$$\mathbf{u}(\mathbf{x} + \mathbf{a}_j) = e^{i\mathbf{k}\cdot\mathbf{a}_j}\,\mathbf{u}(\mathbf{x}), \qquad \boldsymbol{\sigma}(\mathbf{x} + \mathbf{a}_j) = e^{i\mathbf{k}\cdot\mathbf{a}_j}\,\boldsymbol{\sigma}(\mathbf{x}), \tag{2}$$

where σ is the stress tensor. Solving for the eigenfrequency at different k gives the f-k dispersion relationship. For the periodic structure, f(k) is periodic in k-space, with the reciprocal lattice vectors b1 and b2 satisfying $\mathbf{a}_i \cdot \mathbf{b}_j = 2\pi\delta_{ij}$, where δij is the Kronecker delta. Let a1,2 and b1,2 be the column vectors of matrices A and B, respectively; then, from the above relation, we obtain $B = 2\pi A^{-T}$ and hence b1,2. The smallest period in k-space is known as the first Brillouin zone, as shown in Figure 4b. Considering the time-reversal and rotational symmetries, the triangular region bounded by the three high-symmetry points Γ, K, and M in k-space contains the complete information of f(k) and is known as the irreducible Brillouin zone. As the extreme values of the frequency of each band occur on the boundary of the irreducible Brillouin zone, it is sufficient to calculate f(k) along the path Γ-K-M-Γ, and the obtained spectrum is known as the band structure. In practice, this is obtained by solving the eigenfrequencies of the unit cell with the Floquet periodic boundary conditions (Equation (2)) for all k along the path in a batch parametric sweep. Note that, as can be seen from Figure 4b, the Γ-K segment happens to be the kx-dispersion and Γ-M the ky-dispersion (observing the C6 symmetry of f(k)), while, along the K-M path, both the azimuth angle and the length of the wavevector k evolve simultaneously.
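As a concrete illustration of the reciprocal-lattice relations above, the following sketch computes B = 2πA^{-T} for a honeycomb (triangular) lattice and one standard choice of the high-symmetry points Γ, K, and M; the lattice-vector orientation is an assumption for illustration.

```python
# Sketch: reciprocal lattice vectors and high-symmetry points of a triangular lattice.
import numpy as np

a = 21.2e-3                                   # lattice constant [m]
a1 = a * np.array([1.0, 0.0])                 # one common orientation choice
a2 = a * np.array([0.5, np.sqrt(3) / 2])

A = np.column_stack([a1, a2])
B = 2 * np.pi * np.linalg.inv(A).T            # columns of B are b1, b2
b1, b2 = B[:, 0], B[:, 1]

# Verify a_i . b_j = 2*pi*delta_ij
assert np.allclose(A.T @ B, 2 * np.pi * np.eye(2))

Gamma = np.zeros(2)
K = (2 * b1 + b2) / 3                         # BZ corner (for this orientation)
M = b1 / 2                                    # BZ edge midpoint
print("K =", K, "M =", M)
```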
The formation of a band gap in the phononic band structure is usually ascribed to two mechanisms: Bragg scattering [62] and local resonance [63-65]. The former occurs when integer multiples of the wavelength Λ = 2π/k match twice the spacing d between planes of scatterers along the normal direction, nΛ = 2d sin θ, which, in the ω-k dispersion curves, appears as the splitting of two counter-propagating waves on the Brillouin zone boundary. The band gap due to local resonance, on the other hand, arises from the coupling between the propagating wave and a locally resonant mode of the unit cell structure. In the band structure, it appears as the repulsion between a flat band (the locally resonant mode, with nearly zero group velocity) and the propagating mode. For strongly modulated phononic structures, both mechanisms can take place, and the band gap is usually the product of their combined effect mixed with other factors. Due to this complexity, even though the band-gap-forming physics is well understood, no analytical model can accurately predict the band gap frequency of an arbitrary phononic structure (only numerical computations can), let alone provide an inverse function for design purposes. That is when a systematic optimization method needs to be used.
Finite-Element Analysis and Data Generation
In general, for forward calculations generating the band gap information, all numerical methods capable of calculating the band structure from the design parameters can be used, including the plane wave expansion (PWE) method [20,66,67], the finite-difference time domain (FDTD) method [68][69][70][71], multiple scattering theory (MST) [72][73][74], transfer matrix [52,75], and the finite-element (FE) method [76][77][78]. The FE method can model a complicated geometry and strongly discontinuous material interfaces with a fast convergence speed and was adopted in this study. The previously described model calculating the phononic band structure was assembled and computed using the commercial FE package COMSOL Multiphysics. Quadratic mixed interpolation of tensorial components (MITC)-type Mindlin shell elements [79] with a maximum element size of a/16 were used to discretize and model the phononic plate. The plate thickness was fixed at t = 0.4 mm in this study, which can be adapted or left as a design parameter for different application scenarios. The Floquet periodic boundary conditions were implemented using the built-in interface of the software. Alternatively, this can be realized by mapping variables from one boundary to the opposite one and constraining the displacement and reaction force following Equation (2) using the weak formulation [80]. The ARPACK eigensolver [81] was used to compute the eigenfrequencies f .
To prepare the dataset for DNN training, the range of each parameter needs to be defined. For the lattice constant a, there is no strict restriction aside from it being a positive length. To obtain a reasonable range, we referred to Bragg's condition and took d = a. However, before calculating the f-k dispersion of the metamaterial, one has no information about Λ at the target frequency. Nonetheless, since only an approximate range is required, a simple estimate based on the flexural wave dispersion in a uniform thin plate (i.e., without the phononic pattern) can be used [82]:

$$f = \frac{\pi t}{\Lambda^2}\sqrt{\frac{E}{3\rho(1-\nu^2)}}, \tag{3}$$

where E = 200 GPa and ν = 0.3 are the Young's modulus and Poisson's ratio of steel, which can be readily converted from the Lamé constants. Substituting Λ = 2a, t = 0.4 mm, and the material constants yields the simple relationship f a² ≈ 1 Hz·m². For f = 3 kHz, this gives a ≈ 18 mm. Note that, in a strongly modulated phononic structure, the f-k dispersion can be vastly different from that of the uniform plate; therefore, a wide range of a was considered for the data generation, 5 mm < a < 55 mm. Furthermore, Equation (3) was derived from the Kirchhoff plate model for a quick estimation, which is valid for thin plates only. If the estimated Λ (or a) is comparable with the plate thickness t, one could use a more accurate model, such as Mindlin's [59] or the Rayleigh-Lamb frequency equation [82], for the estimation of a. For the remaining parameters, namely the dimensions of the rectangular slots w1, h1, w2, and h2, the restrictions simply guarantee that the pattern does not penetrate the hexagonal unit cell boundary, e.g., h1,2 < a/√3 and w1 + w2 < a/2. Observing the parameter ranges and restraints, several values per parameter were adopted, resulting in a total of 660 combinations of parameters. Among them, ill-defined shapes (such as rectangle corners penetrating the unit cell border, or a rectangle intersecting with its C3 duplicates and producing unconnected domains) were excluded from the dataset. In the end, 360 sets of parameters (i.e., distinct metamaterial patterns) were used to generate the band structures for DNN training and testing.
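The lattice-constant estimate of Equation (3) translates directly into code; the sketch below assumes Λ = 2a (Bragg condition with d = a) and uses the thin-plate reconstruction given above.

```python
# Sketch: size the lattice constant from the Kirchhoff thin-plate dispersion.
import numpy as np

E, nu, rho, t = 200e9, 0.3, 7850.0, 0.4e-3    # steel plate properties

def flexural_frequency(wavelength):
    """Kirchhoff thin-plate flexural dispersion f(Lambda), Equation (3)."""
    return (np.pi * t / wavelength**2) * np.sqrt(E / (3 * rho * (1 - nu**2)))

f_target = 3e3                                 # Hz
# Invert f = pi*t/(4 a^2) * sqrt(E / (3 rho (1 - nu^2))) with Lambda = 2a:
a = np.sqrt(np.pi * t / (4 * f_target) * np.sqrt(E / (3 * rho * (1 - nu**2))))
print(f"estimated lattice constant a ~ {a*1e3:.1f} mm")       # about 18 mm
print(f"check: f*a^2 = {flexural_frequency(2*a) * a**2:.2f} Hz*m^2")  # ~1
```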
Methodology
In this section, the method of inverse design based on deep learning is discussed. A neural network was built and trained to predict the design parameters for the metamaterial to meet the desired band gap.
Data Preparation
As mentioned earlier, only the flexural mode in the plate contributes significantly to the noise in the air domain; thus, only the band gap for the flexural mode was considered. Owing to the large separation in the wave speeds of the different modes, extra effort is generally required to find a common band gap for all modes, which is unnecessary here, as the additional restraints would only make the metamaterial difficult to accommodate in various applications. In the following, a filtering strategy is presented to preserve only the flexural mode in the band structure for DNN training.
There are three fundamental (lowest order) modes of elastic waves in a homogeneous plate, namely the longitudinal mode (fundamental symmetric Lamb mode S0), the flexural mode (fundamental antisymmetric Lamb mode A0), and the fundamental shear horizontal mode (SH0). For waves propagating in the x-direction (with the thickness along z), roughly speaking, these three modes have particle displacements polarized in x, z, and y, respectively. For the considered phononic plate, due to the presence of the vertical cutting edges, the S0 and SH0 modes are mixed, while the A0 mode remains decoupled from them (owing to the mirror symmetry with respect to the mid-plane). For each eigenmode in the band structure, the z-polarization ratio (z-PR) can be evaluated to distinguish the flexural modes from the rest:

$$z\text{-PR} = \frac{1}{A_{\mathrm{cell}}} \int_{\mathrm{cell}} \frac{|\mathbf{u} \cdot \hat{\mathbf{z}}|^2}{|\mathbf{u}|^2}\, dA, \tag{4}$$

which averages the z-polarization of a mode shape throughout the unit cell, where A_cell is the area of the unit cell (as the 2D shell formulation was used) and ẑ is the unit vector along the z-direction. The integrand is the square of the directional cosine between u and the z-axis; therefore, 0 ≤ z-PR ≤ 1 always holds. The S0/SH0 mixed modes (A0 modes) have small (large) z-PR values. A threshold of 0.6 was used to filter out the S0/SH0 modes. Figure 5a,b plot a typical band structure before and after filtering, respectively, calculated using one of the parameter combinations from the dataset, with a = 15 mm, w1 = 6 mm, h1 = 0.75 mm, w2 = 1.05 mm, and h2 = 6 mm. The two branches with large slopes (implying fast wave speeds) in Figure 5a correspond to the S0 and SH0 modes and were removed from the band structure in Figure 5b.
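A minimal sketch of the z-PR filter in Equation (4) is given below, assuming the mode shapes are available as nodal displacement arrays with area weights exported from the FE solver; the function names and data layout are assumptions for illustration.

```python
# Sketch: filter flexural (z-dominated) modes using the z-polarization ratio.
import numpy as np

def z_polarization_ratio(u, weights):
    """u: (n_nodes, 3) complex displacement; weights: nodal area weights."""
    uz2  = np.abs(u[:, 2]) ** 2                  # |u . z_hat|^2
    utot = np.sum(np.abs(u) ** 2, axis=1)        # |u|^2
    return np.sum(weights * uz2 / utot) / np.sum(weights)

def filter_flexural(freqs, modes, weights, threshold=0.6):
    """Keep eigenfrequencies whose mode shape is z-dominated (A0-like)."""
    keep = [f for f, u in zip(freqs, modes)
            if z_polarization_ratio(u, weights) > threshold]
    return np.array(keep)
```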
In the filtered band structure for flexural modes, the band gap is located between the third and fourth eigenfrequencies (counting from the lowest, for any k). This is generally true for the considered type of phononic plates. The band gap is thus bounded below by the maximum of the third eigenfrequency and above by the minimum of the fourth. The upper and lower bounds of the band gap, f+ and f−, are indicated by the red and green bars in Figure 5b, respectively. The band gap can also be expressed in terms of the center frequency f_c and the normalized half-bandwidth δ, such that f± = (1 ± δ) f_c, or inversely, from the f± obtained from the calculated band structure,

$$f_c = \frac{f_+ + f_-}{2}, \qquad \delta = \frac{f_+ - f_-}{f_+ + f_-}. \tag{5}$$

The pair (f_c, δ) was adopted as the input to the DNN for the band gap information (instead of f±), as these are physically more independent quantities. Figure 6 plots the distribution of f_c and δ for the 360 metamaterial configurations in the generated dataset. The distribution of the band gap center frequencies f_c spans from below 1 kHz to 14 kHz, and the half-bandwidth δ ranges from 12.5% to 30%.
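The band gap extraction just described can be summarized in a few lines; the sketch below assumes the filtered flexural eigenfrequencies are stored as an array sorted in ascending order at each wavevector.

```python
# Sketch: extract (f_c, delta) from a filtered flexural band structure.
import numpy as np

def band_gap(bands):
    """bands: (n_k, n_bands) eigenfrequencies, sorted ascending at each k."""
    f_minus = bands[:, 2].max()      # top of the third band (lower gap bound)
    f_plus  = bands[:, 3].min()      # bottom of the fourth band (upper bound)
    f_c   = (f_plus + f_minus) / 2                 # Equation (5)
    delta = (f_plus - f_minus) / (f_plus + f_minus)
    return f_c, delta
```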
Network Architecture
As shown in Figure 7, we implemented a fully connected neural network consisting of one input layer, five hidden layers, and one output layer. The input layer contains the information on the user-desired band gap, represented by the center frequency f_c and the normalized half-bandwidth δ. The hidden layers comprise five layers of neurons with the rectified linear unit (ReLU) activation function; batch normalization (BatchNorm1d) layers were included in the first three hidden layers to accelerate training by reducing internal covariate shift. The numbers of neurons in the five hidden layers were 8, 32, 64, 32, and 10, respectively. Note that, to ensure proper convergence, training a DNN typically requires observing the loss curves in the early stage; therefore, preliminary work was conducted to determine the aforementioned activation functions and loss functions by investigating how the loss decreases. Furthermore, the implemented network architecture was chosen according to the findings in [52,83]. The output layer then predicts the geometric dimensions of the desired metamaterial unit cell, i.e., a, w1, h1, w2, and h2.
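A sketch of this architecture in PyTorch is given below; the placement of BatchNorm1d before each ReLU in the first three hidden layers is an assumption, as the exact ordering is not specified above.

```python
# Sketch: the described fully connected inverse-design network in PyTorch.
import torch.nn as nn

class InverseDesignNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 8),   nn.BatchNorm1d(8),  nn.ReLU(),   # input: (f_c, delta)
            nn.Linear(8, 32),  nn.BatchNorm1d(32), nn.ReLU(),
            nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 10), nn.ReLU(),
            nn.Linear(10, 5),                       # output: a, w1, h1, w2, h2
        )

    def forward(self, x):
        return self.net(x)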
Network Training
The building and training of the DNN were performed using the open-source machine learning package PyTorch on Google Colaboratory clusters. Of the dataset, 70% (252 out of 360 sets) was designated as the training group, 21% (75/360) as the validation group, and the remaining 9% (33/360) was used for testing. During the training process, 12 batches were used per epoch. A total of 400 epochs were planned, with the learning rate starting at 0.01 and decaying during training. Rectified Adam (RAdam) was adopted as the optimizer: it retains Adam's fast convergence while yielding more stable convergence results. The mean absolute error (MAE) of the predicted design parameters was used as the loss function for training, explicitly,

$$\mathcal{L} = \frac{1}{5}\sum_{j=1}^{5} |\hat{y}_j - y_j|, \tag{6}$$

where ŷ_j and y_j indicate the predicted and ground-truth values of the five design parameters, respectively.
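The following is a hedged sketch of the described training setup, using the `InverseDesignNet` sketched above; the step-decay learning-rate schedule and the synthetic stand-in dataset are assumptions for illustration.

```python
# Sketch: RAdam + MAE (L1) training loop, 400 epochs, 12 batches per epoch.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for the real dataset: (f_c, delta) -> five parameters.
X = torch.rand(252, 2)
Y = torch.rand(252, 5)
train_loader = DataLoader(TensorDataset(X, Y), batch_size=21)  # 252/21 = 12 batches

model = InverseDesignNet()
optimizer = torch.optim.RAdam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
loss_fn = torch.nn.L1Loss()   # MAE of the five predicted parameters, Equation (6)

for epoch in range(400):
    for bandgap, params in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(bandgap), params)
        loss.backward()
        optimizer.step()
    scheduler.step()
```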
Neural Network Testing Results and Discussion
After being trained with 252 sets of data, the DNN was tested with the 33 sets of data from the testing group. This section presents the testing results and related discussions.
Error Metrics for Parameters
While the MAE of the design parameters was used as the loss function during DNN training, it does not serve as a good indicator for interpreting the testing results, since (1) the parameter MAE is in SI units, is numerically small for the considered phononic plate, and carries no physical significance without proper normalization, and (2) it does not give errors for the individual parameters, which may contain useful information and provide physical insights.
Three error metrics for the different design parameters are proposed for the testing:

$$\mathrm{Error} = \frac{\hat{y} - y}{y} \;\text{ for } a, w_1; \qquad \mathrm{Error} = \frac{\hat{y} - y}{a/3} \;\text{ for } h_1, h_2; \qquad \mathrm{Error} = \frac{\hat{y} - y}{a/\sqrt{3}} \;\text{ for } w_2. \tag{7}$$

Since the errors for individual parameters were not summed or averaged, not taking absolute values allows us to better characterize the trend of the predicted parameters (i.e., whether they are overall under- or over-estimated compared with the ground truth). The three error metrics are normalized by different factors. For parameters a and w1, which are the two largest geometric dimensions in the unit cell, the error is simply normalized by the ground-truth value.
For h1, h2, and w2, the errors were normalized by their maximum feasible values instead of the ground-truth values. This avoids a meaninglessly large error indicator that would fail to reflect the true level of inaccuracy when the ground-truth values of these parameters are small (compared with a). For example, given a ground-truth value of h1 = a/200, a prediction ĥ1 = a/100 would yield a 100% error. However, practically, both parameters result in a narrow slit, or a thin crack, whose difference is unobtrusive from the unit-cell-level point of view and negligible in terms of the band gap frequency. Accordingly, the errors in h1,2 and w2 were normalized by a/3 and a/√3, respectively. In addition to the errors in the design parameters, we also fed the predicted parameters back into the forward calculations (FE analysis) to retrieve the band gap of the metamaterial design given by the DNN (f̂_c and δ̂) and compared them with the input values (f_c and δ). These errors were evaluated as Error = |ŷ − y|/y. Figure 8 shows the error distribution for each design parameter, to which the error metrics in Equation (7) were applied.
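The signed error metrics of Equation (7) translate directly into code; the sketch below assumes the parameters are ordered as (a, w1, h1, w2, h2).

```python
# Sketch: signed, normalized parameter errors per Equation (7).
import numpy as np

def parameter_errors(pred, true):
    """pred, true: arrays ordered as [a, w1, h1, w2, h2]."""
    a = true[0]
    return np.array([
        (pred[0] - true[0]) / true[0],           # a:  normalized by ground truth
        (pred[1] - true[1]) / true[1],           # w1: normalized by ground truth
        (pred[2] - true[2]) / (a / 3),           # h1: normalized by a/3
        (pred[3] - true[3]) / (a / np.sqrt(3)),  # w2: normalized by a/sqrt(3)
        (pred[4] - true[4]) / (a / 3),           # h2: normalized by a/3
    ])
```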
Network Prediction Results
For parameters a and w1, all predicted values lay within a 10% error, with a mean absolute error of 1.30%. The maximum error for parameter h1 was 12%, with a mean absolute error of 3.70%. For parameters w2 and h2, larger errors were observed, with a maximum absolute error of 20% and a mean absolute error of 7.17%. The overall mean absolute error across all five parameters (equally weighted) was 4.13%. Thus, the DNN could accurately predict a, w1, and h1 close to the ground-truth values, but not w2 and h2.
On the other hand, for the resultant band gap center frequency f_c and half-bandwidth δ, the mean absolute errors were 2.26% and 1.75%, respectively, which are appreciably better than the errors in the design parameters and satisfactory for practical applications.
To interpret these seemingly contradictory results, three case studies selected from the testing dataset (with the unit cells shown in Figure 9a-c), for which the DNN predictions (Figure 9d-f) contain representative results that help clarify the anomaly, are presented and discussed in detail in the following paragraphs. In short, the anomaly can be ascribed to two causes: (1) in some parameter ranges, the unit cell pattern is insensitive to some of the parameters, and (2) multiple designs lead to the same band gap frequency, and the DNN may return a parameter set distinct from the prepared dataset. In both circumstances, the DNN-predicted results show large errors in the design parameters, while the resulting band gap still meets the user input with a small error. The normalized mean absolute errors, evaluated using Equation (7), of the five parameters, together with the normalized absolute errors in the band gap for the three cases, are listed in Table 1, and the shapes of the ground-truth unit cells from the dataset and the predicted ones are shown in Figure 9. In Case 1, the parameter error (1.25%) was mainly contributed by w2 and h2 (the dimensions of the second rectangular slot), for which, as shown in Figure 9a,d, the two shapes are barely distinguishable, as the second rectangle is small compared with the unit cell. Furthermore, the predicted w2 and h2 were under- and over-estimated, respectively, resulting in almost identical slot areas. The calculated band structures were similar, and thus the errors in the band gap frequencies were negligible (0.13% for f_c and 0.98% for δ).
Case 2 tells a different story. Not only was a large mean error in the parameters observed (7.44%), but the predicted unit cell (Figure 9e) also visibly differed from the ground-truth unit cell (Figure 9b). Nonetheless, the resultant band gaps were almost identical (0.18% error in f_c and 0.69% in δ). Checking the error of each parameter, we noticed that the predicted a was over-estimated by 5.21%. The parameter a is considered the most important parameter controlling the band gap, since it is proportional to the Bragg wavelength, so a +5.21% error should result in a lower band gap frequency (larger Bragg wavelength). However, the parameters h1 and h2 were under-estimated by −9.76% and −15.71%, respectively. This results in a faster speed of sound (the flexural wave speed is proportional to (bending rigidity/density)^{1/2}), which compensates for the lowered frequency, producing almost identical band gaps. In general, a parameter set (five degrees of freedom, 5 DOFs) yields a unique unit cell shape, and the band structure (f(k), with infinitely many DOFs) is also unique (a one-to-one, injective mapping). However, as we only extracted the band gap information (2 DOFs) from the band structure, the resulting continuous mapping from the 5-DOF to the 2-DOF parametric space cannot be injective and is generally infinitely-many-to-one. In other words, there are infinitely many parameter combinations that could yield the same user-input band gap. The fact that the DNN returned a parameter set different from the prepared ground-truth values, yet matching the band gap, showcases that the DNN captured the physics linking the band gap to the parameters and can give designs never seen in the dataset. Figure 10a,b plot the band structures of the prepared ground-truth unit cell and the predicted one, respectively, for Case 2. They have distinct (unique) f(k) dispersion curves (see, e.g., their third and fourth bands) but share identical band gap frequencies (labeled by the red and green bars).
Case 3 is among the worst cases found in the test results. Not only did it show a 6.47% error in the parameters but, unlike Cases 1 and 2, the error in f_c reads 4.71%, with a 0.37% error in δ. The band structures of the prepared ground-truth unit cell and the predicted one are plotted in Figure 10c,d, respectively. The reason the DNN did not perform as well as in other cases could be the lack of training samples in the higher frequency range. As shown in Figure 6a, although f_c spans up to 14 kHz in the dataset, the majority of the data lie below 6 kHz. In Case 3, the target f_c is around 8 kHz, for which there are only a handful of training sets, hence the relatively poor performance. However, even for the worst case, the error in the band gap frequency should be acceptable in practical applications, considering that 4.71% is smaller than a semitone interval (i.e., between two neighboring keys on the piano, 2^{1/12} − 1 ≈ 5.95%).
To briefly summarize the test results: to evaluate the performance of the DNN, one should inspect the errors in the band gap (f_c and δ) rather than compare the predicted parameters with the prepared "ground truth", as there are infinitely many combinations of parameters that can yield the same user-desired band gap. For our trained DNN, the mean absolute errors of f_c and δ were 2.26% and 1.75%, respectively, which is good for most applications, even though the DNN was trained with a limited number of samples.
Baseline Reference
Four alternative machine learning algorithms, i.e., support vector regression (SVR), random forest regression (RFR), extreme gradient boosting (XGB), and K-nearest neighbors (KNN), were used as baseline references to evaluate the performance of the DNN employed in this study. The same training and testing datasets mentioned in Section 3.3 were adopted. In the baseline tests, the inputs were the desired band gap frequency and bandwidth, (f_c, δ), and the outputs were the five design parameters (a, w1, h1, w2, h2). The five design parameters returned were then fed back into the forward calculation to obtain the band gap (f̂_c, δ̂) of the metamaterial design suggested by each baseline method. The performance of each algorithm was then evaluated by the error in f_c, i.e., (f̂_c − f_c)/f_c, and the error in δ, i.e., (δ̂ − δ)/δ, as listed in Table 2. All four baseline methods underwent fitting using grid search to ensure proper hyperparameter tuning. Furthermore, a multi-output regressor was utilized where needed to predict the five parameters. The scoring for the grid search was the MAE, the same criterion used for DNN training. In SVR, the kernel used was the radial basis function (RBF). Through several attempts in the grid search, the best parameter set was obtained for each machine learning method; the detailed hyperparameters of each model are reported in Table 3. According to Table 2, SVR performed the worst among the five algorithms on both metrics. The DNN had the best accuracy in f_c and a decent error in δ, slightly higher than that of RFR; however, the DNN outperformed RFR by 1.78% in predicting f_c. In terms of the overall error rate, KNN had the second-best performance: compared with the DNN, neither f_c nor δ differed by more than 1%. XGB was marginally inferior to RFR on both error metrics. Overall, the proposed DNN model achieved the best performance for band gap prediction. Table 4 reports the training time, memory demand, prediction time, and number of parameters of the proposed approach, compared against the baseline methods. The time and memory values were obtained by averaging five repeated trials for each algorithm. Although the proposed DNN achieved the best design performance, it required the longest training time. Once training finished, however, the DNN completed one design query in 0.0019 s, faster than SVR, RFR, XGB, and KNN. Since network training is usually conducted offline, efficiency during the inference stage is of greater practical interest. Regarding memory demand, the instantaneous peak RAM utilized by the DNN during prediction was 0.0130 MB, also lower than that of SVR, RFR, XGB, and KNN. Note that, due to the small size of our dataset, KNN was the runner-up in prediction time and peak inference memory. However, the computation of KNN scales up rapidly with an increasing number of training samples, since KNN must compute the distances between the design query and every training sample. This demonstrates the advantage of the proposed DNN approach over KNN.
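The following is a minimal sketch of one baseline pipeline (SVR wrapped in a multi-output regressor and tuned by grid search with MAE scoring); the hyperparameter grid and the synthetic stand-in data are illustrative, not the exact setup of Table 3.

```python
# Sketch: grid-searched multi-output SVR baseline for the inverse design task.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Synthetic stand-in data: (f_c, delta) inputs, five design-parameter outputs.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((252, 2)), rng.random((252, 5))

grid = GridSearchCV(
    MultiOutputRegressor(SVR(kernel="rbf")),
    param_grid={"estimator__C": [1, 10, 100],
                "estimator__gamma": ["scale", 0.1, 1.0]},
    scoring="neg_mean_absolute_error",   # MAE, as used for DNN training
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
```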
To ensure fairness, all the values in Table 4 were obtained on the same CPU: an Intel Xeon processor at 2.20 GHz with a single core, two threads, 2 MB of L2 cache, and 12.7 GB of RAM. The memory size of the trained DNN was only 31 KB.
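For reference, a minimal scikit-learn sketch of the baseline setup is shown below, using SVR wrapped in a multi-output regressor and tuned by grid search with MAE scoring. The parameter grid and the placeholder data are illustrative assumptions, not the exact settings of Table 3.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# X: (n_samples, 2) desired band gaps (f_c, delta);
# y: (n_samples, 5) design parameters (a, w1, h1, w2, h2).
X, y = np.random.rand(200, 2), np.random.rand(200, 5)  # placeholder data

# SVR predicts a single target, so it is wrapped in a multi-output
# regressor; grid keys address the inner estimator.
search = GridSearchCV(
    MultiOutputRegressor(SVR(kernel="rbf")),
    param_grid={"estimator__C": [1, 10, 100],
                "estimator__gamma": ["scale", 0.1, 1.0]},
    scoring="neg_mean_absolute_error",  # MAE, matching the DNN criterion
    cv=5,
)
search.fit(X, y)
params = search.predict([[3000.0, 0.3]])  # one design query (f_c in Hz)
```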
Validation of the Metamaterial Performance
So far, we have demonstrated that the trained DNN is capable of the inverse design of a phononic metamaterial that achieves the desired band gap. In this section, we validate the performance of the obtained metamaterial design in terms of attenuating flexural waves via numerical transmission simulations.
Model Description
As mentioned in Section 1.1, noise in the 2-4 kHz range is perceived most strongly by human ears. Therefore, the target band gap of the metamaterial was set around 3 kHz with a 60% bandwidth (2.1-3.9 kHz); that is, f_c = 3 kHz and δ = 0.3 for the DNN input. The design parameters returned by the DNN were a = 21.2 mm, w_1 = 8.48 mm, h_1 = 1.58 mm, w_2 = 2.16 mm, and h_2 = 11.2 mm. Two supercell (a stack of a finite number of unit cells) transmission FE models were built: one along Γ-K (with supercell length 8a = 170 mm along the x-direction, equivalent to azimuth angles θ = nπ/6, n ∈ ℤ; see Figure 11a) and the other along Γ-M (with supercell length 4.5√3 a = 156 mm along the y-direction, equivalent to azimuth angles θ = (n + 1/2)π/6, n ∈ ℤ; see Figure 11b). A harmonic line load of unit strength along the z-direction, F_z = e^{i2πfτ} (where τ denotes time), was applied on one side of the supercell. Perfectly matched layers (PMLs) with proper absorption wavelength were placed at the two terminals to eliminate unwanted reflections at the model boundaries. Periodic boundary conditions were set on the lateral boundary pairs, as shown in Figure 11. Mixed hexahedral and pentahedral prism solid elements with quadratic serendipity shape functions, with a maximum element size of a/16 in the xy-plane and t/2 in the z-direction, were used for discretization. The 3D elastodynamic formulation was considered. Frequency responses from 50 Hz to 5 kHz were computed with a step size of 50 Hz.
Simulation Results
The transmission loss (TL) is defined as the ratio between the power of the incident wave (P_i) and that of the transmitted (out-going) wave (P_o), in decibels:

TL = 10 log₁₀(P_i / P_o). (8)

In the FE models described previously, the out-going power can be obtained by integrating the mechanical energy flux I · n over the exit boundary (adjacent to the PML; see, e.g., the right side in Figure 11a), where n is the unit normal vector of the boundary and I = (1/2) Re(σ · v) is the mechanical energy flux, with σ and v the stress tensor and the particle velocity, respectively; the cycle average is taken over a harmonic period.
However, the incident power cannot be obtained in a similar way on the opposite boundary since it would include the power of the waves reflected by the phononic structure. Instead, we created a reference model that had the same dimensions as those shown in Figure 11, but without any phononic pattern (a homogeneous plate), and the reference out-going power was used as the incident power in Equation (8). The transmission loss plots in the two directions (Γ-K and Γ-M) are shown in Figure 12a,b, respectively.
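A minimal sketch of this computation: the transmission loss of Equation (8) is evaluated with the reference model's out-going power substituted for the incident power. The variable and function names are illustrative.

```python
import numpy as np

def transmission_loss(p_incident, p_outgoing):
    """TL = 10 log10(P_i / P_o) in dB, per Equation (8)."""
    return 10.0 * np.log10(p_incident / p_outgoing)

# The incident power is approximated by the out-going power of a
# reference model without the phononic pattern, so reflections from
# the structure do not contaminate it:
# tl = transmission_loss(p_out_reference, p_out_phononic)
```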
Both models showed significant attenuation from 2.1 to 4 kHz, with transmission losses of 180 dB and 150 dB near the bottoms of the curves for the Γ-K and Γ-M models, respectively. Note that TL = 180 dB indicates that only 10⁻¹⁸ of the incident power was transmitted to the other side of the phononic structure, equivalent to a transmission coefficient of 10⁻⁹ in displacement, velocity, or acceleration. Recall that the two models have distinct phononic-barrier lengths; normalizing the transmission loss by the barrier length yields roughly 1 dB/mm in both directions. Given that these two directions are only 30° apart in azimuth and the transmission loss has the C₆-symmetric pattern, an omnidirectional 1 dB/mm peak transmission loss in the xy-plane is expected for the considered phononic metamaterial. The out-of-plane displacement level was evaluated as

u_z-level = 20 log₁₀(|u_z| / |u_z,ref|), (9)

where the sampling locations for u_z,ref are labeled in Figure 12c,d. The coefficient 20 (instead of 10) in Equation (9) arises because energy is proportional to the square of the displacement amplitude. An exponential decay in u_z of approximately −1 dB/mm within the band gap appeared in both cases, while outside the band gap only minor attenuation (due to scattering and impedance mismatch between the homogeneous and phononic plates) was found. These FE numerical experiments confirmed that the phononic metamaterial designed using the proposed deep learning approach can inhibit flexural wave propagation in the desired band gap, with a peak attenuation of −1 dB/mm omnidirectionally.
Conclusions
In order to ease the process of phononic metamaterial design and to expedite the development cycle, we proposed an inverse design workflow incorporating deep learning. Compared with conventional design approaches, such as trial-and-error, gradient-based optimizers, or genetic algorithms, a major advantage of the proposed approach is that, once the deep neural network (DNN) is trained, it can be used repeatedly (e.g., for designing new metamaterials for different frequency ranges) without requiring any forward calculation (which usually involves costly computations, e.g., FE analysis). The considered phononic metamaterial aims at blocking flexural wave propagation in the auditory bands by devising the band gap in the frequency-wavevector dispersion relationship. To generate the training dataset, the Mindlin plate formulation was employed in the FE forward calculations, which greatly reduced the computational cost. Once the DNN training is finished, the user simply inputs the center frequency and the bandwidth of the desired band gap, and the network returns a set of unit-cell design parameters such that the metamaterial meets the desired band gap. For the metamaterial type considered in this study, the average errors in the center frequency and the bandwidth were 2.26% and 1.75%, respectively, and the baseline comparison indicated that the proposed DNN outperformed the other machine learning algorithms in achieving the target band gap. The results demonstrated that, for the human-sensitive auditory band (i.e., 2-4 kHz), the proposed metamaterial design approach achieved a −1 dB/mm omnidirectional attenuation for flexural waves around 3 kHz. It is also worth mentioning that the DNN returned decent predictions even with a limited number of training samples.
With the promising results presented in this study, we expect that the proposed deep learning model can be adapted for metamaterial design with more parameters, including not only geometric dimensions but also other properties such as material selection. Furthermore, as the same band gap can be realized by more than one set of design parameters, multiple requirements in addition to the desired band gap can be considered at the same time, such as minimum mass for lightweight structures or structural integrity for load-bearing structures, by formulating composite/joint objective functions. Future endeavors will be dedicated to the aforementioned aspects, as well as to experimental validation of the proposed metamaterials.

Data Availability Statement: The proposed approaches are implemented in COMSOL Multiphysics and Python. The results should be reproducible from the simulation datasets and the algorithms introduced in this study with the details provided in Sections 2 and 3. The corresponding author may be contacted with additional implementation queries.
"Engineering",
"Physics"
] |
Attention-Based Bi-Prediction Network for Versatile Video Coding (VVC) over 5G Network
As the demands of various network-dependent services such as Internet of things (IoT) applications, autonomous driving, and augmented and virtual reality (AR/VR) increase, the fifth-generation (5G) network is expected to become a key communication technology. The latest video coding standard, versatile video coding (VVC), can contribute to providing high-quality services by achieving superior compression performance. In video coding, inter bi-prediction improves the coding efficiency significantly by producing a precisely fused prediction block. Although block-wise methods, such as bi-prediction with CU-level weight (BCW), are applied in VVC, it is still difficult for the linear fusion-based strategy to represent diverse pixel variations inside a block. In addition, a pixel-wise method called bi-directional optical flow (BDOF) has been introduced to refine the bi-prediction block; however, the non-linear optical flow equation in BDOF mode is applied under restrictive assumptions, so this method is still unable to accurately compensate various kinds of bi-prediction blocks. In this paper, we propose an attention-based bi-prediction network (ABPN) to substitute for all of the existing bi-prediction methods. The proposed ABPN is designed to learn efficient representations of the fused features by utilizing an attention mechanism. Furthermore, a knowledge distillation (KD)-based approach is employed to compress the size of the proposed network while keeping its output comparable to that of the large model. The proposed ABPN is integrated into the VTM-11.0 NNVC-1.0 standard reference software. Compared with the VTM anchor, it is verified that the BD-rate reduction of the lightweight ABPN reaches up to 5.89% and 4.91% on the Y component under random access (RA) and low-delay B (LDB) configurations, respectively.
Introduction
The appearance of the fifth-generation (5G) network brings technological innovation to media services in wireless systems such as Internet of things (IoT) applications, autonomous driving, and augmented and virtual reality (AR/VR) services [1][2][3][4][5]. With support for ultrahigh-speed data transmission, the quality of service can be improved. With the development of the display industry, the aforementioned media services are expected to rely on continued video evolution toward 8K resolution [6]. 5G technology has four typical characteristics: high speed, low delay, large capacity, and mobility [7]. To deliver high-bit-rate video with these characteristics, the demand for high-performance video compression technology has increased exponentially.
After the successful standardization of previous video coding standards, e.g., advanced video coding (H.264/AVC) [8] and high-efficiency video coding (H.265/HEVC) [9], versatile video coding (VVC) [10] was finalized by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG) in July 2020. VVC has been designed to provide a significant bit rate reduction compared to HEVC. In addition, VVC is expected to be utilized not only for high-resolution video such as ultrahigh-definition (UHD) video, but also for various types of video content and applications, e.g., high dynamic range (HDR), screen content, online gaming applications, and 360° video for immersive AR/VR. Therefore, VVC is the most suitable video coding standard for media services over 5G networks.
As with existing video coding standards, VVC also follows the classic block-based hybrid video coding framework consisting of several core modules. In the hybrid coding scheme of VVC, the quadtree with a nested multitype tree using binary and ternary splits (QT-MTT) block partitioning structure has been applied to split a picture into different types of blocks [10]. The QT-MTT block partitioning has been designed to support more flexible shapes and larger block sizes than the structure in HEVC. The coding unit (CU) is a basic unit for signaling prediction information. For each CU, intra prediction and/or inter prediction is performed, followed by transform, quantization, and entropy coding. Furthermore, the in-loop filtering process improves the quality of the reconstructed frame.
The core of video coding is removing redundancy inherent in video signals to the extent that it is not perceived visually. One of the most important processes in video coding is the prediction process, which finds pixel-based redundant information, sets it as the prediction signal, and removes the predictable signal. Intra prediction removes spatial redundancy between adjacent pixels or blocks within a frame. Inter prediction removes temporal redundancy with blocks in previous and/or future neighboring frames. In particular, due to the characteristics of video, there is a high probability that a block similar to the block currently being coded exists in adjacent frames. For this reason, inter prediction mode contributes more significantly to overall coding efficiency than intra prediction mode. Inter prediction methods in VVC can be classified into whole block-based inter prediction [11] and subblock-based inter prediction [12]. Furthermore, in both whole block-based and subblock-based inter prediction schemes, advanced motion vector prediction (AMVP) mode and merge mode are performed.
Two kinds of inter prediction modes are allowed in video coding standards, namely uni-prediction and bi-prediction. Compared to the uni-prediction mode, which utilizes one prediction block, the bi-prediction mode, which combines the signals of two prediction blocks, produces a more accurate prediction block; this precise prediction signal generation results in improved coding efficiency. Taking four sequences with varying resolutions and degrees of motion, we investigated the ratio of selected modes over all CUs in B-slices in the VVC test model VTM-11.0 NNVC-1.0 [13] (Figure 1). As shown in Figure 1, in all results, inter bi-prediction mode occupies the largest proportion, followed by inter uni-prediction mode and intra mode. Generally, the smoother the motion change (e.g., camera movement), the more advantageous bi-prediction becomes. For this reason, in sequences such as BQTerrace and BQSquare, which contain stable motion, bi-prediction mode accounts for a large percentage. This investigation demonstrates that the performance of the bi-prediction method contributes substantially to overall coding efficiency. In VVC, the bi-prediction block is generated by one of three bi-prediction modes: simple averaging, weighted averaging called BCW (bi-prediction with CU-level weight), and averaging with bi-directional optical flow (BDOF)-based refinement. In the averaging mode, two reference blocks are linearly combined by the average function, as in the bi-prediction mode of HEVC. In VVC, the BCW mode and the BDOF mode are newly adopted to enhance the accuracy of bi-prediction block generation. The BCW mode extends simple averaging to allow weighted averaging of the two prediction signals. The BDOF mode is a pixel-wise refinement method that compensates for the precise motion missed by block-based motion compensation, operating on 4 × 4 sub-blocks. The BDOF samples of the CU are calculated under several assumptions: the luminance of objects is constant along the optical flow, the objects move with constant speed, and the motion matches that of the surrounding samples.
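For intuition, the block-wise fusion rules can be sketched as below. The BCW weight set {−2, 3, 4, 5, 10} out of 8 follows the commonly cited VVC configuration, and the sketch omits the rounding and clipping used in the actual codec.

```python
BCW_WEIGHTS = [-2, 3, 4, 5, 10]   # candidate CU-level weights w (out of 8)

def average_biprediction(p0, p1):
    """Simple averaging mode: equal blend of the two prediction blocks."""
    return (p0 + p1) / 2.0

def bcw_biprediction(p0, p1, w):
    """Weighted averaging (BCW): P = ((8 - w) * P0 + w * P1) / 8;
    w = 4 reproduces the simple average."""
    return ((8 - w) * p0 + w * p1) / 8.0
```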
Compared with the simple averaging bi-prediction mode, the coding efficiency can be substantially improved by enhancing the prediction accuracy with these strengthened methods. However, linear fusion-based approaches such as the BCW mode may still have limitations in representing diverse motion variations. In the majority of natural circumstances, the changes of pixels within one block may be inconsistent, and an imprecise linear-function-based fusion of such irregular variations can produce large residual signals. In addition, the BDOF mode aims to finely adjust the bi-prediction samples on a sub-CU basis under several conditions and assumptions; hence, this strategy remains limited in obtaining a precise prediction block when the actual motion deviates from the aforementioned assumptions of optical flow.
Recently, the accuracy of enhancement in various low-level vision tasks [14][15][16][17] has improved significantly by learning non-linear patch-to-patch mapping functions directly with convolutional neural networks (CNNs). In the field of video coding in particular, many researchers have actively explored CNN-based in-loop filtering [18][19][20] and post-filtering [21][22][23]. Aside from these, a number of CNN-based studies on video coding tasks, such as intra prediction [24,25] and inter prediction [26,27], have been proposed.
Until now, deep learning-based works for inter bi-prediction [28,29] have aimed to increase the performance of traditional bi-prediction in HEVC. By applying BCW and BDOF, the bi-prediction module of VVC was improved compared with that of HEVC. Because the target anchor has been strengthened, the existing simple CNN-based strategies are limited in their ability to substitute for the bi-prediction process of VVC completely. Hence, to replace all three bi-prediction modes of VVC, a more complex and advanced network is necessary.
In this paper, we propose an attention-based bi-prediction network (ABPN) to generate a fine bi-prediction block as a replacement for the bi-prediction methods in VVC. The proposed ABPN can extract an elaborate fused feature by generating an attention map between two reference blocks. In addition, the proposed network is designed with a local and global skip-connection structure: the stacked residual blocks [30] act as local skip-connection blocks, and the global skip-connection is formed by adding the predicted residual block to the average of the two input blocks. This architecture makes it possible to construct a deeper network. Furthermore, we adopt knowledge distillation (KD) [31] to make the proposed ABPN an effectively lightweight network. The major contributions of this paper are summarized as follows.
• We propose an attention-based bi-prediction network (ABPN). Different from the existing bi-prediction methods in VVC, the proposed method can enhance the quality of the bi-prediction block in a CNN-based manner. Because the proposed ABPN can reduce the bit rate while providing higher visual quality compared to the original VVC, the efficiency of transmission over 5G networks can be increased considerably.
• The proposed network is deeper than the networks proposed in existing deep learning-based bi-prediction studies; because VVC has more bi-prediction modes than HEVC, this is a necessary choice for replacing all modes. We utilize a learning technique named KD, which distills the knowledge from a larger deep neural network into a small network, allowing the number of parameters to be reduced while keeping the quality of the result similar.
• The proposed ABPN is integrated into the VTM-11.0 NNVC-1.0 anchor of the JVET neural network-based video coding (NNVC) standard activity. The experimental results demonstrate that the proposed method achieves superior coding performance compared with the VTM anchor.
The rest of the paper is organized as follows. In Section 2, we introduce the related works. In Section 3, we present the proposed methodology. The experimental results and discussions are shown in Section 4. Finally, Section 5 makes the concluding remarks for this paper.
Traditional Bi-Prediction Method in Video Coding
Weighted prediction (WP) is a frame-level global compensation method to efficiently deal with brightness variation. In the H.264/AVC and H.265/HEVC standards, the WP coding tool is applied with the weighting parameters of reference pictures. The VVC standard also supports the WP method to compensate the inter prediction signal: it allows a weight and an offset to be signalled for each reference picture in each of the reference picture lists L0 and L1. Although global variations over the whole frame exist in video content, irregular local changes also occur. Therefore, block-level compensation of the inter prediction signal is required to improve the prediction accuracy.
In VVC, a novel CU-level weighted bi-prediction method, named bi-prediction with CU-level weight (BCW), and a refinement method based on bi-directional optical flow (BDOF) were applied. In contrast to HEVC, VVC adopted this block-based blending method to increase the prediction precision through the BCW scheme. For every CU, the BCW method is performed with several candidate weights: five weights are used for low-delay pictures, and three weights otherwise. To refine the bi-prediction signal of a CU, the BDOF method is applied: for each 4 × 4 sub-CU, a motion refinement is calculated by minimizing the difference between the L0 and L1 prediction blocks, and this refinement is then used to adjust the bi-predicted signal. In order to avoid interactions between frame-based and block-based bi-prediction methods, the BCW and BDOF modes are not used if the WP mode is used.

Inspired by the success of deep learning in many computer vision tasks, numerous deep learning-based research projects for video coding have begun. Table 1 summarizes the problem formulations and related methods for deep learning-based inter prediction in video coding. To refine and/or replace the traditional inter prediction methods in video coding standards, several works have been proposed. Huo et al. [32] proposed a CNN-based motion compensation refinement (CNNMCR) scheme to refine the prediction block for uni-prediction in HEVC. Wang et al. [33] proposed a uni-prediction refinement network for HEVC consisting of a fully connected network (FCN) and a CNN, named the neural network-based inter prediction (NNIP) algorithm.

Table 1. Problem formulation and related methods for deep learning-based inter prediction in video coding:
• Uni-prediction block enhancement (refine the uni-prediction block): CNNMCR [32] and NNIP [33], for HEVC.
• Bi-prediction block generation (generate the bi-prediction block): CNN-based bi-prediction [28] and STCNN [29], for HEVC.
• Frame extrapolation and interpolation (estimate an additional reference frame from existing reference frames): VECNN [34], MQ-FKCNN [35], and deep network-based frame extrapolation [36], for HEVC and VVC; affine transformation-based deep frame prediction [37], for HEVC.

Furthermore, there are several studies related to the extrapolation and interpolation of frames, which can be used as additional reference frames. Zhao et al. [34] proposed a deep virtual reference frame enhancement CNN model (VECNN) to replace the traditional frame rate up-conversion (FRUC) algorithm at both the frame and coding tree unit (CTU) levels. Liu et al. [35] proposed a multi-scale quality attentive factorized kernel convolutional neural network (MQ-FKCNN) to generate additional reference frames. Huo et al. [36] proposed a deep network-based frame extrapolation method using reference frame alignment for HEVC and VVC. Recently, Choi et al. [37] proposed a neural network-based frame estimation from two reference frames by applying an affine transformation-based scheme.
For improvement of bi-prediction in HEVC, Zhao et al. [28] proposed a CNN-based bi-prediction scheme based on a patch-to-patch inference strategy. The network in [28] stacks six convolution layers with a skip connection. For luma prediction units (PUs) with sizes 16 × 16, 32 × 32, and 64 × 64, this network replaces the traditional averaging bi-prediction method. Later, Mao et al. [29] proposed a CNN-based bi-prediction method, named STCNN, that utilizes both spatial neighboring regions and temporal display orders as extra inputs to further improve prediction accuracy. By exploiting the correlation of neighboring pixels and video frames, this work effectively replaced the averaging bi-prediction method. As in [28], the network in [29] applies six convolution layers with a skip connection. Furthermore, as an additional experiment, this method replaced the bi-prediction mode in HEVC by combining the BIO [38] method; however, its target is still the replacement of the averaging bi-prediction method.
As mentioned above, both works aimed at replacing the average-function-based bi-prediction method in HEVC. Because the BCW and BDOF methods are supported for bi-prediction in addition to the average method in VVC, replacing the traditional bi-prediction is more difficult. In other words, the bi-prediction process in VVC is more complicated and improved than that of HEVC, because these novel bi-prediction tools lead to significant performance gains. Therefore, in order to replace the enhanced bi-prediction process, it is necessary to construct a more elaborate CNN-based framework.
Methodology
In this section, the proposed attention-based bi-prediction network (ABPN) is presented. First, the architecture of the proposed network is described. Second, the knowledge distillation-based training technique for lightening the network is illustrated. Finally, the details of integrating the proposed ABPN into VVC are described.
Architecture of the Proposed ABPN
In contrast to the existing bi-prediction methods in VVC, the proposed method enhances the quality of the bi-prediction block in a CNN-based manner. Because the BCW and BDOF methods are supported for bi-prediction in addition to the average method in VVC, replacing the traditional bi-prediction is more difficult; the proposed ABPN, designed as a sophisticated structure, is therefore advantageous for a high-performance video codec such as VVC.
The proposed network is illustrated in Figure 2. Given two reference prediction blocks P_0 and P_1 of size H × W, where H is the height and W is the width, the goal of the network is to estimate the bi-prediction block P_bi. The output block size of the network is H × W, equal to the size of the input blocks. The two reference prediction blocks are the results obtained during each motion-estimation process in different (or the same) reference frames. In the motion-compensation process, one bi-prediction block is produced by blending two motion-compensated blocks after performing uni-prediction mode. The essential part of fusing two prediction blocks is finding accurate intermediate motion information between the motions of the two prediction blocks, while the important edge and texture details in each reference prediction block should be maintained. By utilizing the attention map between the two reference features, we can obtain appropriate motion information for the current target block; to retain the low-frequency and high-frequency characteristics that should not be missed, we exploit local and global residual learning structures. Each of the two input prediction blocks is fed into three CNN layers to increase the number of features,

F_i^j = f(W_i^j ∗ F_i^{j−1} + b_i^j), j = 1, 2, 3, with F_i^0 = P_i,

where f(·), W_i^j, F_i^j, and b_i^j denote the activation function, the weight, the output feature, and the bias of the j-th convolution layer, respectively, and i is the index of the reference block. As the activation function of all convolution layers except the last layer in the overall network, the leaky rectified linear unit (LeakyReLU) is used. The rectified linear unit (ReLU) activation function is commonly applied in CNNs; however, if the output value of a convolution layer is negative, it is converted to zero, and such zero values are harmful during training. The LeakyReLU function, which multiplies negative values by 0.01, resolves this problem. The attention map M between the two features F_0^3 and F_1^3 is obtained by using the dot product and the sigmoid activation function,

M = Sigmoid(F_0^3 · F_1^3),

where · and Sigmoid(·) denote the dot product and the sigmoid activation function, respectively. The sigmoid activation function converts the output values to the range [0, 1]. As opposed to the softmax function, which is more appropriate for inferring a class vector based on probability, the sigmoid function is more suitable for generating an attention map. The attention map is then multiplied with the embedded features obtained by the first convolution layers,

F_i^a = M ⊙ F_i^1, i = 0, 1,

where ⊙ denotes the element-wise product, and F_0^a and F_1^a are the attended features of the two reference blocks. The process of generating the attended features is illustrated with blue lines in Figure 2. The embedded features and attended features are then concatenated,

F_c = [F_0^1, F_0^a, F_1^a, F_1^1],

where [·, ·, ·, ·] denotes the concatenation function and F_c is the concatenated feature. The concatenated feature then passes through two CNN layers, yielding F^4 and F^5. After that, F^5 is fed into N_rb residual blocks, in which there are no batch normalization units [30]. The stacked residual blocks play the role of local skip-connection blocks. Each residual block can be expressed as

F_rb(k)^l = f(W_rb(k)^l ∗ F_rb(k)^{l−1} + b_rb(k)^l),

where W_rb(k)^l, F_rb(k)^l, and b_rb(k)^l denote the weight, the output feature, and the bias of the l-th convolution layer in the k-th residual block, respectively. The input feature of the first residual block equals F^5.
Then, the output feature of the last residual block, F_rb(N_rb)^2, passes through two more CNN layers to produce the predicted residual block F_res. Finally, the bi-prediction block is obtained by adding the predicted residual block to the average of the two input blocks,

P_bi = Average(P_0, P_1) + F_res,

where Average(·) denotes the average function. This global skip-connection process is illustrated with green lines in Figure 2. The proposed network can preserve important features at diverse depths of the network by using local and global residual learning.
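A minimal PyTorch sketch of this architecture is given below. The exact layer widths, the shared embedding branch, and the use of an element-wise product for the attention map are assumptions for illustration; the authoritative structure is the one shown in Figure 2.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 conv layers with a local skip connection (no batch norm)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class ABPNSketch(nn.Module):
    def __init__(self, ch=32, n_rb=5):
        super().__init__()
        self.embed = nn.Sequential(           # 3-layer feature extractor
            nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.01))
        self.fuse = nn.Sequential(            # two layers after concatenation
            nn.Conv2d(4 * ch, ch, 3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.01))
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_rb)])
        self.tail = nn.Sequential(            # last layer has no activation
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, p0, p1):
        f0, f1 = self.embed(p0), self.embed(p1)
        att = torch.sigmoid(f0 * f1)          # attention map in [0, 1]
        fc = torch.cat([f0, att * f0, att * f1, f1], dim=1)
        res = self.tail(self.blocks(self.fuse(fc)))
        return (p0 + p1) / 2 + res            # global skip over the average

# Example: fuse two 32x32 luma prediction blocks
p0, p1 = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
p_bi = ABPNSketch()(p0, p1)                   # -> (1, 1, 32, 32)
```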
KD-Based Lightweight Network Design
In the majority of cases, deep learning-based prediction methods greatly increase encoding and decoding computational complexity as much as they improve coding performance. In particular, in the encoder, because the prediction process is performed at the CU level, every CU in the recursive block partitioning structure must be encoded for all candidate prediction modes. Due to this large number of cases, increased computational complexity for a deep learning-based prediction mode is unavoidable. Moreover, from the decoder point of view, it is important to deploy deep learning models on devices with limited resources. However, an overly simple prediction network can produce insufficient results.
To compress the size of the network effectively, we adopt the knowledge distillation (KD) [31] strategy in the training stage. The KD-based training structure in deep learning is a teacher-student architecture: once a massive model has been trained, it becomes the teacher model, which transfers its knowledge to a small student model to obtain an optimal lightweight model. Through the KD mechanism, we can acquire a more suitable model under resource-limited circumstances.
Because KD was initially proposed for recognition tasks [31], we design the loss function for the low-level vision task. The structure of the KD-based learning strategy is illustrated in Figure 3. Two input blocks are fed into both the teacher model and the student model; the teacher model is a pretrained large model, and the student model is the one to be trained. In this paper, we compress the network in two respects: the number of features of all CNN layers except the last layer, and the number of residual blocks. When the number of features and residual blocks of the teacher model are defined as N_f_t and N_rb_t, the aim of KD-based training is to discover the proper number of features and residual blocks of the student model, N_f_s and N_rb_s. To this end, the output of the teacher model is utilized in the loss function, and only the weights of the student model are updated. We design a loss function consisting of the distillation loss L_d and the student loss L_s. For both losses, we use the Charbonnier penalty function [39],

L_d = √(‖P_bi_s − P_bi_t‖² + ε²), L_s = √(‖P_bi_s − P̂_bi‖² + ε²),

where P_bi_t, P_bi_s, and P̂_bi denote the prediction output of the teacher model, the output of the student model, and the ground truth, respectively, and ε is set to 1 × 10⁻³. The final loss is defined as

L = αL_d + (1 − α)L_s,

where α is a balancing parameter. The selection of N_f_s, N_rb_s, and α is described in the experimental results section.
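A minimal PyTorch sketch of this loss, assuming the distillation and student terms are blended as αL_d + (1 − α)L_s as above:

```python
import torch

def charbonnier(x, y, eps=1e-3):
    """Charbonnier penalty [39]: a smooth, robust L1-like loss."""
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def kd_loss(p_student, p_teacher, p_gt, alpha=0.5):
    """Distillation loss L_d plus student loss L_s, blended by alpha;
    the teacher output is detached so only the student is updated."""
    l_d = charbonnier(p_student, p_teacher.detach())
    l_s = charbonnier(p_student, p_gt)
    return alpha * l_d + (1 - alpha) * l_s
```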
The Scope of the Proposed ABPN in VVC
In VVC, the QT-MTT block partitioning structure is applied to split a frame into CUs of various shapes and sizes, and the prediction process is performed at the CU level. Because applying the deep learning-based bi-prediction mode to all possible CU cases would increase computational complexity significantly, it is necessary to select only the few block types that most affect performance. Table 2 shows the number of CUs and the area ratio for different CU sizes in bi-prediction for the first 2 s on the VTM-11.0 NNVC-1.0 anchor. Based on Table 2, the proposed deep learning-based bi-prediction mode replaces all of the traditional bi-prediction modes on CUs of sizes 128 × 128, 64 × 64, and 32 × 32. The proposed method is applied only to the luma component of the CU; the average mode is applied to the chroma components of CUs to which the proposed method is applied.
The Strategy of the Proposed ABPN in VVC
The proposed ABPN is applied as a new bi-prediction mode within the existing motion-compensation process in VVC. Figure 4 shows the flowchart of bi-prediction in the motion-compensation process with the proposed method. First, the existence of the two reference prediction blocks is checked; to execute an actual bi-prediction, both must exist. If only one of the two exists, the existing block itself is copied to the bi-prediction block buffer. When both prediction blocks exist and the CU size is 128 × 128, 64 × 64, or 32 × 32, the deep bi-prediction mode with the proposed ABPN is performed. For other CU sizes, the index of the BCW weight is checked: among the five allowed BCW weights, one corresponds to the average mode, called BCW_DEFAULT, and the actual BCW mode is performed for the remaining four weights. If the BDOF mode is set to TRUE, averaging with BDOF-based refinement is performed; otherwise, simple averaging is performed.
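The decision flow of Figure 4 can be summarized by the following sketch, where `abpn`, `bcw`, and `bdof_refine` are passed-in callables serving as hypothetical stand-ins for the corresponding codec routines.

```python
def biprediction_block(p0, p1, cu_size, bcw_idx, bdof_flag,
                       abpn, bcw, bdof_refine, bcw_default=4):
    """Mode selection mirroring Figure 4; `abpn`, `bcw`, and
    `bdof_refine` are hypothetical stand-ins for the codec's routines."""
    if p0 is None or p1 is None:          # only one reference block exists:
        return p0 if p1 is None else p1   # copy it to the output buffer
    if cu_size in {(128, 128), (64, 64), (32, 32)}:
        return abpn(p0, p1)               # deep bi-prediction mode (ABPN)
    if bcw_idx != bcw_default:
        return bcw(p0, p1, bcw_idx)       # weighted averaging (BCW)
    if bdof_flag:
        return bdof_refine(p0, p1)        # averaging with BDOF refinement
    return (p0 + p1) / 2                  # simple averaging
```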
Generation of Training Dataset
For the proposed ABPN, the BVI-DVC [40] sequence dataset is used to generate the training dataset. The BVI-DVC dataset consists of 200 source sequences, each provided at four resolutions (3840 × 2160, 1920 × 1080, 960 × 540, and 480 × 270), i.e., 800 sequences in total. The dataset covers a large variety of motion types, including camera motion, human actions, and animal activity, and is therefore suitable for building a training dataset. The VTM-11.0 NNVC-1.0 reference model is used to compress the sequences configured for random access (RA) under five quantization parameters (QPs) {22, 27, 32, 37, 42}. In the decoder of the VTM anchor, for each CU size {32 × 32, 64 × 64, 128 × 128}, the L0 and L1 reference prediction blocks in the bi-prediction process are extracted to serve as the network inputs. The corresponding ground-truth patches are cropped from the raw video frames at the locations of the current CUs. For the different block types {128 × 128, 64 × 64, 32 × 32} and QPs {22, 27, 32, 37, 42}, datasets are constructed and trained independently. Table 3 shows the number of block pairs for each type of training data; the dataset contains 8,972,496 pairs of blocks in total.
Training Settings
For the proposed ABPN, 15 models were trained over three CU sizes and five QPs. For training, we used the Adam optimizer [41] with β_1 = 0.9 and β_2 = 0.99. We adopted the cosine annealing scheme [42] with an initial learning rate of 4 × 10⁻⁴ and a mini-batch size of 64. For each model, the number of iterations was set to 600 K; the number of epochs therefore differs per model, as shown in Table 3. For all experiments, we observed that the weights converged to the optimum before the last epoch. The proposed ABPN was implemented in PyTorch on a PC with an Intel(R) Xeon(R) Gold 6256 CPU @ 3.60 GHz and an NVIDIA Quadro RTX 8000 48 GB GPU. We augmented the training data with random horizontal flips and 90° rotations.
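A minimal sketch of these training settings in PyTorch; the placeholder model and the abbreviated loop are illustrative.

```python
import torch

model = torch.nn.Conv2d(1, 1, 3, padding=1)   # placeholder for the ABPN
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4,
                             betas=(0.9, 0.99))
# Cosine annealing over the 600K training iterations
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=600_000)
for step in range(3):   # 600K iterations in the actual training
    optimizer.zero_grad()
    # loss = kd_loss(student_out, teacher_out, gt); loss.backward()
    optimizer.step()
    scheduler.step()
```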
KD-Based Training
To reduce the size of the deep neural network while maintaining its performance, we applied the knowledge distillation (KD) [31] strategy in the training stage. In this paper, we constructed the architecture of the student model by choosing the number of features of all CNN layers except the last layer and the number of residual blocks, while keeping the framework the same as the teacher model.
For the selection of the two hyperparameters (N_f_s and N_rb_s) and α in the loss function, we used the model with CU size 128 × 128 and QP = 37. First, we trained a teacher model with the number of features of all CNN layers except the last layer set to 64 and the number of residual blocks set to 10; the teacher model has 1,295,169 parameters. Then, we trained candidate student models using the pretrained teacher model under diverse settings of the number of features, the number of residual blocks, and the α value.
For hyperparameter determination, we constructed a validation dataset using some of the JVET common test conditions (CTC) [13] sequences, with 100 frames per sequence. We selected sequences from all of the classes (Tango2, CatRobot, BQTerrace, BQMall, BQSquare, ArenaOfValor); the validation dataset contains 164,026 pairs of blocks. Figure 5 shows the number of parameters and the peak signal-to-noise ratio (PSNR) for various hyperparameter settings of the student models in KD-based training. As shown in Figure 5a, the PSNR increases steeply from 8 to 32 features and becomes flatter thereafter; considering both the number of model parameters and the PSNR, we set the number of features of the student model to 32. In the same manner, Figure 5b demonstrates that five residual blocks (RBs) are appropriate for the student model, considering the number of parameters and the output quality. As shown in Figure 5c, for α values above 0.5 the results are almost identical, so we selected α = 0.5. As a result, the final model of the proposed ABPN consists of 32 features for each CNN layer except the last layer and five residual blocks; the lightweight network has 231,905 parameters.
Encoding Configurations
The proposed ABPN has been integrated into the VTM-11.0 NNVC-1.0 reference software. PyTorch 1.8.1 was used to perform the proposed deep neural network-based bi-prediction mode in VTM. We compared our scheme with VTM in two settings: VTM with the bi-prediction baseline (only the average mode is used) and the VTM anchor (all of the traditional bi-prediction modes are used). We follow the JVET common test conditions (CTC) for neural network-based video coding technology [13] in all experiments.
In the experiments, the low-delay B (LDB) and random access (RA) configurations were tested under five QPs {22, 27, 32, 37, 42}, using the first 2 s of each sequence from Classes A to E. Coding with the deep learning-based bi-prediction mode was conducted on the GPU in the same environment as training.
Comparisons with VVC Standard
The overall results of BD-rate [43] reduction are shown for the Y component, because the bi-prediction mode with the proposed method is applied only to the Y component. The running time ratios of encoding and decoding are indicated by EncT and DecT, respectively. The results of BD-rate reduction and encoding/decoding computational complexity compared to VTM-11.0 NNVC-1.0 with the bi-prediction baseline under RA and LDB are reported in Table 4. The proposed bi-prediction framework achieves significant BD-rate reductions in both RA and LDB configurations compared with the traditional averaging bi-prediction strategy: 1.94% and 1.44% BD-rate savings are achieved on average under RA and LDB, respectively. In particular, for BQSquare, the proposed bi-prediction method achieves up to 8.21% and 5.37% BD-rate reductions under RA and LDB, respectively. To describe the improvement of the proposed method in comparison with the three bi-prediction modes (averaging, BCW, BDOF) of VVC, the coding performance compared to the VTM-11.0 NNVC-1.0 anchor is shown in Table 5. The proposed method achieves up to 5.89% and 4.91% BD-rate savings under RA and LDB, respectively. Although the overall encoding time increases considerably, the overall decoding time differs little: even though the proposed method adopts a CNN-based strategy, the proposed ABPN showed on average only 85% and 143% of the anchor decoding time under RA and LDB, respectively. From this result, we can deduce that the proposed integration strategy, which rarely changes the existing standard structure, is substantially effective in terms of decoding computational complexity. Video coding standards specify only the format of the coded bitstream, the syntax, and the operation of the decoder; in other words, the operation of the encoder is not a critical issue in video coding standards. Therefore, the experimental results show that the proposed bi-prediction method is suitable and offers a good trade-off between efficiency and complexity compared to the existing VVC standard.

Table 5. BD-rate reduction and encoding/decoding computational complexity compared to the VTM-11.0 NNVC-1.0 anchor (BD-rate, EncT, and DecT under RA and LDB, per sequence, for Classes A1 (3840 × 2160), B (1920 × 1080), C (832 × 480), D (416 × 240), and E (1280 × 720)).

The qualitative results on BQSquare and Johnny are presented in Figure 6. For both sequences, the first column shows the original frame, the second column the frame compressed by the VTM-11.0 NNVC-1.0 anchor, and the last column the frame compressed by applying the proposed ABPN-based bi-prediction. As shown in Figure 6, our approach preserves edge details more robustly than the VTM anchor, and it can be observed that the proposed method reconstructs signals close to the original data. Figure 7 shows the difference in block partition structures between the traditional bi-prediction methods and the proposed ABPN-based bi-prediction method. Because the proposed bi-prediction mode is applied to three block sizes {32 × 32, 64 × 64, 128 × 128}, CUs of these sizes increased. In particular, because the resolution of the example in Figure 7 (416 × 240) is relatively small, 32 × 32 CUs occupy the largest proportion. Furthermore, as small CUs are merged into large CUs, fewer coding bits are required in the overall video coding process. As a result, the proposed scheme contributes to the overall coding efficiency.
Analysis on Performance of KD-Based Strategy and Attention Mechanism
To further analyze the contribution of the proposed ABPN, Table 6 shows the results of the ablation study for KD-based strategy and attention mechanism. We evaluated the model with CU size 128 × 128 and QP = 37 by using the validation dataset.
To analyze the computational complexity, Table 6 reports the number of model parameters and the number of floating-point operations (FLOPs) [44] for the teacher model of the proposed ABPN (ABPN-T), the proposed ABPN, and the proposed ABPN without the KD strategy or the attention mechanism; gigaFLOPs (GFLOPs) denote a billion FLOPs. From Table 6, the difference between ABPN-T and ABPN is large on all computational complexity measures. Furthermore, the PSNR of the proposed ABPN is higher than that of the proposed ABPN without KD. Therefore, the KD-based training strategy helps to obtain a high-performance model when lightening a deep learning model. To train the proposed ABPN without the attention mechanism, we used the concatenated tensor of the two reference prediction blocks as the model input; as in the proposed ABPN, the input tensor is fed into three CNN layers, two CNN layers, five residual blocks, and two CNN layers, and the KD-based training strategy with the pretrained teacher model is applied. The proposed ABPN shows a higher PSNR than the variant without attention, demonstrating that the attention between the two prediction blocks extracts improved features. Therefore, in fusing two images, applying a complicated structure such as an attention mechanism is more effective than using general CNN layers only.

Figure 8 visualizes output feature maps of the CNN layers of the proposed ABPN. We used the 128 × 128 prediction blocks in POC20 of BQTerrace under RA and QP = 37 as inputs and extracted the first four output feature maps of each CNN layer. In Figure 8a,b, the upper row shows the ground truth (GT) block: the GT block of Figure 8a contains ripples, and that of Figure 8b contains chairs, tables, and a person. We visualized the output feature maps of the three CNN layers for the first prediction block before the designed attention module, selected a CNN layer between the attention module and the residual blocks to demonstrate the effect of attention, and selected the CNN layer preceding the last layer. In all results, the feature maps become stronger from the top layer to the bottom layer. In particular, the results demonstrate that finer details can be recovered after the attention module. Therefore, the proposed ABPN is able to generate enriched features through the designed attention mechanism.
Conclusions
In this paper, we have proposed an attention-based bi-prediction network (ABPN) to effectively improve the performance of bi-prediction in VVC. The proposed bi-prediction method can handle various kinds of motion variation in a non-linear mapping manner. The structure of the proposed ABPN consists of an attention mechanism together with local and global skip-connections, which allows it to generate a precisely fused feature. In addition, by adopting the knowledge distillation (KD)-based training strategy, we have reduced the number of network parameters considerably. The proposed ABPN is integrated into VVC as a novel bi-prediction method. Experimental results demonstrate that the proposed ABPN significantly enhances overall coding performance.
Compared to VTM-11.0 NNVC-1.0 using only the averaging mode, the proposed method yielded 1.94% and 1.44% BD-rate reductions on average for the Y component under RA and LDB, respectively. The proposed ABPN achieved 0.86% and 1.37% BD-rate savings on average for the Y component under RA and LDB, respectively, compared with the VTM-11.0 NNVC-1.0 anchor, which uses all of the bi-prediction modes. As a consequence, the proposed method can improve the effectiveness of video transmission schemes for video sensor networks over 5G networks.
In this work, the bi-prediction block was generated by the proposed ABPN. In future work, the inter prediction scheme can be further improved by enhancing the uni-prediction block. In addition, extending the training dataset with additional encoding information, such as the merge mode flag, may yield better coding performance.
"Computer Science"
] |
ResA3: A Web Tool for Resampling Analysis of Arbitrary Annotations
Resampling algorithms provide an empirical, non-parametric approach to determine the statistical significance of annotations in different experimental settings. ResA3 (Resampling Analysis of Arbitrary Annotations, short: ResA) is a novel tool to facilitate the analysis of enrichment and regulation of annotations deposited in various online resources such as KEGG, Gene Ontology and Pfam or any kind of classification. Results are presented in readily accessible navigable table views together with relevant information for statistical inference. The tool is able to analyze multiple types of annotations in a single run and includes a Gene Ontology annotation feature. We successfully tested ResA using a dataset obtained by measuring incorporation rates of stable isotopes into proteins in intact animals. ResA complements existing tools and will help to evaluate the increasing number of large-scale transcriptomics and proteomics datasets (resa.mpi-bn.mpg.de).
Introduction
Gene and protein annotations like Gene Ontology, KEGG or Pfam provide a systematic approach to classify protein function and localization. The statistical analysis of these gene annotations allows deep insight into regulatory circuits between functionally and spatially related groups of genes. To identify over-represented groups of genes and proteins from a large-scale dataset, most analyses construct a target set based on fold-change or some statistical value to distinguish between regulated and non-regulated candidates. For example, tools like GOrilla, GoMiner and Catmap [1,2,3] use separate target and background sets to calculate enriched GO terms, or use a ranked gene list without experimental values. However, arbitrary cutoffs generate a bias, and information in the dataset can be lost; ranked lists also lack any information about the type of distribution. Thus, for a more impartial analysis, random permutation approaches independent of cutoffs were developed. ErmineJ [4], a tool providing a microarray-focused permutation-based analysis, will be discussed later.
Here, we present ResA, a universal web tool designed to determine the statistical significance of sample distributions defined by annotations in genomic and proteomic datasets. Samples of experimental values linked to an annotation are evaluated for the significance of a statistical property (estimator) such as the standard deviation (SD), coefficient of variation (CV) or deviation of the mean. ResA allows analysis of the enrichment and regulation of terms associated with protein complexes, function and other classifications. Significance is estimated by applying a resampling algorithm, which empirically estimates the significance of a statistic of a selected set of experimental values by repetitively and randomly collecting samples of the same size from the complete dataset. For example, suppose gene ontology analysis reveals that 20 proteins from the whole dataset belong to a proteasomal term; the estimator statistic (e.g., SD) of the given experimental values (e.g., incorporation rate of stable isotopes) is calculated, and ResA compares this statistic to that of 1000 randomly selected sets of the same size from the whole dataset. If the random sample statistics are mainly less extreme than that of the proteasomal set, the determined p-value will be low (Figure 1A, B). In addition, ResA is not limited to common annotations and can handle any custom annotation type. Moreover, it provides a feature for full and slim Gene Ontology annotation of nine different organisms based on gene names and UniProt identifiers using UniProt-GOA [5]. ResA has no limitations with respect to the type of distribution, since the resampling approach inherently accounts for the distribution of the underlying population. Here we show that ResA is able to test for significantly regulated terms and is capable of enrichment analysis within the complete dataset, as demonstrated by the analysis of ¹³C₆-lysine (Lys6) incorporation rate experiments in living animals. In addition to the p-value, the false discovery rate (FDR) is determined empirically, taking into account dependencies within the dataset [6]. Taken together, the tool provides unbiased extreme value (regulation) and enrichment analysis without choosing a cutoff or interval to define the target dataset.
Design and Implementation
The ResA algorithm was implemented using Python 2.5.2 with SciPy 0.6.0 and R 2.6.2. The web interface was implemented using Python, HTML, CSS and Java Script on an Apache web server running the mod_python module. Figure 1A, B illustrates the general flow of the resampling procedure. After parameter setup, data upload, and optional Gene Ontology annotation the resampling analysis on the chosen estimator is applied to each sample defined by the annotations provided.
The m experimental values associated with a given annotation term (e.g., a GO term) are collected, and the estimator statistic (e_s) of this sample is calculated. The resampling procedure samples the complete dataset R times (m out of n) by picking m elements randomly with replacement. The estimator statistic is evaluated for each sample (r_s) and stored in the empirical resampling distribution (RD). After R iterations, the RD is sorted and the relative rank (r_r) of e_s is determined. Of note, the relative rank is, to a close approximation, equal to the probability of obtaining the same or a more extreme value of e_s by chance; therefore, the relative rank gives the type I error probability, which reflects the significance level of the target set. To increase the resolution, linear interpolation between ranks and optional fitting of the generalized Pareto distribution to the upper and lower 2% of the RD are performed by default using the R package fExtremes (Figures S1 and S2). To increase the speed of analysis, the empirical resampling distributions are reused for samples of equal size m in the estimation of the type I error probability. We estimate the false discovery rate (FDR) using permutations of the dataset to serve as the H_0 distribution while retaining the interdependence of the underlying data. Specifically, this is done for each p-value by dividing its rank in the H_0 distribution (r_p(H_0)) by its corresponding rank in the H_1 distribution (r_p(H_1)) [6]. A minimum of 1000 p-values (a multiple of n) based on the H_0 distribution are generated to provide sufficient resolution for the FDR estimation.
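A minimal Python sketch of the core resampling step described above (omitting the rank interpolation, Pareto tail fitting, and FDR estimation):

```python
import numpy as np

def resampling_pvalue(values, term_idx, estimator=np.std, R=1000, seed=0):
    """Relative rank (~ type I error probability) of a term's estimator
    statistic against R random samples of the same size m, drawn with
    replacement from the whole dataset of n values."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    e_s = estimator(values[term_idx])              # statistic of the term
    m = len(term_idx)
    rd = np.sort([estimator(rng.choice(values, size=m, replace=True))
                  for _ in range(R)])              # resampling distribution
    return e_s, np.searchsorted(rd, e_s) / R       # statistic, relative rank

# Example: 20 proteins annotated to a proteasomal term
data = np.random.default_rng(1).normal(0.5, 0.1, size=2000)
e_s, r_r = resampling_pvalue(data, term_idx=np.arange(20))
```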
Program Usage and Settings
Datasets can be inserted or uploaded via the web interface (Figure 2b). Data must be tab-separated and formatted as shown in Figure 3A; they may include gene symbols and UniProt identifiers, which are used by the Gene Ontology annotation feature. To annotate a dataset containing gene symbols and/or UniProt identifiers, the appropriate organism can be set (Figure 2d), and the user can choose between full or slim GO annotation (Figure 2e) and full or experimental evidence (Figure 2f). The annotation is based on the gene and UniProt identifiers provided by UniProt-GOA and is updated monthly.
It is essential that the first column contains the experimental value (log fold-change, isotope incorporation, intensity, etc.). Titles of the annotation columns are used to discriminate between different types of annotation in the results. If a pasted dataset already contains annotations such as KEGG and Pfam, these terms must reside in tab-separated and titled columns (Figure 3A and online help), and the annotation columns must be located last. Importantly, multiple identifiers or terms in one column must be separated by semicolons. For example, multiple identifiers are common for protein groups in mass spectrometric data, and annotation will be done for all entries. Similarly, multiple terms are common with Gene Ontology, as one gene can be associated with several compartments at the same time; any of these terms will be treated independently. When columns of annotation are provided, the correct number of columns must be set in the respective spin-box of the web interface (Figure 2c). The dataset used in the example analysis already contains three annotation columns (KEGG, Pfam and InterPro) and is available on the web site of the tool as a tab-separated text file and an Excel file (Figure 2a).
Regulation or enrichment based on various statistics (estimators) can be tested, depending on the focus of the analysis (Figure 2g). The number of resamplings R can be set in the range from 500 to 10,000 (Figure 2h); the value of R directly determines the resolution of the p-value and correlates linearly with the running time. The default of 1,000 resamplings is a compromise between accuracy and time consumption. Since the resolution of the p-value is limited by the number of resamplings, the tool interpolates between ranks of the empirical resampling distribution (RD) to obtain relative ranks in the interval (0, 1). In addition, optional fitting of the generalized Pareto distribution to the tails of the RD increases the resolution of the p-values and reduces the occurrence of zero p-values (Figure 2j). The minimum sample size (number of experimental values) of the terms, m, can be set as an absolute quantity and defaults to 5 (Figure 2i). This setting can be increased or decreased depending on interest in rare annotations; it works as a cutoff for small sample sizes and has no further impact on the analysis of samples larger than m. If m is set to higher values, the speed of analysis increases slightly.
We recommend specifying a subject name for the analysis. The results can be received by providing an email address (Figure 2k). Results are displayed in three linked levels, following visual information on the progress of the analysis. First, the annotation types are listed with term frequencies. Second, the corresponding terms are listed in a sortable view together with statistics and a seven-figure summary diagram [7] of the sample distribution (Figure 3B). The seven-figure summary, similar to a box plot, shows the minimum, maximum, first and third quartiles and median, with additional marks for the 10th and 90th percentiles; in addition, the mean is displayed in a light color. Third, the assigned proteins or genes are presented by selecting a term. A download of the complete dataset containing annotation types, terms and statistics is available at the first level of results. To facilitate figure preparation, the term view containing the diagrams can be inserted into Excel by copy-and-paste. The results are stored for a period of 14 days after generation.
Results and Discussion
To demonstrate the usefulness of ResA we performed a pulsed stable isotope labeling experiment in living animals. Two mice were fed for two weeks with a diet containing Lys6 (purchased from Silantes). After labeling, heart tissues were isolated and the extracted proteins were subjected to liquid chromatography mass spectrometry as described in [8]. RAW data were analyzed with MaxQuant (version 2.2.9). The SILAC ratio of the labeled and unlabeled peaks (H/L) reflects the Lys6 incorporation rates of individual proteins in heart tissue. Prior to the analyses, the ratios were transformed to the relative scale by H/(H+L). In addition to the GO annotation using ResA, we included KEGG, Pfam and InterPro information using the Perseus tool [9]. The dataset and the complete results are accessible on the web interface.
First, we identified annotations of proteins with significantly high or low Lys6 incorporation rates, which corresponds to the detection of extremely high or low values in log2 fold-change distributions. For this analysis we used the mean divided by the standard deviation, which is related to the t-statistic, as statistical estimator, because variations in the sample mean increase in significance when accompanied by a low standard deviation (Figure 4, analysis 1). In addition, we used the width of the interval from the 10th to the 90th percentile to monitor all terms with a specific narrow range of Lys6 incorporation rates, regardless of their location relative to the population mean. The advantage of this non-parametric estimator over the standard deviation is its robustness to outliers (Figure 4, analysis 2). The results of the analysis revealed that the protein groups with the highest levels of Lys6 incorporation belong to structures such as transport vesicle, recycling endosome and the eIF3 complex (Figure 4 and online results of example data). Conversely, we detected mainly GO-terms of the basal lamina in the group with low Lys6 incorporation rates, showing an average Lys6 incorporation of 0.18 (SD 0.07). Thus, our data confirmed recent studies in cell culture systems and living animals, which identified similar stable isotope incorporation rates for proteins in the same cellular compartments and complexes [10,11]. Moreover, we detected the Wnt pathways with 17 members, including β-catenin, GSK3, and casein kinase, with a median Lys6 incorporation of 0.55 (SD 0.06) (p-value <0.001), indicating very similar incorporation for these pathway members.
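The two estimators just described are simple to state in code. The following sketch (our illustration, with invented toy values) shows how they could be passed as the estimator argument of the resampling_pvalue function above:

```python
import numpy as np

def mean_over_sd(x):
    """Location estimator related to the t-statistic (analysis 1): the sample
    mean gains significance when accompanied by a low standard deviation."""
    return np.mean(x) / np.std(x, ddof=1)

def decile_width(x):
    """Robust spread estimator (analysis 2): width of the 10th-90th percentile
    interval, insensitive to outliers, flags terms with a narrow value range."""
    p10, p90 = np.percentile(x, [10, 90])
    return p90 - p10

# invented Lys6 incorporation rates for one hypothetical term
lys6 = np.array([0.51, 0.55, 0.58, 0.54, 0.56, 0.60, 0.53])
print(mean_over_sd(lys6))   # large: tight distribution far from zero
print(decile_width(lys6))   # small: narrow range of incorporation rates
```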
Clearly, the GOrilla tool is also capable of finding annotations with significantly low or high Lys6 incorporation rates without explicit definition of a background set. However, this approach uses ranked lists as input and an algorithm based on a minimum hypergeometric score to discover enriched GO-terms. In contrast, our approach yields information complementary to the output provided by GOrilla, and the results of ResA reveal the underlying distributions of the datasets. Furthermore, ResA covers not only significant annotations but also reports non-significant terms.
In contrast to ResA, the software tool ErmineJ is dedicated to the analysis of microarray data and provides a powerful resampling-based gene set enrichment analysis (gene score resampling, GSR). While the annotation of ErmineJ is based on mRNA probe-set IDs, ResA accepts gene symbols and UniProt IDs for GO annotation. In cases where a custom annotation is provided together with the input data, ResA does not need a specific type of identifier. The GSR of ErmineJ works on scores representing the significance of differential gene expression, such as the -log(p-value), while ResA works directly on abundance ratios or incorporation levels. Thereby, ResA retains information about the mode of regulation and provides further types of analysis using additional estimators such as the SD.
In order to assess the reproducibility and stability of the determined significances, we used different numbers of resamplings R and analyzed the Lys6 incorporation datasets. The scatterplot (Figure 5A, lower half) and QQ-plot matrix (Figure 5A, upper half) display the average of the p-values for R equal to 500, 1000, 5000 and 10000. Plotting the data with R = 500 against calculations with higher numbers of resamplings resulted in a significantly broader scatter range. However, resampling with R = 1000 resulted in p-values comparable to higher orders of resampling, indicating that R should be ≥1000 to obtain a reasonable resolution. More detailed scatterplot matrices containing all replicates and values for R between 500 and 10000 are available in Figure S3.
Next, we tested the correlation between calculated p-values and numbers of resamplings (R), and calculated the coefficient of variation (CV) of the p-values between R = 500 and R = 10000 (Figure 5B). We observed higher values of the CV for R = 500 as compared to those with higher sampling rates, indicated by the 90th percentiles (Figure 5B, red lines) being 0.58, 0.33, 0.13 and 0.13 for R equal to 500, 1000, 5000 and 10000, respectively.
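This stability check is easy to reproduce in the spirit of Figure 5B. The snippet below is a toy illustration reusing the resampling_pvalue sketch from above, not the authors' analysis: it re-estimates one p-value twenty times per setting of R and reports the CV, which shrinks as R grows.

```python
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 5000)
term = background[:12] + 0.8

for R in (500, 1000, 5000, 10000):
    ps = [resampling_pvalue(term, background, R=R,
                            rng=np.random.default_rng(seed))
          for seed in range(20)]
    print(R, np.std(ps) / np.mean(ps))   # coefficient of variation of the p-value
```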
The Venn diagram in Figure 5C demonstrates the stability of KEGG-terms with p-values lower than 0.05 at R = 1000 in three replicates A, B and C. Of 62 different terms, 59 were present in all replicates (>95%). Taken together, we propose R = 1000 as a reasonable compromise in terms of resolution, reproducibility and CPU time. In principle, due to its resampling approach, ResA is independent of sample and data set size. An exact specification of the required data set size is difficult, since this property depends on the quality of the data. For typical applications, such as the analysis of proteomic or transcriptomic data sets, sample size should not be an issue.
The combination of arbitrary annotation handling, unbiased non-parametric empirical resampling including FDR estimation, and vivid presentation of the results makes ResA a valuable tool for data analysis, freely available to the scientific community.
Availability and Future Directions
The tool is available free of charge at http://resa.mpi-bn.mpg.de. The Gene Ontology annotation database (UniProt-GOA) will be updated monthly. To further increase the speed of the resampling procedure, it is planned to use CUDA-based parallel GPU programming.
"Computer Science"
] |
Experimental Infection of Voles with Francisella tularensis Indicates Their Amplification Role in Tularemia Outbreaks
Tularemia outbreaks in humans have been linked to fluctuations in rodent population density, but the mode of bacterial maintenance in nature is unclear. Here we report on an experiment to investigate the pathogenesis of Francisella tularensis infection in wild rodents, and thereby assess their potential to spread the bacterium. We infected 20 field voles (Microtus agrestis) and 12 bank voles (Myodes glareolus) with a strain of F. tularensis ssp. holarctica isolated from a human patient. Upon euthanasia or death, voles were necropsied and specimens collected for histological assessment and identification of bacteria by immunohistology and PCR. Bacterial excretion and a rapid lethal clinical course with pathological changes consistent with bacteremia and tissue necrosis were observed in infected animals. The results support a role for voles as an amplification host of F. tularensis, as excreta and, in particular, carcasses with high bacterial burden could serve as a source for environmental contamination.
Introduction
Francisella tularensis is a zoonotic intracellular bacterium that belongs to the γ-subclass of Proteobacteria [1,2]. Two F. tularensis subspecies cause clinical infections in humans: F. tularensis subsp. tularensis (type A), which is almost exclusively found in North America, and F. tularensis subsp. holarctica (type B), which occurs throughout the Holarctic region [3]. In Finland, dozens to several hundreds of human tularemia cases are registered each year, and incidence rates show marked geographical variation between districts [4]. From 1996 to 2004, the cumulative incidence of human tularemia in Finland was over 37 cases/100,000 inhabitants, which is the highest of all EU member states [5]. Meanwhile, a series of outbreaks has demonstrated the re-emergence of this disease in other European countries [6][7][8].
F. tularensis is renowned for its high infectivity and wide host range. The infectious dose for humans can be as low as 10 bacteria [9], and the bacterium has been isolated from numerous mammalian species, including rabbits, hares, voles and other rodents [10][11][12][13], and detected in natural waters and mud, and in mosquito larvae collected in endemic areas [14,15]. It is very likely that F. tularensis persists in natural waters, possibly in aquatic protozoa [16].
Humans become infected with F. tularensis through arthropod bites, direct contact with infected animals, inhalation of infective aerosols, or ingestion of contaminated food or water [4,9]. Clinical manifestations depend mainly on the infection route, and the disease severity depends on the infecting subspecies and strain [17]. After an incubation period of approximately 3-5 days (range: 1-14 days), non-specific influenza-like symptoms, especially fever, chills and headache, usually arise with rapid onset [2,9,17,18]. Infection through the skin results in ulceroglandular tularemia, while infection via the mucous membranes induces ulceroglandular, glandular, oculoglandular, or oropharyngeal tularemia [2,17]. In Fennoscandia, where the bacterium is transmitted mainly through mosquito bites [4,19], the ulceroglandular form is most common [4]. Inhalation of aerosolized F. tularensis causes pulmonary tularemia, the most severe form of the disease [20][21][22].
Tularemia outbreaks in humans have been linked to high rodent densities [7,18,[23][24][25], and exposure to rodents or their droppings was suspected as the infection source in a large outbreak in Kosovo [24,25]. However, the precise role of rodents in bacterial maintenance, and the nature of their association with human disease, have remained unclear. In Finland, the field vole (Microtus agrestis) and the bank vole (Myodes glareolus) are the dominant rodent species [26], and hence the most plausible hosts for F. tularensis. Indeed, we have recently detected the bacterium in a screening of wild field voles in Finland [27]. Here we report on an experiment to evaluate the pathogenicity of F. tularensis for these species, in order to further elucidate factors affecting their association with human disease outbreaks.
Ethics
Experimental procedures and facilities were approved by the Finnish Animal Experiment Board (Permit ESAVI/6162/04.10.03/2012), which followed the Finnish legislation for animal experiments. All efforts were made to minimize animal suffering.
Naturally infected animals
Three naturally F. tularensis-infected, PCR-positive adult field voles, trapped as part of a screening project in the Konnevesi area in Central Finland [27], were evaluated for the presence of bacteria in tissues and associated pathological changes, as a reference for the experimental infection study. Tissue specimens from lungs, liver and kidneys were collected from these animals and frozen at −20°C. Samples were later thawed and fixed in 10% buffered formalin for histopathological and immunohistological examination.
Animals for experimental infections
The experimental infections were conducted on visibly healthy adult (>8 weeks of age) field and bank voles. These animals were laboratory-born at the Finnish Forest Research Institute, Suonenjoki station, and were the progeny of wild voles captured in the surrounding area.
For the experimental infections, voles were transferred to the biosafety level 3 laboratory of the Faculty of Veterinary Medicine, University of Helsinki, Finland, where they were housed in individually ventilated and HEPA-filtered isolation cages (Isocage Unit, Tecniplast, Italy). Wood shavings covered the cage floor, and a cardboard roll was supplied for additional cover. Water and rodent pellets (22.5% crude protein, 5% crude fat, 4.5% crude fiber and 6.5% crude ash) were supplied ad libitum, and voles were given a slice of fresh apple every 1-2 days. Voles were placed into the cages three days prior to experimental infections.
Bacteriology
A strain of F. tularensis, originally isolated from a cutaneous ulcer of a 49-year-old woman and identified as ssp. holarctica by 16S rRNA gene sequencing, was used for the experimental infections. Bacteria were cultured on chocolate agar plates and incubated at +35°C in 5% CO2 for five days. A McFarland 1.0 suspension was prepared in sterile isotonic saline and diluted in ten-fold series to approximately 1000 colony-forming units (cfu)/ml. The actual concentration was determined by plate counting in each experiment. The diluted suspension was kept on ice and used for inoculations within 1-2 h of preparation. The viable count of F. tularensis in the remaining dilution was similar to that of the fresh dilution.
Experimental infections
Pilot study. A pilot study was conducted to identify a bacterial delivery route and dose that best mimic natural infections in voles, and to gather information on the incubation period and clinical course of infection. For this, two field voles were allocated to each of 4 dose/route combinations (total n = 8): either 120 (low dose) or 1,200 (high dose) cfu of Francisella tularensis ssp. holarctica (diluted in 100 µl of sterile isotonic saline), and either intranasal (i.n.) or subcutaneous (s.c.) delivery route. Experimental infections were conducted under brief isoflurane anesthesia, and s.c. injections were delivered between the shoulder blades. One further vole served as an uninfected control.
The animals were checked twice daily for signs of illness or death, and immediately euthanized if they exhibited signs of illness. After 9 days, all remaining voles were euthanized via cervical dislocation under isoflurane anesthesia. A full post mortem examination was performed immediately after death or when the voles were found dead, and samples from the spleen, lung, liver, and kidney were aseptically collected and frozen at −80°C for PCR analysis. In addition, samples of heart, lungs, liver, kidneys, spleen, mesenteric and mediastinal lymph nodes, brain, and inoculation sites (skin, nose) were fixed in 10% buffered formalin for histological and immunohistological assessment.
Main study. For the main experiment, 12 field voles and 12 bank voles were injected s.c. with a 100 µl suspension containing 70 cfu of F. tularensis ssp. holarctica in sterile isotonic saline. Three randomly selected animals of each species served as non-infected controls and were injected with 100 µl sterile isotonic saline alone. Voles were checked twice daily for signs of illness and death. Three infected voles of each species were electively euthanized on days 1 and 3 post infection (p.i.). The remaining voles were euthanized by cervical dislocation under isoflurane anesthesia if symptomatic. Animals were necropsied immediately after death, and urine, feces, spleen, and kidney samples were aseptically collected and frozen at −80°C for PCR analysis. Tissue specimens from lungs, liver, spleen, bone marrow, kidneys, stomach, duodenum, jejunum, colon, and the inoculation site were fixed in 10% buffered formalin for histological and immunohistological assessment.
Histology and immunohistology
Formalin-fixed tissue specimens from all animals were trimmed and routinely embedded in paraffin wax. Sections (3–5 µm) were prepared and stained with hematoxylin-eosin (HE) or used for immunohistology (IH). IH was performed using a mouse monoclonal antibody against F. tularensis LPS (clone T14; Meridian Life Sciences, Memphis, USA) and the horseradish peroxidase method (Envision; Dako, Glostrup, Denmark) with diaminobenzidine as chromogen, after antigen retrieval by microwave pretreatment in citrate buffer (pH 6.0).
DNA extraction and PCR analyses
DNA was extracted from vole tissue samples and excreta using commercial kits. The Wizard Genomic DNA Purification Kit (Promega, Madison, USA) was used for spleen and kidney samples, following the protocol for animal tissue. The QIAamp DNA Stool kit (Qiagen, Hilden, Germany) was employed for fecal samples (20 mg feces + 160 µl phosphate-buffered saline). From urine samples (24.5–140 µl), DNA was extracted with the QIAamp Viral RNA Mini kit (Qiagen, Hilden, Germany), using the protocol for purification of cellular, bacterial, or viral DNA from urine. Each sample batch contained water as a negative control. DNA concentration and purity were determined with the Nanodrop ND-1000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA).
The DNA samples were subjected to a modified semi-quantitative real-time PCR assay (qPCR) targeting the 23 kDa gene of F. tularensis [27,28]. All PCRs were run in duplicate on an ABI 7500 instrument (Applied Biosystems, Foster City, CA, USA). DNA from tissue samples was analyzed at a 1:100 dilution, and for urine and fecal samples three 10-fold dilutions (undiluted, 1:10, 1:100) were examined. The PCR assay included an internal positive inhibition control, water as negative non-template control, and F. tularensis LVS control strain DNA as positive control. The amount of F. tularensis bacteria in each sample was estimated in genomic equivalents (GE). To enable comparison of F. tularensis amounts in tissues of experimentally versus naturally infected voles, we also calculated the GE amount in relation to the estimated number of cells in the tissue samples [27].
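The conversion from a qPCR readout to genome equivalents can be illustrated with a minimal sketch. The standard-curve parameters, the DNA yield and the per-cell DNA constant below are invented placeholders, not values from this study; the paper's actual calibration is described in [27].

```python
def genome_equivalents(ct, slope=-3.32, intercept=38.0):
    """Convert a qPCR Ct value to genome equivalents (GE) via a linear
    standard curve Ct = slope * log10(GE) + intercept. Slope and intercept
    here are hypothetical; in practice they are fitted to a dilution series
    of quantified control DNA (e.g. the LVS control strain)."""
    return 10 ** ((ct - intercept) / slope)

def ge_per_cell(ge, dna_ng, pg_per_cell=6.0):
    """Normalize GE to the estimated number of host cells in the extract,
    assuming roughly 6 pg of genomic DNA per diploid mammalian cell."""
    cells = dna_ng * 1e3 / pg_per_cell  # ng -> pg, then pg per cell
    return ge / cells

print(genome_equivalents(25.0))          # ~8.2e3 GE at Ct = 25
print(ge_per_cell(8.2e3, dna_ng=50.0))   # ~1 GE per cell
```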
Naturally infected wild field voles
In the three F. tularensis-infected wild field voles [27], bacteremia was confirmed by histology and immunohistology. Bacteria were found as aggregates within vessels and capillaries, specifically also in liver sinusoids and renal glomerular capillaries ( Figure 1A-D). They were abundant in the splenic red pulp where they were associated with extensive necrosis ( Figure 1D). In addition, bacteria were identified within macrophages in the liver (i.e. Kupffer cells: Figure 1B) and the splenic red pulp. In the livers, individual necrotic hepatocytes were also seen.
Pilot study in field voles
A pilot study was conducted on field voles to evaluate different infection routes (s.c. and i.n.) and doses (high and low dose). All voles remained asymptomatic during the first four days after infection. On day 5 p.i., four infected voles (two low dose s.c., one high dose i.n., and one high dose s.c.) were found dead, and another animal (high dose s.c.) was euthanized due to general malaise. On day 6 p.i., one symptomatic vole (high dose i.n.) was euthanized. Both low dose i.n. infected voles survived until day 9 p.i., when one was found dead and the other, which had remained asymptomatic, as well as the uninfected control animal, were electively euthanized at the scheduled end of the experiment.
The post mortem examination did not reveal any significant gross changes. Histology confirmed severe bacteremia in all but the electively euthanized low dose i.n. infected vole and the control animal, with bacterial aggregates in vessels in all examined organs and in the cardiac chambers. The pathological changes were very similar to those seen in the naturally infected voles and are typical for tularemia in other species [10,13], such as extensive splenic and lymph node necrosis with abundant cell-free bacteria (Figure 2A-D). Two i.n. infected voles (one high dose and one low dose) also showed a multifocal extensive necrotizing pneumonia with abundant bacteria both cell-free and in macrophages ( Figure 2E, F), features not seen in the naturally infected voles. This indicates direct aerosol infection of the lung and subsequent bacteremia, in particular since the animals exhibited neither histological changes nor bacteria in the nasal cavity. Bacterial loads in organs did not substantially vary in relation to the route and dose of infection and were generally high in all tissues of
Main experimental study in field and bank voles
For the main study, a low dose delivered via s.c. injection was chosen, as the pilot study demonstrated it to best mimic the natural infection in voles.
All animals that were sacrificed on days 1 and 3 p.i. (three field voles and three bank voles at each time point) had been asymptomatic and did not exhibit any significant gross changes. On day 1 p.i., PCR did not detect F. tularensis DNA in spleen, kidney, feces or urine (Figure 3), and IH did not identify bacteria in any tissue (Table 1). Histological changes were restricted to the injection site, where focal interstitial hemorrhage was generally seen. In one bank vole a focal macrophage aggregate was found in the adipose tissue of the inoculation site, and IH identified a few bacteria within the macrophages.
On day 3 p.i., a neutrophil-dominated inflammatory reaction with intracellular (macrophages, neutrophils) and cell-free bacteria was often seen at the inoculation site (Figure 4A). The spleens of all animals tested positive for F. tularensis DNA (Figure 3), and in all but one weakly PCR-positive spleen, IH identified variable amounts of bacteria within macrophages in the red pulp (Figure 4B, Table 1), confirming cell-associated bacteremia. This was not associated with distinct histological changes in the spleen. In two bank voles, the kidney was weakly PCR-positive, and IH identified some bacteria in glomerular capillaries, without other histological changes. The urine of both these animals was PCR-negative. IH also identified bacteria in the livers, as individual cells in sinusoids, and identified patches of reactive hepatocytes. Some bacteria were found in capillaries in the lungs of the three bank voles, again without distinct histopathological changes.
On day 4 p.i., one field vole displayed general malaise and was euthanized, and on day 5 p.i., the remaining 5 field voles and 6 bank voles died or were visibly symptomatic and euthanized. PCR demonstrated F. tularensis DNA in the urine and high F. tularensis loads in the spleens and kidneys of all animals ( Figure 3); in voles euthanized on day 5 p.i., F. tularensis was also identified in feces ( Figure 3, Table 1). Histology and IH confirmed these results and revealed features similar to those in the pilot study. The findings were similar in both species. In general, large bacterial aggregates were seen in the splenic red pulp and in capillaries in all examined organs. In the kidneys, bacteria were found in both glomerular and interstitial capillaries ( Figure 4C). Apart from disseminated bacterial aggregates between hepatic cords, the liver carried bacteria within Kupffer cells and exhibited multifocal random hepatocellular necrosis ( Figure 4D). In the spleen, the red pulp was almost completely effaced due to necrosis (and loss) of cells, and the white pulp was markedly reduced, with extensive (follicular) apoptosis/necrosis and replacement by bacteria. Lymph nodes exhibited focal areas of necrosis with abundant bacteria ( Figure 4E). In the bone marrow, bacteria were found within mononuclear cells (most consistent with macrophages) and sometimes cell free ( Figure 4F), and there was extensive necrosis/apoptosis of myelopoietic cells. Examination of the gastrointestinal tract identified bacteria within capillaries in all compartments, within Peyer's patches ( Figure 4G) and occasionally also in intestinal epithelial cells in both the small and large intestine ( Figure 4H, I). Some small macrophage aggregates with bacteria were found in the lamina propria mucosae. More extensive inflammatory infiltrates were restricted to inoculation sites, where variably extensive necrosis and neutrophil infiltration with masses of cell-free bacteria was seen.
Control voles remained asymptomatic and were euthanized on day 9, at the scheduled end of the experiment. They were negative for F. tularensis by PCR and IH and did not exhibit any histological changes.
Discussion
The current study presents an experimental model that mimics natural F. tularensis ssp. holarctica infection of wild voles and demonstrates that both field voles and bank voles are highly susceptible to the bacterium. Infected animals died with bacteremia, following a rapid clinical course and generally with very high bacterial loads in organs. We showed that infected voles excrete F. tularensis in their urine and feces around the time of death. The bacterial burden in excreta was relatively low compared to the bacterial load in tissues, but since only a low dose is generally required for infection [9], feces and urine might be infective for other animals and humans. Furthermore, the course of infection could be different under natural conditions. Long-term infections and shedding of F. tularensis have been reported after oral infection [29,30] and the oral route of infection should be studied in future. The presence of bacterial aggregates within the glomerular tufts in the kidneys and within mucosal vessels and between epithelial cells in the intestinal mucosa of animals by day 5 p.i. also indicates that F. tularensis is excreted in urine and feces at this stage. Excretion of F. tularensis, in addition to contamination from dead animals, might serve to transfer the bacteria into the environment, which could also include mosquito breeding sites. In support of this premise, F. tularensis has been demonstrated to survive in water for several weeks [31,32]. The survival is supported by protozoa, which are commonly found in natural aquatic systems as part of their normal biofilms [16].
Outbreaks of airborne tularemia in humans are mainly linked to farm work and other outdoor activities [6,8,12,22,33,34]; for example, exposure to hay dust has been associated with pneumonic tularemia [4]. This might be due to bacteria-containing aerosols originating from animal carcasses or excreta made airborne by agricultural machines. Similarly, Puumala hantavirus infection is acquired by inhalation from rodent excreta, and considerably more often by farmers [35]. F. tularensis has been shown to survive up to 192 days in the environment on straw and grain, depending on the temperature of the surrounding air [36]. Survival is longest in winter conditions, as the amount of viable bacteria decreases with rising temperatures [36]. The enhanced survival of F. tularensis at cool temperatures might be one factor contributing to the high tularemia incidence in Fennoscandia.

[Figure 3 caption: qPCR results [27,28]; samples were collected on day 1 post infection (p.i.), day 3 p.i. and days 4–5 p.i. doi:10.1371/journal.pone.0108864.g003]
Our analysis of the pathogenesis of tularemia indicates that the bacteria are taken up locally (i.e. at the inoculation site) by macrophages and neutrophils and then distributed throughout the body, to eventually accumulate in the blood. Accordingly, they were found both within monocytes and cell free in vessels of almost all organs, and led to necrosis of infected cells, resulting in extensive necrosis particularly in the lymphatic tissues (i.e. spleen and lymph nodes). Interestingly, apart from the inoculation site, this was not associated with an overt inflammatory response. Similar changes have been reported in hares, in which tularemia is mainly characterized by acute focal necrosis without cellular reaction in liver, spleen, and bone marrow [10]. Recently, F. tularensis infection even without lesions has been described in squirrels [37]. In our pilot study, two intranasally infected voles exhibited a necrotizing to granulomatous pneumonia, indicating direct infection of the lung (not via bacteremia). This kind of prominent change is typical for inhalational tularemia; severe necrotizing pneumonia has been demonstrated in monkeys [38] and mice [39] after F. tularensis spp. tularensis aerosol exposure. Necrotizing granulomatous inflammation is also seen in lung biopsies of human patients with pulmonary tularemia [40,41].
In Fennoscandia, tularemia is primarily mosquito-transmitted, and large human outbreaks occur regularly [4,19]. Mosquitoes have been shown experimentally to become persistently infected already as larvae and then transstadially through the developmental stages to adults, without evidence of F. tularensis replication, however [42]. It has been shown that F. tularensis multiplies in protozoa [16], but mammals are probably also needed as local amplifiers to facilitate the spread of the disease [42], e.g. through contaminated water and subsequently mosquitoes. In Sweden, a temporal link between outbreaks in humans and rodent density cycles was reported during the 1960s and 1970s [23]. Moreover, our recent survey of wild rodent species identified F. tularensis in wild field voles [27], and we show here that the massive bacteremia and pathological lesions after experimental infection are identical to those in naturally infected animals. Mosquitoes might also become infected by feeding on bacteremic voles and then perhaps directly transmit F. tularensis to humans and other susceptible hosts. It is also possible that F. tularensis, amongst other factors, contributes to the density crash of vole populations in certain areas, at which stage F. tularensis is released into the environment. This environmental contamination presumably also propagates the outbreak among voles. As our results show, infected dead voles can lead to heavy contamination of the environment and provide an explanation for the common association between rodent density and human tularemia incidence.

[Figure 4 caption, panels E–I: E. Field vole, mesenteric lymph node and large artery (A); cell-free bacteria fill the lumen of the artery and are present within necrotic areas in the lymph node (arrow). F. Bank vole, bone marrow; bacteria mainly within mononuclear (myeloid) cells (arrowhead). G. Field vole, duodenum with Peyer's patch, exhibiting bacteria within cells and cell-free, also towards the mucosal surface (arrowhead). H. Field vole, jejunum, and I. Bank vole, colon; bacterial aggregates fill capillaries (arrow) and are present within cells, also in the lamina epithelialis mucosae (arrowhead). Horseradish peroxidase method, Papanicolaou's hematoxylin counterstain. Bars = 20 µm (A–D, G), 50 µm (E), 10 µm (F, H, I). doi:10.1371/journal.pone.0108864.g004]
In summary, the fact that voles readily developed lethal tularemia, together with the severity and similarity of the lesions in both experimentally and naturally infected animals, suggests that long-term or latent infection of these species is unlikely, although some reservation concerning the infection routes may be warranted. Instead, voles are likely to play a role as amplification hosts and lead to bacterial contamination of the local environment, and by this mechanism contribute to the incidence of human tularemia.
"Biology",
"Environmental Science",
"Medicine"
] |
Malliavin Calculus in Lévy spaces and Applications to Finance
The main goal of this paper is to generalize the results of Fournié et al. [8] to markets generated by Lévy processes. For this reason we extend the theory of Malliavin calculus to provide the tools necessary for the calculation of the sensitivities, such as differentiability results for the solution of a stochastic differential equation.
Introduction
In recent years there has been an increasing interest in Malliavin calculus and its applications to finance. Such applications were first presented in the seminal paper of Fournié et al. [8]. In this paper the authors are able to calculate the Greeks using well known results of Malliavin calculus on Wiener spaces, such as the chain rule and the integration by parts formula. Their method produces better convergence than other established methods, especially for discontinuous payoff functions. There have been a number of papers trying to produce similar results for markets generated by pure jump and jump-diffusion processes. For instance, El-Khatib and Privault [6] have considered a market generated by Poisson processes. In Forster et al. [7] the authors work in a space generated by independent Wiener and Poisson processes; by conditioning on the jump part, they are able to calculate the Greeks using classical Malliavin calculus. Davis and Johansson [4] produce the Greeks for simple jump-diffusion processes which satisfy a separability condition. Each of the previous approaches has its advantages in specific cases. However, they can only treat subgroups of Lévy processes. This paper produces a global treatment for markets generated by Lévy processes and achieves a similar formulation of the sensitivities as in Fournié et al. [8]. We rely on Malliavin calculus for discontinuous processes and expand the theory to fulfill our needs. Malliavin calculus for discontinuous processes has been widely studied as an individual subject; see for instance Bichteler et al. [3] for an overview of early works, Di Nunno et al. [5], Løkka [12] and Nualart and Vives [14] for pure jump Lévy processes, Solé et al. [16] for general Lévy processes, and Yablonski [17] for processes with independent increments. It has also been studied in the sphere of finance, see for instance Benth et al. [2] and Léon et al. [11]. In our case we focus on square integrable Lévy processes. The starting point of our approach is the fact that Lévy processes can be decomposed into a Wiener process and a Poisson random measure part. Hence we are able to use the results of Itô [9] on the chaos expansion property. In this way every square integrable random variable in our space can be represented as an infinite sum of integrals with respect to the Wiener process and the Poisson random measure. Having the chaos expansion we are able to introduce operators for the Wiener process and the Poisson random measure. With an application to finance in mind, the Wiener operator should preserve the chain rule property. Such a Wiener operator was introduced in Yablonski [17] for the more general class of processes with independent increments, using the classical Malliavin definition. In our case we adopt the definition of directional derivative first introduced in Nualart and Vives [14] for pure jump processes and then used in Léon et al. [11] and Solé et al. [16]. The chain rule formulation that is achieved for simple Lévy processes in Léon et al. [11], and for more general processes in Solé et al. [16], is only applicable to separable random variables. As Davis and Johansson [4] have shown, this form of chain rule restricts the scope of applications; for instance, it excludes stochastic volatility models that allow jumps in the volatility. We are able to bypass the separability condition by generalizing the chain rule in this setting.
Following this, we define the directional Skorohod integrals, study their properties and prove the integration by parts formula. We conclude our theoretical part with the main result of the paper, the study of differentiability of the solution of a Lévy stochastic differential equation. With the help of these tools we produce formulas for the sensitivities that have the same simplicity and ease of implementation as the ones in Fournié et al. [8].
The paper is organized as follows. In Section 2 we summarize results of Malliavin calculus, define the two directional derivatives, in the Wiener and Poisson random measure directions, prove their equivalence to the classical Malliavin derivative and to the difference operator in Løkka [12], respectively, and prove the general chain rule. In Section 3 we define the adjoints of the directional derivatives, the Skorohod integrals, and prove an integration by parts formula. In Section 4 we prove the differentiability of the solution of a Lévy stochastic differential equation and obtain an explicit form for the Wiener directional derivative. Section 5 deals with the calculation of the sensitivities using these results. The paper concludes in Section 6, with the implementation of the results and some numerical experiments.
Malliavin calculus for square integrable Lévy Processes
Let Z = {Z_t}_{t∈[0,T]} be a square integrable Lévy process and let {F_t}_{t∈[0,T]} be the augmented filtration generated by Z. Then the process can be represented as
$$Z_t = bt + \sigma W_t + \int_0^t \int_{\mathbb{R}_0} z\, \tilde{\mu}(ds, dz),$$
where {W_t}_{t∈[0,T]} is the standard Wiener process and µ(·,·) is a Poisson random measure independent of the Wiener process, defined by
$$\mu([0,t] \times A) = \sum_{0 < s \le t} \mathbf{1}_A(\Delta Z_s), \qquad A \in \mathcal{B}(\mathbb{R}_0).$$
The compensator of the Poisson random measure is denoted by π(dz, dt) = λ(dt)ν(dz), where ν(·) is the Lévy measure of the process; for more details see [1]. Since Z is square integrable, the Lévy measure satisfies $\int_{\mathbb{R}_0} z^2\, \nu(dz) < \infty$. Finally, b is a real constant, σ a positive constant, λ the Lebesgue measure and R_0 = R \ {0}. In the following, µ̃(ds, dz) = µ(ds, dz) − π(ds, dz) will denote the compensated random measure. In order to simplify the presentation, we introduce a unifying notation for integration with respect to the Wiener process (direction j = 0) and the compensated Poisson random measure (direction j = 1), and we define the corresponding expanded simplex over which iterated integrals indexed by j_1, ..., j_n = 0, 1 are taken.
Chaos expansion
The theorem that follows is the chaos expansion for processes in the Lévy space L 2 (Ω). It states that every random variable F in this space can be uniquely represented as an infinite sum of integrals of the form (1). This can be considered as a reformulation of the results in [9], or an expansion of the results in [12].
We can show that this definition reduces to the classical Malliavin derivative if we take j_i = 0 for all i = 1, ..., n, and to the definition of [12] if we take j_i = 1 for all i = 1, ..., n.
From the above we arrive at the following definition of the space of random variables differentiable in the l-th direction, which we denote by D^{(l)}, and of its respective derivative D^{(l)}: 1. D^{(l)} is the space of random variables in L²(Ω) whose chaos expansion coefficients remain square summable after the reweighting induced by differentiation. 2. For F ∈ D^{(l)}, the derivative in the l-th direction is obtained by differentiating the chaos expansion term by term. From the definition of the domain of the l-directional derivative, all elements of L²(Ω) with finite chaos expansion are included in D^{(l)}. Hence we can conclude that D^{(l)} is dense in L²(Ω).
Relation between the Classical and the Directional Derivatives
In order to study the relation between the classical Malliavin derivative (see [13]), the difference operator in [12] and the directional derivatives, we need to work on the canonical space. The canonical Brownian motion is defined on the probability space (Ω_W, F_W, P_W), where Ω_W = C_0([0,1]) is the space of continuous functions on [0,1] that vanish at time zero, F_W is the Borel σ-algebra and P_W is the probability measure on F_W such that B_t(ω) := ω(t) is a Brownian motion.
Respectively, the triplet (Ω_N, F_N, P_N) denotes the space on which the canonical Poisson random measure is defined. We denote by Ω_N the space of integer valued measures ω' on [0,1] × R_0, such that ω'({(t,u)}) ≤ 1 for any point (t,u) ∈ [0,1] × R_0, and ω'(A × B) < ∞ when π(A × B) = λ(A)ν(B) < ∞, where ν is the σ-finite measure on R_0. The canonical random measure on Ω_N is defined by µ(A × B)(ω') := ω'(A × B). With P_N we denote the probability measure on F_N under which µ is a Poisson random measure with intensity π; hence µ(A × B) is a Poisson variable with mean π(A × B). In our case we have a combination of the two above spaces. With (Ω, F, {F_t}_{t∈[0,1]}, P) we denote the joint probability space, where Ω := Ω_W ⊗ Ω_N is equipped with the probability measure P := P_W ⊗ P_N and F_t := F_t^W ⊗ F_t^N. There exists an isometry between L²(Ω) and L²(Ω_W; L²(Ω_N)). Therefore we can consider every F ∈ L²(Ω_W; L²(Ω_N)) as a functional F: ω → F(ω, ω'). This implies that L²(Ω_W; L²(Ω_N)) is a Wiener space on which we can define the classical Malliavin derivative D, which is a closed operator. In the same way the difference operator D̃ defined in [12], with domain D̃_{1,2}, is closed. As a consequence we have the following proposition.
Given the directional derivatives D and D̃, we reach the subsequent proposition.
Proposition 2. Let F = f(Z', Z''), where Z' depends only on the Wiener part with Z' ∈ D^{(0)}, Z'' depends only on the Poisson random measure, and f(x, y) is a continuously differentiable function with bounded partial derivatives in x. Then F ∈ D^{(0)} and
$$D_t^{(0)} f(Z', Z'') = \frac{\partial f}{\partial x}(Z', Z'')\, D_t^{(0)} Z'.$$
Chain rule
The last proposition is an extension of the results in [11], where the authors consider only simple Lévy processes, and is similar to Corollary 3.6 in [16]. However, this chain rule is applicable only to random variables that can be separated into a continuous and a discontinuous part; separable random variables, for more details see [4]. In what follows we provide a proof of the chain rule with no separability requirements. The first step is to find a dense linear span of Doléans-Dade exponentials for our space. To achieve this, as in [12], we use a continuous function γ which is totally bounded and has an inverse. Moreover γ ∈ L²(ν), e^{λγ} − 1 ∈ L²(ν) for all λ ∈ R, and for h ∈ C([0,T]) we have e^{hγ} − 1 ∈ L²(π), hγ ∈ L²(π), e^{hγ} ∈ L¹(π).
Proof. The proof follows the same steps as in [12].
The proof of the chain rule requires the next technical lemma.
Proof. We follow the same steps as in Lemma 6 in [12]. Since F_k converges to F, the chaos expansions of F_k converge to that of F term by term; since F_k, F ∈ D^{(0)}, the definition of the directional derivative gives the corresponding convergence of the derivative kernels, and from (4) we can choose a suitable subsequence of the kernels g_k. Using the fact that D^{(0)} is a densely defined and closed operator, and that the elements of the linear span S are separable processes, we prove in the following theorem the chain rule for all processes in D^{(0)}.
Theorem 2. (Chain Rule)
Let F ∈ D^{(0)} and let f be a continuously differentiable function with bounded derivative. Then f(F) ∈ D^{(0)} and the following chain rule holds:
$$D_t^{(0)} f(F) = f'(F)\, D_t^{(0)} F.$$
Proof. Let F ∈ D^{(0)}. F can be approximated in L²(Ω) by a sequence {F_n}_{n=0}^∞, where F_n ∈ S for all n ∈ N. Every term of F_n, as a linear combination of Lévy exponentials, is in D^{(0)}. Then from Lemma 2 there exists a subsequence {F_{n_k}}_{k=0}^∞ such that D^{(0)} F_{n_k} converges. However, the elements of the sequence {F_{n_k}}_{k=0}^∞ are separable processes. We can then apply the chain rule of Proposition 2 to the process f(F_{n_k}), which gives D_t^{(0)} f(F_{n_k}) = f'(F_{n_k}) D_t^{(0)} F_{n_k}. Since f is continuously differentiable with bounded derivative, lim_{k→∞} f(F_{n_k}) = f(F) in L²(Ω), and from the dominated convergence theorem we can conclude that lim_{k→∞} f'(F_{n_k}) D_t^{(0)} F_{n_k} = f'(F) D_t^{(0)} F in L²(Ω). Hence the chain rule follows from the closedness of D^{(0)}. Remark. The theory developed in this chapter also holds in the case that our space is generated by a d-dimensional Wiener process and k Poisson random measures. However, we have to introduce new notation for the directional derivatives in order to simplify things. For the multidimensional case, D_t^{(0)} F will denote a row vector, whose i-th entry is the directional derivative for the Wiener process W^i, for all i = 1, ..., d. Similarly we define the row vector D_{(t,z)}^{(1)} F. Furthermore D^i F will be scalars denoting the derivative in the direction of the i-th Wiener process W^i for i = 1, ..., d, and the derivative in the direction of the (i−d)-th Poisson random measure µ̃^{i−d} for i = d+1, ..., d+k.
Skorohod Integral
The next step after the definition of the directional derivatives is to define their adjoints, which are the Skorohod integrals in the Wiener and Poisson random measure directions. The first two results of the section are the calculation of the Skorohod integral and the study of its relation to the Itô and Stieltjes-Lebesgue integrals. These are extensions of the results in [4] and [10] from simple Poisson processes to square integrable Lévy processes. The proofs proceed in parallel to those in [4] (or, in more detail, in [10]) and are therefore omitted. The main result, however, is an integration by parts formula. Although the separability result is yet again an extension of [4], having attained a chain rule for D^{(0)} that does not require such a condition, we are able to provide a simpler and more elegant proof. Finally the section closes with a technical result.
Definition 3. The Skorohod integral
Let δ^{(l)} be the adjoint operator of the directional derivative D^{(l)}, l = 0, 1. The operator δ^{(l)} maps L²(Ω × U_l) into L²(Ω). The set of processes h ∈ L²(Ω × U_l) such that
$$\Big| \mathbb{E}\Big[ \int_{U_l} D_u^{(l)} F \; h_u \, du \Big] \Big| \le c_h \, \|F\|_{L^2(\Omega)} \quad \text{for all } F \in D^{(l)}$$
is the domain of δ^{(l)}, denoted by Dom δ^{(l)} (here du stands for the Lebesgue measure on U_0 and for λ ⊗ ν on U_1). For every h ∈ Dom δ^{(l)} we can define the Skorohod integral in the l-th direction, δ^{(l)}(h), through the duality relation
$$\mathbb{E}\big[ F \, \delta^{(l)}(h) \big] = \mathbb{E}\Big[ \int_{U_l} D_u^{(l)} F \; h_u \, du \Big]$$
for any F ∈ D^{(l)}.
The following proposition provides the explicit form of the Skorohod integral: the l-th directional Skorohod integral of h is given by the chaos expansion with the kernels of h symmetrized in the last variable, provided the resulting infinite sum converges in L²(Ω).
Having the exact form of the Skorohod integral we can study its properties. For instance, the Skorohod integral reduces to an Itô or Stieltjes-Lebesgue integral in the case of predictable processes.
Proposition 4. Let h be a predictable process such that $\mathbb{E}\big[\int_{U_l} |h_u|^2 \, du\big] < \infty$. Then h ∈ Dom δ^{(l)} for l = 0, 1, and
$$\delta^{(0)}(h) = \int_0^T h_t \, dW_t, \qquad \delta^{(1)}(h) = \int_0^T \int_{\mathbb{R}_0} h_{t,z} \, \tilde{\mu}(dt, dz).$$
We are now able to prove one of the main results, the integration by parts formula.
Proposition 5. (Integration by parts formula)
Let F ∈ D^{(0)} and let h be a predictable process satisfying the condition of Proposition 4. Then
$$\delta^{(0)}(F h) = F \int_0^T h_t \, dW_t - \int_0^T h_t \, D_t^{(0)} F \, dt,$$
if and only if the second part of the right-hand side is in L²(Ω).
Proof. From Theorem 2 we have the product rule D_t^{(0)}(GF) = G D_t^{(0)}F + F D_t^{(0)}G for G, F ∈ D^{(0)}. Hence, from the definition of the Skorohod integral (the duality relation), for any such G,
$$\mathbb{E}\big[ G \, \delta^{(0)}(F h) \big] = \mathbb{E}\Big[ \int_0^T \big( D_t^{(0)}(GF) - G\, D_t^{(0)} F \big) h_t \, dt \Big].$$
Combining (8), (9) and Proposition 4, the proof is concluded.
Note that when F is an m-dimensional vector process and h an m × m matrix process, the integration by parts formula can be written componentwise in the same form. The last proposition of this chapter provides a relationship between the Itô and the Stieltjes-Lebesgue integrals and the directional derivatives.
Proposition 6.
Let h be a predictable square integrable process with h_s ∈ D^{(0)}. Then
$$D_t^{(0)} \int_0^T h_s \, dW_s = h_t + \int_0^T D_t^{(0)} h_s \, dW_s,$$
and analogously for the Stieltjes-Lebesgue integral in the jump direction. Proof. This result can be easily deduced from the definition of the directional derivative.
Differentiability of Stochastic Differential Equations
The aim of this section is to prove that under specific conditions the solution of a stochastic differential equation belongs to the domains of the directional derivatives. Having in mind the applications in finance, we will also provide a specific expression for the Wiener directional derivative of the solution.
Let {X_t}_{t∈[0,T]} be an m-dimensional process on our probability space, satisfying the stochastic differential equation
$$X_t = x + \int_0^t b(s, X_{s^-})\, ds + \int_0^t \sigma(s, X_{s^-})\, dW_s + \int_0^t \int_{\mathbb{R}_0} \gamma(s, z, X_{s^-})\, \tilde{\mu}(ds, dz), \qquad (10)$$
where the coefficients b, σ and γ are continuously differentiable with bounded derivatives. The coefficients also satisfy the linear growth condition
$$|b(t,x)|^2 + |\sigma(t,x)|^2 + \int_{\mathbb{R}_0} |\gamma(t,z,x)|^2\, \nu(dz) \le C\,(1 + |x|^2)$$
for each t ∈ [0,T], x ∈ R^m, where C is a positive constant. Furthermore there exists ρ: R → R with $\int_{\mathbb{R}_0} \rho(z)^2\, \nu(dz) < \infty$, and a positive constant D such that
$$|\gamma(t,z,x) - \gamma(t,z,y)| \le D\, \rho(z)\, |x - y|$$
for all x, y ∈ R^m and z ∈ R_0. Under these conditions there exists a solution of (10) which is also unique. In what follows we denote by σ_i the i-th column vector of σ and adopt the Einstein convention of leaving summations implicit.
In the next theorem we prove that the solution {X_t}_{t∈[0,T]} is differentiable in both directions of the Malliavin derivative. Moreover, we obtain the stochastic differential equations satisfied by the derivatives.
Theorem 3. Let X be the solution of (10). Then X_t ∈ D^{(0)} and X_t ∈ D^{(1)}; the derivative D_s^i X_t satisfies the associated linear stochastic differential equation, referred to below as (12), for s ≤ t a.e., and D_s^i X_t = 0 a.e. otherwise.
Proof.
1. Using Picard's approximation scheme we introduce the processes X_t^0 = x and
$$X_t^{n+1} = x + \int_0^t b(s, X_{s^-}^n)\, ds + \int_0^t \sigma(s, X_{s^-}^n)\, dW_s + \int_0^t \int_{\mathbb{R}_0} \gamma(s, z, X_{s^-}^n)\, \tilde{\mu}(ds, dz)$$
for n ≥ 0. We prove by induction that the following hypothesis (H) holds true for all n ≥ 0: D_r^{(0)} X_s^n exists for all s ≥ r, D^{(0)} X_s^n is a predictable process, and its second moment admits a bound of the form c_1 e^{c_2 s} for some constants c_1, c_2. It is straightforward that (H) is satisfied for n = 0. Let us assume that (H) is satisfied for some n ≥ 0. Then from Theorem 2, b(s, X_{s^-}^n), σ(s, X_{s^-}^n) and γ(s, z, X_{s^-}^n) are in D^{(0)}. Since the coefficients have continuously bounded first derivatives in the x direction and satisfy condition (11), there exists a constant K controlling the corresponding derivative terms. From the above we can conclude that X_t^{n+1} ∈ D^{(0)}. From the Cauchy-Schwarz and Burkholder-Davis-Gundy inequalities, estimate (19), together with (15), (16) and (17), yields a bound of the required exponential form, where β = sup_{n,i} E sup_{r≤s≤t} |σ_i(s, X_{s^-}^n)|². Thus hypothesis (H) holds for n + 1. From Applebaum [1], Theorem 6.2.3, we have that E sup_{s≤T} |X_s^n − X_s|² → 0 as n goes to infinity. By induction on inequality (20) (see Appendix A for more details) we can conclude that the derivatives of X_s^n are bounded in L²(Ω × [0,T]) uniformly in n. Hence X_t ∈ D^{(0)}. Applying the chain rule to (12) we conclude the proof.
2. Following the same steps we can prove the second claim of the theorem.
With the previous theorem we have proven that the solution of (10) is in D^{(0)}, and we have obtained the stochastic differential equation that D_s^{(0)} X_t satisfies. However, the Wiener directional derivative can take a more explicit form. As in classical Malliavin calculus, we are able to associate the solution of (12) with the process Y_t = ∇X_t, the first variation of X_t. Y satisfies the stochastic differential equation
$$Y_t = I + \int_0^t b'(s, X_{s^-})\, Y_{s^-}\, ds + \int_0^t \sigma_i'(s, X_{s^-})\, Y_{s^-}\, dW_s^i + \int_0^t \int_{\mathbb{R}_0} \gamma'(s, z, X_{s^-})\, Y_{s^-}\, \tilde{\mu}(ds, dz),$$
where prime denotes the derivative with respect to the space variable and I the identity matrix. Hence we reach the following proposition, which provides a simpler expression for the Wiener directional derivative.

Proposition 7. Let Y_t = ∇X_t be the first variation of X_t. Then, for s ≤ t,
$$D_s^i X_t = Y_t\, Y_{s^-}^{-1}\, \sigma_i(s, X_{s^-}).$$
Proof. The elements of the matrix Y satisfy a linear stochastic differential equation with initial condition δ_{ij}, where δ is the Kronecker delta.
Let {Z_t}_{t∈[0,T]} be the d × d matrix valued process that satisfies the linear stochastic differential equation dual to that of Y. By applying the integration by parts (product) formula we can prove that Z_t = Y_t^{-1}. Furthermore it is easy to show, applying again Itô's formula, that $Y_t^{il}\, Z_{r^-}^{lk}\, \sigma_j^k(r, X_{r^-})$ verifies (12) for all r < t. Hence the proof is concluded.
Sensitivities
Using the Malliavin calculus developed in the previous sections we are able to calculate the sensitivities, i.e. the Greek letters. The Greeks are calculated for an m-dimensional process {X_t}_{t∈[0,T]} that satisfies equation (10). We denote the price of the contingent claim by
$$u(x) = \mathbb{E}\big[\phi(X_{t_1}, \ldots, X_{t_n})\big], \qquad (23)$$
where φ(X_{t_1}, ..., X_{t_n}) is the payoff function, which is square integrable, evaluated at times t_1, ..., t_n and discounted from maturity T.
In what follows we assume the following ellipticity condition for the diffusion matrix σ.
Assumption 1. The diffusion matrix σ is uniformly elliptic; that is, there exists k > 0 such that
$$\xi^\top \sigma(t,x)\, \sigma(t,x)^\top \xi \ \ge\ k\, |\xi|^2 \qquad \text{for all } \xi, x \in \mathbb{R}^m,\ t \in [0,T].$$
Variation in the Drift Coefficient
Let us consider the perturbed process {X_t^ε}, obtained from (10) by replacing the drift b(t, x) with b(t, x) + ε ξ(t, x), where ε is a scalar and ξ is a bounded function. Then we reach the following proposition.
Proposition 8. Let σ be a uniformly elliptic matrix, and denote by u^ε(x) the price (23) computed for the perturbed process X^ε. Then
$$\frac{\partial u^{\varepsilon}}{\partial \varepsilon}\Big|_{\varepsilon=0}(x) = \mathbb{E}\Big[\phi(X_{t_1}, \ldots, X_{t_n}) \int_0^T \big(\sigma^{-1}(t, X_{t^-})\, \xi(t, X_{t^-})\big)^{*}\, dW_t\Big].$$
Proof. The proof is based on an application of Girsanov's theorem.
Variation in the Initial Condition
In order to calculate the variation in the initial condition we define the set Γ as
$$\Gamma = \Big\{ \zeta \in L^2([0,T]) : \int_0^{t_i} \zeta(t)\, dt = 1,\ \forall i = 1, \ldots, n \Big\},$$
where the t_i are as in (23).
Proposition 9. Assume that the diffusion matrix σ is uniformly elliptic. Then for all ζ ∈ Γ,
$$\nabla u(x) = \mathbb{E}\Big[\phi(X_{t_1}, \ldots, X_{t_n}) \int_0^T \zeta(t)\, \big(\sigma^{-1}(t, X_{t^-})\, Y_{t^-}\big)^{*}\, dW_t\Big].$$
Proof. Let φ be a continuously differentiable function with bounded gradient. Then we can differentiate inside the expectation and obtain
$$\nabla u(x) = \mathbb{E}\Big[\sum_{i=1}^{n} \nabla_i \phi(X_{t_1}, \ldots, X_{t_n})\, \frac{\partial}{\partial x} X_{t_i}\Big],$$
where ∇_i φ(X_{t_1}, ..., X_{t_n}) is the gradient of φ with respect to X_{t_i}, and ∂X_{t_i}/∂x is the d × d matrix of the first variation of the d-dimensional process X at t_i. From (22), the first variation can be expressed through the Wiener directional derivative; hence for any ζ ∈ Γ, inserting this expression into (24) and using Theorem 2 (so that φ(X_{t_1}, ..., X_{t_n}) ∈ D^{(0)}), the definition of the Skorohod integral yields the stated formula. However, ζ(t)(σ^{-1}(t, X_{t^-}) Y_{t^-})^{*} is a predictable process, so the Skorohod integral coincides with the Itô integral with respect to the Wiener process. Since the family of continuously differentiable functions is dense in L², the result holds for any φ ∈ L²; see [8] and [4] for more details.
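To illustrate how such a weight is used in practice, here is a minimal Monte Carlo sketch, assuming a one-dimensional Merton-type jump diffusion as test model (our choice for illustration, not a model taken from this paper). In that model σ(t, x) = σx and Y_t = X_t/x, so with ζ(t) = 1/T the weight of Proposition 9 collapses to W_T/(xσT); the digital payoff, the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def mc_delta_digital(x0=100.0, K=100.0, r=0.0, sigma=0.2, T=1.0,
                     lam=0.5, jump_mu=-0.1, jump_sd=0.15,
                     n_paths=200_000, seed=1):
    """Monte Carlo delta of a digital call under a Merton-type jump diffusion,
    estimated with the Malliavin weight W_T / (x0 * sigma * T)."""
    rng = np.random.default_rng(seed)
    W_T = rng.normal(0.0, np.sqrt(T), n_paths)          # terminal Brownian value
    N = rng.poisson(lam * T, n_paths)                   # number of jumps per path
    J = rng.normal(jump_mu * N, jump_sd * np.sqrt(N))   # summed jump sizes (scale 0 -> loc)
    kappa = np.exp(jump_mu + 0.5 * jump_sd**2) - 1.0    # E[e^Z] - 1, jump compensator
    X_T = x0 * np.exp((r - 0.5 * sigma**2 - lam * kappa) * T + sigma * W_T + J)
    payoff = np.exp(-r * T) * (X_T > K).astype(float)   # discounted digital payoff
    weight = W_T / (x0 * sigma * T)                     # Malliavin weight
    return np.mean(payoff * weight)

print(mc_delta_digital())
```

The discontinuous indicator payoff is precisely the situation in which pathwise differentiation fails and the Malliavin-weight estimator retains its good convergence behavior.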
Variation in the diffusion coefficient
For this section we consider the perturbed process {X_t^ε}, obtained from (10) by replacing the diffusion coefficient σ(t, x) with σ(t, x) + ε ξ(t, x), where ε is a scalar and ξ is a continuously differentiable function with bounded gradient. We also introduce the variation process with respect to ε, Z_t = ∂X_t^ε/∂ε, which satisfies a linear stochastic differential equation. In this case we introduce the set
$$\Gamma_n = \Big\{ \zeta \in L^2([0,T]) : \int_{t_{i-1}}^{t_i} \zeta(t)\, dt = 1,\ \forall i = 1, \ldots, n \Big\}.$$
Proposition 10. Assume that the diffusion matrix σ is uniformly elliptic. Then for all ζ(t) ∈ Γ_n the sensitivity ∂u^ε/∂ε|_{ε=0} can be written as the expectation of the payoff multiplied by the Skorohod integral of a weight process β built from ζ, σ^{-1}, Y and Z. Proof. Let φ be a continuously differentiable function with bounded gradient; as in Proposition 9, we can differentiate inside the expectation. Inserting the resulting expression into (25), the result follows. If β ∈ D^{(0)}, using Proposition 5 we can calculate the Skorohod integral explicitly.
Variation in the jump amplitude
For this section we consider the perturbed process {X_t^ε}, obtained from (10) by replacing the jump amplitude coefficient γ(t, z, x) with γ(t, z, x) + ε ξ(t, z, x), where ε is a scalar and ξ is a continuously differentiable function with bounded gradient. As in the previous section, we introduce the variation process with respect to ε, Z_t = ∂X_t^ε/∂ε, which satisfies a linear stochastic differential equation, and we use the set Γ_n as defined in the previous section.
"Mathematics"
] |
Effect of nanoclay loading on the thermal decomposition of nanoclay polyurethane elastomers obtained by bulk polymerization
Thermoplastic polyurethane (TPU) nanocomposites were prepared successfully by dispersing the nanoclay in the polyol at high shear stress, followed by bulk polymerization. Our DSC results showed an increase in decomposition temperature when nanoclay was loaded at 3.5% in an elastomeric PU made from TDI, PTMEG and BDO, but not at the lower nanoclay content (1.5%). The exotherms at 370-375°C could be ascribed to the decomposition of the hard segments, in agreement with previous work.
Introduction
Nanocomposites of polyurethane exhibit improvement in various properties compared to the conventional microcomposites such as outdoor resistance of coatings [1], heat resistance, gas permeability, and flammability [2,3,4,5]. Mineral fillers as high-modulus additives are used as reinforcing agents in polymeric materials. In particular, polyurethane/clay nanocomposites have attracted great interest in recent years. They have gradually become more widely accepted in applications such as automobile parts [4,6].
On a broad basis, the preparation of polyurethanes (PU) is typically classified by the sequence of addition of the reactants (one-step, two-step or prepolymer process) or by the number of components, which the user has to mix together (a one-component or a two-component system).
Thermoplastic polyurethanes (TPU) have a two phase structure because of the thermodynamic incompatibility of the hard and soft segments of PU chain. Phase separation of hard segment from polyol soft segment determines elastomeric properties. Phase separation, leading to domain microstructure, has been postulated to explain the unusual viscoelastic properties of segmented PU [7].
In general, PU are formulated with an excess of isocyanate to improve elastomeric properties through post-curing reactions. The mechanical properties of PU elastomers can be controlled by changing the molar ratio of the two monomer components [8]. The NCO/OH ratio determines the physical properties of PU, a ratio of 1 generally being the most favorable for improving physical properties. The type of polyol, isocyanate and chain extender, together with the synthesis conditions, determines the physical and mechanical properties of PU, rendering a wide variety of materials with properties ranging from soft foams to extremely hard and impact/tear-resistant materials.
Additionally, the way the nanoclay is incorporated into the PU matrix greatly affects its physical properties. The simplest approach is to physically mix nanoclay and PU, resulting in physical entrapment through polar interactions, hydrogen bonding and shear forces between the clay and the polymer [9]. Other mixing methods have been reported in the literature. As an example, in one method for rigid PU foams, organoclay was first dispersed ultrasonically in the isocyanate component; it was found that using toluene as a common solvent enhanced dispersion significantly [10]. Bulk and melt-processing polymerization have been compared as two methods of incorporating nanoclay into a thermoplastic polyurethane matrix [11].
The glass transition and decomposition temperatures of nanoclay PU increase with increasing clay content, owing to the restricted motion of the chains and the barrier properties of the clay platelets. In PU/clay nanocomposites, the hard segments are strongly attracted to the silicate surface, whereas the soft segments are not and tend to push the platelets apart to regain entropy [12,13]. Functionalized nanoclays were reported to increase the T g gradually to the range of 60-62 °C at 5 wt% loading in a PU-based thermoset adhesive [14]. In contrast, Thirumal and coworkers [16] observed that the glass transition temperature (T g ) decreases on loading with organically-modified nanoclay. These findings highlight that the thermal behaviour depends on several factors, such as the type of TPU, the clay incorporation method, and how the platelets are dispersed within the PU matrix.
The introduction of nanoclay limits the motion of the TPU molecules and leads the nanocomposites to exhibit higher thermal stability [20]. Decomposition temperatures for 10% weight loss were at least 10 °C higher than for the control sample in nanoclay-reinforced PU foams at loadings up to 2 pphp (parts per hundred polyol), lying between 350 and 380 °C [17]. For a thermoplastic PU (TPU) based on HMDI, PTMEG and BDO and loaded with nanoclay from 1 to 9 pphp, the decomposition temperature increased from 400 °C for the control TPU to almost 450 °C at 5% nanoclay loading. The authors found that all the PU nanocomposite specimens showed two stages of thermal degradation: the first stage is dominated by the degradation of the hard segments, while the second stage correlates well with the thermal dissociation of the soft segments [18]. The strong dipole-dipole and hydrogen-bonding interactions, together with the crystallinity that nanoclay addition imparts, are known to improve heat resistance. For organoclay-modified PU foams, the degree of enhancement in thermal stability and flame retardancy of the composites was also reported to coincide well with the gallery spacing of the organoclay in the PU matrix [15].
In this contribution we report preliminary results on the effect of nanoclay incorporation on the decomposition temperature of an elastomeric PU based on poly(tetramethylene) glycol (PTMEG), toluene diisocyanate (TDI) as curing agent, and 1,4-butanediol (BDO) as chain extender. OCT (organoclay)-polyurethane nanocomposites: OCT-polyol nanocomposites were prepared by two-step (prepolymer) bulk polymerization. Increasing quantities of OCT were added to the polyol while stirring at 2000 rpm for 10 minutes. The resulting OCT-polyol nanocomposites were reacted with TDI for 120 minutes at 60 °C under vacuum (4 mm Hg) to form the prepolymer. BDO was then added as chain extender and the mixture was held at the same temperature for 30-60 minutes until an elastomeric material developed, evidenced by the increased viscosity and the final formation of a gummy, soft material. Post-curing was carried out at ambient temperature for at least 7 days.
Methods
Differential scanning calorimetry (DSC) studies were performed in a TA Instruments Q20 apparatus. Samples were heated at 5 °C/min under a nitrogen atmosphere.
Results
We obtained PU elastomers by adding TDI to the polyol at 60 °C and then adding BDO as chain extender in a second stage. The initial NCO/OH ratio was set to 1. When BDO was added, the reaction crude increased in viscosity until a hard elastic material was formed. To obtain the nanocomposites, nanoclay was incorporated into the polyol fraction under high shear (2000 rpm) for 10 minutes.
In the following table, we summarize the temperatures at which thermal peaks were obtained for the different elastomeric PU. Linear PU made from TDI and PTMEG only showed less heat resistance than those chain-extended with BDO, evidenced by a 20-30 °C reduction in decomposition temperature. Decomposition temperatures were 10 °C lower for the 1.5% nanoclay-loaded PU than for the control sample. However, the 3.5% loaded sample decomposed 5 °C above the control, in agreement with the majority of reports [1,2,3,4]. The exotherms at 370-375 °C could be ascribed to the decomposition of the hard segments, according to previous reports [18,20]. Saiani and coworkers [21] assigned the observed high-temperature endothermic transitions to the disruption of an ordered structure appearing in the hard phase under certain annealing conditions and to the microphase mixing of the soft and hard segments. Overall, decomposition temperatures were in the same range as for other elastomeric PU systems: for example, around 440 °C when poly(caprolactone) was the soft segment and 4,4'-methylene bis(cyclohexyl isocyanate) and BDO formed the hard segment [19], and around 420 °C for a similar PU system cured with the aliphatic isophorone diisocyanate (IPDI) instead of aromatic TDI [22]. The thermal stabilization effect of nanoclay can be explained by the clay hindering the diffusion of the volatile degradation products (carbon dioxide, carbon monoxide, water, etc.) from the bulk of the polymer matrix to the gas phase [20].
Conclusions
TPU nanocomposites were prepared successfully by dispersing the nanoclay in the polyol at high shear stress, followed by bulk polymerization. Our DSC results showed an increase in decomposition temperature when nanoclay was loaded at 3.5% into an elastomeric PU made from TDI, PTMEG and BDO, but not at the lower nanoclay content of 1.5%. The exotherms at 370-375 °C could be ascribed to the decomposition of the hard segments, in agreement with previous work. Our results justify further research with this PU system to determine which nanoclay content improves thermal resistance without affecting the general properties of the system. | 1,859.2 | 2014-08-22T00:00:00.000 | [
"Materials Science"
] |
Experimental realization of dual task processing with a photonic reservoir computer
We experimentally demonstrate the possibility of processing two tasks in parallel with a photonic reservoir computer based on a vertical-cavity surface-emitting laser (VCSEL) as a physical node with time-delayed optical feedback. The two tasks are injected optically by exploiting the polarization dynamics of the VCSEL. We test our reservoir on the very demanding task of nonlinear optical channel equalization as an illustration of the performance of the system, and show the recovery of two signals simultaneously with an error rate of 0.3% (3%) for a 25 km-fiber distortion (50 km-fiber distortion) at a processing speed of 51.3 Mb/s.
I. INTRODUCTION
Building energy-efficient systems to process data currently handled by conventional computers is one of the focus problems that photonic reservoir computing is trying to address. A reservoir computing system is a specific kind of neural network with a recurrent topology, i.e., coupling signals and information do not propagate unidirectionally through the network structure. For this particular structure, the training, which consists of adjusting the interconnection weights between the neurons to solve a specific task, is usually difficult and data-intensive, as it scales with the square of the network size. It also implies that a physical architecture with many tunable degrees of freedom would have to be designed, which represents a significant technical challenge for the development of efficient hardware platforms. A reservoir computing system overcomes these hurdles by not realizing the training through internal weight adjustments: the recurrent network is kept fixed, and only a readout layer unidirectionally connected to it is trained. This can be achieved at the readout with simple linear-regression algorithms. 1,2 This is specifically interesting as it allows the use of physical components for a hardware implementation of a neural network. Several architectures using this principle already exist. [3][4][5][6][7] However, realizing a large physical neural network remains a technical challenge, especially with photonic devices. Hence, a solution was proposed with time-delay reservoir computing: instead of using many physical neurons, only one physical neuron is used, and several virtual neurons are temporally spread along a delay line. 8 The time separation between virtual neurons is set to be smaller than the physical neuron's response time, so that the neurons remain in a sustained transient dynamics, which effectively translates into time-multiplexed interconnections between the virtual neurons. In that framework, adding neurons only requires lengthening the delay line. Several photonic architectures use this specific technique, with either an optoelectronic 4,9,10 or an all-optical [11][12][13][14][15][16][17] delay line.
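As a rough illustration of the time-delay reservoir principle described above, the following minimal sketch (Python/NumPy) uses a single leaky-tanh node with a fixed random input mask and trains only a linear, ridge-regularized readout. The tanh node is a generic stand-in we assume purely for illustration; the physical node realized in this work is a VCSEL with its own laser dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                        # number of virtual nodes along the delay line
mask = rng.uniform(-1, 1, N)  # fixed random input mask

def reservoir_states(u, leak=0.5, gain=0.9, eta=0.4):
    """Time-delay reservoir sketch: one leaky-tanh node stands in for the
    physical nonlinearity; virtual nodes are time slots in the delay line."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        prev = x.copy()                      # node states one delay ago
        for i in range(N):
            x[i] = (1 - leak) * prev[i] + leak * np.tanh(
                gain * prev[i] + eta * mask[i] * u_t)
        states[t] = x
    return states

# Train only the linear readout (ridge regression); the reservoir is fixed.
u = rng.normal(size=1000)
y = np.roll(u, 1) ** 2                       # toy target: delayed square
S = reservoir_states(u)
lam = 1e-6                                   # ridge regularization
W = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)
print("train NMSE:", np.mean((S @ W - y) ** 2) / np.var(y))
```

Only the readout weights W are learned; the reservoir itself stays fixed, which is the property that makes hardware implementations attractive.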
The vertical-cavity surface-emitting laser (VCSEL) is a good candidate for realizing a time-delay reservoir computer and processing data in optical networks, as it is widely used in optical telecommunication networks. Among the VCSEL's specific features are light emission along two orthogonal linear polarization modes and a higher modulation frequency than edge-emitting lasers. 18 We have already shown numerically 19 and experimentally 20 that a VCSEL-based time-delay reservoir computer is able to efficiently perform computational tasks, with state-of-the-art performance on benchmarks such as chaotic time-series prediction and nonlinear WIFI channel equalization.
Parallel processing of two tasks was originally proposed in Ref. 13 using the single-mode dynamics of a laser diode. Using the multimode polarization dynamics of a laser diode has also been considered to perform several tasks simultaneously. It has been shown theoretically that using the two longitudinal modes of an edge-emitting laser, 17 the two modes of a semiconductor ring laser 15 or the two polarization modes of a VCSEL 21 enables parallel processing with a time-delay reservoir computing architecture. We thus experimentally address here the question of whether a VCSEL-based photonic reservoir, which exhibits two polarization modes, is able to efficiently perform two tasks consisting of the recovery of two optical signals distorted by a fiber.
In this article, we present an experimental realization of a reservoir computer processing two tasks simultaneously. This reservoir computer is based on the time-delay reservoir architecture, using a VCSEL as the physical node. The two tasks are injected optically into the two polarization modes of the VCSEL. By carefully choosing the operating point of the reservoir computer, we show the possibility of tuning the performance of the system on each processed task. As an illustration, we test our reservoir on nonlinear optical channel equalization. This task is very demanding, as signals sent through optical fiber are distorted by several nonlinear effects, such as chromatic dispersion and the Kerr effect. 22 More specifically, we are able to recover two signals, distorted respectively by 25 km and 50 km of fiber and sent at 25 Gb/s, with a mean error rate of 0.3% at 25 km and 3% at 50 km, at a processing speed of 51.3 Mb/s.
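The fiber distortion underlying this task can be emulated numerically with a split-step Fourier integration of the nonlinear Schrödinger equation introduced in the Method section below. The sketch assumes typical SMF-28 textbook coefficients and an illustrative on-off bit stream; neither the coefficients nor the launch power are claimed to match the exact values used in the experiment.

```python
import numpy as np

def fiber_channel(E, dt, L=25e3, dz=100.0,
                  alpha_db_km=0.2, beta2=-21.7e-27, gamma=1.3e-3):
    """Split-step Fourier integration of the NLSE
    dE/dz = -(alpha/2) E - i (beta2/2) d2E/dt2 + i gamma |E|^2 E.
    Coefficients are typical SMF-28 textbook values (assumed here):
    alpha [dB/km], beta2 [s^2/m], gamma [1/(W m)]."""
    alpha = alpha_db_km / 4.343 / 1e3               # dB/km -> 1/m
    w = 2 * np.pi * np.fft.fftfreq(E.size, dt)      # angular-frequency grid
    half = np.exp((-alpha / 2 + 1j * beta2 / 2 * w**2) * dz)
    for _ in range(int(L / dz)):
        E = np.fft.ifft(np.fft.fft(E) * half)       # loss + dispersion step
        E = E * np.exp(1j * gamma * np.abs(E)**2 * dz)  # Kerr (nonlinear) step
    return E

# Illustration: a 25 Gb/s on-off stream, 8 samples per bit, ~1 mW peak power
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 256)
E_in = np.repeat(bits, 8).astype(complex) * np.sqrt(1e-3)
E_out = fiber_channel(E_in, dt=1 / (25e9 * 8), L=25e3)
```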
II. METHOD
The experimental setup is depicted in Fig. 1. The reservoir itself is the same as the one we previously studied in Ref. 20: it comprises a VCSEL (Raycan) as the physical node, which emits light at 1552.75 nm in the dominant linear polarization mode (LPx) and at 1552.89 nm in the depressed polarization mode (LPy). The bias current of the VCSEL is set at 4.5 mA, which corresponds to 1.5 times the threshold current. This choice of pumping current is based on the numerical analysis we previously conducted in Ref. 19, showing that a pumping current close to threshold leads to high memory capacity and good overall computing performance for the time-delay VCSEL-based reservoir computer. The feedback loop is made of SMF-28 single-mode fiber (standard telecommunication fiber), resulting in a delay line of τ = 39.4 ns. As only one calculation step can be performed per round-trip, this length sets the processing speed: the input signal is temporally rescaled so that each symbol duration is τ, and the ten feature values $b^{(1)}_{n-4}, b^{(2)}_{n-4}, \dots, b^{(1)}_n, b^{(2)}_n$ (defined in the preprocessing below) are injected within one delay period for each processed bit $b_n$. The speed of the system could be increased by reducing the length of the delay line, which was not possible in our case. To best exploit the VCSEL dynamics, we set the inter-node delay to θ = 0.04 ns according to previous simulations 19 and the frequency limitations of the experimental components (i.e., oscilloscope, arbitrary waveform generator and modulators): the optimal delay between virtual nodes that best exploits the VCSEL's transient response is θ* = 0.02 ns; however, the modulation bandwidth of our arbitrary waveform generator (AWG) is 25 GHz. Owing to memory limitations of the computer performing the training, we use for the training and testing of the reservoir only every other node, separated by 2θ = 0.08 ns, thus considering N = 492 nodes instead of N = 984.
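A back-of-envelope check of these quantities (illustrative arithmetic only):

```python
tau = 39.4e-9       # feedback-loop delay (s)
theta = 0.04e-9     # inter-node separation (s)
print(tau / theta)  # ~985 virtual-node slots along the delay line
print(2 / tau)      # ~5.1e7 bits/s: one bit per task per round-trip,
                    # i.e. ~51 Mb/s total, matching the reported
                    # 51.3 Mb/s up to rounding of tau
```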
Considering an increasing number of virtual nodes while keeping the feedback delay fixed, we numerically observed an improvement of the performance up to Nth = 100. Beyond this threshold value, increasing the size of the virtual network only leads to marginal improvements in the RC performance. We chose N = 492 > Nth for experimental convenience rather than using all the accessible virtual nodes, in order to speed up the training phase without compromising the performance. A polarization controller (P.C.) controls the optical polarization along the feedback loop. Finally, an optical attenuator (Keysight 81577A, Att.) is used to control the feedback strength. The results presented in this article are obtained in the isotropic feedback configuration, i.e., the orientations of the two VCSEL polarization modes (LPx,y) are preserved in the external cavity before being fed back. According to the results obtained in Ref. 19, there is an optimum operating point for each value of the feedback strength as the injection power is varied. This is why we set the feedback attenuation η to 17 dB, to guarantee that enough power is injected to reach this best operating point. The input layer is primarily composed of an arbitrary waveform generator (AWG, Tektronix AWG700002A), a tunable laser (Yenista Tunics T100S), and two Mach-Zehnder modulators (MZx,y) with a bandwidth of 12.5 GHz. Both modulators work in their linear regime. The light emitted by the tunable laser is split into two beams and sent into the two modulators. The wavelength of this laser is set to 1552.82 nm so that it is equally separated from the wavelengths of the main and depressed polarization modes of the VCSEL, as presented in Fig. 2. By doing so, we ensure that, with the same power in both linear polarization modes at the output of the modulators, the power is equally distributed among the two linear polarization modes of the injected VCSEL. Shifting the frequency of the master laser towards one of the polarization modes of the VCSEL leads to more efficient optical injection into this mode and therefore enhances its response at the expense of the response of the other mode, for which the optical injection is reduced. The two masked input streams, corresponding to the two tasks Tx,y to be processed, drive the two modulators and are generated by the AWG at a symbol rate of 25 GS/s for each stream. The output power of each modulator is controlled by an optical attenuator built into the modulator. This allows the injected powers Pinj x,y of the tasks Tx,y to be changed independently. At the modulator outputs, the optical polarization of the input stream containing Tx is aligned with the main polarization mode (LPx) of the VCSEL, and that of the input stream containing Ty with the depressed polarization mode (LPy). An example of input streams is given in Fig. 1(b). Both beams are then recombined and sent into the reservoir computer.
The response of the reservoir is recorded at the output layer: the signal is first amplified with an erbium-doped fiber amplifier (EDFA, Lumibird). The two polarization modes of the VCSEL are then separated and recorded with two photodiodes (Newport 1544-B, 12 GHz bandwidth) connected to an oscilloscope (Tektronix DPO 71604C, 16 GHz bandwidth) with two channels at 50 GS/s. Examples of the experimental time series recorded for each polarization mode of the VCSEL are given in Fig. 1(c). The signal-to-noise ratio (SNR) was experimentally measured at 21 dB.
With a high-resolution optical spectrum analyzer (Aragon Photonics BOSA), we can study the spectral dynamics of the system in different configurations. Figure 2(a) shows the experimental optical spectrum of the reservoir computer without injection and with optical feedback. The VCSEL lases at 1552.72 nm, the wavelength of its dominant polarization mode. With an attenuation of 17 dB in the feedback loop, the dominant mode LPx of the VCSEL has a spectral width of 5.72 GHz. The two smaller side peaks are induced by the undamped relaxation oscillations of the VCSEL, 23 whose frequency is measured at 3.73 GHz. Figure 2(b) presents the spectrum of the reservoir with injection but without a modulated input: under this condition, the VCSEL emits light only in its dominant polarization mode, with the master laser wavelength at 1552.82 nm. We notice that the slave laser exhibits wave-mixing dynamics and is not locked to the master laser. When the master laser is modulated, its spectrum broadens and overlaps the two wavelengths of the VCSEL, as shown in Figs. 2(c) and 2(d). This allows the VCSEL to react to the master laser and to respond according to the modulated input. This response also broadens the spectra of the two polarization modes of the VCSEL; the dominant polarization mode LPx is detuned from the modulated input by 9.45 GHz. We also observe that injecting more power into the depressed mode LPy forces its emission, even though this mode does not lase when the VCSEL is free-running.
We have tested the dual-tasking performance of our reservoir at solving a nonlinear optical channel equalization task, which aims at reconstructing a transmitted signal from the distorted signal at the channel's output alone. We have chosen a single-mode optical fiber as the telecommunication channel. The distortion introduced by this channel is simulated using the nonlinear Schrödinger equation, which models the propagation of a signal in the fiber. This equation reads as 24
$$\frac{\partial E}{\partial z} = -\frac{\alpha}{2}E - \frac{i\beta_2}{2}\frac{\partial^2 E}{\partial t^2} + i\gamma|E|^2E,$$
where E(z, t) is the slowly varying envelope of the optical field, α is the attenuation of the fiber, β2 is the second-order dispersion coefficient, and γ refers to the nonlinearity of the fiber. We have chosen the coefficients of the SMF-28 fiber, the single-mode silica fiber used for long-haul transmission, with α = 0.2 dB/km. Each bit $b_n$ of the distorted signal is represented by two feature values, $b^{(1)}_n$ and $b^{(2)}_n$, which are the time-averaged values of the upper half and the lower half of the distorted signal over the duration of one bit. The input of the reservoir is constructed by masking each feature value for five consecutive bits, hence using ten different masks (one per input value) of 985 values each, which are then summed together. The masked input of the reservoir $J_{n-2}(t)$ at step n−2 reads
$$J_{n-2}(t) = \sum_{i=1}^{10} M_i(t)\, u_i,$$
where $M_i(t)$ is one of the ten masks and $u_i$ runs over the ten feature values $b^{(1)}_{n-4}, b^{(2)}_{n-4}, \dots, b^{(1)}_n, b^{(2)}_n$. A graphical illustration of the preprocessing is given in Fig. 1(d). At the output of the reservoir, we train the system by linear regression with N = 492 nodes to recover the bits $b_{n-2}$. For each node, we use as states the values of the optical power in the two orthogonal polarization modes (LPx and
LPy). Two different linear regressions are performed, one for each task Tx and Ty, using the whole state of the reservoir. The regression equations are $S\,\omega_x = b_{T_x}$ and $S\,\omega_y = b_{T_y}$, where S is the reservoir state matrix containing the powers associated with the dominant (LPx) and depressed (LPy) polarization modes, $\omega_i$ is the vector of readout-layer weights obtained from the linear regression, and $b_{T_i}$ is the vector containing the target output of task Ti. Exploiting the two LP modes for each regression is motivated by the nonlinear mixing of the two input data streams in the VCSEL dynamics, so that each polarization mode contains part of the information of both processed tasks. For the training of the reservoir, we use 20 000 samples, i.e., sliding blocks of five consecutive distorted bits.
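Schematically, the dual readout amounts to two independent least-squares problems sharing one state matrix. In the sketch below, the state matrix and target bits are synthetic placeholders with the dimensions quoted in this paper, not experimental data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_nodes = 20_000, 984              # 2 polarizations x 492 nodes
S = rng.normal(size=(n_train, n_nodes))     # placeholder reservoir states
b_Tx = rng.integers(0, 2, n_train).astype(float)  # synthetic targets, task Tx
b_Ty = rng.integers(0, 2, n_train).astype(float)  # synthetic targets, task Ty

# One regression per task, both using the full (LPx + LPy) state:
w_x, *_ = np.linalg.lstsq(S, b_Tx, rcond=None)    # solves S @ w_x ~ b_Tx
w_y, *_ = np.linalg.lstsq(S, b_Ty, rcond=None)    # solves S @ w_y ~ b_Ty

def ber(S_test, w, target, thresh=0.5):
    """Bit error rate after thresholding the analog readout output."""
    return np.mean((S_test @ w > thresh) != (target > thresh))
```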
Since we record the optical power of the LPx,y modes for the 492 nodes, the size of S is 20 000 × 984. The performance of the reservoir is tested on 5380 samples and measured using the bit error rate (BER). As already stated, for each value of the feedback strength there is a corresponding optimal injection power for the reservoir computer. 19 That is why we vary only the injected power while keeping the feedback strength fixed. This reduces the dimension of the parameter space to explore to find the best experimental operating point. By finding the best operating point, we ensure that our VCSEL-based reservoir computing system combines a large memory capacity (i.e., long fading memory) with a large computational ability (i.e., good aptitude for approximation and generalization), as demonstrated in our previous numerical analysis. 19 Furthermore, we aim at identifying the tunable parameters that control the performance on the two processed tasks Tx and Ty. Figures 4 and 5 present the influence of the injection power ratio Pinj y /Pinj x on the performance of the two processed tasks. To produce these figures, we first find the best operating point for each value of this ratio: we sweep the value of Pinj x (an example is provided in Fig. 3), with Pinj y then fixed by the value of the ratio. We thereby find the value of Pinj x that minimizes the mean BER over Tx and Ty; this optimal value is then reported in the graph (which is why Figs. 4 and 5 do not contain any information on the effective injected power). Figure 3 shows an example of the method used to produce the performance figures.
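The operating-point search described here reduces, for each injection ratio, to a one-dimensional sweep of Pinj x. A schematic sketch follows, where `run_trial` is a hypothetical stand-in for one experimental acquisition returning the two BERs:

```python
import numpy as np

def best_operating_point(ratio, p_x_grid, run_trial):
    """Sweep P_inj_x at a fixed injection ratio and keep the point that
    minimizes the mean BER of the two tasks. `run_trial` is a hypothetical
    stand-in for one experimental run returning (BER_x, BER_y)."""
    best = None
    for p_x in p_x_grid:
        ber_x, ber_y = run_trial(p_inj_x=p_x, p_inj_y=ratio * p_x)
        mean_ber = 0.5 * (ber_x + ber_y)
        if best is None or mean_ber < best[0]:
            best = (mean_ber, p_x, ber_x, ber_y)
    return best

# e.g.: best_operating_point(0.3, np.linspace(0.02, 0.4, 20), run_trial)
```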
We first present the influence of the injected power on the performance of both tasks Tx and Ty in Fig. 3, for the two recovered fiber lengths: 25 km (a) and 50 km (b). In this figure, the injection ratio Pinj y /Pinj x is fixed to 0.3. We observe that there is an optimal injected power that yields the best mean performance, at Pinj x = 0.09 mW for 25 km and Pinj x = 0.2 mW for 50 km. Only this best value is reported in the following figures.
III. RESULTS
The results for the channel equalization of 25 km of propagation in the fiber are presented in Fig. 4(c). Figures 4(a) and 4(b) present examples of the signal at the input and output of the optical fiber, respectively. We observe that the performance on tasks Tx and Ty varies with the injection ratio Pinj y /Pinj x . If this ratio is smaller than 2, task Tx is performed better than task Ty. When this ratio is higher than 2, the trend is reversed and task Ty is performed better. This can be explained by a polarization switching in the VCSEL output induced by the optical injection (i.e., the roles of the dominant and depressed polarization modes of the VCSEL are exchanged 27 ). This phenomenon increases the SNR of task Ty, which is injected into the depressed polarization mode. The system is able to provide a BER of 0.04% for task Tx when the dominant mode is strongly injected (with an injection ratio Pinj y /Pinj x of 0.2). The other task is then processed with lower performance, with a BER of 1.6%. When the power ratio is greater than 0.5, the average performance of the reservoir plateaus at a BER of 0.35%. The ratio of injected power in the two polarization modes can thereby be used to easily choose how the performance is split between the two tasks. When processing a single nonlinear channel equalization task, the reservoir computer exhibits a BER of 0.08%. We note that the performance of our VCSEL-based reservoir on a single task is comparable to that achieved with a single-mode laser diode with a more complex modulation format and a similar propagation distance. 26 However, processing two tasks instead of one degrades the averaged performance of the system.
To analyze the impact of the nonlinear transformation induced by our VCSEL-based reservoir on the task, we compare it to a stand-alone linear regression (a linear classifier). To this end, the linear classifier is operated in the same conditions as the reservoir computer: one classifier is used to process the two tasks, with the same dimension and similar injection power ratio as in the photonic reservoir computer. We also use the same input features, with identical sizes for the training and testing sets (20 000 samples for training and 5380 for testing). Finally, similar SNR conditions are considered: as the VCSEL introduces additional noise, we added white noise to the input signal to reach an SNR of 21 dB before performing the stand-alone linear regression. Under these operating conditions, the stand-alone linear regression provides at best a BER slightly below 1%, and the mean BER over the two tasks is ∼3.2% at the best operating point identified in our experiment (i.e., for a ratio in the range 0.6-3). The reservoir computer is thus able to improve the performance on the two tasks by approximately one order of magnitude.
We also provide results on the dual-channel equalization after propagation in 50 km of single-mode fiber. Since the distortion of the signal is more pronounced [Fig. 5(b)], the mean performance of the reservoir computer is expected to be lower than after a 25 km transmission. The performance of the reservoir computer is given in Fig. 5(c).
We still observe a similar trend: the polarization switching of the VCSEL occurs for an injection ratio Pinj y /Pinj x ∼ 1, and the best BER achieved on one task is 1.6%. The best mean performance is 2.2%, achieved for an injection ratio of 0.7. The system performing this task alone exhibits a BER of 1.9%, which is slightly below the performance previously reported. 16 Contrary to the equalization of the shorter optical fiber, processing two tasks simultaneously only slightly decreases the mean performance of the system compared to processing a single task.
The performance of the stand-alone linear regression (linear classifier) is presented in Fig. 5(d). The test was realized under the same conditions as for the reservoir computer. The linear classifier achieves a BER of 7.5% at best. When both processed signals are balanced, the linear classifier exhibits its best mean performance, with a mean BER of 8.4%. Using the nonlinear effects in our VCSEL-based photonic reservoir computer under similar SNR conditions thus provides a significant benefit, improving the performance on the signal-recovery task by a factor of 5.
The relatively low power used for the input signal propagating in the fiber is consistent with the range of powers used in telecommunication networks. Furthermore, it does not significantly trigger the Kerr nonlinearity. Equalizing both linear distortion and a strong Kerr effect remains a challenge for current digital signal processing (DSP)-based techniques for optical channel equalization. 28 To analyze how the Kerr effect would affect the performance of the reservoir, we sent through the fiber two signals with a large pulse-amplitude modulation depth of 0.5 W and recovered the two signals simultaneously at the fiber output. This power is large enough to trigger the Kerr nonlinearity (as only a few tens of mW are necessary) and makes the task more complex to solve. Under these new conditions, and using similar parametric and operating conditions, our reservoir can recover the two signals simultaneously with an optimal mean BER of 8.9% for a 25 km fiber distortion and a mean BER of 17.9% for a 50 km distortion. A degradation of at least one order of magnitude is thus observed, with a level of recovery unsuitable for telecom applications. However, the power level was quite large, and no specific optimization was performed for this modified task: a more suitable training-set size, a larger reservoir, or an adapted preprocessing with more peripheral bits might achieve a better level of performance. This work is left for future studies.
IV. CONCLUSION
We have realized an experimental photonic reservoir computer architecture capable of processing two tasks simultaneously. This reservoir is a time-delay reservoir computer using a VCSEL as the physical node. The two different inputs are produced by injecting two different optical signals, each aligned with a different polarization mode of the VCSEL. Using this system, we have performed, as an illustration, two signal-recovery tasks simultaneously on signals generated at 25 Gb/s and distorted by propagation in 25 km or 50 km of SMF-28 optical fiber. We have been able to recover the two signals with a BER of 0.3% at a total processing speed of 51.3 Mb/s for the 25 km distortion, and with a BER of 3% at the same rate for the 50 km distortion. On both tasks, the reservoir improves the performance by a factor of 5-10 compared to processing the input signal directly under similar SNR conditions. Current telecommunication networks use digital signal processing (DSP) to mitigate the effects of the optical fiber, 29 as it allows signals to propagate over several thousands of kilometers with a BER of ∼10−3, compatible with forward error correction, but at the expense of substantial computational resources.
Our result also shows that there is still a significant margin for improvement before photonic reservoir computing can be considered a viable alternative to the best DSP approaches, despite achieving a level of performance comparable to existing photonic-based machine learning techniques on this particular task. 30 Nevertheless, this result is a first step showing that analog photonic reservoir computing could be envisioned for such dual-tasking on optical channel equalization.
We showed in our previous work that the bimodal dynamics of the VCSEL allows better computational performance than a single-mode dynamical system, owing to a more complex dynamics that is well suited to computation. Here, we have demonstrated experimentally that the bimodal dynamics of the VCSEL can be exploited to process two tasks simultaneously. This suggests that using a system exhibiting more dynamical modes would allow scaling up the number of tasks processed simultaneously. However, performing several tasks simultaneously slightly degrades the mean computational performance of the system; there is thus a trade-off between the number of tasks to be processed and the individual performance on each task. Moreover, we hypothesize that the physics underlying the coupling mechanism between modes may also influence the performance of the reservoir computer, for instance when using the longitudinal modes of a laser 17 or the two modes of a semiconductor ring laser 15 instead of the polarization modes of the VCSEL. This may constitute an interesting frame for future studies of multimode reservoir computing. | 5,922 | 2020-08-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
TEACHERS' PERCEPTIONS OF THE IMPACT OF USING THE FIRST LANGUAGE IN THE ENGLISH CLASSROOM
The main topic of this study is the investigation of teachers' viewpoints on the use of the first language with non-native English-speaking students. Uncertainty remains regarding the efficacy of using the first language in English classrooms. The purpose of this study is to better understand teachers' viewpoints on the use of the mother tongue in English classrooms. The research methodology involved qualitative data collection, with the objective of acquiring a broad spectrum of information. The participants are the English teachers at Santi Witya Serong School in South Thailand. The findings show that the teachers sometimes used the Thai language for instruction in the English classroom, but most instruction was delivered in English as the target language. The results demonstrate that, according to the teachers, implementing the first language increased students' desire to learn English, gave them a chance to better understand challenging concepts in the language, and encouraged them to ask bold questions that clarify linguistic elements. In an English-speaking classroom, student interaction and communication can become quite difficult, particularly when students tend to have poor English language skills. Using the first language in the target-language classroom can increase students' enthusiasm and eagerness to study regardless of their background level, and can be seen as a sign of respect for those who still require help learning English. It is noteworthy that although a classroom where all the language used is English is good for the students, it is also vital to pay attention to the students' condition.
INTRODUCTION
Language education has a dynamic landscape, and the presence of students' own linguistic elements in the English class has always been a subject of debate. In particular, the influence of using the Thai language in English classroom learning activities has become an interesting topic to examine. Educators who teach English as a foreign language are required to navigate the complexities of instruction between students' mother tongue and the English language. Teachers' perspectives on the use of the Thai language in the English classroom reveal a myriad of opportunities to make learning activities more attractive and engaging.
Thailand is a country with diverse linguistic elements, which creates a unique background for its English teachers; English, as an international language, is viewed as a bridge across language barriers worldwide. Hence, the integration of the Thai language into English classroom activities requires a reasoned approach to linguistic exchange.
There are multifaceted aspects to consider when incorporating the Thai language into English classroom activities. Pedagogically, teachers grapple with maintaining the balance between language immersion and preserving the integrity of English instruction. Furthermore, the learners' diverse linguistic backgrounds in the Thai language and their potential for English language acquisition need consideration.
In this study, the researcher aims to investigate how English teachers at Santi Witya Serong School, South Thailand, navigate the interplay of the Thai language in the English classroom setting. The researcher examines the creativity of English teachers in infusing Thai elements into learning activities without compromising the main focus on achieving English acquisition, as well as the extent to which the Thai language is tolerated in the English classroom. Additionally, the researcher investigates the benefits of using the Thai language in the English classroom, such as fostering students' motivation to learn English and easing their learning.
The Use of First Language in Foreign Language Classroom
The impact of using the first language in the English (second/foreign language) classroom has been debated for decades with regard to its effect on student outcomes. A study conducted at several Jordanian universities revealed that the use of the first language in the English classroom has a positive impact in assisting students' comprehension: it helps them learn complicated words and new vocabulary and understand their meaning, break down syntax rules, save time, and grasp grammatical rules explained in an easy way, and it builds rapport between learners and teachers. Given this, it follows that prohibiting the use of the first language in the target-language classroom deprives students of a learning tool (Hussein 2013).
A second study was conducted in Malaysia, where the participants acknowledged the positive effect of using the first language in the English classroom: the students utilized the first language as a tool to facilitate their learning process, for example to find out the meaning of new vocabulary, to have new concepts and points explained, and to complete their tasks faster. They also acknowledged the role of the L1 in helping them learn English (Michelle Manty and Parilah M. Shah 2017).
We can conclude that the above-mentioned studies revealed that the first language is useful and not a barrier in language learning activities. However, several studies have reported opposite findings. A study held at Asia-Pacific International University revealed that the first language had a crippling effect on English classroom learning activities, attributed to the massive use of the first language in the English classroom and a failure to determine the best time to use the first language appropriately (Tantip Kitjaroonchai and Ritha Maidom Lampadan 2016).
Another study, in a high school in Iran, found that massive use of the first language in the English classroom had a demotivating effect on the learning process (Gholam-Ali Kalanzadeh et al. 2013). A further study, conducted by Sener and Korkut, shows that most of the participants preferred to use the target language in the classroom, believing that massive use of the first language deprives students of the chance to practice their oral skills (Sener and Korkut 2017). On the contrary, some research has revealed that the use of the first language in proper proportion can improve students' achievement in writing sentences (Usadiati 2010) and raise students' achievement generally (Damra, Heba Mohammad, and Mahmoud Al Qudah 2012).
Mother Tongue in Accelerating Second Language Acquisition
The mother tongue is able to accelerate second language acquisition. According to previous research, students who learn a second language through their mother tongue are likely to achieve higher proficiency in the second language, and the mother tongue can serve as a foundation for learning the second language (Г.З. Узакова 2022). Previous research in Jabodetabek, Indonesia revealed that most senior high school teachers and students viewed the first language as assisting students in developing their language skills; they considered that the first language helps them learn the four skills (reading, writing, listening and speaking), and most agreed that the first language was a useful tool for accelerating the learning of vocabulary and grammar (Pardede 2018).
RESEARCH METHOD
This research employed qualitative data gathering, with the aim of obtaining a wide range of information. The participants in this study are the English teachers of Santi Witya Serong School, South Thailand; three teachers were willing to voluntarily take part in the interviews. The present study focused on exploring and describing the teachers' perceptions.
A semi-structured interview was adopted to portray teachers' perspectives on the use of the first language in the English classroom. The interviews were audio-recorded and transcribed, and the qualitative data were reviewed interpretively (Cohen, Manion, and Morrison 2002). This study aims to investigate perceptions regarding the use of the first language in the English classroom. To accomplish this, the following questions were adopted: 1. How creative are English teachers in infusing Thai elements into learning activities without compromising the main focus on achieving English acquisition? 2. To what extent is the Thai language tolerated in the English classroom? 3. What are the benefits of using the Thai language in the English classroom?
FINDINGS
The data gathered from the teachers' interviews were organized into categories arranged by interview question. The responses to each question were examined and discussed in sequence. A few quotes were chosen to accurately convey the participants' actual opinions and perspectives.
Teachers' perspectives on infusing Thai language elements into the English classroom
This question investigated teachers' perspectives on infusing Thai language elements into the English classroom; the aim was to find out whether they were wholly in favor of or against the use of the first language. The interviews revealed that all of the participants agreed with infusing the first language into the English classroom. However, most of the teachers believed that the use of the first language should be restricted, with its frequency adjusted to students' skills and levels. The teachers impose this restriction to increase students' exposure to the target language (English). Based on their experience, the teachers revealed that students who are exposed to the target language tend to master it faster and to develop a positive attitude towards using the first language, especially in conversation. All of the teachers also allowed the students to use their mother tongue, especially when grammar is taught, and all recommended that students still at a basic level of English not be compelled to use English in class. The teachers believe that the use of the first language in the English classroom can motivate students to learn more and can reduce anxiety among students who are afraid to start learning English.
In the teachers' experience, the use of the first language in beginner classes is effective in making students more active and engaged in the learning process, because when the teacher uses only English in a beginner class, students have difficulty grasping the material, as they do not yet have a proper background in English. Based on the interviews, all of the teachers also consider the balance between the first and second language: they stated that they use the first language only when needed, for example to explain grammatical rules, to clarify and summarize a topic taught in English, and to break down difficult concepts in English. For intermediate classes, however, the teachers restrict the use of the first language in the English classroom; students may use the first language only when they completely fail to understand a concept and need clarification from the teacher.
To what extent is the Thai language tolerated in the English classroom
The interviews show that most of the teachers use the first language in certain circumstances: to give instructions, especially when the material is difficult to grasp, because they want to ensure students understand the course properly; as a tool to describe and explain grammatical rules and to clarify differences between linguistic elements of the first language and the target language; and, in beginner classes, as a means of teaching English vocabulary.
The benefits of using the Thai language in the English classroom
This question investigated teachers' perspectives on the benefits of using the Thai language (the first language) in the English classroom. Because all the teachers use the first language in class as a tool for achieving students' proficiency in English, the researcher included this question to clarify the reasons behind this practice by identifying its benefits. The teachers reported that the first language can enhance students' proficiency in English because it can be used to simplify difficult material. Some of the teachers added that the use of the first language can establish rapport between teachers and students in class, because students do not hesitate to ask about difficult concepts in English without fear of being judged by the teacher. In addition, teachers perceived that using the first language in class can decrease students' apprehension, because students can still use their first language when they do not understand a meaning or when they ask the teacher to summarize the material in the first language. Furthermore, all of the teachers stated that the use of the first language in the English classroom saves time: when teaching grammar to students still at a basic level of English, they use the first language to explain it in detail, since grammatical topics take a lot of time to understand, and clarifying the concepts in English for beginners would require extra time.
DISCUSSION
Even though there are still numerous pros and cons regarding the use of the first language in the second or foreign language classroom, the merits of its pragmatism and practicality cannot be disregarded. Examining the results of this study, it is evident that teachers often assign the first language specific functions in the context of foreign language learning. Most of the teachers hold positive views of this method, because allowing the use of the first language in the English classroom is perceived to have a positive impact on both teachers and students.
The present qualitative study shows that teachers have an upbeat perspective regarding the incorporation of the first language into their classes.
They observe that teaching in Thai, the students' first language, supports teaching practice, especially for students still at the beginner level in vocabulary and grammar. The teachers also believe that the first language can provide clear instructions for students and foster an engaging and supportive learning environment. According to the majority of participants, Thai is only a useful auxiliary language in language classes, and its application varies depending on the proficiency and level of the students. The use of the first language in grammar lessons at particular levels, especially for beginners or at the first stage of learning grammar, when students may not fully understand some complicated concepts that could otherwise develop into anxiety-related obstacles to language acquisition, is extremely useful in assisting students' understanding of the course.
CONCLUSION
The results show that the teachers sometimes used the Thai language for instruction in the English classroom, but most of the instruction was in English as the target language. Teachers reported that this approach increases students' willingness to learn English, provides an opportunity to better understand difficult concepts in English, encourages students to be brave in asking questions, and helps explain linguistic elements and clarify points. Although many teachers recognize the importance of using the first language in the English classroom, they also consider the positive effect of using English in the classroom: the findings revealed that even though they use Thai in the English classroom, they perceive this approach as unnecessary except in certain circumstances, because using the target language in the classroom stimulates students to speak and listen more in English and to adapt to this situation, thereby increasing their exposure to the target language. However, because the Thai and English languages have entirely different linguistic elements, and because the students are still in elementary school, using the first language remains important in this situation.
In addition, the data show that the first language (Thai) was used frequently in the learning activities. Most of the teachers consider this practice extremely useful and important, and most reported using it consistently as a strategy in the English classroom to ease students' understanding of the material. This may be due to the leverage this practice offers, allowing teachers and students to switch into the first language in order to convey a concept more understandably, or to teachers' awareness of the effectiveness of using the first language as an aid in learning English.
In conclusion, at Santi Witya Serong School, South Thailand, interaction and communication between students in the English classroom can be very complex, especially when the students tend to have low proficiency in English. Deploying the first language in the target-language class can be a sign of respect towards students who still need assistance in learning English, and it enhances students' motivation and eagerness to learn regardless of their background level. It is notable that although using 100% English in the English classroom is beneficial for students, the students' condition is also important to note. | 3,858.4 | 2023-11-28T00:00:00.000 | [
"Education",
"Linguistics"
] |
Conceptualising sustainability through environmental stewardship and virtuous cycles—a new empirically-grounded model
Humans depend on earth’s ecosystems and in the Anthropocene, ecosystems are increasingly impacted by human activities. Sustainability—the long-term integrity of social–ecological systems—depends on effective environmental stewardship, yet current conceptual frameworks often lack empirical validation and are limited in their ability to show progress towards sustainability goals. In this study we examine institutional and local stewardship actions and their ecological and social outcomes along 7000 km of Australia’s coastline. We use empirical mixed methods and grounded theory to show that the combination of local and institutional stewardship leads to improved ecological outcomes, which in turn enhance social values and motivate further stewardship to form a virtuous cycle. Virtuous cycles may proceed over multiple iterations, which we represent in a new spiral model enabling visualisation of progress towards sustainability goals over time. Our study has important implications for collaborative earth stewardship and the role of policy in enabling virtuous cycles to ultimately realise sustainable futures.
Introduction
We are inseparable from our environment. Humans depend on nature to provide the essentials of life, and in turn, environmental health is heavily dependent on the actions of humans (Preiser et al. 2017;Steffen et al. 2011). The fundamental importance and mutuality of human-environment relationships is embodied in the concept of sustainability-the "long-term integrity of the biosphere and human well-being" (Chapin et al. 2011). Despite the criticality of human-environment relationships, we have much yet to learn of the modern structures, interactions and dynamics of social-ecological systems (Messerli et al. 2019;Scholz and Binder 2011).
A key element in achieving a sustainable future is for humans to take responsibility as environmental stewards (Steffen et al. 2011;Preiser et al. 2017). Whilst stewardship is just one of several framings for the human-environment relationship, it most closely supports reconnecting people with nature and building resilience in social-ecological systems (Preiser et al. 2017). Environmental stewardship is a fluid concept (Turnbull et al. 2020a); here we define it as active earth-keeping, taking responsibility to protect, care for and use the environment for positive ecological and social outcomes (Lerner 1993;Bennett et al. 2018).
The United Nations 2030 Agenda for Sustainable Development provides a plan of stewardship action "for people, planet and prosperity" (DESA UN 2016). This social-ecological Agenda seeks to end inequality and poverty, and heal and secure our planet for a sustainable future. It is actioned through 17 Sustainable Development Goals (SDGs) and 169 targets, many of which connect humans and nature. Yet today, most SDGs are projected to fall short of their targets,
with several goals currently on negative trajectories (UN Secretary General 2019).
Converting negative trajectories to positive, to achieve our global sustainability Agenda, will require new ways of thinking and acting (UN Secretary General 2019). Novel inter-disciplinary approaches such as integrating science, business and government, informed by improved knowledge networks and resulting in collaborative management, are required (Messerli et al. 2019). Such societal transformations will depend on new conceptualisations of the human-environment relationship, as today most theoretical social-ecological models have limited application (Binder et al. 2013).
Protected areas can provide places which facilitate environmental stewardship, resulting in improved social and ecological values (Powell et al. 2002). Protected areas may exist in both terrestrial and marine realms and have varying levels of protection or stewardship (Dudley et al. 2013). Fully protected areas, for example, prohibit the removal of or damage to all animals and plants. Partially protected areas have widely varying regulations but allow a range of extractive activities to occur including fishing and collecting. Such differing levels of stewardship can result in varying levels of ecological and social effectiveness (Turnbull et al. 2021).
In this study, we aimed to develop a novel conceptualisation of the human-environment relationship, focusing on the positive actions human society may take towards sustainability. We explored the concepts of environmental stewardship and virtuous cycles and investigated whether these concepts were supported by empirical evidence. We studied both institutional stewardship-in the form of varying levels of protection-and the individual or local environmental stewardship actions of people at a place (Turnbull et al. 2020a).
Our approach was to examine a diverse social-ecological system to provide insight into broader-scale trajectories towards sustainability. To achieve this, we selected coastal places as they integrate terrestrial and marine realms and provide a linked system of social and ecological dynamics (Pollnac et al. 2010). We chose Australia's Great Southern Reef coastline, spanning five jurisdictions and 7000 km for our study due to its size, diversity and ecological importance (Bennett et al. 2016).
Frameworks and models
Human-ecosystem relationships can be visualised through multiple frameworks including unidirectional (such as ecosystem services or stewardship alone), bidirectional (such as closed loop production), and intersecting or nested domains (Fig. 1; Moskell and Allred 2013; Raymond et al. 2013). Selection of a given framework both highlights and hides elements, preferencing one set of perspectives, ethics and outcomes over another (Preiser et al. 2017; Raymond et al. 2013). The closed-loop framework, expanded beyond production to encompass values, services and dis-services, as well as positive and negative human impacts on ecosystems, has some limitations yet has potential for broad application (Masterson et al. 2019; Raymond et al. 2013). It is manifest in varying degrees in a number of existing systems models or derivative frameworks. We now discuss three such derivative frameworks, selected to illustrate the diverse yet still limited practical applications of the general closed-loop framework.
The DPSIR framework-driving forces, pressures, states, impacts, and responses (Smeets 1999)-is a widely used framework for environmental indicators. DPSIR models a mostly one-way flow from Drivers such as industry, to Pressures such as pollution, State of environment such as water quality and Impacts such as loss of biodiversity or drinking water. The final step, Response, closes the loop with a human intervention to mitigate impacts, states, pressures and drivers through actions such as wastewater treatment. The language of DPSIR is focused on the negative impacts of humans on the environment although it may be applied in the context of sustainability with the use of suitable indicators (Smeets 1999).
The Human-Environment Systems (HES) framework (Scholz and Binder 2011) focuses on managing the negative impacts of humans on the environment but with explicit recognition of the reciprocal impact of environmental factors on humans. HES is grounded in the social and sustainability sciences and decision theory, and enables the general formation of goals and strategies to manage the human-environment relationship (Scholz and Binder 2011; Binder et al. 2013). It models primary and secondary feedback loops for the evaluation of environmental responses and dynamics arising from these strategies. The HES framework does not contain a sustainability component, but it can be used to investigate sustainability learning in a given context (Scholz and Binder 2011).
The Social-Ecological Systems Framework (SESF) is balanced in its treatment of social and ecological subsystems but takes an anthropocentric perspective that views ecological components as resources (Ostrom 2009). This is reflected in its application in the management of agriculture, fisheries and water resources. It acknowledges the governance system and resource "users", with feedback loops for the social and ecological outcomes arising from system interactions. As with HES, the SESF does not explicitly contain a sustainability component but can be used to analyse sustainability of the social-ecological system (Ostrom 2009).
The originating context, perspective and assumptions for each of the above frameworks are manifest in the specific language and limitations of each framework, often resulting in a focus on the negative or exploitative aspects of the human-environment relationship. In our study, we aim to develop a model, grounded in empirical evidence, which highlights the positive actions that humans can take to drive upward trajectories in both environmental health and human well-being. The virtuous circle or cycle has potential as a basis for such a model; however, this concept has been used in varying, sometimes conflicting ways in both the academic and management literature.
Early research regarding the virtuous circle or cycle proposed a model in which social and ecological capital were mutually reinforced, and concluded that a key objective of policy should be to achieve "virtuosity in the landscape" (Selman and Knight 2006). Qualitative, trans-disciplinary approaches were considered necessary to fully appreciate the interdependency between "people and place" and develop representative models (Selman and Knight 2006). Protected areas were recognised as pivotal in achieving virtuosity, leading to sustainability improvements in both landscape quality and community quality of life (Powell et al. 2002), although recent research highlights the difficulty in simultaneously meeting social and ecological goals in coastal settings (Cinner et al. 2020).

Tidball et al. (2017) applied virtuous and vicious cycles to develop the concept of resilience in social-ecological systems. They used systems theory, in which positive feedback amplifies change and negative feedback inhibits or counterbalances change. Virtuous and vicious cycles were, therefore, both positive or reinforcing feedback loops, but driving the system in desirable or undesirable directions. The definition of desirable vs. undesirable is value laden (Preiser et al. 2017), but in terms of sustainability these could be represented by, for example, endemic biodiversity preservation vs. loss, and the gain or loss of human wellbeing. The authors placed desirable states in the virtuous domain and undesirable states in the vicious domain, with a bifurcation zone between, which may tip in either direction based on policy and management actions. They encouraged future research to detect the practices contributing to virtuous cycles and provide evidence of the resulting social and ecological outcomes.

Masterson et al. (2019) most recently conceptualised the relationship between ecosystems and human wellbeing as a holistic cycle that can be either positive (virtuous) or negative. The virtuous cycle results from effective stewardship, whilst the negative cycle results from overexploitation of the environment and poor management. The model integrates human values, attitudes and actions and recognises the mediating role of institutions and policy in the cycle. Human benefits are modelled broadly as a "basket" of direct use, monetary income and experiences. In presenting this broad conceptual model, the authors call for further empirical research to understand and verify components of the cycle.
The recent Global Sustainable Development Report (UN Secretary General 2019) mentions transforming "vicious to virtuous circles" but offers no conceptual basis for these terms. Virtuous circles are not explained, but vicious circles are referenced in the context of negative tipping points in Earth's natural systems and the acceleration of global warming through melting sea ice and permafrost. Importantly, eleven of the seventeen SDGs embody one or both directions of the virtuous circle, in the general form of humans caring for, or benefitting from, the environment (Table 1). Ultimately, the vicious-to-virtuous transformation is described as "key to the implementation of the 2030 Agenda" (UN Secretary General 2019).
Existing social-ecological and virtuous cycle models only partly enable visualisation of such transformation. They generally focus on the relationship between components in the social-ecological system, but do not directly incorporate the concept of sustainability nor allow visualisation of positive progress towards sustainability over time. This would require representation of both a direction-towards (or away from) the goal; and time-as current sustainable development goals are set for a given year (DESA UN 2016). Tidball et al.'s (2017) model does include a graphical landscape which enables visualisation of the system state between virtuous and vicious domains, but with the goal of resilience rather than sustainability. There is, therefore, an opportunity to further conceptualise the positive pathways through stewardship and virtuous cycles to sustainability (Chapin et al. 2011; Mathevet et al. 2018).
Approach
Our research along Australia's Great Southern Reef spanned the southern half of the continent of Australia, from Port Stephens to Perth. We studied 56 sites, spanning five jurisdictions (States), with roughly even distribution across protected area levels to model different policy (institutional stewardship) settings; 19 sites were fully protected areas, 18 sites were partially protected areas and 19 sites were open areas ( Fig. 2 and Table S1). We selected site boundaries to encompass the diversity of recreational uses observed at the site and a mix of terrain such as water, rocky shore, beach, parkland and other developed areas, where they were present.
Our social-ecological research questions called for a diverse set of methods. We used structured observation (Bryman 2016) to record site factors such as mix of users (people swimming, walking, fishing etc) and signage. Perceptions, values, motivations and recreational and stewardship activities of individuals at each site were gathered using semi-structured interviews (Bryman 2016). We used purposive sampling, selecting people in proportion to the numbers in each user category at each site (Table S3), and aiming for representation of sex and age classes where possible. At several of our sites, the numbers of people present were small, allowing sampling of most or even all users.
We chose underwater visual census as implemented in the global Reef Life Survey (RLS) program for the ecological part of our study (RLS 2016). RLS uses highly trained volunteers and scientists to gather fish, invertebrate and habitat data on shallow reefs, and has been used in many studies around the world (for example, Edgar et al. 2014). RLS data include size-classed abundances of all visible fishes, abundances of all visible mobile macroinvertebrates, and habitat data from photo quadrats. We verified that the RLS dataset contained ecological data aligned with the top three categories of marine life that were mentioned as important by participants in interviews; fish, algae and seagrass (habitat data set) and marine mammals (included in the "fish" data set).

[Table 1 fragment, recovered from extraction residue: SDGs embodying the virtuous circle. 5 Gender equality: access to natural resources; 6 Clean water and sanitation: protect and restore water-related ecosystems; 8 Decent work and economic growth: decouple economic growth from environmental degradation; 9 Industry, innovation and infrastructure: greater adoption of clean and environmentally sound technologies; 11 Sustainable cities and communities: protect the world's natural heritage, reduce environmental impact of cities.]
Data collection
We gathered social data over a 15-month period commencing in March 2018. Due to the practical limitations inherent in covering large distances in Australia, we travelled primarily from east to west, surveying NSW then Victoria, Tasmania, South Australia and Western Australia. To check for the influence of seasonal effects, we completed our final site and social surveys once we had looped back to NSW and confirmed that site usage figures did not vary significantly by season (PERMANOVA p > 0.05). In total, 190 site surveys and 439 interviews were conducted during daylight hours over a mix of weekdays and weekends. The interview guide is provided in Table S2. We prompted for stewardship activities using the categories in the Local Environmental Stewardship Indicator (Turnbull et al. 2020b; Table 2). The average duration of each site visit was 97 min. Interviews, which typically took 15 min each but in some cases lasted up to 45 min, passed the point of theoretical saturation by the end of the project (Bryman 2016).
Due to the large public Reef Life Survey database we were able to incorporate retrospective ecological data spanning 6 years. We chose this period as a balance between the duration of participants' experience at a site and the duration of our study. We included a total of 625 RLS fish surveys, 556 invertebrate surveys and 1971 photo quadrats in our study.
Analysis
Our approach followed grounded theory, one of the most widely applied analytical approaches in the study of qualitative data (Bryman 2016), identifying and developing concepts via structured analysis and inductive reasoning over the course of our research (Glaser et al. 1968). We evaluated: the perceptions, values, motivations and stewardship actions of people; policy settings in the form of levels of protection; and ecological health factors including biodiversity and abundance of fish, invertebrates and algae at each site.
We used a combination of indicators incorporating Likert scales (agreement/disagreement), frequency scales (how often an activity was performed or observed) and categorical coding of open and closed questions during each interview (Bryman 2016). Responses to open questions were recorded by a combination of audio recordings and in situ written transcripts and were later coded and analysed in nVivo software version 12 (QSR 2018). Further classifications were created, for example locals vs. visitors, based on self-reporting or observation. Signage was classified as compliance (e.g., relating to fishing regulations) or marine life (e.g., celebrating the local fauna).
To understand relationships between social factors and ecological condition, we fitted Gaussian linear mixed-effects models (LMM) using the lme4 package in R (R Core Team 2018). Response variables were the richness, abundance and biomass of fish, invertebrate and habitat communities, and predictor variables were local stewardship and protection level. We included random intercepts for Year (6 levels), State (5 levels), and Site (56 levels), where Site was nested in State and Year. Data were log-transformed to meet the assumption of homogeneity of variance. Fish biomass was calculated using species-specific constants a and b from the allometric growth equation Biomass = aL^b (Froese 2017). We used the Collaborative and Automated Tools for Analysis of Marine Imagery (CATAMI) guide version 1.2 (Althaus et al. 2013) to analyse habitat to the morphotaxa level in CoralNet (Beijbom 2012), as we were most interested in the visible and structural aspects of habitat.
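As a minimal sketch of the biomass step, the allometric conversion can be expressed as below; the constants here are hypothetical placeholders, not those used in the study (which come from Froese 2017):

```python
import numpy as np

def fish_biomass(length_cm, a, b):
    """Allometric length-weight conversion, Biomass = a * L**b.

    a and b are species-specific constants; the values used below are
    placeholders for illustration only.
    """
    return a * length_cm ** b

# Example: a 30 cm fish with hypothetical constants a = 0.01, b = 3.0
biomass_g = fish_biomass(30.0, a=0.01, b=3.0)   # -> 270.0 g
log_biomass = np.log(biomass_g)                 # response is log-transformed before the LMM
```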
Stewardship was calculated for each participant as a continuous variable based on the reported frequency of the seven stewardship actions using the Local Environmental Stewardship Indicator (LESI) (Turnbull et al. 2020b; Table 2). We calculated site stewardship levels as the maximum stewardship score across all participants at a site, due to the importance of "uber-stewards" in directly and indirectly influencing local ecological and social outcomes (Turnbull et al. 2020a).
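A minimal sketch of this aggregation follows; the exact LESI scoring is defined in Turnbull et al. (2020b), so the seven-action frequency scale assumed here is ours, not the paper's:

```python
def participant_lesi(action_frequencies):
    """LESI-style participant score: combined reported frequencies of the
    seven stewardship actions (scale assumed, e.g. 0 = never .. 4 = very often)."""
    assert len(action_frequencies) == 7
    return sum(action_frequencies)

def site_stewardship(all_participant_scores):
    """Site-level stewardship: the maximum across participants at a site,
    reflecting the outsized influence of 'uber-stewards'."""
    return max(all_participant_scores)

# Hypothetical site with three interviewees
scores = [participant_lesi(f) for f in ([0, 1, 0, 0, 0, 2, 4],
                                        [1, 0, 0, 1, 0, 0, 2],
                                        [4, 3, 2, 1, 3, 4, 4])]
print(site_stewardship(scores))  # -> 21
```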
This study was conducted under the ethics approval of the University of NSW, permit HC180044.
The sustainability spiral
Our findings are summarised in a new empirically grounded framework comprising a conceptual diagram and spiral model which we name the Sustainability Spiral. The conceptual diagram (Fig. 3a) portrays a virtuous cycle in which institutional and local stewardship combine to improve ecological outcomes, which in turn motivate further stewardship. Multiple iterations of this cycle are portrayed in the new spiral model, enabling visualisation of progress towards sustainability goals over time (Fig. 3b).
Empirical support for the sustainability spiral
Overall, 48% of our sample identified as female, 58% of participants regarded themselves as local, and participants had been coming to their site for an average of 14.8 years and had visited 7.3 times in the last month. The majority (89%) of participants reported undertaking one or more stewardship actions at their site (Table 2).

[Table 2 fragment: stewardship actions, with the percentage of participants reporting each and example quotes (participant numbers in brackets):
- (category label lost in extraction): "You make sure you're not in the marine reserve when you fish" (102); "I always catch and release" (105)
- Education, 41%: "I'm looking to contribute, to educate others, now I'm retired" (255); "I educate kids on how to care for (this place) and the importance of animals" (360); "The beauty of the environment here is an opportunity to educate others" (420)
- Advocacy, 19%: "I'm an environmental advocate. We only act locally" (181); "Reserves have a positive impact, any argument to the contrary is absurd. I support them and I'm a fisher" (219); "I enjoy bird counting. My data can influence decision-makers" (362)
- Informal enforcement, 21%: "I have approached people and said 'that's an undersize fish' but you have to be careful" (70); "We always call out if we see fishing boats; I have the fisheries hotline on speed dial" (95); "It's worthless to have reserves if they don't restrict fishing, so we have to enforce reserves" (281)
- Monitoring, 15%: "I count the number of people with a clicker, and write down the species that are seen each day" (2); "I take photos for iNaturalist" (64); "We do the nudibranch census here each year" (101)
- Preservation, 28%: "I like to preserve it, to let the marine life recover" (12); "I learnt to look after the environment in Scouts, to keep it pristine" (293); "You have to respect what you've got, not damage things, so you can come back" (322)
- Restoration, 75%: "I always take three for the sea" (95); "We're an active group that has been cleaning up the beach" (96); "I bring the grand-kids here to do clean-ups" (255)]

When asked to elaborate on the motivation for their stewardship actions, 91% indicated one or more components of the virtuous cycle (Table 3), and almost half (43%) acted to achieve ecological outcomes alone such as protecting marine life from harm. Favoured marine life were primarily fish (valued by 26% of participants) followed by algae and seagrass (10%), marine mammals (9%) and birds (8%). People who fished at their site valued fish the most (valued by 42% of fishers). Whilst non-fishers talked more generally about marine life or wildlife (30%), their focus on fish as favoured marine life was still high (23%).
Over one quarter (27%) of people were motivated by social outcomes alone such as swimming in water free of debris, and 30% were motivated by both social and ecological factors, effectively describing both directions of the virtuous cycle and in many cases longer term sustainability outcomes (Table 3). Over half (55%) of stewards were motivated by sustainability or related long-term concepts such as preservation for future generations, integrity of nature or ecosystems and reducing unsustainable human impacts.
Our quantitative analyses provided correlative support for this virtuous cycle. Sites with higher maximum local stewardship levels and higher institutional stewardship (fully protected areas) were associated with significantly more fish biomass (Fig. 4c, d). We detected no significant improvement in fish diversity or biomass in partially protected areas compared to open areas. Participants reported undertaking higher levels of local stewardship action at sites with more diverse habitat and when they perceived better marine life at a site (p < 0.05 for all results, Fig. 4e, f, and Table S3).
Participants also undertook stewardship actions as a result of the presence of, and to improve the effectiveness of, their local marine protected area. These are generally represented by empowerment and informal enforcement arrows in Fig. 3a, respectively. Stewardship was significantly higher in fully protected areas than in partially protected areas and open areas, but there was no significant difference in stewardship between partially protected areas and open areas ( Fig. 4a and Table S3). Empowerment included having effective rules to enforce in fully protected areas, enabling the connection between shore and marine life, and valuing and preserving fully protected areas (Table 4). Informal enforcement was undertaken by over one fifth (21%) of participants (Table 2) and took the forms of documenting transgressions, speaking with people breaking the rules and sometimes reporting them. Signage also appeared to correspond with increased stewardship of sites, with significantly higher maximum stewardship levels at sites that had more signs promoting local marine life ( Fig. 4b and Table S3).
Discussion
The Sustainability Spiral portrays the mutual interdependency of social and ecological domains in progressing towards sustainability through time, over multiple iterations of the virtuous cycle. It aligns with earlier conceptualisations of stewardship (Chapin III et al. 2010; Folke et al. 2016) in general terms, describing how effective stewardship moves virtuous cycles upwards towards sustainability, whilst poor stewardship and overexploitation of natural systems result in downward vicious cycles towards unsustainability.
The spiral model is an important contribution to the conceptualisation of stewardship. Existing theoretical models rely on the two-dimensional closed-loop framework and show the relationship between virtuous cycle components, but make it difficult to visualise changes in the status of these components over time. Reframing stewardship to recognise its dynamic, transformative nature is an essential step towards achieving conservation and sustainability goals (Chapin III et al. 2010;Mathevet et al. 2018).
The Sustainability Spiral enables visualisation of the direction of progress and position in time for sustainability overall, or at a more discrete level such as a particular SDG or target. For example, the goal of Life Below Water (SDG #14) includes the target of protection of 10% of marine and coastal ecosystems by 2020. The Sustainability Spiral can be applied for this one component alone, visualised by placing the 10% goal at the top of the spiral and noting the current position at points in time on the vertical dimension [4.4% in 2015 and 7.4% in 2020 (UNEP-WCMC 2020)]. The model then encourages elaboration of the stewardship actions that are required (for example, steps to increase the area of ocean under protection and enable local community support) and the values and ecosystem services that motivate progress towards the goal (for example, more fish diversity and biomass, sustainable supply of protein and tourism revenue).
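As a purely illustrative sketch of this usage, the spiral and the two observed data points could be plotted as below; the layout and axis mapping are our own assumptions, not taken from the paper's figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Vertical axis: % of marine and coastal area protected; the 10% goal sits at the top.
goal = 10.0
observed = {2015: 4.4, 2020: 7.4}  # UNEP-WCMC (2020)

t = np.linspace(0.0, 4.0 * np.pi, 400)  # two turns of the virtuous cycle
z = goal * t / t[-1]                    # progress mapped along the spiral

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(np.cos(t), np.sin(t), z, color="grey")

for year, pct in observed.items():
    ti = 4.0 * np.pi * pct / goal       # spiral position corresponding to this value
    ax.scatter(np.cos(ti), np.sin(ti), pct, color="red")
    ax.text(np.cos(ti), np.sin(ti), pct, f"{year}: {pct}%")

ax.set_zlabel("% marine and coastal area protected")
plt.show()
```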
Stewardship
Numerous studies have found that effective institutional stewardship, in the form of well-managed fully protected areas, results in higher fish biomass and diversity (Costello and Ballantine 2015; Edgar et al. 2014;Turnbull et al. 2018). Our study builds on these results to show that institutional stewardship combines with the stewardship actions of people in the community to result in an even stronger positive association for fish biomass (Fig. 4c). This is in keeping with studies in other settings, for example co-management of tropical reefs (Cinner et al. 2012;Pollnac et al. 2010).
The most frequently reported stewardship action among participants in our study was restoration (75% of participants), primarily through cleaning up debris, followed by educating others (41% of participants) (Table 2). These represent direct and indirect stewardship actions, respectively, with the former impacting directly on the environment and the latter potentially raising local stewardship levels by influencing the actions of others (Bennett et al. 2018). The high rates of stewardship reported in our study reflect our broad measure, encompassing seven actions, and provide evidence of substantial pro-environmental behaviour despite the recognised gap between human values, intentions and actual behaviour (Kollmuss and Agyeman 2002). Overall, 89% of participants were motivated to act on their values and intentions to undertake some form of stewardship.

[Table 3 fragment: examples of motivations for stewardship, in response to the question "why do you (take these stewardship actions)?", with participant reference numbers; longer-term sustainability outcomes, aligned with SDGs (Table 1), were underlined in the original:
- Ecological motivations (n = 172): "Because it's harmful to animals. I've seen awful impacts on marine life" (237); "To help the ocean; humans are mindlessly exploiting it" (257); "So we don't ruin nature and contribute to its destruction" (274); "So the animals don't eat pollution" (347); "Safeguard the environment" (403); "To take care of my local and preserve diversity" (410)
- Social motivations (n = 109): "It's a privilege to swim here and see fish" (17); "Preserve it for our children" (48); "The ocean takes care of us" (79); "The diving is important for my business" (106); "This place is free pleasure, I want to keep it natural" (123); "It's part of who I am" (275)
- Social-ecological motivations (n = 120): "For future generations of animals and humans" (29); "For the integrity of ecosystems, and human health" (59); "For sustainability and sustainable fisheries, to avoid animals ingesting plastic" (138); "Care for the habitat, so other creatures can use it, and other people can enjoy it" (197); "Minimal impact is the future, it's the only way to sustain life" (244); "It's part of the circle of life" (246); "Water is life, we share a connection with the ocean" (288)]

Positive local signage relating to marine life, for example showing "what lives here", was significantly related to higher levels of stewardship (Fig. 4b). Previous studies have found that signage can influence pro-environmental behaviour (Martin et al. 2017; Marschall et al. 2017), although the effect can vary depending on presentation, content and placement (Martin et al. 2015). Such signage may flag the presence of social or collective norms for stewardship and, together with compliance signage, may signal the existence of policies focused on preservation vs. exploitation (Goldstein and Cialdini 2007). Social norms can provide a mechanism to reduce consumption of shared resources, for the collective good (Levin 2006). We found no significant relationship between local stewardship and compliance signage (signs explaining the rules); however, compliance signage does improve awareness of regulations (Turnbull et al. 2021) and may, therefore, still act indirectly on ecological outcomes.
Ecological outcomes
The most direct ecological outcome of combined institutional and local stewardship-more fish-appears to also be the most socially valued ecological factor on the virtuous cycle, and is, therefore, the primary basis for the empirical support for our model (Fig. 4). Participants also valued and were motivated by broader sustainability outcomes including the welfare of and reduction of harm to animals in general, keeping a place 'natural' or 'pristine', and protection from overexploitation and pollution (Table S5). Key supporting themes included the fragility of the environment and the need to respect and care for it, the unsustainable level of human impact, and resulting degradation of ecological integrity, health, abundance and diversity.
Valuing marine life generally, and fish more specifically, was driven primarily by aesthetic, non-extractive reasons. Liking, beauty, nature and watching were all more prevalent as reasons for favouring marine life than catching or eating fish. This reflects the diverse range of coastal users, 82% of whom did not fish in the context of our study, and highlights the importance of considering such non-consumptive stakeholders in coastal studies (Farr et al. 2014).
Social outcomes
Over half of participants reported undertaking stewardship of the environment to achieve social outcomes such as the enjoyment of observing wildlife, human benefits based on our dependence on nature, identity, preservation for future generations and business (Table S6). Aesthetics, cleanliness and families were key themes in valuing and caring for the environment, as well as directly experiencing wildlife in its natural habitat (Table S6).
These social factors were enabled by healthy natural ecosystems, perpetuated and improved by ongoing stewardship (Fig. 3), as represented in the Sustainability Spiral. Our quantitative results aligned with these findings via the significant relationship between more diverse habitat and the level of local environmental stewardship. We also found a significant positive relationship between the perception that local marine life is better than surrounding areas and higher stewardship activity. Multiple theories propose that behaviour is the result of values and perceptions, for example the Theory of Planned Behaviour (Ajzen 1991), supporting a potential direction of causality from ecological improvement to perception of this improvement and finally stewardship behaviour.
Policy and management implications
Policies for sustainability through institutional stewardship, in the form of fully protected areas, empowered local stewardship and directly improved ecological outcomes in our study. Policies, therefore, synergised with the actions of local community stewards to deliver an even stronger virtuous cycle (Fig. 3a). Protection-related policies were highlighted as important enablers of sustainability by participants undertaking stewardship, including the presence of protected areas in general and fully protected areas in particular (10% of participants each), effective rules and regulations (7%), effective management (6%), enablement of science and research (3%) and prevention of user conflict through zoning (3%).
Fully protected areas were associated with higher levels of maximum and individual local stewardship (Table S4), enabling the realisation of the desire for improving ecological health. They empowered local stakeholders to undertake advocacy, education, and informal enforcement (Table 2). Informal enforcement, both alone and in combination with formal enforcement, can improve the effectiveness of protected areas (Santis and Chávez 2015). Empowerment at individual and collective scales is essential for the transformation that is necessary to achieve global sustainability (Andrijevic et al. 2020;Messerli et al. 2019).
One component of our model-the policy-to-place relationship-is well documented in other studies (for example, Edgar et al. 2014), and so the direction of causality is supported in the literature. The other four relationships all entail social factors, for which the qualitative evidence in Tables 3 and 4 underpins our interpretation of causality. Further research may target each of these relationships to explore them with new statistical methods and designs.
We designed our study to focus on local contexts and draw conclusions in aggregate at a semi-continental scale. Our qualitative results provide insight into individuals and their motivations at the local level, and our quantitative results provide evidence for the large-scale relationships between people, policy and place. Global environmental problems need action at multiple scales (Sterner et al. 2019) but solutions are often best implemented at local scales (Duarte et al. 2020). Our study illustrates the operation of virtuous cycles via people undertaking stewardship at their local coastal place, enabled by effective higher level policy.
Whilst we developed our model for broad application, our general language may need refinement to suit the prevailing terminology in other settings. We distinguish between local stewardship-arising from the motivations and actions of individuals and groups at a local level; and institutional stewardship-driven by higher level authorities through policy and regulation. These terms are in keeping with prior research such as the co-management of tropical marine social-ecological systems (Cinner et al. 2012), although the two forms of stewardship may be hybridised as combined customary and modern management institutions (Cinner and Aswani 2007). We believe further research is warranted to explore the concepts, framings and dynamics of virtuous cycles in other social-ecological contexts.
Conclusion
Today, human society and our biosphere urgently need to achieve sustainability through effective earth stewardship (Steffen et al. 2011). In this study, we conclude that effective institutional and local stewardship can drive iterative virtuous cycles of improving social and ecological outcomes, which over time may progress towards sustainability. Pursuit of the UN 2030 Agenda requires such virtuous cycles to conserve and protect wildlife and the environment, maintain natural resources and preserve the long-term integrity of our social-ecological systems. This depends on more positive, generally applicable ways of conceptualising social-ecological systems, such as the Sustainability Spiral. Such framings are essential to engage and facilitate collaborations with stakeholders, enable transformation, and pursue mutually desirable outcomes for a sustainable future.

Fig. 4 caption: Linear model plots of quantitative empirical support for the virtuous cycle of the Sustainability Spiral, with bands showing standard error; a) higher participant stewardship levels empowered by fully protected areas (p = 0.027); b) higher maximum stewardship levels at sites with more marine life-related signage (p = 0.034); c) higher big fish biomass at sites that have high maximum local stewardship, grey: all sites and red: fully protected areas only (p all sites = 0.03 and p fpa = 0.009); d) higher big fish biomass in fully protected areas (p = 0.05); e) higher stewardship levels at sites with more diverse habitat (p = 0.033); f) higher stewardship activity when participants perceived better marine life at a site (p = 0.007); and g) significant results mapped onto the Sustainability Spiral conceptual diagram by their panel letters (a-f).

[Table 4 fragment: examples of policy-related explanations offered by participants regarding their stewardship actions and why they were undertaken, with participant reference numbers in brackets:
- Empowerment (n = 35): "Because it's a sanctuary zone. It's connected-shore to marine life" (435); "Because (the fully protected area) should be enforced; we want to have places protected" (235); "Our sanctuaries are so small, any transgression is a big issue" (183); "Because it's precious to have a marine park so close to the city, with wild animals in it" (256, in a fully protected area); "To preserve the area, because it's a fish sanctuary" (272)
- Informal enforcement (n = 92): "(The partially protected area) should be a sanctuary… at the moment the only protections the marine life receive… is from locals defending fish, gastropods and weed themselves" (19); "We always call out if we see fishing boats; I have the fisheries hotline on speed dial. I often take photos of boats fishing in the reserve." (98); "A fishing boat came in early in the morning, they were standing there in the middle of the bay, blatantly fishing. We yelled at them and told them there were big fines, and they moved" (2); "I ask people not to spear the fish" (278); "You can only go down and try to explain the situation to people, even if they give you a gobful." (17); "A family was fishing off rocks in the reserve, I spoke to them" (282)]
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Economics"
] |
N=1 super sinh-Gordon model with defects revisited
The Lax pair formalism is used to discuss the integrability of the N = 1 supersymmetric sinh-Gordon model with a defect. We derive the associated defect matrix for the model and construct the generating functions of the modified conserved quantities. The corresponding defect contributions to the modified energy and momentum of the model are explicitly computed.
Introduction
Defects in integrable classical field theories have been an intensively studied topic in recent years [1]-[15]. They were initially introduced in [1, 2] as internal boundary conditions described by a local Lagrangian density located at a fixed point, and it was shown for several types of bosonic field theories that these conditions correspond to frozen Backlund transformations and preserve the integrability of the defect models. When the fields on either side of the defect interact with each other only at the defect point, the defect is referred to as type-I. However, it was shown in [3] that additional degrees of freedom associated to the defect itself can also be introduced through auxiliary fields which exist only at the defect point. These kinds of defects are named type-II defects.
More recently, a different and rather systematic approach to defects in classical integrable field theories was also suggested [4]. The inverse scattering method formalism is used and the defect conditions are encoded in a defect matrix. This approach provided an elegant way to compute the modified conserved quantities, ensuring integrability. Within this framework, the generating function for the modified conserved charges for any integrable evolution equation of the AKNS scheme was computed, and the type-II Backlund transformations for the sine-Gordon and Tzitzéica-Bullough-Dodd models were also recovered [7]. The massive Thirring [8] and the supersymmetric Liouville [9] models have also been considered.
On the other hand, the question of involutivity of the modified conserved charges has been addressed for several models [10]-[13], by using essentially the algebraic framework of the classical r-matrix approach, and a modified transition matrix to describe integrable defects.
The presence of integrable defects in the N = 1 supersymmetric sinh-Gordon (sshG) model was discussed in [14], within the Lagrangian formalism and Backlund transformations. The authors introduced a "partially" type-II defect in the model, since only a fermionic auxiliary field appears in the defect Lagrangian, and integrability was considered in terms of the zero curvature representation.
The purpose of this paper is to study the integrability of the N = 1 sshG model from the defect matrix approach, in order to establish the existence of a generating function for an infinite set of modified conserved quantities. In section 2, we briefly review the defect N = 1 sshG model, present the supersymmetry transformations that leave the total (bulk + defect) action invariant, and derive the defect contribution to the supercharge from the Lagrangian formalism. In section 3 we present the Lax pair formalism and derive explicitly the defect matrix for the N = 1 sshG model. In section 4 we introduce the associated linear problem to discuss the conservation laws, and then compute the corresponding defect contributions to the modified conserved quantities. In section 5 we discuss further solutions for the defect matrix, leading, in the special case of the pure bosonic limit, to a type-II Backlund transformation. This fact motivates the derivation of a type-II Backlund transformation for the N = 1 supersymmetric sinh-Gordon system.
Review of the Lagrangian description
The Lagrangian density describing the N = 1 sshG model with type-I defects can be written as in [14], where φ_p are real scalar fields, and ψ_p, ψ̄_p are the components of Majorana spinor fields in the regions x < 0 (p = 1) and x > 0 (p = 2) respectively, with the corresponding potentials given below.
The bulk field equations are given in (2.7), and the defect conditions at x = 0 by (2.8)-(2.13). Here we have a fermionic degree of freedom f_1 at the defect, which anticommutes with the fields ψ_p and ψ̄_p. If we also consider the x-derivative of f_1, eqns. (2.8)-(2.13) become the Backlund transformations for the supersymmetric sinh-Gordon model [16]. From this Lagrangian density we derive the canonical momentum (2.14), and by computing its time derivative it was shown in [14] that the modified momentum (2.15) is conserved after properly using the defect conditions (2.8)-(2.12). Analogously, for the energy, the modified conserved energy is given by (2.17). In addition, the total action (bulk + defect) for the defect sshG model is invariant under the supersymmetry (susy) transformations, together with the susy transformation of the auxiliary fermionic field (see appendix A). After introducing the defect at x = 0, the corresponding supercharges can be written down, and by taking their time derivatives and considering the defect conditions (2.8)-(2.12), we find that the modified conserved supercharges take the form given in [17]. The light-cone coordinates are taken to be x_± = x ± t, and therefore ∂_± = (1/2)(∂_x ± ∂_t).
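For reference, these conventions immediately imply the following operator identities, which follow directly from the stated definitions:

```latex
\partial_+ + \partial_- = \partial_x , \qquad
\partial_+ - \partial_- = \partial_t , \qquad
\partial_+ \partial_- = \tfrac{1}{4}\left(\partial_x^2 - \partial_t^2\right) .
```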
Lax formulation and defect matrix

The N = 1 sshG equation can be derived as the compatibility condition of the following linear system of first-order differential equations, where Ψ(x_±, λ) is a three-component vector-valued field, λ is the spectral parameter, and the Lax connections A_± are 3 × 3 graded matrices valued in the sl(2, 1) Lie superalgebra, which can be written in the following form,
Then, from the zero-curvature condition, or Zakharov-Shabat equation, we recover the sshG field equations (2.7). Now, to derive the defect matrix via gauge transformations, we consider the existence of a graded matrix K connecting two different field configurations, namely Ψ^{(2)} = K(λ)Ψ^{(1)}, satisfying the following equations, where A^{(p)}_± represents the Lax connections depending on the respective fields φ_p, ψ_p, and ψ̄_p. Let us consider the following ansatz for the λ-expansion of the matrix K, with α_{ij}, β_{ij}, and γ_{ij} being the entries of 3 × 3 graded matrices. First of all, considering the λ-expansion in order to solve the differential equations (3.5), we find that the λ^{+3/2} and λ^{+1} terms lead to a first set of constraints.
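The explicit ansatz did not survive extraction. A form consistent with the half-integer powers of λ referred to in the text would be the following (our assumption about the normalisation, not necessarily the authors' exact expression):

```latex
K(\lambda) \;=\; \lambda^{1/2}\,\alpha \;+\; \beta \;+\; \lambda^{-1/2}\,\gamma ,
\qquad
\alpha = (\alpha_{ij}), \quad \beta = (\beta_{ij}), \quad \gamma = (\gamma_{ij}) .
```

Substituting such an expansion into the gauge-transformation equations (3.5) and matching powers of λ then produces the constraints quoted below, order by order.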
By introducing suitable parametrizations, and since b_{11} = 0, we also find that the set of elements {α_{11}, α_{22}, α_{23}, α_{31}, β_{11}, β_{13}, β_{22}, β_{32}, β_{33}} completely vanishes and therefore does not contribute at all to the K matrix. Finally, if we consider equation (B.30), involving the element α_{33}, we obtain an equation from which we conclude that α_{33} = 2c_{11}ω^{-2}. Therefore, we have found a suitable solution for the defect matrix K, which can be written in the following form, where ω represents the Backlund parameter and c_{11} is a free constant parameter. Now, as was proposed in [8], the defect matrix will be used to derive modified conserved quantities. To do that, in the next section we will derive the bulk energy and momentum by using the Lax formalism.
Conservation laws
In this section we will construct explicitly generating functions for an infinite set of independent conserved quantities for the sshG model in the bulk theory, as well as derive the corresponding modified conserved quantities arising from the defect contributions in the defect theory, by using the Lax approach.
Associated linear problem and conserved quantities
Let us consider the associated linear problem for the sshG model in the (x, t) coordinates as follows, where the vector-valued function Ψ has the form Ψ = (Ψ_1, Ψ_2, εΨ_3)^T, with bosonic components Ψ_i and ε a Grassmannian parameter. Now, as was claimed in [18], it is possible to construct a generating function for the conservation laws by defining a set of auxiliary functions Γ_{ij}[Ψ] = Ψ_i Ψ_j^{-1}, for i, j = 1, 2, 3. Then, considering the linear system (4.1) and (4.2), we find that the j-th conservation equation can be written in terms of the functions Γ_{ij}, which satisfy the following Riccati equations. The corresponding j-th generating function of the conserved quantities then reads as in (4.6). In order to derive the conserved quantities explicitly, it is necessary to introduce λ-expansions of the functions Γ_{ij} in positive and negative powers of the spectral parameter, and to solve the Riccati equations recursively for each coefficient. As a consequence, an infinite set of conserved quantities appears from the generating function (4.6). In particular, to derive the energy and momentum we will consider the λ^{1/2}-terms of the charges I_1 and I_2.
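Because the displayed equations were garbled in extraction, it may help to recall the generic structure of this construction [18]. Writing the linear problem componentwise as ∂_x Ψ = A_x Ψ and ∂_t Ψ = A_t Ψ, and treating all components as commuting for illustration (the graded case introduces extra signs), one has

```latex
\partial_x \ln \Psi_j = (A_x)_{jj} + \sum_{i \neq j} (A_x)_{ji}\,\Gamma_{ij},
\qquad
\partial_t \ln \Psi_j = (A_t)_{jj} + \sum_{i \neq j} (A_t)_{ji}\,\Gamma_{ij},
```

and cross-differentiation yields the j-th conservation equation

```latex
\partial_t \Big[ (A_x)_{jj} + \sum_{i \neq j} (A_x)_{ji}\,\Gamma_{ij} \Big]
\;=\;
\partial_x \Big[ (A_t)_{jj} + \sum_{i \neq j} (A_t)_{ji}\,\Gamma_{ij} \Big],
```

while the Riccati equations follow from differentiating Γ_{ij} = Ψ_i Ψ_j^{-1} directly,

```latex
\partial_x \Gamma_{ij} \;=\; \sum_{k} (A_x)_{ik}\,\Gamma_{kj} \;-\; \Gamma_{ij} \sum_{k} (A_x)_{jk}\,\Gamma_{kj},
\qquad \Gamma_{jj} \equiv 1 .
```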
Firstly, let us consider the explicit form of the Riccati equations for j = 1, namely (4.7) and (4.8). Expanding Γ_{12} and Γ_{31} as λ → 0 (4.9), and inserting these expansions into the Riccati equations (4.7) and (4.8), we find the first coefficients. Thus, we find from the first generating function of conserved quantities (4.6) that the charge I_1 contains the topological charge (4.15) at zero order, and the first non-trivial conserved quantity is given by the λ^{1/2}-term. Now, if we instead consider the expansion of Γ_{12} and Γ_{31} as λ → ∞ and introduce these results into eq. (4.14), we find in this case that the zero-order term vanishes and the first non-vanishing conserved quantity is given by the λ^{-1/2} order. Let us now consider the respective Riccati equations for j = 2 (4.23). As was done before, we expand Γ_{12} and Γ_{32} as λ → 0. From the generating function (4.6) for j = 2, we find again that the zero order gives the topological term, but in this case with an opposite sign, and the first non-trivial conserved quantity is given by the λ^{1/2} order. From eq. (4.28) we obtain that the zero-order term of the λ → ∞ expansion vanishes and the first non-vanishing conserved quantity follows.
Notice that by adding (4.15) and (4.29) we find that the topological charge of the model is zero [19, 20]. On the other hand, we can introduce a new set of conserved quantities, defined in (4.37). Then, the energy and momentum for the sshG model in the bulk theory are recovered in this formalism through suitable combinations of the conserved quantities defined above. In the next subsection we will derive explicitly the corresponding defect contributions to the modified energy and momentum after introducing the defect into the formalism.
Defect contributions
So far we have considered the derivation of conserved quantities by using the Lax formalism in the bulk theory. In this section we will consider the modification of those quantities after introducing the defect into the formalism. To do that, let us recall how to construct the corresponding modified conserved quantities from the defect matrix. As explained in [8], a defect placed at x = 0 can be introduced in the generating functions of conserved quantities in the form (4.41), where (A^{(p)}_x)_{ij} with p = 1, 2 are the components of the x-part of the Lax connections (4.2) describing each associated linear problem in the regions x < 0 and x > 0 respectively, and Γ^{(p)}_{ij} = Ψ^{(p)}_i (Ψ^{(p)}_j)^{-1} are their corresponding sets of auxiliary functions derived in the last section. After taking the time derivative of (4.41), we find that the modified quantities give the defect contributions to the j-th generating function of conserved quantities, whose precise form depends on the components of the defect matrix. From the resulting formula (4.42) we will derive two different sets of defect contributions, by considering the expansion of the auxiliary functions in positive and negative powers of λ, respectively.
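The displayed formulas (4.41) and (4.42) were lost in extraction; schematically, following [8], the modified generating function has the structure below (our reconstruction of the generic form, not the paper's exact expression):

```latex
\mathcal{I}_j \;=\; \int_{-\infty}^{0} dx \Big[ \big(A^{(1)}_x\big)_{jj} + \sum_{i \neq j} \big(A^{(1)}_x\big)_{ji}\,\Gamma^{(1)}_{ij} \Big]
\;+\; \int_{0}^{\infty} dx \Big[ \big(A^{(2)}_x\big)_{jj} + \sum_{i \neq j} \big(A^{(2)}_x\big)_{ji}\,\Gamma^{(2)}_{ij} \Big]
\;+\; D_j ,
```

where the defect term D_j is built from the entries of K and the auxiliary functions evaluated at x = 0, and is fixed by requiring d𝓘_j/dt = 0.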
Firstly, from the explicit form of the defect matrix (3.37), and the coefficients corresponding to the expansions in positive powers of λ, (4.10)-(4.13) and (4.25)-(4.27), we obtain a first set of defect contributions. Analogously, from the coefficients for the negative powers of λ, (4.18)-(4.20) and (4.32)-(4.35), we obtain a second set. Now, defining suitable combinations of these quantities, the defect contributions to the modified energy and momentum are recovered by adding and subtracting the expression in (4.48). The expressions derived from the Lagrangian formalism in (2.15) and (2.17) can be reached by noting that the auxiliary field f_1 introduced in (3.20) satisfies a set of relations at the defect point. Notice that eqs. (3.5) and (3.6) led to solutions that are only partially of type-II, in the sense that the bosonic part is of type-I (i.e., a supersymmetric extension of type-I with an auxiliary fermionic field). The question we raise is whether (3.6) generates more general solutions.
In order to face this problem, let us consider the pure bosonic limit, in which the fermions are set to zero. We shall see that in this case we obtain solutions of type-II. In such a limit the gauge potentials take a simplified form, and with the results above, (5.3), (5.4) and (5.6), we find the simple solutions (5.14) and (5.15). Introducing the auxiliary field Λ such that γ_{21} = c_{21} e^{Λ - φ_+}, equations (5.10)-(5.13) become (5.16)-(5.19). A compatible solution for (5.16)-(5.19) together with (5.14) and (5.15) is given in (5.20). The above system is identified as a type-II Backlund transformation for the bosonic sinh-Gordon model [3, 7], and the matrix K can now be written accordingly. This result for the pure bosonic case indicates that eqs. (3.5) and (3.6) may also generate type-II Backlund transformations for the supersymmetric system, which is a subject under our investigation.
A Supersymmetry of the N=1 defect sshG model

The action for the bulk N = 1 sshG model, defined by the Lagrangian (A.2), has N = 1 supersymmetry without topological charge [19, 20]. The supersymmetry transformation is given in (A.3)-(A.5), where ε and ε̄ are fermionic parameters. Under a general non-rigid susy transformation, i.e. with parameters ε(x, t) and ε̄(x, t), L_bulk changes by a total derivative if the conservation laws hold. Then, the associated bulk supercharges Q_ε and Q̄_ε̄ can be written as integrals of local fermionic densities. From these expressions it can be easily verified that {Q_ε, Q̄_ε̄} = 0. On the other hand, when considering a rigid (constant parameters) susy transformation, L_bulk again changes by a total derivative. It turns out that the bulk theory defined by the Lagrangian (A.2) is invariant under susy transformations. However, this is not necessarily true for the defect theory, and therefore
we should show that the presence of the defect does not destroy the supersymmetry of the bulk theory. In fact, in the defect theory the total action (left half-line + defect + right half-line) changes under the susy transformation (A.3)-(A.5) by surface terms. Then, as the right-hand side of this variation does not vanish immediately, we should show that the variation of the defect Lagrangian cancels out the surface terms and exactly restores supersymmetry; the defect potentials B_k, with k = 0, 1, enter this variation. It is important to take into account that all the fields appearing in the defect Lagrangian are evaluated at x = 0 and only depend on time. Now, let us first derive the corresponding susy transformation for the auxiliary fermionic field f_1. Applying the susy variation to (2.10), one of the defect equations involving f_1, we get one expression from the l.h.s.,
and from the r.h.s. we find another, where we have used eq. (2.11). Now, comparing the above results and using eq. (3.35), it can be checked that the susy transformation of the fermionic field f_1 follows. Next, we compute the supersymmetry variations of the bosonic terms of the defect Lagrangian (A.14), of the terms involving only fermionic fields (A.25), and of the defect potential B_1.
Now, by putting together all the variations obtained, we find after some algebra that the susy transformation of the defect Lagrangian exactly cancels the surface terms in (A.13). Then, we have shown that the defect sshG model has a well-defined N = 1 supersymmetry. This implies that there must be modified supercharges which are preserved. Let us compute the defect contribution for Q_ε. After the introduction of the defect at x = 0, we take the time derivative of the supercharge (A.29); the first term in the result vanishes after using the defect conditions (2.10) and (2.11). Then we find that the modified conserved supercharge can be written in the form Q = Q_ε + Q_D, with defect contribution Q_D. Analogously, for the supercharge Q̄_ε̄, the corresponding modified conserved supercharge is Q̄ = Q̄_ε̄ + Q̄_D, where the defect contribution Q̄_D is given in (A.32).
B Calculation of the defect matrix
The defect matrix K is directly derived by solving the differential equations (3.5) and (3.6), with the Lax connections as given in section 3. To find a solution for the defect matrix K, we propose the λ-expansion introduced above. Now, considering the equations term by term, we find a set of constraints coming from the λ^{±3/2} and λ^{±1} terms, which we present explicitly as follows. λ^{+3/2}-terms:
"Physics"
] |
Communication Audit of Digital Entrepreneurship Academy of Human Resources Research Program and Development Agency of the BPSDMP Kominfo Surabaya in Pamekasan Region
The Digital Talent Scholarship (DTS) program by the BPSDM (Center for Human Resources Development and Research) Kominfo aims to enhance the skills and competencies of Indonesian human resources in the digital field. One such program is the Digital Entrepreneurship Academy (DEA), which trains individuals to accelerate the growth of digital technology in entrepreneurship. This study aims to understand the DEA program training in Pamekasan Regency, conducted by BPSDMP Kominfo Surabaya, and its benefits. The qualitative research methods used are interviews, observation, documentation, and literature studies. Results show that almost 100% of participants in the DEA program experienced success in business development through networking, both offline and online. The study concludes that progress in the DEA program training has been significant, particularly during the COVID-19 pandemic, which shifted many activities, including entrepreneurial activities, from conventional to digital channels. Monitoring is necessary to keep track of the program's resources, expected outputs, and constraints. The research aims to provide an overview and study of the digital entrepreneurship support provided by the government to the community.
Introduction
A company, organization, or institution is made up of multiple people, each of whom has different interests. Any interaction in a given context requires communication, because it is only through communication that one can influence an individual's behavior. A communication audit, which is helpful for assessing the entire course of communication, is required to maintain the quality of communication within an institution or organization (Sudarmanto et al., 2023). Communication audits involve the comprehensive and methodical analysis, evaluation, and quantification of various communication-related aspects (Suwatno, 2019). Empirical evidence indicates that, despite the formulation of "perfect" internal communication policies, communication frequently fails to function as intended, which creates the need for a communication audit. Communication inefficiencies are anticipated to have a significant impact on organizational performance, and executives in organizations feel that regular reviews of internal communication procedures are necessary. Reviewing internal communication practises is the most suitable method for determining how effective they are (Hardjana, 2014).
Nevertheless, despite its significance, the idea of communication auditing did not take off right away; only a small group of professionals began using communication auditing in the late 1960s. The idea is often thought to be impractical, which explains its limited acceptance. A communication audit is regarded as complex because it entails a thorough analysis of every aspect of communication, including the source, meaning and message, receiver, medium, process, impact, and context of the communication. Conducting a communication audit therefore requires a combination of quantitative and qualitative research methods.
Given the significance of communication audits, which is not matched by the level of awareness among experts, the researchers are interested in examining how a communication audit can be conducted at a government agency, the BPSDMP (Center for Human Resources Development and Research), in one of its programmes, since this programme makes use of widely used information and communication technology and involves the community. The programme, introduced by the Ministry of Communication and Information Technology under the Digital Talent Scholarship (DTS) initiative, is known as the Digital Entrepreneurship Academy (DEA) (Suyudi, 2022). It was designed to develop human resources capable of accelerating the development and application of digital technology in the field of entrepreneurship, in order to strengthen the digital economy (Humas Kominfo, 2022). The Human Resource Research and Development Agency (BPSDMP), global tech company partners, colleges, and local start-ups collaborated to create the programme, which started in 2020. The programme's primary goal is to develop a group of young entrepreneurs with the abilities and know-how to successfully use technology, information, and communication.
The DEA programme also aims to increase the number of MSMEs that are aware of and adept at using the digital world to advance their entrepreneurial abilities. In accordance with the Small Enterprises Act, as specified in the Regulation of the Minister of State for Cooperatives and Small and Medium Enterprises of the Republic of Indonesia Number 2 of 2008, small businesses can become more resilient and independent through training and support from the government, the business community, and other sources. This is also consistent with the topics covered at the November 2022 G20 Presidency, which included the digital economy as one of the pillars of digital transformation. It is therefore envisaged that, thanks to the DEA programme, Indonesia will be able to produce a large number of creative digital entrepreneurs.
Based on the background information described above, the researcher concentrated on an issue pertaining to the communication audit process that took place during the Digital Entrepreneurship Academy (DEA) training programme in Pamekasan Regency.
Communication Audit
According to Joseph Kopec, a communication audit is the analysis of organisational communication, both internal and external, to obtain a general picture of communication needs, policies, actions, and capabilities, and to identify the data required to empower the organisation's management to make decisions about the organisation's future communication based on reliable and cost-effective information (Carter, 2007). This requires a detailed analysis of the organisation's communication practises, including what it says, how it says it, and to whom it says it. In this way, businesses are able to see clearly what they are doing, and decisions or policies can then be made using the communication audit data.
An effective communication audit requires the following elements in order to function: 1) the person or people with whom communication should take place; 2) the person or people with whom communication actually takes place; 3) the things that ought to be expressed; 4) guidelines for effective communication; and 5) the actual method of communication (Quinn & Hargie, 2004).
Purpose of Communication Audit
As previously discussed in relation to its definition and effective implementation, a communication audit has its own goals and justifications. These include: 1) to ascertain whether and where there are strengths or weaknesses in communication in relation to the topics, sources, and channels of communication; 2) to gauge the quality of information and communication relationships by appropriately assessing interpersonal trust; 3) to locate networks of informal operational communication and contrast them with official channels of communication; 4) to learn about the factors that lead to the formation of information-flow barriers and gatekeepers by contrasting these people with the roles they play in communication networks; 5) to recognise the various forms of constructive and destructive communication meetings and events, along with instances of each kind; 6) to specify the components, frequency, and quality of communication connections at the individual, group, or organisational levels; and 7) to provide recommendations for changes or enhancements that ought to be implemented (Suwatno, 2019).
Communication Audit Approach and Model
It is very important to keep in mind that the approach and model to be used will depend on the goals of the communication audit, as outlined above: 1) Conceptual approach: in the field of organisational communication, this approach addresses the effectiveness of communication systems or organisational performance. The first step is setting criteria for evaluating the organisation's performance, or gauging the degree to which the aims and objectives of the communication activities have been met. Stated differently, it assesses the degree to which the organisation has met its objectives. 2) The most popular approach is the survey approach, which treats the survey as a single tool. Homophily research is one example of this methodology; it gauges communication effectiveness by comparing the transmitter's and receiver's frames of reference. 3) The procedure approach gives top priority to the measurement tools of the communication audit process. This approach is complicated because it employs a team of auditors over an extended period of time who use a variety of measurement tools across the entire organisation (Suwatno, 2019).
In addition to the communication audit approaches mentioned above, there are communication audit models that fall into the following three categories: 1) The conceptual structure model is an organisational communication audit that aims to understand the relationship between work or implementation procedures (such as the use of communication networks and the application of communication policies), the organisational structure (which consists of work units, functional communication networks, communication policies, and activities), and the purpose or ultimate goal of organisational communication in achieving organisational goals. 2) The organisational profile model, a functional analysis model of organisational systems, aims to assess the current situation in order to identify errors that may be occurring within an organisation and devise solutions for them so as to improve organisational effectiveness. 3) The communication evaluation model involves analysing and evaluating communication practices and activities in a particular context. The data acquired can be used by management as a point of reference to enhance planning and control, as well as internal and external communication systems. Furthermore, it can assist in filling gaps that currently exist in communication systems (Suwatno, 2019).
Digital Entrepreneurship
Technological advancements have made it possible to do several things online, including entrepreneurship. Entrepreneurship is essentially the mindset of someone who conducts business, i.e., an entrepreneur; in other words, entrepreneurship leads to potential business activities (Munawaroh et al., 2016). The following strategies can help an entrepreneur overcome the challenges posed by digital transformation (Musnaini et al., 2020): 1) create a thorough digital strategy; 2) adopt appropriate hiring practices for HR; 3) make sufficient use of technology; 4) use data in real time; 5) establish efficient channels of communication between decision-makers and participants in the digital world; and 6) apply a proper risk matrix.
Training
The creation of optimal human resources requires well-matured training. The root word of training, which in the KBBI (the official Indonesian dictionary) means "learning and getting used to being able to do something," underlies the term; training, then, is the process of exercising those abilities. Training encourages people to develop new abilities for performing particular tasks and helps them understand the workplace. Consequently, it can be said that training is an endeavour to broaden a person's knowledge and skill set so that they can better adjust to their workplace. However, the community at large is not always included in discussions of training for staff members (Willson, 2020). According to Siagian (1985, cited in Saktiarsih, 2015), the community can benefit from training in the following ways: 1) helping the community meet its needs and improve its quality of life more quickly; 2) enhancing people's attitudes so that they can adapt to changes in their environment and make wise decisions; 3) boosting the innate desire to learn and developing a steadfast willingness to acquire new information and abilities; 4) growing a sense of self-worth and a stronger sense of unity within the community; 5) raising output in terms of both quantity and quality; 6) shortening the average time it takes for people to pick up new skills and reach new performance levels; 7) encouraging loyalty, cultivating a positive outlook, and enhancing teamwork; 8) fulfilling the prerequisites and standards of human resource management; 9) lowering the number and cost of workplace accidents; and 10) encouraging and supporting each person's personal development and progress.
Research Method
This study uses a qualitative methodology, specifically by outlining fundamental presumptions and cognitive principles at the outset and then employing systematic techniques for data collection and analysis to provide clarifications and arguments (Wijaya & Sirine, 2016). Research conducted with a qualitative methodology produces descriptive data in the form of verbatim or written accounts of the subjects and behaviours observed (Riyadi et al., 2019). Since the researchers are the primary instrument, a thorough understanding and analysis of field phenomena are required. Utilising a case study methodology, the research thoroughly examines, characterises, and describes individuals, groups, programmes or activities, organisations, or events that take place in society (Hariwijaya, 2016).
Observation
The purpose of this method is to gather accurate information about the Digital Entrepreneurship Academy programme, which aims to develop human resources capable of accelerating the advancement and application of digital technology in the field of entrepreneurship and enhancing the digital economy. The DEA training facility in Pamekasan is the subject of the observations.
Interview
Muhammad Khusaeri, who oversees the DEA programme in the Pamekasan region, was interviewed. This informant was chosen for his extensive background in managing the DEA programme's operations in that location, and the DEA programme's communication process was investigated further with his help. Subsequently, data regarding the advantages of DEA training were acquired from additional sources, specifically the trainees.
Documentation
This refers to a data collection approach that involves the examination and analysis of written materials, such as books, publications, regulations, and similar sources. In accordance with the purpose of the research, Mamik (2015) states that it is essential to examine documentation pertaining to the implementation of the DEA by the Human Resources Research and Development Agency (BPSDMP) Surabaya.
Data Reduction
Data collected from observations, interviews, and documentation were selected, focused, and condensed. Data reduction is necessary so that the data remain in line with the research focus, namely communication audits and the training benefits of the DEA.
Data Presentation
Reduced data are presented as comprehensive narratives and descriptions, which can simplify decision-making and action. To make the data descriptions clearer, more detailed, and easier to understand, they can be enhanced with matrices, figures, tables, and other visual aids. The information presented includes participant benefits from DEA training and the coordinator's descriptions of activities.
Drawing Conclusions
Information about DEA implementation by BPSDMP Surabaya gathered from documentation, observations, and interviews was analysed.
Results and Discussion
This discussion presents the results of the research on the assessment of communication practices in the Digital Entrepreneurship Academy programme at the Ministry of Communication and Information's Human Resources Research and Development Agency in Surabaya, with a focus on the Pamekasan region. The problem formulation and research focus described previously serve as the foundation for the analysis. In addition to interviewing informants, the researchers use these conversations to gather information and evaluate their findings, paying particular attention to problems and their solutions.
The effectiveness of the DEA programme's implementation in terms of human resources is demonstrated by the DEA instructors, all of whom have received training that enables them to address participants' needs and issues as they arise; this relates to how prepared they are to deal with problems. However, the technological infrastructure can also experience issues from time to time, such as server outages, which can complicate registration for participants. Participants wanted the DEA organisers to pay more attention to this (Pradinasari et al., 2023).
DEA participants in this activity are expected to expand their digital MSMEs both during and after the pandemic. It is therefore important to observe how entrepreneurs are adjusting to digital technology (Saktiarsih, 2015). According to the interview results, the majority of participants could use the application to sell digitally but had not yet recorded financial statements digitally, because the participants chose to record financial statements manually due to a variety of issues.
According to the information gleaned from the interviews, trainees in the DEA's ongoing training programmes face a number of challenges. The organisers, who are ultimately expected to solve these issues, are fully informed of them (Adriansyah & Rimadias, 2023).
Several factors contribute to achieving the functional communication audit objectives. These include with whom communication should be conducted, with whom communication is actually conducted, what should be communicated, how communication should be conducted, and how communication is actually conducted (Quinn & Hargie, 2004). As the programme's organiser, the informant Khusaeri should receive descriptions of the challenges faced by the participants. Based on the study's findings, the participants did tell the right person, the informant Khusaeri, about the challenges they faced. In terms of what needs to be discussed, participants shared the challenges they had faced. From the perspective of the ongoing programme, the training participants established effective communication by communicating challenges such as budgetary constraints and resource mismatches. The researchers observed this when speaking with the informant Khusaeri, who managed the budget well to meet trainees' needs in the classroom despite a shortage.
How communication needs to be conducted is evident from the research that was carried out: in order to assess the effectiveness of the material delivered, the DEA training providers administer pre- and post-tests. The informants have also mastered the use of internet platforms for business purposes, based on interviews with the elementary, MZ, and DK informants.
However, the communication audit indicates that while this programme appears to be successful overall, there are still issues to be resolved: some informants continue to use manual methods for records, such as financial statements, that ought to be kept digitally.
Conclusion
According to the qualitative analysis conducted, the primary objective of the DEA programme run by BPSDMP Surabaya in Pamekasan is to grow digital MSMEs, particularly in light of the COVID-19 pandemic, during which many activities, including entrepreneurial endeavours, transitioned from traditional to digital platforms. In general, the programme's implementation has been able to fulfil the requirements for carrying out the training.
This is shown by the fact that the challenges faced by trainees have found solutions. For instance, instructors receive training to address trainees' concerns, particularly those related to registration. And although issues with the technology infrastructure, such as server outages and other internet problems, have been resolved, problems still arise. The interview results revealed that the budget was managed effectively to meet the needs of the trainees while also expeditiously achieving the trainee target.
However, the DEA programme communication audit found that the training programme's implementation had not produced the best possible outcomes. Unresolved challenges that participants still face include handling financial statements manually.
The benefits of the programme are received differently by different trainees. Individuals have embraced digital technology to engage in digital entrepreneurship; however, some have refrained from using financial report applications because of a variety of issues. The participants' enthusiasm is demonstrated by their eagerness to grow their businesses and their requests for extra training based on individual needs.
Conducting a communication audit is invaluable for creating evaluative resources for an activity. In order to maximise the execution of public programmes, the performance of institutions and organisations can be improved through the evaluation of programme activities, which also acts as a benchmark for gauging the programmes' efficacy. It is expected that this study will be a useful tool or point of reference for researchers working on similar communication audits in the future, with particular attention within academic research to auditing, communication, investigation, and training. Several practical suggestions can be offered: 1) Organisers of DEA programmes should conduct communication audits grounded in recognised theoretical frameworks, with the goal of making sure that everyone is aware of the issues that need to be resolved.
2) Challenges and roadblocks encountered during DEA operations can be explored further in order to minimise issues and develop alternative solutions. 3) One way to show appreciation for the participants' enthusiasm is to organise follow-up advanced training sessions.
"Education",
"Business",
"Computer Science"
] |
Potential analysis of holographic Schwinger effect in the magnetized background
We study the holographic Schwinger effect with a magnetic field at RHIC and LHC energies by using the AdS/CFT correspondence. We consider both the weak and strong magnetic field cases, using the B ≪ T² and B ≫ T² solutions, respectively. First, we calculate the separating length of the particle pairs at finite magnetic field. It is found that for both the weak and strong magnetic field solutions the maximum value of the separating length decreases as the magnetic field increases, from which it can be inferred that virtual electron-positron pairs become real particles more easily. We also find that the magnetic field reduces the potential barrier and the critical field for the weak magnetic field solution, thus favoring the Schwinger effect. For the strong magnetic field solution, the magnetic field enhances the Schwinger effect when the pairs are perpendicular to the magnetic field, although the magnetic field increases the critical electric field.
Introduction
Virtual electron-positron pairs can be materialized under a strong electric field in quantum electrodynamics (QED). This non-perturbative phenomenon is known as the Schwinger effect [1]. This phenomenon is not unique to QED, but is a general feature of vacuum instability in the presence of an external field. The production rate in the weak-coupling and weak-field case was put forward in [1] and was extended to the arbitrary-coupling and weak-field case [2], where m and e represent the mass and charge of the particle pairs, respectively, and E is the external electric field. There exists a critical value E_c of the electric field at which the exponential suppression vanishes. In string theory, there also exists a critical value E_c which is proportional to the string tension [3,4]. By utilizing the AdS/CFT correspondence [5-9], the duality between string theory on AdS₅ × S⁵ space and the N = 4 super Yang-Mills (SYM) theory, one can study the Schwinger effect by this holographic method. In order to realize the N = 4 SYM system coupled with a U(1) gauge field, one can break the gauge group from U(N+1) to SU(N) × U(1) by using the Higgs mechanism. In the usual studies, the test particles are assumed to be in the heavy-quark limit. To avoid pair creation being suppressed by a divergent mass, the probe D3-brane is located at a finite radial position rather than at the AdS boundary. The mass of the particles is then finite, so that the production rate makes sense [10]. The production rate then takes a form with a critical field that agrees with the result from the Dirac-Born-Infeld (DBI) action, where λ is the 't Hooft coupling. Following the holographic approach, the potential analysis was performed in confining theories in [11,12]. The potential barrier can be regarded as a quantum tunneling process: the virtual particle pairs need to gain enough energy from an external electric field, and when the field reaches a critical value E_c the potential barrier vanishes. Then the production of real particle pairs is completely uncontrolled and the vacuum becomes totally unstable. The potential analysis provides a new perspective on the Schwinger effect. Much research has been carried out using the AdS/CFT correspondence. The production rate in confining theories was discussed in [13-15]. The universal nature of the holographic Schwinger effect in general confining backgrounds was analyzed in [16]. The Schwinger effect has also been investigated in AdS/QCD models [17,18]. The potential analysis in non-relativistic backgrounds [19] and in a D-instanton background [20] was discussed. The holographic Schwinger effect in de Sitter space has been studied in [21]. Other important results can be seen in [22-33].
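For reference, the standard forms of the production rates quoted above, written here as a sketch whose conventions may differ by numerical factors from those of Refs. [1,2,10]:

```latex
% Weak coupling, weak field (Schwinger [1]):
\Gamma \sim \exp\!\left(-\frac{\pi m^2}{eE}\right),
\qquad
% Arbitrary coupling, weak field (Ref. [2]):
\Gamma \sim \exp\!\left(-\frac{\pi m^2}{eE} + \frac{e^2}{4}\right),
% Holographic N=4 SYM result with a probe D3-brane at finite radius [10]:
\qquad
\Gamma \sim \exp\!\left[-\frac{\sqrt{\lambda}}{2}
  \left(\sqrt{\frac{E_c}{E}} - \sqrt{\frac{E}{E_c}}\right)^{\!2}\right],
\qquad
E_c = \frac{2\pi m^2}{\sqrt{\lambda}} .
```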
Heavy-ion collisions at RHIC and LHC produce strong electromagnetic fields. As a result, studying the Schwinger effect in the strong magnetic fields (of order m_π² to 15 m_π²) created at RHIC and LHC [34-38] is the main motivation of this paper. The strong magnetic fields may provide us with some different views of the vacuum structure, and we expect the Schwinger effect may be observed through heavy-ion collision experiments in the future. The magnetic field is expected to remain large enough when the QGP forms, although it decays rapidly after the collision [39,40]. It has significant implications for the QCD matter near the deconfinement transition temperature [41] and for the QCD phase structure [42,43]. This expectation has led to in-depth research on QCD in magnetized backgrounds. The asymptotically AdS₅ magnetic brane solutions were constructed in [44,45] in the Einstein-Maxwell theory, which is dual to the N = 4 SYM theory. The chiral magnetic effect has been studied in [46,47]. For (inverse) magnetic catalysis, see [48-57], and for holographic energy loss in a magnetized background, see [58]. The magnetic field also has an influence on early-universe physics [59,60].
Hence, we study the holographic Schwinger effect in the 5-dimensional Einstein-Maxwell system with a proper magnetic field range [49], as produced in non-central heavy-ion collisions at RHIC and LHC energies. This may give us some inspiration for studying the Schwinger effect through experimental results. The production rate of the Schwinger effect in the presence of electric and magnetic fields was discussed in [25]. One way to turn on magnetic fields is to consider a circular Wilson loop under parallel electric and magnetic fields. Another way is to utilize circular Wilson loop solutions depending on additional parameters that are related to the magnetic fields. However, these methods of adding a magnetic field neglect the magnetic effect on the geometry of the background. In this paper we incorporate a magnetic field through the magnetized Einstein-Maxwell system. With this magnetized background, we study the holographic Schwinger effect with a magnetic field by using the AdS/CFT correspondence. The organization of the paper is as follows. In Sect. 2, we introduce the 5-dimensional Einstein-Maxwell system with a magnetic field. In Sect. 3, we study the potential analysis in the magnetized background with the B ≪ T² solutions. In Sect. 4, we discuss the potential analysis when B ≫ T². The discussion and conclusion are given in Sect. 5.
Background geometry
The gravity background with a magnetic field was introduced in the 5-dimensional Einstein-Maxwell system using the AdS/QCD model [45], where the action involves g, the determinant of the metric g_MN, and R, G_5, and F_MN, which are the scalar curvature, the 5D Newton constant, and the U(1) gauge field strength tensor, respectively. L is the AdS radius and we set it to 1. As discussed in [49], one turns on a bulk magnetic field in the x₃-direction, and the black hole metric takes a form in which r denotes the radial coordinate of the 5th dimension. The magnetic field breaks the rotation symmetry and allows us to analyze anisotropic cases because the metric element q(r) is not equal to h(r); the anisotropy is induced by the magnetic field [61,62]. The anisotropic direction is along the x₃-direction in this article. The perturbative solutions of this black hole metric work well when B ≪ T². Note that the physical magnetic field B_phys is related to the bulk magnetic field B by B_phys = √3 B. The Hawking temperature is fixed by the black-hole horizon r_h. In this article, we use this Einstein-Maxwell system and extend it to study the holographic effect of the magnetic field on the Schwinger effect.
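As a point of reference, a hedged sketch of the action, metric ansatz, and temperature referenced above; the exact normalization of the Maxwell term and of the metric functions is fixed in [45,49] and may differ from the conventions below:

```latex
% Einstein-Maxwell action and anisotropic metric ansatz (sketch):
S = \frac{1}{16\pi G_5}\int d^5x\,\sqrt{-g}
    \left(R + \frac{12}{L^2} - F_{MN}F^{MN}\right),
\qquad
ds^2 = r^2\!\left(-f(r)\,dt^2 + q(r)\left(dx_1^2 + dx_2^2\right)
       + h(r)\,dx_3^2\right) + \frac{dr^2}{r^2 f(r)},
% with Hawking temperature from the surface gravity at the horizon:
\qquad
T = \frac{r^2 f'(r)}{4\pi}\bigg|_{r=r_h}.
```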
Potential analysis with weak magnetic field B ≪ T² solutions
Since the magnetic field is along the x₃-direction, it is reasonable to consider test particle pairs both transverse and parallel to the magnetic field. From this point of view, we perform the potential analysis for the two cases in the magnetized background.
Transverse to the magnetic field
We first study the potential analysis with the test particle pairs separated in the x₁-direction, which means the particle pairs are transverse to the magnetic field, and parameterize the coordinates accordingly. Using the Euclidean signature, the Nambu-Goto action is written in terms of g_αβ, the determinant of the induced metric, and the string tension T_F = 1/(2πα′), where g_μν denotes the brane metric and X^μ the target space coordinates. The induced metric then follows, with ṙ = dr/dσ. Since the Lagrangian density L does not depend on σ explicitly, a conserved quantity is obtained, which, using the boundary condition at the probe D3-brane located at the finite radial position r = r_0, fixes the conserved quantity C.
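For orientation, the standard Nambu-Goto ingredients used in this construction, written as a sketch (the explicit metric functions are those of the magnetized background above):

```latex
S_{NG} = T_F \int d\tau\, d\sigma\, \sqrt{\det g_{\alpha\beta}},
\qquad T_F = \frac{1}{2\pi\alpha'},
\qquad
g_{\alpha\beta} = g_{\mu\nu}\,
  \frac{\partial X^\mu}{\partial\sigma^\alpha}\,
  \frac{\partial X^\nu}{\partial\sigma^\beta}.
% Since the Lagrangian has no explicit sigma dependence, the quantity
% C = L - \dot{r}\,\partial L/\partial\dot{r} is conserved along the string.
```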
The conserved quantity C can then be expressed explicitly. Plugging Eq. (18) into Eq. (16), one obtains ṙ, and by integrating Eq. (19) one gets the separation length x_⊥ of the test particle pairs in terms of the dimensionless parameter a = r_c/r_0. By using Eqs. (14) and (19), the sum of the Coulomb potential and static energy can be written down. The critical field is obtained from the DBI action in the Lorentzian signature, with the D3-brane tension T_D3. From Eq. (5), the induced metric G_μν is read off. Then, considering F_μν = 2πα′ F_μν [63] with the electric field E along the x₁-direction [12], and plugging Eq. (27) into Eq. (23), one obtains the DBI action at r = r_0, the location of the D3-brane. Requiring Eq. (28) to be well defined yields the critical field E_c; in Eq. (30) one can see that the critical field depends on the magnetic field. We also introduce the dimensionless parameter α = E/E_c.
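The DBI ingredients take their standard form; a sketch, assuming E lies along x₁ and with normalizations following [12,63]:

```latex
S_{DBI} = -T_{D3}\int d^4x\,
  \sqrt{-\det\left(G_{\mu\nu} + 2\pi\alpha'\,F_{\mu\nu}\right)},
\qquad
T_{D3} = \frac{1}{g_s\,(2\pi)^3\,\alpha'^2}.
% Reality of the square root at r = r_0 then bounds the electric field:
\qquad
E \le E_c = \frac{1}{2\pi\alpha'}\,
  \sqrt{-G_{tt}\,G_{x_1 x_1}}\;\bigg|_{r=r_0}.
```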
Parallel to the magnetic field
We now consider the test particle pairs separated in the x₃-direction, which means the particle pairs are parallel to the magnetic field, and parameterize the coordinates accordingly. The separation length x versus the parameter a = r_c/r_0 in different situations is depicted in Figs. 1 and 2. First, we note that there are two possible U-shaped string configurations, similar to the heavy-quark limit [9,64,65]. The U-shaped string remains unchanged at vanishing temperature for all separation distances, while at finite temperature the U-shaped string exists only at large a and becomes unstable at small a. We take the stable branch, corresponding to large values of a, in the potential analysis. In our numerical computation, we set T_F and r_0 as constants for simplicity. Next, from these two figures, we can see that the maximum value of the distance decreases as the temperature and magnetic field increase. Thus we can infer that the Schwinger effect happens more easily at larger temperature and magnetic field. The sum of the Coulomb potential and static energy at finite temperature in the magnetized background then yields the total potential V_tot. The shapes of the total potential V_tot with respect to the separation length x for various α at T = 0.25 GeV are plotted in Fig. 3. We find that the potential barrier decreases with increasing external electric field and vanishes at the critical field. When α < 1, the potential barrier is present and pair production can be explained as a tunneling process. When α > 1, the particles are produced more easily as the external electric field increases; the vacuum becomes extremely unstable and the production of pairs is explosive. The result agrees with the shapes of the potential for various values of E_c in [12].
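Schematically, the total potential used in Figs. 3-5 combines the pair's energy with the work done by the external field; with α = E/E_c,

```latex
V_{tot}(x) = V_{CP+E}(x) - E\,x , \qquad E = \alpha\,E_c ,
```

where V_{CP+E} denotes the sum of the Coulomb potential and static energy computed above.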
The effect of the magnetic field on the total potential at T = 0.3 GeV is studied in Fig. 4. We find in panel (a) that the magnetic field reduces the height and width of the potential barrier and favors the Schwinger effect. We also plot E_c versus B in panel (b): E_c decreases as the magnetic field increases, so that the Schwinger effect occurs more easily. This result agrees with the finding of panel (a). The Schwinger effect is more pronounced when the pairs are perpendicular to the magnetic field than in the parallel case.
The relationship between the total potential and the temperature at B = 0.01 GeV² is analyzed in Fig. 5. One can see in panel (a) that the potential barrier decreases with increasing temperature. It is found in panel (b) that the temperature also reduces the critical electric field E_c and thus favors the Schwinger effect.
Potential analysis with strong magnetic field B ≫ T² solutions
In this section, we discuss the Schwinger effect for the strong magnetic field case with B ≫ T². In [44], the BTZ × T² black hole solution for B ≫ T² was obtained; the magnetic field is along the x₃-direction in this black hole, and the Hawking temperature follows from the horizon. When the particle pairs are separated in the x₁-direction, the pairs are perpendicular to the magnetic field. The electric field E is along the x₁-direction, and the critical field E_c and total potential V_tot follow accordingly. The separation length x versus the parameter a in different situations is plotted in Fig. 6. We find that the maximum value of the distance decreases with increasing magnetic field, which is consistent with the results of Fig. 1. The shapes of the total potential V_tot versus the separation length x for various α at T = 0.15 GeV are plotted in Fig. 7. When α < 1, the Schwinger effect cannot occur freely; the potential barrier decreases as the external electric field increases. When α ≥ 1, the production of pairs is unsuppressed.
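For reference, the BTZ × T² solution of [44] is usually quoted in the form sketched below; we have not re-verified the numerical coefficients against [44], so they should be treated as indicative:

```latex
ds^2 = -3\left(r^2 - r_h^2\right)dt^2
     + \frac{dr^2}{3\left(r^2 - r_h^2\right)}
     + 3 r^2\, dx_3^2
     + \frac{B}{\sqrt{3}}\left(dx_1^2 + dx_2^2\right),
\qquad
F = B\, dx_1 \wedge dx_2 ,
\qquad
T = \frac{3 r_h}{2\pi} .
```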
In Fig. 8, we plot E_c against B at T = 0.15 GeV and find that E_c increases with B_⊥, which is consistent with the results in [17,25] but differs from our result for the weak magnetic field shown in Fig. 4. The reason may lie in the different ways of turning on the magnetic field: in this paper, the magnetic field affects the geometry of the background and thereby influences the potential barrier. Moreover, we find that high temperature also reduces E_c, consistent with the finding in Fig. 5 for the weak magnetic field case.
The effect of the magnetic field on the total potential at T = 0.15 GeV for different external electric fields is studied in Fig. 9. When α = 0.9, the magnetic field enhances the total potential at small distances x, but its effect on the width of the potential barrier is more prominent at large distances: the magnetic field reduces the width of the potential barrier and enhances the Schwinger effect at large x even though it raises E_c. When α = 1.0, the magnetic field reduces the width of the potential barrier markedly and favors the Schwinger effect.
It should be mentioned that the magnetic field has no effect on the separation length or on the sum of the Coulomb potential and static energy when the pairs are parallel to the magnetic field. In this case, E_c increases with the magnetic field and the Schwinger effect is suppressed.
Conclusion and discussion
In this paper, we have studied the potential analysis in the 5-dimensional Einstein-Maxwell system with magnetic fields corresponding to RHIC and LHC energies, since heavy-ion collisions at RHIC and LHC produce strong electromagnetic fields. The strong magnetic fields may provide some different views of the vacuum structure, and we expect that the Schwinger effect could be observed through heavy-ion collisions in the future.
The separation length between test particle pairs, computed using a probe D3-brane at a finite radial position, was discussed in this article. We considered test particle pairs both transverse and parallel to the magnetic field. We find that the separation length decreases with increasing magnetic field and temperature.
We calculated the critical electric field via the DBI action and derived the formula for the total potential, which allowed us to perform the potential analysis in the magnetized backgrounds. It is found that both the magnetic field and the temperature reduce the potential barrier and the critical field for the weak magnetic field B ≪ T² solutions, thus enhancing the Schwinger effect; that is, the magnetic field and the temperature increase the production rate of real particle pairs. For the strong magnetic field case with B ≫ T² solutions, when the pairs are perpendicular to the magnetic field, the magnetic field also enhances the Schwinger effect even though it increases the critical electric field, since the magnetic field reduces the width of the potential barrier and raises the potential at larger distances.
We expect that the nontrivial magnetic field effects on the Schwinger effect in the magnetized background could provide some inspiration for QCD with a strong electric field. Moreover, the production rate in the Einstein-Maxwell-dilaton system in a holographic QCD model may be worth investigating [66-69]. We hope to report on these directions in the future.
Acknowledgements This work is in part supported by the NSFC Grant Nos. 11735007, 11890711.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: This is a theoretical study and there are no experimental data in this paper.]
"Physics"
] |
Geometric Shaping for Distortion-Limited Intensity Modulation/Direct Detection Data Center Links
Intra-data center links are subject to transmission impairments that pose challenges for efficient scaling of per-wavelength data rates beyond 100 Gb/s. Limited electrical and optical component bandwidths, limited data converter resolution, and direct detection induce significant signal-dependent distortion, degrading receiver sensitivity (RS) and chromatic dispersion tolerance. In this paper, we present a geometric shaping (GS) scheme that optimizes transmitted intensity levels based on symbol-error statistics observed at the receiver. The proposed GS scheme adjusts these levels to achieve substantially equal symbol-error probabilities at all decision thresholds. The scheme enables the levels most affected by signal-dependent distortion to be detected with the same reliability as other levels, thereby increasing the effectiveness of linear or nonlinear equalization techniques. This can be exploited to improve RS and extend transmission distance for a fixed equalization scheme or, alternatively, to reduce the complexity of signal processing needed to achieve a target RS or transmission distance. For example, in 200 Gb/s PAM links, GS and 21-tap linear equalization achieves RS and reach similar to uniform level spacing and Volterra nonlinear equalization with 21 linear and 3 second-order taps.
I. INTRODUCTION
A rise in global Internet traffic, primarily from video and machine-learning applications, has caused a massive increase in intra-data center (DC) bandwidth requirements [1].
To meet these demands, many data center operators have adopted high per-wavelength data rates and coarse wavelength-division multiplexing (CWDM) [1], [2] or other forms of WDM. Intra-DC optical links are often subdivided into two categories [3]. The shortest-reach optical links have lengths up to a few hundred meters and traditionally have used multi-mode fiber (MMF). Intra-DC links beyond this length use single-mode fiber (SMF) to avoid modal dispersion [4]. Using multilevel pulse-amplitude modulation (PAM) and direct detection (DD) at data rates up to 100 Gb/s per wavelength, they typically achieve maximum link lengths ranging from 2 to 20 km. These intra-DC links are the focus of this paper.
Efficiently scaling intra-DC links beyond 100 Gb/s per wavelength will require the mitigation of two key impairments. First, intra-DC transceivers use small-form-factor components that often have low bandwidth and substantial nonlinearity, which combine to cause considerable distortion [3]. Second, the combination of chromatic dispersion (CD) with DD results in nonlinear distortion, and the effects of CD are exacerbated by CWDM, which employs wavelengths away from the dispersion zero. The above-mentioned forms of distortion cause intersymbol interference (ISI). Since the amount of ISI typically differs from one signal level to another, we refer to these forms of distortion collectively as signal-dependent distortion.
Geometric shaping (GS) provides a method to reduce the impact of signal-dependent distortion in DD optical links. GS is often defined as the optimization of the locations of constellation points (which are intensity levels in DD systems) according to some specified criterion [5]. In this paper, we focus on minimizing signal-dependent distortion arising from transmitter bandwidth limitations and CD in unamplified DD links. Various GS schemes using conventional optimization techniques [6], [7], [8], [9], [10], [11], [12], [13] or machine learning [14], [15], [16] have been previously proposed for intensity modulation (IM)/DD optical links. While GS schemes using autoencoders or other machine learning techniques have been used to improve the performance of various optical links [14], [15], [16], [17], [18], [19], the strict cost and complexity limits on intra-DC links lead us to focus primarily on non-machine learning-based GS schemes. Among schemes using conventional optimization techniques, all previously proposed GS schemes for IM/DD links either parameterize the signal constellation by up to several variables or assume the noise distribution at the receiver follows a Gaussian distribution. These assumptions limit the effectiveness of these previously proposed GS schemes in IM/DD links in which nonlinear signal-dependent distortion is a significant factor.
In this paper, we use GS to reduce the impact of signal-dependent distortion on receiver sensitivity (RS) in unamplified DD optical links. Our proposed GS algorithm distinguishes itself from previously proposed GS schemes for DD-based optical interconnects by achieving the optimization objective without parameterizing the signal constellation or assuming the dominant noise source follows a Gaussian distribution. The proposed GS scheme reduces the impact of signal-dependent distortion by increasing the distance between the intensity levels most affected by distortion. The resulting optimized signal constellation causes each level to be detected with the same fidelity as any other level, thereby improving equalizer performance, RS, and transmission reach. The proposed scheme can alternatively be used to reduce the complexity of the signal processing required to achieve a target RS or transmission reach. We also study data converter resolution requirements and show that the proposed GS scheme can reduce the impact of finite data converter resolution. While we use GS to reduce the impact of distortion arising from modulator nonlinearities, CD, and data converters, the proposed GS scheme can be straightforwardly used to compensate for other forms of signal-dependent distortion. Although current intra-DC interconnects typically use p-i-n photodetectors that produce signal-independent noise, the proposed GS scheme can also mitigate signal-dependent noise, as in DD receivers using APDs or SOAs [7].
The remainder of the paper is organized as follows. Section II introduces the system model, emphasizing the modeling of modulators, which are important sources of signal-dependent distortion in IM/DD links. Section III presents the proposed GS scheme and the performance metrics that are used to evaluate the efficacy of the proposed scheme. Section IV quantifies the performance benefit of GS in combating bandwidth limitations and chromatic dispersion in systems using linear equalization and Volterra nonlinear equalization. Section V studies GS for 200 Gb/s-per-wavelength CWDM intra-DC links, including data converter resolution, drive signal optimization, and implementation complexity. Section VI concludes the paper.
II. SYSTEM MODEL
In this section, we describe our model for IM/DD optical links, placing emphasis on modulator modeling. We define the relevant notation and present the system design parameters assumed throughout the paper.
A. Overview
Fig. 1(a) and (b) depict a block diagram and equivalent baseband model for an IM/DD optical link, respectively. We first use the block diagram to describe the set of link components that constitute the IM/DD link model. We then derive the equivalent baseband model from the block diagram using analytical models for each of the constituent components. The equivalent baseband model in Fig. 1(b) is employed in our numerical simulations.
Fig. 1(a) begins with a finite sequence of M-PAM symbols. The digital-to-analog converter (DAC) transforms the M-PAM sequence into a sequence of voltages that are subsequently input to the modulator. The resulting modulated optical signal is transmitted through the fiber to the receiver, where it is converted to a current by the photodetector. The trans-impedance amplifier (TIA) converts the current into a usable voltage, which is subsequently filtered by the anti-aliasing (AA) low-pass filter (LPF). The filtered voltage signal is sampled by the analog-to-digital converter (ADC) to generate a sampled sequence. A finite impulse response (FIR) equalizer (EQ) processes the sampled sequence to reduce ISI. The resulting sequence is mapped to a decoded M-PAM sequence by the hard-decision device.
The equivalent baseband model in Fig. 1(b) also begins with a finite sequence of M-PAM symbols a_0, a_1, ..., a_{N-1} of length N. The ith transmitted symbol assumes one of M possible values a_i ∈ {A_0, ..., A_{M-1}}, where A_j denotes the jth symbol in the M-PAM alphabet. The symbol sequence is input to the DAC, which generates an analog electrical drive signal V(t). The target amplitudes of the analog drive signal are designed to achieve a specified set of output powers in the transmitted optical signal after accounting for the modulator nonlinear transfer characteristic f_mod(·). The target output powers {P_0, P_1, ..., P_{M-1}} are mapped to the pre-distorted voltage amplitudes {V_0, V_1, ..., V_{M-1}} by the inverse modulator characteristic f_mod^{-1}(·). The resulting peak-to-peak voltage is given by V_pp = |V_{M-1} - V_0|. The extinction ratio r_ex is defined as the ratio of the maximum and minimum powers at the modulator output.
The M-PAM input sequence assumes voltage amplitudes from the set {V_0, V_1, ..., V_{M-1}}. The voltage sequence is quantized by the function Q_DAC, which is parameterized by a full-scale interval Δ_DAC = [Δ_DAC,min, Δ_DAC,max], clipping ratio r_DAC, and resolution B_DAC [20]. We assume Δ_DAC is centered with respect to the input sequence minimum and maximum. r_DAC is defined as the ratio of Δ_DAC,max - Δ_DAC,min to V_pp. For each simulation, r_DAC is chosen to minimize the empirical mean-squared difference between the unquantized input sequence and the quantized output sequence. The DAC codebook consists of values equally spaced over the interval Δ_DAC, and each element of the input sequence is quantized to the nearest value in the codebook. Similar to [21], the bandwidth limitations of the DAC are described by an impulse response h_DAC(t), which is the cascade of a zero-order hold filter and a 5th-order Bessel filter. As seen in the baseband model of Fig. 1(b), the overall transmitter bandwidth limitation results from the convolution of h_DAC(t) and the modulator impulse response h_mod(t).
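As an illustration, a minimal sketch of the quantizer Q_DAC described above; the function name and exact rounding convention are our own assumptions, not the authors' code:

```python
import numpy as np

def quantize_dac(v, b_dac, r_dac):
    """Uniform quantizer sketch for Q_DAC: the full-scale interval
    Delta_DAC is centered on the input range and sized by the clipping
    ratio r_dac = (Delta_max - Delta_min) / Vpp."""
    v = np.asarray(v, dtype=float)
    vpp = v.max() - v.min()
    center = 0.5 * (v.max() + v.min())
    half_fs = 0.5 * r_dac * vpp
    lo, hi = center - half_fs, center + half_fs
    n_levels = 2 ** b_dac
    # Codebook: n_levels equally spaced values over [lo, hi].
    codebook = np.linspace(lo, hi, n_levels)
    # Clip to full scale, then round to the nearest codebook entry.
    idx = np.round((np.clip(v, lo, hi) - lo) / (hi - lo) * (n_levels - 1))
    return codebook[idx.astype(int)]
```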
The nonlinear characteristics of the modulator follow the linear low-pass filtering effects of h_mod(t). The two effects captured in our model are the instantaneous nonlinear transfer characteristic f_mod(·) and the modulator chirp f_chirp(·). We defer further discussion of the modulator model to Section II-B.
After modulation, the signal is coupled into an SMF with zero-dispersion wavelength λ_0 and dispersion slope parameter S_0, yielding a dispersion parameter D(λ) = (S_0/4)(λ - λ_0^4/λ^3), where λ is the wavelength of the optical signal. We model the dispersion on each channel using the dispersion parameter at its center frequency.
After transmission through the SMF, the transmitted optical signal E(t) is detected at the receiver using a PIN photodetector, generating an electrical current I(t). Shot noise is added with one-sided power spectral density (PSD) S_shot = 2q(R·P_rec + I_d), where q is the electron charge, R is the responsivity of the photodetector, P_rec is the received optical power, and I_d is the dark current [22]. The current is converted to a usable voltage by the TIA with input-referred noise I_n, which is related to the one-sided AWGN PSD of the thermal noise by N_0 = I_n². Thermal noise is dominant in well-designed unamplified IM/DD links, so we neglect any impact of relative intensity noise (RIN) [23].
The detected voltage is input to an analog 4th-order Butterworth LPF with cutoff frequency f_3dB,AA = 0.5 · R_s · r_os, where R_s is the baud rate and r_os is an oversampling rate. The filtered electrical signal is input to the ADC with a specified effective number of bits (ENOB) and sampled at a sampling rate of R_s · r_os. The effects of bandwidth limitations and nonlinear distortion are partially compensated using an adaptive, fractionally spaced finite impulse response equalizer (FIR-EQ) with r_os = 5/4, which is updated using the least mean squares algorithm. We analyze both linear feed-forward equalization (FFE) and second-order Volterra nonlinear equalization (VNLE) in this paper. The memory lengths for the first-order linear kernel and the second-order nonlinear kernel are n_taps,1 and n_taps,2, respectively. The real-valued equalizer output sequence is denoted b_0, b_1, ..., b_{N-1}. A hard-decision decoder with M - 1 inner decision thresholds b_th,1, b_th,2, ..., b_th,M-1 and two outer decision thresholds b_th,0 and b_th,M maps the equalizer output to a detected M-PAM symbol sequence â_0, â_1, ..., â_{N-1}. The outer decision thresholds are defined as b_th,0 = -∞ and b_th,M = +∞. The ith detected symbol is â_i = A_j when b ∈ (b_th,j, b_th,j+1). A symbol error on the ith transmitted symbol occurs when a_i ≠ â_i. The decoded symbol sequence is mapped to a decoded bit sequence using Gray coding if M is a power of two, or the 2-dimensional 6-PAM mapping described in [24] when M = 6. The bit-error ratio (BER) is computed on the subset of decoded bits after the adaptive equalizer has converged. The pre-forward error-correction (pre-FEC) BER threshold is denoted BER_target.

Fig. 2. EAM absorption vs. drive voltage based on [29]. The solid line represents the instantaneous transfer characteristic and the dashed lines denote the region over which the EAM is operated, assuming the modulator is biased at an insertion loss of 2.38 dB and a peak-to-peak voltage of 2 V. The insertion loss, extinction ratio, and peak-to-peak voltage are indicated by the dashed lines.
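A minimal sketch of the hard-decision threshold detector described above; the function and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hard_decide(b, thresholds, alphabet):
    """Map equalizer outputs b to M-PAM symbols using the M-1 inner
    decision thresholds (the outer thresholds are -inf and +inf).
    `thresholds` = [b_th,1, ..., b_th,M-1], sorted ascending."""
    # searchsorted returns j such that b falls between th[j-1] and th[j],
    # i.e., the index of the decided symbol A_j.
    j = np.searchsorted(thresholds, b)
    return alphabet[j]

# Example: 4-PAM with uniformly spaced decision thresholds.
alphabet = np.array([0.0, 1.0, 2.0, 3.0])
thresholds = np.array([0.5, 1.5, 2.5])
b = np.array([0.2, 1.7, 3.4, 1.4])
print(hard_decide(b, thresholds, alphabet))   # [0. 2. 3. 1.]
```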
B. Electro-Absorption Modulator
As intra-DC links have shifted from binary to multi-level modulation, they have shifted toward external modulators [2], [26], [27]. In our model, we assume an electro-absorption modulator (EAM), owing to its advantages for intra-DC links [28]. Compared to directly modulated lasers, EAMs generally have a wider modulation bandwidth, a smaller linewidth enhancement factor, and a larger extinction ratio. Compared to Mach-Zehnder modulators, EAMs are typically smaller, consume less power, and require lower driving voltages.
Various models can capture modulator nonlinearity, including memoryless nonlinear transfer characteristics [30], Volterra series [31], or finite-difference time-domain models [32]. We model the EAM as a Wiener system, which comprises a linear system with memory followed by a memoryless nonlinear system. Following common practice, we model the bandwidth limitations of EAMs using a two-pole LPF with 3-dB cutoff frequency f_3dB,mod [33]. The instantaneous nonlinearity is modeled using a voltage-to-absorption transfer characteristic α_dB(V(t)). Fig. 2 depicts the transfer characteristic of the EAM used in our analysis, which is derived from [29] using a cubic spline fit. The insertion loss (IL), extinction ratio, and peak-to-peak voltage are depicted in the figure. Assuming a fixed input laser power, the transmitted electrical field amplitude scales with the drive voltage as |E(t)| ∝ 10^(-α_dB(V(t))/20). The Kramers-Kronig relations imply that electro-absorption-based intensity modulation necessarily leads to phase modulation of the optical signal. In EAMs, transient chirp is the dominant source of phase modulation [34], [35] and is modeled by a transient phase shift φ(t) = (α/2) ln P(t) (up to an additive constant), where α is the linewidth enhancement factor. In general, α is a function of the applied voltage and can be designed to fall within a certain range [32], [36]. In this paper, we study α over the range [0, 3] and assume α is independent of the applied voltage to reduce the dependency on a specific modulator model. When not explicitly studying the effect of varying α, we often assume α = 2 to represent a relatively high, but practically relevant, value for the modulator chirp.

TABLE I: SIMULATION PARAMETERS
C. Simulation Parameters
The simulation parameters used in this study are listed in Table I. A row with exactly one numerical value indicates that the value is used in all simulations presented. Rows with two or more numerical values indicate simulation parameters that vary across different simulations. The parameter values used in a given numerical simulation are provided either in the associated figure caption or legend, except when the values can be inferred. r_ex and V_pp are one such example, as only one is typically specified to avoid providing redundant information.
III. GEOMETRIC SHAPING
Constellation shaping typically refers to the optimization of a transmitted constellation according to some criterion. Optimization criteria may include metrics such as mutual information, generalized mutual information, or symbol-error probability [37]. In probabilistic shaping (PS), the input probability distribution is optimized while keeping the locations of the constellation points fixed, while in GS, the locations of the constellation points are optimized without modifying their distribution.

Fig. 3. PDFs of the equalizer output b conditioned on a transmitted symbol A_j, j ∈ {0, 1, 2, 3}. p_{j,+} and p_{j,-} denote the probabilities, conditioned on the transmission of symbol A_j, that b is above b_th,j+1 and below b_th,j, respectively. The b_th,j, j = 1, 2, 3 are chosen such that p_{j-1,+} ≈ p_{j,-}. In (a), the mean values of the conditional PDFs are uniformly spaced. In (b), the means of b conditioned on A_0 and A_3 are unchanged from part (a), while the means of b conditioned on A_1 and A_2 are shifted so that p_{j,-} ≈ p_{j,+} for j = 1, 2.
In this section, we present our proposed GS scheme and the algorithmic implementation used throughout the paper. We then introduce a set of performance metrics that are used throughout the paper to quantify the benefits of the proposed scheme. Our terminology and notation are specific to IM/DD systems using M-PAM.
A. Proposed Geometric Shaping Scheme
In many IM/DD systems, the detected electrical signal is subject to significant signal-dependent distortion, and optimizing the transmitted intensity levels to account for this distortion may improve overall system performance. Signal-dependent distortion in unamplified IM/DD links can arise from a combination of component bandwidth limitations and nonlinear transfer characteristics, and from a combination of CD, modulator chirp, and DD. These two sources of signal-dependent distortion are analyzed further in Section IV. In presenting the proposed GS scheme, we first describe the optimization method using generic probability density functions (PDFs), then present a specific algorithm used for optimizing the transmitted intensity levels.
Fig. 3(a) and (b) depict generic conditional PDFs of the equalizer output b given a transmitted symbol A_j, which are denoted p(b|A_j), j ∈ {0, 1, 2, 3}. The variances of identically labeled conditional PDFs are equal in the two subfigures. The variances of the conditional PDFs increase with the index j. In Fig. 3(a), the conditional means of the PDFs are equally spaced. In Fig. 3(b), the means of b conditioned on A_1 and A_2 are shifted from those in Fig. 3(a) so that p_{j,-} ≈ p_{j,+} for j = 1, 2. Here, p_{j,+} and p_{j,-} denote the probabilities, conditioned on transmission of symbol A_j, that the equalizer output b is above and below the decision thresholds b_th,j+1 and b_th,j, respectively. The b_th,j, j = 1, 2, 3 are set so that p_{j-1,+} ≈ p_{j,-} and are denoted as equal-crossover decision thresholds.
As seen in Fig. 3(a), when equally spaced intensity levels are used, signal-dependent noise and distortion can lead to substantially different symbol-error rates at the various decision thresholds. In the proposed GS scheme, we adjust the transmitted intensity levels, subject to fixed minimum and maximum drive voltage constraints, so that p_{j,+} ≈ p_{j,-}, j = 1, 2, ..., M - 1. Using equal-crossover decision thresholds further ensures that p_{j-1,+} ≈ p_{j,-}, j = 1, 2, ..., M - 1. Conditional PDFs optimized using this scheme are depicted in Fig. 3(b) and result in substantially equal error probabilities at all decision thresholds.
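To make the quantities p_{j,-} and p_{j,+} concrete, a small sketch of how they could be estimated from equalizer outputs; this is a hypothetical helper, not taken from the paper:

```python
import numpy as np

def crossover_probs(b, a_idx, thresholds, M):
    """Estimate p_{j,-} and p_{j,+}: conditioned on transmitting A_j,
    the fractions of equalizer outputs falling below b_th,j or above
    b_th,j+1 (the outer thresholds are -inf and +inf)."""
    th = np.concatenate(([-np.inf], np.asarray(thresholds), [np.inf]))
    p_minus, p_plus = np.zeros(M), np.zeros(M)
    for j in range(M):
        bj = b[a_idx == j]            # outputs where symbol A_j was sent
        if bj.size:
            p_minus[j] = np.mean(bj < th[j])       # below lower threshold
            p_plus[j] = np.mean(bj > th[j + 1])    # above upper threshold
    return p_minus, p_plus
```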
One may contrast equal-crossover decision thresholds with maximum-likelihood (ML) decision thresholds. The ML thresholds b^ML_th,j would be set such that p(b^ML_th,j | A_{j-1}) = p(b^ML_th,j | A_j), i.e., at the crossing points between adjacent conditional PDFs. The ML decision thresholds typically lie close to the respective equal-crossover decision thresholds. Our proposed GS scheme uses equal-crossover decision thresholds instead of ML decision thresholds for two reasons. First, in the proposed GS scheme, equal-crossover decision thresholds are required to achieve the goal of approximately equal error probabilities at all decision thresholds. Second, equal-crossover decision thresholds are found to speed up the convergence of our proposed iterative GS algorithm over ML decision thresholds when distortion is most significant on only a small subset of the transmitted intensity levels. 1) Geometric Shaping Algorithm: Algorithm 1 describes an iterative procedure for obtaining a set of drive voltage levels that achieves substantially equal error probabilities at all decision thresholds. The algorithm presumes a fixed drive voltage range defined by minimum and maximum voltages V_0 and V_{M-1}, and outputs the set of M - 2 drive voltages that correspond to the inner transmitted intensity levels.
The GS optimization scheme also takes as input the laser power input to the modulator P_in, the modulation order M, BER_target, and constants K, δ, and δ_min. K, M, and BER_target are used in computing C, which is, in turn, used as a regularization term. δ · P_diff defines the maximum step size, in L1 norm, between successive updates of the transmitted intensity levels.

Algorithm 1: Geometric Shaping Algorithm.

Algorithm 1 is inspired by the classic gradient descent algorithm and uses regularized update steps and an adaptive learning rate. Lines 1 through 6 set the initialization values for the algorithm. Lines 7 through 16 constitute the iterative updates to the transmitted intensity levels. The number of symbols N is chosen to be sufficiently large to obtain accurate estimates of p^(i)_{j,-} and p^(i)_{j,+}, which can be updated in an online fashion. The conditional decision error probabilities at the receiver are collected in line 9 and processed into a relative difference metric Δ_j in line 10. Δ_j is normalized to have unit L1 norm and then used to update the transmitted intensity levels. The algorithmic complexity of computing Δ_j is O(M) per iteration. Line 13 in Algorithm 1 is a condition for inferring whether the transmitted intensity levels have converged to the vicinity of some fixed points, and it reduces the parameter δ by a factor of 2 when the condition is satisfied. The L1 norm, the decay rate δ ← δ/2, and C are chosen heuristically to speed up convergence to a set of drive voltages achieving substantially equal decision-error probabilities. While other choices for these parameters are possible, Algorithm 1 is found to be capable of identifying intensity levels with substantially equal error probabilities over a wide range of scenarios.
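In the same spirit, a heavily hedged sketch of one update step of the iterative procedure; the exact form of Δ_j, the sign convention, and the regularization are the authors' (Algorithm 1), and the version below is our assumption:

```python
import numpy as np

def gs_update(inner_levels, p_minus, p_plus, delta, C):
    """One regularized update step in the spirit of Algorithm 1.
    `inner_levels` are the M-2 inner drive levels V_1..V_{M-2};
    the outer levels V_0 and V_{M-1} stay fixed. `p_minus[j]` and
    `p_plus[j]` are the measured crossover probabilities for symbol j
    (see crossover_probs above)."""
    M = len(p_minus)
    pm, pp = p_minus[1:M - 1], p_plus[1:M - 1]
    # Relative difference metric per inner level, regularized by C
    # (an assumed form of Delta_j, not the paper's exact expression):
    d = (pp - pm) / (pp + pm + C)
    # Normalize the step to unit L1 norm, then scale by delta:
    step = d / (np.sum(np.abs(d)) + 1e-12)
    # Move each level away from its busier decision threshold:
    # if errors at the upper threshold dominate (d > 0), lower the level.
    return inner_levels - delta * step

# Outer loop (sketch): transmit, estimate (p_minus, p_plus), call
# gs_update; halve delta when the levels stall, stop once delta < delta_min.
```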
Machine learning-based approaches to GS in optical communications can likewise be categorized in terms of their optimization objective and optimization methodology. The structures and loss functions of these schemes specify their optimization objectives. For example, autoencoder-based architectures using categorical cross-entropy and binary cross-entropy as loss functions can be used to optimize for mutual information and generalized mutual information, respectively [19]. Backpropagation, reinforcement learning, and gradient-free methods have been studied previously to design the constellations for various machine learning-based GS schemes [17], [18], [19]. While our analysis focuses primarily on non-machine-learning-based approaches to GS, Refs. [17], [18], [19] provide a comprehensive introduction to machine learning-based methods.
2) Comparison With Other GS Schemes for IM/DD Links: Short-reach optical links have traditionally used IM/DD with hard-decision FEC. For these systems, natural choices for an optimization objective are BER, SER, or equality of hard-decision error probabilities at all decision thresholds, as each objective is closely related to the RS in systems using hard-decision FEC. Refs. [6], [7], [8], [9], [10], [11], [12], [13] are among the most relevant previously proposed schemes for IM/DD links, each of which uses at least one of the three aforementioned optimization objectives.
While these previous schemes were shown to improve the RS of IM/DD links in certain operating regimes, they either make assumptions about the exact noise distribution at the receiver or parameterize the input constellation by up to several variables. While these assumptions and parameterizations reduce the complexity of the optimization methodologies, they also limit their effectiveness in IM/DD links that are subject to substantial non-Gaussian, signal-dependent distortion. Our proposed GS scheme is distinct from previously proposed GS schemes for IM/DD links in several ways. In contrast to previous schemes [6], [7], [8], [9], [10], [11], [12], [13], our proposed scheme neither makes assumptions about the specific noise distribution at the receiver nor parameterizes the signal constellation. In addition, our proposed scheme's optimization methodology is unique among the surveyed GS schemes. Specifically, the update procedure described in lines 7 through 16 of Algorithm 1 constitutes a novel method that directly uses only the observed symbol-error statistics at the receiver to iteratively optimize all transmitted intensity levels simultaneously. We note that the variable δ in Algorithm 1 is somewhat analogous to the trust region described in [39].
The proposed GS scheme also possesses several properties that, taken together, are unique among GS schemes for optical links and are highly desirable in low-complexity IM/DD links. For one, our proposed scheme uses a novel metric of normalized conditional differential error probability Δ_j, which only requires a complexity of O(M) per iteration. In addition, our algorithm only requires tracking the conditional hard-decision tail error probabilities (p^{(i)}_{j,+} and p^{(i)}_{j,−}) at the receiver. By contrast, the surveyed GS schemes that use either iterative pairwise optimization or gradient descent as their optimization methodology and, additionally, do not assume the conditional noise distribution is Gaussian require estimation of the entire conditional posterior distribution at the receiver or have a complexity of at least O(M²) per iteration [39], [43].
We end this subsection by noting that machine learning-based approaches to GS for IM/DD systems have been investigated in Refs. [14], [15], [16]. These approaches share some advantageous properties with our proposed scheme. For example, these machine learning-based techniques do not assume a particular analytic distribution for the noise or explicitly parameterize the signal constellation. A more thorough comparison between our proposed GS scheme and this class of techniques should consider the strict cost and complexity constraints of short-reach IM/DD links, and is beyond the scope of this paper.
C. Performance Metrics
The effects of GS and the aforementioned sources of interference are evaluated in terms of the average optical power required at the receiver input to achieve a BER equal to BER_target, which is denoted by P_req. In an ideal thermal-noise-limited IM/DD system using M-PAM modulation with Gray coding, where errors between adjacent symbols are dominant, the BER is related to the received optical power P_rec by [54]

BER ≈ [2(M − 1)/(M log₂ M)] Q(R P_rec / ((M − 1) I_n)),  (5)

where R is the photodetector responsivity, I_n is the input-referred noise, and Q(·) is the Gaussian tail probability. Let P_M,req denote the minimum power required for an ideal reference system using M-PAM. For the special case when M = 2, we denote the minimum optical power by P_OOK,req. Substituting in BER_target and M = 2, the average optical power required to achieve BER_target for an ideal OOK system is

P_OOK,req = (I_n / R) Q^{−1}(BER_target).  (6)
Substituting the values listed in Table I into (6) yields P_OOK,req ≈ −14.6 dBm. For reference, the RS for an ideal 4-PAM system is P_4,req ≈ −11.4 dBm.
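A short numerical check of (6), assuming BER_target = 1.8 × 10⁻⁴ (the KP4-FEC threshold used later in this paper); because Table I is not reproduced here, the responsivity and input-referred noise below are placeholder values of ours, chosen only to illustrate the computation.

```python
import numpy as np
from scipy.stats import norm

BER_target = 1.8e-4        # KP4-FEC pre-FEC BER threshold
R = 0.8                    # photodetector responsivity [A/W] (assumed)
I_n = 7.8e-6               # input-referred RMS noise current [A] (assumed)

Q_inv = norm.isf(BER_target)          # inverse Gaussian tail, ~3.57
P_ook_req = I_n * Q_inv / R           # (6): required average power [W]
print(f"P_OOK,req = {10 * np.log10(P_ook_req / 1e-3):.1f} dBm")
```

With these placeholder values the script prints about −14.6 dBm; any (R, I_n) pair with the same ratio gives the same result, which is why the analysis below is phrased in terms of power penalties rather than absolute powers.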
To reduce the impact of the specific value chosen for the input-referred noise I_n on our analysis, the results are presented in terms of optical power penalty (OPP). The OPP is defined as the ratio of P_req to P_OOK,req.
It is often useful to compare the resulting OPP from an experiment to the OPP for an ideal reference system using M-PAM modulation. Ignoring the multiplicative factor in front of Q(·) in (5) and incorporating an additional power penalty due to a finite extinction ratio [21], the OPP for an ideal M-PAM system is approximately

OPP_M ≈ 10 log₁₀[(M − 1)/√(log₂ M) · (r_ex + 1)/(r_ex − 1)] dB,

where r_ex is the linear modulator extinction ratio. To quantify the effectiveness of GS in reducing the impact of dispersion-induced signal-dependent distortion, we use a metric of dispersion tolerance, which we define as the total accumulated dispersion at which an additional OPP of 0.5 dB is incurred as compared to a zero-dispersion reference system. The zero-dispersion reference system has a design identical to the dispersion-impaired system except for at most two differences: (1) the accumulated dispersion is set to zero and (2) no GS is used in adjusting the transmitted intensity levels.
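The dispersion-tolerance metric can be read off a simulated OPP-versus-dispersion curve by interpolation, as in the sketch below; the sample values are illustrative, not results from this paper.

```python
import numpy as np

disp = np.array([0.0, -8.0, -16.0, -24.0, -32.0])  # accumulated CD [ps/nm]
opp = np.array([1.1, 1.2, 1.4, 1.9, 3.0])          # simulated OPP [dB]
opp_ref = opp[0]                                   # zero-dispersion reference

# Find the first sample exceeding the 0.5 dB penalty, then interpolate
# linearly within the bracketing segment.
excess = opp - (opp_ref + 0.5)
k = int(np.argmax(excess > 0))
tol = disp[k - 1] + (disp[k] - disp[k - 1]) * (-excess[k - 1]) / (excess[k] - excess[k - 1])
print(f"dispersion tolerance ~ {tol:.1f} ps/nm")   # ~ -19.2 ps/nm here
```

Stated per the sign convention described next, the tolerance comes out negative because the sweep uses negative accumulated dispersion.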
Using wavelengths longer and shorter than the zero-dispersion wavelength results in positive and negative accumulated dispersion, respectively. Thus, dispersion tolerance can in principle be defined using positive or negative accumulated dispersion. In this paper, we often consider negative dispersion values. In such cases, the improvement in dispersion tolerance, which is the difference between the dispersion tolerances with and without GS, is stated as a negative value.
IV. APPLICATION TO BANDWIDTH-CONSTRAINED INTRA-DATA CENTER LINKS
In this section, we evaluate the effectiveness of the proposed GS scheme in mitigating signal-dependent distortion in 200 Gb/s intra-DC links using M-PAM. We focus on two sources of signal-dependent distortion: (1) the combined bandwidth limitation and nonlinear transfer characteristic of the transmitter and (2) DD of an optical signal generated by a modulator with a positive linewidth enhancement factor.
We begin by presenting simulations with parameters chosen to isolate the effects of signal-dependent distortion arising from the nonlinear transfer characteristic and transmitter bandwidth limitations. We then present simulations studying the effect of GS on CD tolerance in systems using linear FFE. We conclude the section by studying the effect of GS on CD tolerance in systems using VNLE.
A. Transmitter Bandwidth Limitations and Linear Equalization
An important source of nonlinear distortion in IM/DD systems originates from the combined nonlinear transfer characteristic and bandwidth limitations of the transmitter. Owing to the nonlinear transfer characteristic f_mod(·) in (3), the mapping from output intensities {P_0, P_1, ..., P_{M−1}} to predistorted voltage amplitudes {V_0, V_1, ..., V_{M−1}} often results in unequally spaced voltage levels. The bandwidth limitations imposed before the nonlinear transfer characteristic cause signal-independent distortion to all predistorted voltage levels. However, the nonlinear transfer characteristic amplifies variations in the drive signal at voltage levels where the transfer characteristic's first derivative has a large magnitude. In addition, the nonlinear transfer characteristic attenuates variations in the drive voltage at voltage levels where its first derivative has a small magnitude. In our model, the nonlinear transfer characteristic converts signal-independent distortion to signal-dependent distortion.
For a fixed nonlinear transfer characteristic, the range of output intensity levels from the modulator is parameterized by V_pp and the modulator IL. When these two parameters are fixed, the uniform output intensity levels uniquely define a set of predistorted voltage amplitudes. The proposed GS scheme removes this constraint and determines an alternative set of output intensities subject to the output intensity range constraint. The effect of GS will thus depend on the choice of V_pp, the modulator IL, and f_3dB,mod.
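To make the intensity-to-voltage mapping concrete, the sketch below numerically inverts a hypothetical smooth transfer characteristic to obtain the predistorted drive voltages for uniform output intensities; the tanh shape and all numerical values are our stand-ins for the modulator model f_mod(·) of (3), not the one used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def f_mod(v):
    # Toy, monotonically increasing power transmission of the modulator.
    return 0.5 * (1.0 + np.tanh(1.2 * (v + 1.0)))

V_lo, V_hi = -3.0, 0.0                  # fixed drive voltage range (Vpp = 3 V)
P_lo, P_hi = f_mod(V_lo), f_mod(V_hi)   # intensity range set by V range and IL
targets = np.linspace(P_lo, P_hi, 4)    # uniform intensity levels, M = 4

# Invert f_mod for the two inner levels; endpoints are fixed by the V range.
volts = [V_lo] + [brentq(lambda v, p=p: f_mod(v) - p, V_lo, V_hi)
                  for p in targets[1:-1]] + [V_hi]
print("predistorted drive voltages:", np.round(volts, 3))
```

A GS scheme would simply replace the uniform `targets` with its optimized intensity levels before this inversion; either way, the unequal voltage spacing produced by the inversion is what exposes the drive signal to the signal-dependent distortion mechanism discussed above.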
Figs. 4 and 5 study how GS mitigates signal-dependent distortion arising from transmitter bandwidth limitations for varying values of V_pp and modulator IL. The other simulation parameters, given in the caption, are chosen to minimize other sources of signal-dependent noise and distortion.
We begin by fixing V_pp and varying the modulator IL and f_3dB,mod. Fig. 4 depicts the OPP vs. modulator IL for three different values of f_3dB,mod with V_pp = 2.5 V and M = 4. We observe several important trends. First, across the entire ranges of modulator IL and f_3dB,mod studied, GS provides an OPP improvement between 0.1 and 0.7 dB. Second, the OPP decreases monotonically until IL = 2.5 dB, above which the OPP increases. An insertion loss of 3 dB with V_pp = 2.5 V results in a driving voltage range that includes the upward-curving region of the nonlinear transfer characteristic. The lower values of OPP at higher values of modulator IL (up to 2.5 dB) can be explained, in part, by an increase in r_ex, which can be observed in Fig. 2. Lastly, the reduction in OPP obtained using GS generally increases when stronger bandwidth constraints are imposed at the transmitter.
Fig. 5 studies how the reduction in OPP obtained using GS varies with the values of M and V_pp. The modulator IL is fixed to 2.5 dB because that value results in the lowest OPP in Fig. 4. We note that M = 6 is a two-dimensional modulation format employing a simple form of PS and is designed following [24].
The reduction in OPP obtained using GS varies with f_3dB,mod, M, and V_pp. The trend of larger OPP reductions for lower values of f_3dB,mod, first seen in Fig. 4, extends to other values of M and V_pp. In addition, the reduction in OPP is higher for M = 8 than for M = 4, owing to more closely spaced constellation points for M = 8. The OPP reduction for M = 6 is similar to that for M = 4, which is in part due to the PS inherent in the modulation format. Lastly, the OPP reduction by GS increases as V_pp increases, because a higher V_pp results in a higher r_ex, which generally leads to increased distortion when using uniform intensity levels.
Among the three modulation orders studied, M = 4 provides the best RS for all values of f_3dB,mod and V_pp studied. This finding is consistent with [55], [56], which found that 4-PAM outperforms 6-PAM at the KP4-FEC BER threshold. It is important to emphasize that this finding is specific to the use of an error correction code with a pre-FEC BER of 1.8 × 10⁻⁴. Using an alternative FEC scheme with a sufficiently high threshold BER, higher-order modulation formats may yield a lower OPP than M = 4.
B. Chromatic Dispersion and Linear Equalization
Signal-dependent distortion from the interaction between modulator chirp and CD has become an important design consideration for IM/DD transmission beyond 100 Gb/s per wavelength. In fact, the power penalty from CD may render 200 Gb/s-per-wavelength transmission impractical for intra-DC links beyond 2 km [56]. In this subsection, we study the effect of GS in mitigating this form of signal-dependent distortion, focusing on 200 Gb/s-per-wavelength links. We first study the impact of modulator chirp on the OPP and show that GS can reduce the effect of chirp. We then consider higher-order modulation and examine how increasing M impacts OPP and CD tolerance in systems employing GS and linear FFE.
1) Modulator Chirp: It is well known that a nonzero modulator linewidth enhancement factor causes the OPP to differ between negative and positive values of accumulated dispersion [22], [35], [57]. For positive values of α, negative dispersion causes the transmitted pulses to narrow initially while propagating through the fiber. This results in an initial decrease in OPP for small negative values of accumulated dispersion relative to zero accumulated dispersion.
The impact of accumulated dispersion on OPP depends on the sign and magnitude of the accumulated dispersion and α. Fig. 6 shows the OPP vs. accumulated dispersion for M = 4 and various values of α. For positive values of accumulated dispersion, the OPP increases monotonically with increasing values of both the accumulated dispersion and α. GS reduces the OPP by several tenths of a decibel at each accumulated dispersion value tested.
The relationship among OPP, α, accumulated dispersion, and GS is more complicated in the regime of positive α and negative accumulated dispersion. As the accumulated dispersion decreases from zero toward larger negative values, the OPP initially decreases and then increases. The expected initial decrease in OPP is consistent with the initial pulse narrowing expected for positive values of α and sufficiently low values of the accumulated dispersion. Increasing α in this regime initially causes the minimum OPP to occur at more negative values of accumulated dispersion. For sufficiently high values of α, the OPP minimum tends back toward 0 ps/nm. GS causes the OPP minimum to shift to a larger negative accumulated dispersion for the three positive values of α shown. The relationships outlined here are consistent with industry results presented in [57].
The extent to which the CD-induced distortion is signal-dependent is an increasing function of α over the range of α values studied. When α = 0, the CD-induced distortion is largely signal-independent. This can be seen in the almost constant OPP gap between a system using GS and a system not using GS at all accumulated dispersion values. For negative accumulated dispersion values and positive chirp values, the OPP gap resulting from GS increases monotonically with increasing modulator chirp. In addition, the OPP gap between the OPP minima obtained using GS and uniform spacing grows larger as α increases.
Fig. 6 shows that the GS scheme enables IM/DD links to tolerate higher levels of modulator chirp. For example, at an accumulated dispersion of −25 ps/nm, an error floor occurs at α = 2 when uniform spacing is used. For the optimized level spacing, the GS scheme ameliorates the impact of chirp and chromatic dispersion and allows the system to operate with lower OPP than at zero accumulated dispersion.
2) Higher-Order Modulation: Increasing the modulation order affects at least two relevant parameters that directly impact the dispersion tolerance of an IM/DD system. For a given choice of r_ex, R_b, and average transmitted power, increasing the modulation order M decreases the distance between adjacent symbols in the constellation, thereby decreasing the total amount of distortion the system can tolerate. However, increasing M reduces the bandwidth occupied by the signal, thereby decreasing the impact of CD on the transmitted signal.
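The two competing effects can be quantified in back-of-the-envelope fashion, as below, assuming each M-PAM symbol carries log₂(M) bits at the fixed 200 Gb/s line rate considered in this section (the two-dimensional M = 6 format maps bits differently and is omitted from this sketch).

```python
import numpy as np

R_b = 200e9                           # line rate [b/s]
for M in (2, 4, 8):
    baud = R_b / np.log2(M)           # symbol rate, sets signal bandwidth
    spacing = 1.0 / (M - 1)           # adjacent-level distance, fixed range
    print(f"M = {M}: {baud / 1e9:5.1f} GBd, relative level spacing {spacing:.2f}")
```

Moving from M = 4 to M = 8 reduces the symbol rate from 100 GBd to about 67 GBd but cuts the relative level spacing from 1/3 to 1/7, which is the trade-off explored in Fig. 7.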
We now examine how OPP varies with increasing dispersion and GS for different values of M. Fig. 7 shows the OPP vs. accumulated dispersion for M = 4, 6, and 8. The resulting dispersion tolerance for each of the six cases is depicted by a solid dot, and the minimum OPP is also provided as a reference. The modulator IL is set to 0 dB to allow for future increases in V_pp without changing the modulator IL. The remaining parameters are chosen to minimize sources of distortion other than CD.
The proposed GS scheme increases dispersion tolerance by −12.71, −30.48, and −17.18 ps/nm for M = 4, 6, and 8, respectively. The modulation order M = 4 provides the lowest OPP at all dispersion levels with uniform spacing and, with GS, for accumulated dispersion values above −32 ps/nm. M = 6 provides the highest overall dispersion tolerance and, with GS, a lower overall OPP for dispersion values below −32 ps/nm. Both the dispersion tolerance and OPP for M = 8 are substantially worse than for the other modulation formats.
The effect of the proposed GS scheme differs for positive and negative values of accumulated dispersion. The distortion arising from positive accumulated dispersion is less signal-dependent than for negative accumulated dispersion. The effect of this asymmetry can be seen in Figs. 6 and 7 as a smaller OPP reduction between the uniform spacing and GS curves for positive dispersion than for negative dispersion.
We now study how the optimized intensity levels vary with accumulated dispersion under the proposed GS scheme. Fig. 8 shows the normalized output power vs. accumulated dispersion from Fig. 7 for M = 8 with GS. The proposed GS scheme changes the optimized intensity levels more for negative dispersion values than for positive dispersion values. The combined effects of negative dispersion and positive α lead to signal-dependent distortion at higher normalized intensity levels, an effect observed experimentally in [34]. The proposed GS scheme compensates for this effect by shifting the higher intensity levels to lower values. The magnitude of this downward shift increases with decreasing accumulated dispersion. Lastly, we note that the effect of GS in the absence of chromatic dispersion can be inferred by examining the transmitted intensities at zero dispersion.
The impact of signal-dependent distortion for negative or positive dispersion values can be observed in Fig. 8. At an accumulated dispersion of −16 ps/nm, the proposed GS scheme shifts the second, third, and fourth highest output powers to lower values. By contrast, at an accumulated dispersion of +8 ps/nm, GS shifts these three intensity levels upward. We note that the case of positive α and negative dispersion is of particular interest in intra-DC CWDM links, as some WDM standards use wavelengths shorter than the zero-dispersion wavelength [25]. Because of this observed asymmetry and consistent with the choice of CWDM wavelengths, the remainder of our study assumes negative accumulated dispersion values.

Fig. 8. Normalized output power vs. accumulated dispersion from Fig. 7 for M = 8 using linear equalization and GS. The normalized output power is obtained by dividing the power of each signal point by the power of the highest-amplitude signal point; it is given by 10^(−α_dB(V(t))/10). The signal-to-noise ratio is scaled so that the achieved BER ≈ BER_target. The top x-axis indicates the corresponding fiber length assuming a laser wavelength of 1270 nm. The other simulation parameters are the same as those used in Fig. 7.
The modulator extinction ratio r_ex is another important design parameter in IM/DD links due to its impact on two other important physical quantities. For a fixed M, increasing r_ex results in a larger distance between signal points in a constellation, thereby improving noise tolerance and RS in the absence of other effects. However, increasing the modulator extinction ratio also increases the difference between the highest and lowest output powers, which exacerbates the phase modulation due to transient chirp. The results in Figs. 7 and 8 were obtained for fixed r_ex, which precluded any analysis of the impact of r_ex on dispersion tolerance. To study the net effect of increasing r_ex in distortion-limited IM/DD links, we examine how the dispersion tolerance changes as a function of extinction ratio in systems using either uniform spacing or GS. Fig. 9 shows the dispersion tolerance for different combinations of extinction ratio and modulation order. The insertion loss is fixed to 0 dB to ensure that, as r_ex is varied, a similar range of the transfer characteristic is used. In most cases studied, increases in r_ex are associated with decreasing values of the dispersion tolerance. Therefore, over the range of extinction ratios studied, the negative impact of greater chirp-induced phase modulation outweighs the positive impact of increased signal spacing on dispersion tolerance. The combination of M = 8 with GS yields equal dispersion tolerances for V_pp = 2 and 2.5 V, because the effect of increased chirp-induced signal-dependent distortion is negated by the larger extinction ratio and GS. We also note that the dispersion tolerance increase obtained using GS is smaller for V_pp = 2.5 or 3 V than for V_pp = 2 V. For V_pp > 2 V, the largest improvements in dispersion tolerance from GS are −11.1 ps/nm and −22.5 ps/nm for M = 4 and M = 6, respectively. For V_pp = 2 V, the improvements in dispersion tolerance from GS are −12.71 ps/nm and −30.48 ps/nm for M = 4 and M = 6, respectively.
The modulation scheme providing the largest dispersion tolerance depends on whether or not GS is used. When using conventional uniform spacing, M = 6 results in a lower dispersion tolerance than M = 4. Under uniform spacing, moving from M = 4 to M = 6 causes the upper constellation points to move closer together, which outweighs the benefits of reduced signal bandwidth occupation. By contrast, when the proposed GS scheme is used, M = 6 results in the largest dispersion tolerance among the modulation orders examined. For high levels of CD, the proposed GS scheme shifts the transmitted intensity levels downward, partially ameliorating some of the impact from the larger number of constellation points. Therefore, we conclude that the proposed GS scheme can change the value of M that yields the largest dispersion tolerance.
We observe in Fig. 9 that the accumulated dispersion tolerance varies substantially as a function of r_ex over the range of values studied. In addition, the parameter ranges studied were chosen to reflect values specified by IEEE standards [25], [58] and available in commercial components [23]. Thus, in the design of 200 Gb/s IM/DD systems, the choice of r_ex can strongly influence the impact of signal-dependent distortion arising from CD, depending on the fiber lengths considered.
C. Chromatic Dispersion and Volterra Nonlinear Equalization
In Secs. IV-A and IV-B, we showed that the proposed GS scheme reduces the impact of signal-dependent distortion in IM/DD links using linear FFE, yielding improved RS and CD tolerance. Algorithm 1 improves system performance by adjusting the transmitted intensity levels according to the residual signal-dependent distortion after equalization is applied to the received signal. Because the algorithm operates on the observed error statistics at the receiver, the proposed GS scheme can be used in conjunction with various pre-distorters and equalizers. We assumed a linear FFE in Secs. IV-A and IV-B owing to its widespread use in intra-DC links [26]. Other possible nonlinear predistortion schemes include a Volterra series or Tomlinson-Harashima precoding. Decision-feedback equalization, MLSD, or VNLE can also be used in lieu of linear FFE.
The joint design of VNLE and GS is of particular interest, as VNLE has been widely studied for mitigating nonlinear distortions in intra-DC IM/DD links [3], [28], [59], [60], [61], often in combination with other DSP techniques [59]. VNLE increases the maximum transmission distance and improves RS in IM/DD optical links at the expense of increased DSP complexity relative to linear FFE. VNLE improves performance by mitigating nonlinear distortions such as modulator chirp and CD. We have thus far found that the proposed GS scheme can improve RS and CD tolerance in IM/DD systems using linear FFE. We have not yet studied whether similar improvements can be obtained in systems using stronger forms of equalization, such as VNLE.
In this subsection, we study how improvements in RS and CD tolerance provided by GS change with an increasing memory length of the VNLE kernel. We limit our study to a second-order VNLE because the signal-dependent distortions arising from modulator chirp and CD are primarily second-order nonlinearities [3]. All subsequent references to VNLE presume a second-order VNLE. We begin by fixing the modulation order and varying the number of second-order taps n_taps,2 and the linewidth enhancement factor α. We then extend the analysis to higher modulation orders.
1) Modulator Chirp: The combination of modulator chirp and CD induces signal-dependent distortion in the received electric field. The exact characteristics of the modulator chirp depend on the choice of modulator. Adiabatic chirp and transient chirp are typically the dominant sources of spurious phase modulation in directly modulated lasers and EAMs, respectively [22], [34]. While our analysis focuses on transient chirp in EAMs, our proposed scheme could also be applied to DML-based IM/DD links, which also exhibit substantial signal-dependent distortion [61].
Fig. 10(a)-(c) show the OPP vs. accumulated dispersion with M = 4 and varying values of α. Fig. 10(a), (b), and (c) assume α = 1, 2, and 3, respectively. The line marker and color combination in the bottom legend applies to all three subfigures. The other design parameters are chosen to simplify the model and to focus on the effects of modulator chirp and chromatic dispersion.
The proposed GS scheme reduces the OPP for all accumulated dispersion values and all VNLE memory lengths. In general, the absolute improvement in CD tolerance is higher when a smaller number of VNLE taps is used. GS provides the largest absolute increase in CD tolerance when using linear FFE for α = 1 and 2. GS provides the largest increase in CD tolerance for the 3-tap VNLE when α = 3. For a VNLE memory length of 7 or higher and α = 1 or 2, GS provides a CD tolerance increase of several ps/nm. In the case of α = 3, a larger CD tolerance improvement is observed for VNLE memory lengths of 7 or more.
The RS achieved using GS and 21-tap linear FFE is similar to that using uniform level spacing and VNLE with 21 linear and 3 nonlinear taps. In Fig. 10(a) and (b), GS with linear FFE and uniform level spacing with 3-tap VNLE achieve almost identical optical power penalties for accumulated dispersion values down to −30 ps/nm. GS with linear FFE is subject to a higher OPP for values below −30 ps/nm. In Fig. 10(c), GS with linear FFE outperforms uniform level spacing with 3-tap VNLE at all accumulated dispersion values.
The relationship between CD tolerance and modulator chirp varies depending on the choice of VNLE memory and intensity level spacing. For linear FFE with GS, linear FFE without GS, and 3-tap VNLE without GS, the CD tolerance decreases monotonically as α increases. For all other combinations of VNLE memory and level spacing, the CD tolerance initially increases when α changes from 1 to 2, and then decreases when α = 3.
The increase in CD tolerance obtained using GS is generally larger for higher values of α. For almost all choices of VNLE memory length, the increase in CD tolerance obtained using GS is largest for α = 3 and larger for α = 2 than α = 1. A notable exception is linear FFE, for which the increase in CD tolerance obtained using GS is largest for α = 2.
To simplify Fig. 10, we do not plot the case α = 0. Including α = 0 would not yield substantial insight, as both the VNLE and GS provide almost no improvement in CD tolerance. An error floor occurs before −30 ps/nm for all combinations of VNLE memory and GS studied. We note that linear FFE with GS and 3-tap VNLE provide almost identical performance when α = 0.
2) Higher-Order Modulation: Increasing constellation density has two important effects on CD tolerance. For a fixed data rate, increasing M decreases the bandwidth occupied by the signal, thereby decreasing CD-induced signal-dependent distortion. The increased density of constellation points also increases the system's sensitivity to a given level of signal-dependent distortion, owing to a reduced distance between adjacent constellation points. The impact of GS on CD tolerance varies with the modulation order used. For M = 4 and M = 6, the increase in CD tolerance from GS is largest for linear FFE and decreases with increasing VNLE memory length. For M = 8, by contrast, GS provides a consistent ∼10-15 ps/nm increase in CD tolerance for all VNLE memory lengths.
The relationship between modulation order and CD tolerance depends on the combination of VNLE memory and GS. For linear FFE with GS, linear FFE without GS, and 3-tap VNLE without GS, increasing the modulation order from M = 4 to M = 6 results in decreased CD tolerance. For the three aforementioned cases, 3-tap VNLE with GS, and 5-tap VNLE without GS, increasing the modulation order from M = 6 to M = 8 leads to decreased CD tolerance. The CD tolerance for a system using a 7-, 9-, or 11-tap VNLE increases monotonically with increasing modulation order, whether or not GS is used.
Given the cost and complexity constraints in IM/DD links, the comparative performance of linear FFE with GS against a VNLE without GS is of particular interest. Fig. 10(b) shows that for M = 4 and α = 2, a linear FFE with GS and a 3-tap VNLE without GS deliver similar CD tolerances. Fig. 11(a) shows that for M = 6 and α = 2, a system employing a linear FFE with GS has a higher CD tolerance than a system employing a 3-tap VNLE without GS. By contrast, Fig. 11(b) shows that for M = 8 and α = 2, the 3-tap VNLE without GS enables a higher CD tolerance than the linear FFE with GS. These results highlight the ability of GS to improve CD tolerance when lower-order modulation is used and the necessity of VNLE (or perhaps other nonlinear equalization techniques) when employing denser transmit constellations.
V. IMPLEMENTATION CONSIDERATIONS
In this section, we address issues for implementing the proposed GS scheme in 200 Gb/s-per-wavelength links. We first study the data converter resolution required to support GS. We then study drive signal design for the highest-dispersion channel in an intra-DC CWDM system with GS and present the implications for per-channel optimization in WDM systems. Finally, we discuss options for estimating the conditional error probabilities and adjusting the transmitted intensity levels to implement the proposed GS scheme.
For the remainder of Section V, we assume linear FFE and a modulation order M = 4. These choices are motivated by several results from Section IV. The choice of linear FFE is motivated by the strong complexity constraints in intra-DC links. For linear FFE, Fig. 5 shows that M = 4 results in the lowest OPP for all choices of f_3dB,mod and V_pp studied. While Fig. 7 shows that M = 4 provides less maximum dispersion tolerance than M = 6 when using GS, M = 4 with GS can provide a dispersion tolerance greater than −24 ps/nm with a reasonable V_pp. For these reasons, M = 4 with linear FFE is seen to be a suitable combination.
A. DAC and ADC Requirements
High-bandwidth data converters, including DACs and ADCs [62], enable various DSP algorithms at the transmitter and receiver for mitigating key impairments in intra-DC links [3]. However, data converters and associated DSP algorithms are sources of substantial power consumption in intra-DC links [62]. It is therefore important to assess the DAC resolution and ADC effective number of bits (ENOB) required by the proposed GS scheme.
In addition to using finite quantization at the ADC and DAC, we adjust other component parameters to reflect modern intra-DC links. We set the PIN dark current to 10 nA [22], [63]. Following [64], we model the bandwidth limitations in the system by setting f_3dB,mod = f_3dB,DAC = f_3dB,PIN. This removes several degrees of freedom in choosing various component parameters while closely approximating the overall frequency response for many other choices of component bandwidth limitations. For similar reasons, we also assume that the DAC resolution and ADC ENOB are equal.
Fig. 12 shows the OPP vs. accumulated dispersion for ADC ENOB = 5, 6, and ∞. For both GS and uniform intensity levels, increasing DAC and ADC resolutions from 5 to 6 b decreases the OPP over the entire range of accumulated dispersion studied. An additional decrease in OPP is observed at ENOB = ∞ as compared to ENOB = 6 b, but the decrease is smaller. We note that the total dispersion tolerance here is higher than in the M = 4 simulation depicted in Fig. 7. This is due, in part, to the additional bandwidth limitations of the DAC and PIN photodetector, which substantially reduce the bandwidth of the end-to-end system [64].
We now examine the OPP reduction resulting from GS for fixed data converter resolutions. Fig. 12 explicitly indicates the reductions in OPP at −16 ps/nm for ENOB = 5 and 6 b obtained using GS, which are 1.48 and 0.78 dB, respectively. GS is seen to yield a larger decrease in OPP over uniform spacing for ENOB = 5 b than for ENOB = 6 b, a trend that holds for other values of accumulated dispersion as well.
We alternatively study the relative OPP reduction resulting from increasing data converter resolution for a given level spacing scheme. At an accumulated dispersion of −20 ps/nm in Fig. 12, an increase in data converter resolution from 5 to 6 b decreases the OPP by 0.46 dB when GS is used. An error floor occurs when ENOB = 5 b at −20 ps/nm, so an OPP improvement cannot be stated when uniform spacing is used. These improvements can be attributed, in part, to GS compensating for higher levels of signal-dependent distortion arising from quantization when ENOB = 5 b. We conclude that GS can be exploited to decrease the impact of quantization noise arising from low data converter resolutions.
B. Drive Signal Design in WDM Systems
In this section, we optimize the parameters of the drive signal to maximize the RS of the most dispersion-impaired wavelength in a WDM system. We jointly optimize the following three parameters of the drive signal: V_pp, insertion loss, and the inner drive signal levels via GS. We compute the OPP difference between a system that optimizes all three parameters and an equivalent system that optimizes V_pp and insertion loss but uses uniform intensity levels. We show that the optimal value of V_pp varies widely as a function of the accumulated dispersion and discuss the implications this has on drive signal design in WDM systems with substantial dispersion. We use intra-DC CWDM as a design example and assume the corresponding wavelengths and fiber lengths to provide a concrete and practically relevant set of system parameters. Our optimization procedure, however, can easily be extended to other WDM systems.
Recent IEEE standards for CWDM have adopted either four or eight wavelengths near the zero-dispersion wavelength [25]. Considering eight wavelengths, the 1270 nm wavelength will incur the largest negative accumulated dispersion. According to (1), the dispersion parameter at 1270 nm is −3.85 ps/(nm·km). Conservatively rounding to −4 ps/(nm·km) and assuming fiber lengths of 2 and 6 km, the most negative accumulated dispersion values in our CWDM design example are approximately −8 and −24 ps/nm, respectively. For simplicity, we perform optimization only for accumulated dispersion levels of −8 and −24 ps/nm, but our optimization could also be performed for the other wavelengths in a CWDM system.
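The quoted dispersion parameter is reproduced by the standard single-mode-fiber dispersion model, which we assume is the form of (1); the zero-dispersion wavelength and slope below (λ₀ = 1310 nm, S₀ = 0.092 ps/(nm²·km)) are typical G.652 values assumed by us, since Table I is not reproduced here.

```python
S0, lam0 = 0.092, 1310.0                 # ps/(nm^2*km), nm (assumed values)

def D(lam_nm):
    """Fiber dispersion parameter in ps/(nm*km) for a G.652-type fiber."""
    return (S0 / 4.0) * (lam_nm - lam0**4 / lam_nm**3)

for length_km in (2, 6):
    acc = D(1270.0) * length_km          # accumulated dispersion [ps/nm]
    print(f"D(1270 nm) = {D(1270.0):.2f} ps/(nm*km); "
          f"{length_km} km -> {acc:.1f} ps/nm")
```

The script gives D(1270 nm) ≈ −3.86 ps/(nm·km) and accumulated values of roughly −7.7 and −23.1 ps/nm, consistent with the rounded −8 and −24 ps/nm design points above.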
In our model and assuming M = 4, there are three remaining degrees of freedom in the design of the transmitted intensity levels. Assuming equal intensity level spacing, the two remaining design parameters are the peak-to-peak voltage and the modulator insertion loss. The proposed GS scheme removes the uniform spacing constraint, allowing for an optimized design of the inner intensity levels.
Independent optimization of each of the three design parameters mentioned above can yield a suboptimal solution, as varying one parameter can affect the optimal values of the other two. Increasing the peak-to-peak voltage is desirable, as it increases the extinction ratio, thereby lowering the theoretical minimum OPP as described in (6). However, increasing the extinction ratio reduces dispersion tolerance, as shown in Fig. 9. The choice of modulator insertion loss is equivalent to choosing the interval of the nonlinear transfer characteristic being used, thereby affecting the transmitter bandwidth-induced signal-dependent distortion and the modulator extinction ratio. Fig. 13 shows OPP vs. peak-to-peak voltage using linear equalization for accumulated dispersion values of −8 and −24 ps/nm. Because intra-DC links are often subject to strict peak-to-peak drive voltage constraints, we bias the modulator at an IL of 2.5 dB, which is the optimum value found in Fig. 4 and is near the maximum-extinction-ratio region of the modulator nonlinear transfer characteristic. We also fix DAC resolution = ADC ENOB = 6 b, which we showed in Section V-A is sufficient to provide near-optimal performance. We can optimize the peak-to-peak voltage by selecting the value that minimizes the OPP. The difference between the minimum OPPs for the two systems is depicted in the figure and quantifies the OPP reduction provided by the proposed GS scheme.
For an accumulated dispersion of −24 ps/nm, as V_pp increases, the OPP decreases until the minima at V_pp = 1.5 V and V_pp = 1.625 V when using uniform spacing and GS, respectively, above which the OPP increases monotonically. For an accumulated dispersion of −8 ps/nm, the minimum OPP values are achieved at V_pp = 2 V with uniform spacing and V_pp = 2.5 V with GS. The OPP is at least 1.5 dB lower at the optimal values of V_pp as compared to V_pp = 1 V for all combinations of accumulated dispersion and level spacing design.
Using GS leads to additional reductions in the minimum OPP of 0.76 dB and 0.56 dB for accumulated dispersion values of −8 and −24 ps/nm, respectively.
In an IM/DD WDM system, where different channels are subject to different values of accumulated dispersion, the optimal value for V_pp can vary substantially between channels. For example, in a 6 km WDM system, channels at wavelengths of 1296 and 1270 nm will be subject to accumulated dispersion values of approximately −8 ps/nm and −24 ps/nm, respectively. According to Fig. 13, the optimized values of V_pp for the two wavelengths will differ by as much as 1 V. Therefore, a WDM system using CWDM wavelengths will achieve a better RS by optimizing V_pp for each wavelength separately as compared to an equivalent system that uses the same value of V_pp for each wavelength. Jointly optimizing V_pp and the inner intensity levels using GS further improves the RS for each channel.
1) Digital Pulse Shaping: The overall end-to-end system bandwidth can have an important impact on the dispersion tolerance of an IM/DD system [64]. Digital pulse shaping for bandwidth-limited IM/DD channels is one possible method for reducing signal bandwidth and has been studied for several decades [21], [65], [66].
Digital pulse shaping can modify several physical properties of the transmitted signal, which can have both positive and negative effects on RS and CD tolerance. Digital pulse shaping can be used for signal bandwidth compression and digital predistortion, which should improve both RS and CD tolerance absent other factors. However, digital pulse shaping for IM signals has been shown to increase the peak value of the signal, which can exacerbate chirp-induced phase modulation. In addition, the unipolar constraint of IM signals may require the addition of a DC bias, which can result in an OPP of several dB [21].
The drive signal design procedure presented in Section V-B jointly optimized V_pp, insertion loss, and the inner drive signal levels, and did not consider digital pulse shaping. Similar to the other properties of the drive signal analyzed, we expect that digital pulse shaping would have an impact on RS and CD tolerance in an IM/DD system using GS. We consider our omission of digital pulse shaping appropriate in the context of intra-DC interconnects, where DAC-less transmitter designs are often used [27], [67]. The joint design of digital pulse shaping and GS for IM/DD links is an interesting topic for future work.
C. Intensity Level Computation and Adjustment
In this section, we discuss how to estimate the conditional symbol-error probabilities p_{j,+} and p_{j,−}, compute the updated intensity levels, and adjust the transmitted intensity levels in order to implement Algorithm 1.
A straightforward implementation of Algorithm 1 requires a mechanism to estimate the post-equalization symbol-error statistics (SES) at the receiver, computation of the updated intensity levels, and a mechanism for feedback from the receiver to the transmitter. GS schemes that use SES at the receiver can be collectively referred to as GS-SES. SES could be estimated periodically by transmitting a known sequence or by exploiting the syndrome computed in the error-correction decoder. Computation of the updated intensity levels could be performed at either the transmitter or the receiver. Implementations using SES measured at the receiver to compute the updated intensity levels will require feedback of either the SES or the updated intensity levels to the transmitter, depending on whether the computations are performed at the transmitter or receiver. However, feedback is not currently available in DC links, and may need to be adopted to support GS-SES.
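As a concrete illustration of the known-sequence option, the sketch below estimates the conditional tail error probabilities from a training sequence; the indexing convention (threshold j separates levels j and j + 1) and all names and values are ours.

```python
import numpy as np

def estimate_ses(tx_sym, rx_samples, thresholds):
    """Conditional tail error probabilities at each decision threshold."""
    p_plus = np.array([np.mean(rx_samples[tx_sym == j] > t)       # upward tails
                       for j, t in enumerate(thresholds)])
    p_minus = np.array([np.mean(rx_samples[tx_sym == j + 1] < t)  # downward tails
                        for j, t in enumerate(thresholds)])
    return p_plus, p_minus

# Toy usage with a known 4-PAM training sequence over a noisy channel.
rng = np.random.default_rng(1)
levels = np.array([0.0, 0.3, 0.6, 1.0])
tx = rng.integers(0, 4, 100_000)
rx = levels[tx] + (0.02 + 0.04 * levels[tx]) * rng.normal(size=tx.size)
print(estimate_ses(tx, rx, 0.5 * (levels[:-1] + levels[1:])))
```

In a syndrome-based variant, the decoder's corrected symbols would replace the known `tx` sequence, avoiding dedicated training overhead.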
There are several other factors to account for when considering GS schemes that use a feedback channel. The transmission latency for intra-DC links is on the order of tens of microseconds. The delay between the estimation of SES at the receiver and the intensity level updates at the transmitter may induce system instability, depending on the coherence time of the channel. In addition, the proposed GS scheme can be used to trade off between the complexity associated with the GS scheme and receiver DSP complexity. The optimal trade-off between the complexity of a feedback channel and receiver DSP complexity may depend on other factors, such as the link budget, modulator chirp, modulation order, link length, baud rate, etc.

A feedback channel may not be strictly necessary to implement GS-SES. Offline modeling and static lookup tables may be employed to avoid requirements for SES estimation and a feedback channel. For example, pre-distortion based on offline modeling and lookup tables for 4-PAM IM/DD links has been investigated extensively [68], [69], [70]. Methods based on offline modeling, however, may be of limited effectiveness in situations where the model does not reliably represent the actual system. In addition, predistortion and GS schemes that avoid a feedback channel may need additional information about the channel, such as the link distance or bandwidth limitations.
A transmitter DAC provides one straightforward way to adjust the transmitted intensity levels. Using a DAC has the further benefit of enabling various DSP techniques that may be central to mitigating key sources of distortion in IM/DD links [3]. For example, [71] used a 4-b DAC, with 2 b dedicated to signal-dependent FFE and 2 b used for nonlinear predistortion. The nonlinear predistortion technique is similar to the proposed GS scheme and was designed offline to account for signal-dependent ISI resulting from modulator nonlinearity. The importance of reducing the impact of nonlinearity and bandwidth limitations from individual components is heightened as data rates are increased.
An electrical DAC at the transmitter may not be strictly necessary for adjusting the transmitted intensity levels. For example, previously demonstrated IM/DD [27], [67] and coherent transceivers [72] employed DAC-less transmitter designs while supporting the transmission of 4-level signals. These systems [27], [67], [72] encoded 4-PAM symbols using segmented electrodes and independent drive signals for the least and most significant bits. By adding an additional bias voltage section to the modulator, the transmitter in [27] supported dynamic adjustment of the transmitted intensity levels. Independent control of each intensity level could be achieved by using differential encoding of the least and most significant bits, adjustable modulator bias voltages, and a dual-parallel modulator structure [73].
VI. CONCLUSION
A GS scheme that optimizes transmitted intensity levels to achieve substantially equal conditional error probabilities at all decision thresholds at the receiver was presented in this paper. The scheme was analyzed for its suitability in IM/DD intra-data center links, where signal-dependent distortion from bandwidth-limited, nonlinear components and chromatic dispersion limits system performance. The GS scheme was shown to improve RS and increase CD tolerance, while reducing the OPP caused by finite-resolution data converters over a wide range of channel parameters. The proposed scheme was found to improve RS by 0.76 dB and 0.56 dB at accumulated dispersion values of −8 ps/nm and −24 ps/nm, respectively, over an equivalent system using an optimized modulator extinction ratio, uniformly spaced constellation points, and LE. The proposed scheme can also be used to reduce the complexity of the DSP required to achieve a target RS and transmission reach, as evidenced by the similar performances achieved by a standard linear FFE with GS and a three-tap, second-order VNLE without GS. Key implementation issues and data converter resolution requirements for the proposed GS scheme were discussed.
Manuscript received 19 October 2023; revised 10 November 2023; accepted 15 November 2023. Date of publication 22 November 2023; date of current version 12 December 2023. This work was supported in part by Maxim Integrated (Analog Devices, Inc.) and Inphi Corporation (Marvell) and in part by the National Science Foundation under Grant DGE-1656518. (Corresponding author: Ethan M. Liang.)
Fig. 1. (a) Block diagram of an IM/DD optical link without optical amplification. (b) The corresponding equivalent baseband model.
Fig. 7. Optical power penalty (OPP) vs. accumulated dispersion using linear equalization for M = 4, 6, and 8. The top x-axis indicates the corresponding fiber length assuming a laser wavelength of 1270 nm. The solid dot on each marked line indicates the linearly extrapolated dispersion tolerance. The unmarked lines indicate the approximate minimum OPP given a fixed modulation format and finite extinction ratio. The other variable simulation parameters are V_pp = 2 V, modulator IL = 0 dB, f_3dB,mod = 50 GHz, α = 2, I_d = 0 nA, f_3dB,DAC = f_3dB,PIN = ∞, and DAC resolution = ADC ENOB = ∞.
Fig. 10. Optical power penalty (OPP) vs. accumulated dispersion for M = 4, varying the number of second-order Volterra taps n_taps,2, for (a) α = 1, (b) α = 2, and (c) α = 3. The legend is common to all three subfigures. The numeric values in the legend indicate the number of second-order Volterra taps used. The top x-axis indicates the corresponding fiber length assuming a laser wavelength of 1270 nm. The other variable simulation parameters are V_pp = 2 V, modulator IL = 0 dB, f_3dB,mod = 50 GHz, I_d = 0 nA, f_3dB,DAC = f_3dB,PIN = ∞, and DAC resolution = ADC ENOB = ∞.
Fig. 11. Optical power penalty (OPP) vs. accumulated dispersion for α = 2, varying the number of second-order Volterra taps, for (a) M = 6 and (b) M = 8. The legend is common to both figures. The numeric values in the legend indicate the number of second-order Volterra taps used. The top x-axis indicates the corresponding fiber length assuming a laser wavelength of 1270 nm. The other variable simulation parameters are V_pp = 2 V, modulator IL = 0 dB, f_3dB,mod = 50 GHz, I_d = 0 nA, f_3dB,DAC = f_3dB,PIN = ∞, and DAC resolution = ADC ENOB = ∞.
Fig. 12. Optical power penalty vs. accumulated dispersion using linear equalization for ENOB = 5, 6, and ∞. The solid dot on each marked line indicates the linearly extrapolated dispersion tolerance. The unmarked horizontal line indicates the approximate minimum OPP for M = 4 and r_ex = 7.4 dB. The top x-axis indicates the corresponding fiber length assuming a laser wavelength of 1270 nm. The other simulation parameters are V_pp = 2 V, modulator IL = dB, f_3dB,mod = f_3dB,DAC = f_3dB,PIN = 65 GHz, α = 2, I_d = 10 nA, and DAC resolution = ADC ENOB.
"Engineering",
"Physics"
] |
Multigrid for Staggered Lattice Fermions
Critical slowing down in Krylov methods for the Dirac operator presents a major obstacle to further advances in lattice field theory as it approaches the continuum limit. Here we formulate a multigrid algorithm for the Kogut-Susskind (or staggered) fermion discretization, which has proven difficult relative to Wilson multigrid due to its first-order anti-Hermitian structure. The solution is to introduce a novel spectral transformation by the Kähler-Dirac spin structure prior to the Galerkin projection. We present numerical results for the two-dimensional, two-flavor Schwinger model; however, the general formalism is agnostic to dimension and is directly applicable to four-dimensional lattice QCD.
Introduction
Increasingly powerful computers and better theoretical insights continue to improve the predictive power of lattice quantum field theories, most spectacularly for lattice quantum chromodynamics (LQCD) [1]. However, with larger lattice volumes and finer lattice spacing, exposing multiple scales, the lattice Dirac linear system becomes increasingly ill-conditioned, threatening further progress. The cause is well known: as the fermion mass approaches zero, the Dirac operator becomes singular, due to the exact chiral symmetry of the Dirac equation at zero mass, causing critical slowing down [2]. The algorithmic solution to this problem for lattice QCD was recognized 25 years ago. The fine-grid representation for the linear solver should be coupled to multiple scales on coarser grids in the spirit of Wilson's real space renormalization group and implemented as a recursive multigrid (MG) pre-conditioner [3]. Early investigations in the 1990s introduced a gauge-invariant projective MG algorithm [4,5] with encouraging results for the Dirac operator in the presence of weak (or smooth) background gauge fields near the continuum. However, in practice lattice sizes at that time were too small and the gauge fields were too rough to achieve useful improvements.
Not until the development of adaptive geometric MG methods [6,7] was a fully recursive MG algorithm found for the Wilson-Dirac discretization, which was able to transfer the strong background chromodynamic fields onto coarser scales and eliminate the ill-conditioning of the Dirac kernel in the chiral limit. In spite of this achievement for the Wilson-Dirac and closely related twisted mass formulation [8,9], these are not the only important Dirac discretizations in common use in lattice field theory. Three other discretizations used extensively in high energy applications, which more faithfully represent chiral symmetry on the lattice, are referred to as the domain wall [10], overlap [11], and staggered [12] fermions. The application of adaptive geometric MG to these discretizations has proven to be more difficult, perhaps related to the improved lattice chiral symmetry. A two-level MG solver for domain wall fermions has been implemented [13,14] which shows some promise, and a non-Galerkin algorithm has been implemented for overlap fermions [15], but there has been no success at formulating a staggered MG algorithm. Moreover, since staggered lattice ensembles are now the largest available for LQCD, requiring O(10^5) iterations for good convergence, improving staggered solvers is a critical issue. Here we introduce a novel solver with the Kähler-Dirac spin structure [16,17] that allows, at last, the construction of an effective multi-level adaptive geometric MG algorithm for staggered fermions.
The staggered fermion is a remarkable discretization [12,18] which closely resembles the continuum Dirac linear operator,

D(U, m)_{x,y} = m δ_{x,y} + (1/2) Σ_µ η_µ(x) [U(x, x + µ̂) δ_{x+µ̂,y} − U†(x − µ̂, x) δ_{x−µ̂,y}].  (1.2)

The lattice discretization replaces the derivative by a gauge-covariant central difference, resulting in a sparse matrix operator on a hypercubic lattice with the background gauge fields U(x, y) represented by highly oscillatory SU(3) matrices on each link x, y of the lattice. The γ_µ matrices are replaced by a single staggered ±1 sign: η_µ(x) = (−1)^{Σ_{ν<µ} x_ν}. Similar staggered lattice realizations of Dirac fermions have proven valuable not only for lattice QCD investigations, but also for a variety of physical systems such as graphene in condensed matter [19], supersymmetry [20], and strongly interacting conformal fixed points of possible interest for beyond the standard model (BSM) physics in the Higgs sector [21,22,23,24,25,26,27,28,29].
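A quick numerical check of this phase structure (our own illustrative script, not from the paper): multiplied around any elementary plaquette, the η_µ phases produce an overall −1, the lattice remnant of the Dirac algebra that is exploited in Sec. 2.

```python
import itertools

def eta(x, mu):
    """Staggered phase eta_mu(x) = (-1)^(x_1 + ... + x_{mu-1}); mu is 0-based."""
    return (-1) ** (sum(x[:mu]) % 2)

L, d = 4, 2                      # small periodic lattice (L must be even)
for x in itertools.product(range(L), repeat=d):
    for mu in range(d):
        for nu in range(mu + 1, d):
            x_mu = tuple((x[i] + (i == mu)) % L for i in range(d))
            x_nu = tuple((x[i] + (i == nu)) % L for i in range(d))
            # Product of the four phases around the (mu, nu) plaquette.
            assert eta(x, mu) * eta(x_mu, nu) * eta(x_nu, mu) * eta(x, nu) == -1
print("eta-phase product around every plaquette is -1")
```

The same loop with d = 4 verifies the property in four dimensions, which is what guarantees the corner-term cancellation discussed below.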
Unlike the Wilson and domain wall methods, the staggered discretization preserves the exact anti-Hermiticity of the continuum Dirac operator up to a real mass shift. In this sense it represents the most primitive (or even fundamental) discretization. It has no explicit spin matrices (γ_µ), so the Dirac spin structure only emerges in the continuum limit. Each 2^d lattice sub-block in four dimensions reassembles into four Dirac flavors (or tastes), the content of a single Kähler-Dirac fermion [30]. This is the structure that our MG algorithm exploits: dividing out the 2^d Kähler-Dirac spin structure transforms the spectrum into a near "circle" in the complex plane, as illustrated in Fig. 2.3. The striking similarity of the resultant spectrum to the Wilson and overlap spectra is, we believe, essential to the success of our staggered MG algorithm.
In LQCD applications with staggered fermions, the system D(U, m)_{ij} ψ_j = b_i is typically solved via Krylov methods on the Schur-decomposed even/odd operator (or, equivalently, the red/black operator). Because the preconditioned operator is Hermitian positive definite, the system can be solved by the conjugate gradient (CG) algorithm. This method has proven robust, and there are some well-established methods to fend off critical slowing down, such as EigCG [31] eigenvalue deflation or block Krylov solvers [32,33,34,35]. Block solvers do not remove critical slowing down, and deflation methods scale poorly with the volume in terms of the number of eigenvectors needed to remove critical slowing down. As explained in our earlier report [36], an adaptive geometric MG algorithm for the staggered normal operator can be easily formulated which removes critical slowing down. However, this comes with a heavy overhead. A Galerkin coarsening of the normal equation introduces next-to-nearest-neighbor (or corner) terms, resulting in a coarse operator stencil of 2d + 2d(d − 1) sites; in four dimensions this increases the off-diagonal terms from 8 to 32. This becomes prohibitively expensive in terms of communication pressure in parallel strong-scaling MG solvers [37,38,39,40].
The solution to this problem is to develop an MG algorithm directly on the staggered operator. In the interest of algorithm development, we consider a two-dimensional model system as opposed to full four-dimensional QCD. The two-dimensional staggered fermion, coupled to an Abelian gauge theory, U(x, x + µ) = exp[iθ_µ(x)], is the two-flavor Schwinger model in the continuum limit [41,42]. This is a fully non-perturbative quantum field theory which is an ideal analogue to four-dimensional QCD. Like QCD it exhibits confinement with a zero mass triplet of "pion-like" bound states in the chiral (zero mass) limit, and instantons that present a topological mechanism which breaks chiral symmetry dynamically in the flavor singlet channel [43]. As such, this has proven to be a reliable test framework [6] prior to a full implementation for four-dimensional QCD. The reader is referred to an extensive literature to understand the physical features that guide our construction in two dimensions and the natural generalization to four dimensions. The lattice Schwinger model has the action

S = β Σ_x [1 − Re U_P(x)] + Σ_{x,y} χ̄_x D(U, m_0)_{x,y} χ_y,

where U_P(x) is the product of link phases around the elementary plaquette at x. Introducing the lattice spacing a, the bare mass (m) and the gauge coupling (g) are given by the dimensionless parameters m_0 = am and β = 1/(a²g²), respectively. There are two important physical length scales determined by these parameters: (1) the fundamental gauge correlation length (or string length) measured by the Wilson loop area law, l_σ = a√(2β); (2) the fundamental fermion length scale measured by the "pion" Compton wavelength, l_{Mπ} = 1/M_π ≈ 0.5a(am)^{−2/3} β^{1/6} [42]. To approach the continuum both must be large relative to the lattice spacing. As an analogue to QCD, we should also approach the chiral regime with l_{Mπ}/l_σ ≫ 1. To control finite volume (L^d) and finite lattice spacing (a) errors, the four length scales should obey the constraint L ≫ l_{Mπ} ≫ l_σ ≫ a.
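In lattice units (a = 1) the two length scales and the required hierarchy are easy to evaluate; the parameter values in this small script are illustrative choices of ours, not the runs used in the paper.

```python
beta, m0, L = 10.0, 0.01, 128          # illustrative (assumed) parameters

l_sigma = (2.0 * beta) ** 0.5                              # string length
l_mpi = 0.5 * m0 ** (-2.0 / 3.0) * beta ** (1.0 / 6.0)     # pion Compton length
print(f"l_sigma = {l_sigma:.1f}, l_Mpi = {l_mpi:.1f} (lattice units)")
print("hierarchy L >> l_Mpi >> l_sigma >> a satisfied:",
      L > l_mpi > l_sigma > 1.0)
```

For these values l_σ ≈ 4.5 and l_{Mπ} ≈ 15.8, so an L = 128 lattice comfortably satisfies the constraint; decreasing m_0 pushes l_{Mπ} up and quickly forces larger volumes, which is precisely the regime where critical slowing down bites.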
This two-dimensional theory has been carefully selected because of its remarkable similarity to four-dimensional QCD both in terms of the underlying physics and the formal mathematical structure. Although at present our numerical tests are restricted to two dimensions, the entire formal structure is applicable to higher dimensions. The numerical analysis of a four-dimensional algorithm for lattice QCD is under development in QUDA [44,45,46], an efficient GPU framework for LQCD applications. Results will be presented in a subsequent publication.
The organization of the paper is as follows. In Sec. 2 we give the mathematical framework of the staggered Dirac operator essential to our subsequent MG formulations. In Sec. 3 we consider a Galerkin projection of the original operator and explain why it fails as an MG preconditioner. We then contrast it with the coarse projection of our new Kähler-Dirac preconditioned operator. In Sec. 4 we present in detail the construction of the staggered MG algorithm, followed by detailed numerical tests for the two-dimensional Schwinger model. In Sec. 5 we discuss some alternatives to our current implementation, which may be useful in the application of our staggered MG algorithm to four-dimensional LQCD and other staggered lattice simulations. For example, a method for exactly preserving complex conjugate eigenpairing and numerical tests thereof are presented in Sec. 5 and in Appendix A, respectively.
Mathematical Preliminaries of Staggered Fermions
The geometric structure of the staggered Dirac operator Eq. 1.2 and its relationship to the low-lying eigenspectrum is important for our analysis. Many of its features are inherited directly from the discretization of the continuum action,

S = ∫ d^d x ψ̄(x) ( γ_µ ∂_µ + m ) ψ(x).   (2.1)

The naïve fermion discretization uses a central difference approximation for the first derivative, which causes the so-called "doubling" (or aliasing) problem [47]. In the continuum, a single naïve fermion gives 2^d Dirac fermions: 16 four-component spinors in four dimensions and 4 two-component spinors in two dimensions. The staggered construction reduces this multiplicity by spin diagonalizing the Dirac structure, then dropping all but one of the 2^{d/2} copies. Explicitly, this spin diagonalization is achieved by the unitary field redefinition ψ(x) = Ω_x χ(x), with Ω_x = γ_1^{x_1} γ_2^{x_2} · · · γ_d^{x_d}. Dropping all but one copy, replacing the spinor by a single one-component field χ(x), results in a partial solution of the doubling problem by reducing the fermion content to 2^{d/2} staggered Dirac fermions: 4 in four dimensions and 2 in two dimensions. It is convenient to write the staggered operator succinctly as

D(U, m)_{xy} = Σ_µ η_µ(x) [ U_µ(x) δ_{x+µ̂,y} − U†_µ(x−µ̂) δ_{x−µ̂,y} ] + m δ_{xy},

where η_µ(x) = (−1)^{x_1 + ··· + x_{µ−1}} are the staggered phases. The staggered operator has a few special properties not shared by other fermion discretizations. The staggered operator is anti-Hermitian up to a mass shift and is normal: [D(U, m), D†(U, m)] = 0, just like the continuum operator. This is in contrast to the Wilson discretization, with its Hermitian second-order Wilson (stabilization) term that decouples the doublers but makes D_W non-normal in the interacting case. The Wilson term also explicitly breaks chiral symmetry. On the other hand, the staggered operator retains a single exact chiral symmetry in the interacting case, with ε(x) = (−1)^{x_1 + ··· + x_d} being the generator of the chiral symmetry. These good chiral properties give ε(x) D + D ε(x) = 2m ε(x) → 0 as m → 0. The chiral projectors, (1/2)(1 ± ε(x)), partition the lattice into even and odd sub-lattices. Furthermore, D features an ε(x) Hermiticity, D† = ε(x) D ε(x), analogous to the γ_5 Hermiticity of the continuum Dirac operator.
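These three properties are easy to verify numerically. The following minimal numpy sketch (ours; it assumes the free field and a hopping term normalized to ±1, our reading of the paper's conventions) constructs the free two-dimensional staggered operator and checks anti-Hermiticity up to the mass shift, normality, and ε(x) Hermiticity.

```python
import numpy as np

L, m = 8, 0.25
I, S = np.eye(L), np.roll(np.eye(L), 1, axis=1)      # identity and periodic forward shift
d1 = S - S.T                                          # antisymmetric hopping difference
eta2 = np.diag([(-1.0) ** x for x in range(L)])       # eta_2(x) = (-1)^{x_1}; eta_1 = 1

# free staggered operator on an L x L lattice, site index i = x*L + y
D = np.kron(d1, I) + np.kron(eta2, d1) + m * np.eye(L * L)
K = D - m * np.eye(L * L)                             # pure hopping part
eps = np.diag([(-1.0) ** (i // L + i % L) for i in range(L * L)])  # eps(x) = (-1)^{x_1+x_2}

print(np.allclose(K, -K.conj().T))                    # anti-Hermitian up to the mass shift
print(np.allclose(D @ D.conj().T, D.conj().T @ D))    # normal: [D, D^dag] = 0
print(np.allclose(eps @ D @ eps, D.conj().T))         # eps(x) Hermiticity
```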
The normal equations for the staggered operator are even/odd block diagonal,

D†(U, m) D(U, m) = ( m² − D_{eo} D_{oe} ) ⊕ ( m² − D_{oe} D_{eo} ),

where D_{eo} and D_{oe} are the even-odd and odd-even hopping blocks. The Schur-preconditioned system takes on a similar structure and is also Hermitian positive definite. For the free problem (i.e., unit gauge fields, U(x, x+µ̂) = 1), there is an exact cancellation of all next-to-nearest neighbor "around-the-corner" terms in the normal operator. This is a result of the η_µ phases preserving a key property of the Dirac algebra when taking the product of ηs around a plaquette,

η_µ(x) η_ν(x+µ̂) η_µ(x+ν̂) η_ν(x) = −1 for µ ≠ ν.   (2.9)

The result is a set of 2^d decoupled Laplace operators on a lattice with spacing 2a (Eq. 2.8), illustrated in Fig. 2.1. [Fig. 2.1: The normal operator applied to an odd site "o". All contributions to even sites "x" cancel due to D being normal. Links in black (solid) and red (dashed) correspond to ±1, respectively, due to the contributions of η_µ and the anti-Hermiticity of D. In the free field, it is clear that the corner terms cancel.] In this sense, the free staggered operator is truly the "square root" of the Laplace operator, similar to the continuum Dirac operator. We can immediately write down the eigenvalues of the free staggered operator, given by

λ(p) = m ± 2i √( Σ_µ sin² p_µ ),

where p_µ = 2n_µ π/L for integers n_µ ∈ [−L/4 + 1, L/4] due to the shift-by-two translational invariance. The eigenvalues are imaginary (up to a real mass shift) and come in complex conjugate pairs. When an interacting gauge field is turned on, the "around-the-corner" terms no longer vanish, coupling the two previously decoupled components of Eq. 2.8. These
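Continuing the free-field sketch above (same assumed conventions), the reconstructed eigenvalue formula can be checked numerically: the spectrum is pinned to the line Re λ = m, with imaginary parts ±2√(Σ_µ sin² p_µ) over the reduced Brillouin zone, each with two-fold taste degeneracy.

```python
import numpy as np

L, m = 8, 0.1
I, S = np.eye(L), np.roll(np.eye(L), 1, axis=1)
d1 = S - S.T
eta2 = np.diag([(-1.0) ** x for x in range(L)])
D = np.kron(d1, I) + np.kron(eta2, d1) + m * np.eye(L * L)

ev = np.linalg.eigvals(D)
ps = 2.0 * np.pi * np.arange(-L // 4 + 1, L // 4 + 1) / L        # reduced-zone momenta
pred = sorted(s * 2.0 * np.sqrt(np.sin(p1) ** 2 + np.sin(p2) ** 2)
              for p1 in ps for p2 in ps for s in (1.0, -1.0)
              for _ in range(2))                                  # two-fold taste degeneracy
print(np.allclose(ev.real, m))                  # real parts equal the mass shift
print(np.allclose(np.sort(ev.imag), pred))      # imaginary parts match the formula
```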
next-to-nearest neighbor terms are the standard so-called clover term, resulting in an irrelevant (in the Wilsonian sense) spin-gauge interaction (σ_{µν} F_{µν}) in the continuum. The spectrum cannot be found analytically, but the ε(x) Hermiticity symmetry ensures that the eigenvalues still appear in exact complex conjugate pairs.
Kähler-Dirac Preconditioning
We now consider the spectral transformation which is essential to the staggered MG algorithm presented in Sec. 3 and tested numerically in Sec. 4. Here we will show that when the staggered operator is right-preconditioned by the 2^d Kähler-Dirac blocks, the spectrum on the resultant 2a-blocked lattice is dramatically different. In the free case, we prove that this transformation gives an exactly circular spectrum in the complex plane, similar to the overlap lattice Dirac discretization [11]. The inclusion of gauge fields and/or the three-link Naïk term [48] are relatively small modifications of this basic circular structure.
The argument proceeds as follows. The staggered operator is composed of blocks containing 2^d sites, corresponding to 2^d degrees of freedom that in the continuum limit are recombined into a multiplet of Dirac fermions [49,50]. It is straightforward to see that the decomposition of the staggered operator into 2^d-site blocks partitions the lattice, as illustrated in red in Fig. 2.2, into independent 2^d blocks B containing a plaquette of links.
We will refer to these as Kähler-Dirac blocks. We also include the local mass term in this B block. The nearest-neighbor terms between the B blocks contribute to a block hopping term C, which is unitarily equivalent, up to the mass shift, to the block-local contributions in B. B and C only share sites at the corners of squares in two dimensions, cubes in three dimensions, and hypercubes in four dimensions. This is a dual decomposition: half of the links on the original lattice contribute to B, and half contribute to C, as represented in Fig. 2.2. We denote this partition between hopping terms within and across blocks as

D(U, m) = B + C.

We remark that we can interchange this dual description by shifting the coordinates x_i → x_i + 1, where 1 is a vector of ones. We now construct the right-block-Jacobi or Kähler-Dirac preconditioned operator as

A ≡ D B^{−1} = 1 + C B^{−1}.

This is a remarkably different operator with which we develop our MG algorithm.
To characterize these differences, we will first consider the free case. After rescaling and multiplying by ε(x), the generator of the exact staggered chiral symmetry, both terms are separately Hermitian, traceless, and unitary. More concretely, we define B̄ = B ε(x)/√(d + m²) and C̄ = C ε(x)/√d and note

B̄² = C̄² = 1,

as a trivial consequence of the perfect cancellation of the corner terms in Eq. 2.9. These properties imply that B̄ and C̄ have equal numbers of ±1 eigenvalues, and further that the product C̄ B̄ is a unitary matrix U. (The addition of a Naïk term does not change B̄, but it does contribute to C̄.) With this observation, our free Kähler-Dirac staggered operator A is given by

A = D B^{−1} = 1 + ρ U, with ρ = √( d/(d + m²) ).   (2.14)

The eigenvalues of D B^{−1} lie on a circle centered at 1, as illustrated in Fig. 2.3. The radius of the circle is ρ = √( d/(d + m²) ). In the massless limit, the radius is exactly 1. This leads to an identical structure to the overlap operator, under the mapping (γ_5, γ̂_5) → (C̄, B̄). Both C̄ and B̄ are algebraically similar to γ_5 and γ̂_5, being Hermitian and unitary with an equal number of ±1 eigenvalues. Adding a mass term to the overlap operator similarly rescales the unitary portion of the spectrum, introducing a mass gap. For comparison, in the right panel of Fig. 2.3, we show the free spectrum of the massless two-dimensional Wilson operator, the two-dimensional overlap operator, and our new two-dimensional Kähler-Dirac preconditioned operator. Very similar figures apply to four dimensions, except the Wilson spectrum then has four arcs in the positive real direction. Finally, we note that if we add a Naïk term to the original staggered operator, the right preconditioning perturbs the unitarity of the spectrum but preserves the qualitative geometric features. A comparison of the Kähler-Dirac preconditioned operator to the original staggered operator is given in the left panel of Fig. 2.3. We compare the massive spectrum against all other fermion discretizations in Fig. 2.4.
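The circular free spectrum is easy to confirm numerically. The sketch below (ours, with the same assumed hopping normalization as before) splits the free operator into the Kähler-Dirac blocks B and the inter-block hopping C, and checks that the eigenvalues of A = D B^{−1} sit on the circle |z − 1| = √(d/(d + m²)).

```python
import numpy as np

L, m, d = 8, 0.25, 2
N = L * L
D = np.zeros((N, N)); B = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        i = x * L + y
        D[i, i] = B[i, i] = m                        # the mass term lives in the block B
        for mu, (dx, dy) in ((0, (1, 0)), (1, (0, 1))):
            eta = 1.0 if mu == 0 else (-1.0) ** x    # staggered phases
            j = ((x + dx) % L) * L + ((y + dy) % L)
            D[i, j] += eta; D[j, i] -= eta
            inside = (x % 2 == 0) if mu == 0 else (y % 2 == 0)
            if inside:                               # link interior to a 2x2 block -> B
                B[i, j] += eta; B[j, i] -= eta

A = D @ np.linalg.inv(B)                             # Kaehler-Dirac preconditioned operator
lam = np.linalg.eigvals(A)
rho = np.sqrt(d / (d + m * m))
print(np.allclose(np.abs(lam - 1.0), rho))           # True: circle of radius rho about 1
```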
The Kähler-Dirac operator no longer admits a simple "γ_5" Hermiticity condition. However, it does obey a modified asymmetric γ_5^{L/R} condition, which is essential for our discussions in Sec. 3. The key observation is that we can change the "convention" that A is given by a right block preconditioning of D to a left block preconditioning via the transformation ε(x) A† ε(x) = B^{−1} D. We can rearrange this identity, and likewise take advantage of the ε(x) Hermiticity of D, to note

A† = γ_5^L A γ_5^R, with γ_5^L ≡ ε(x) B^{−1}   (2.17)

and

γ_5^R ≡ B ε(x).   (2.18)

This is a generalization of the idea of γ_5 Hermiticity: now γ_5^L γ_5^R = 1 and A† = γ_5^L A γ_5^R. Also, just as is the case for the Wilson and staggered operators, these properties are enough to show that A features complex conjugate eigenpairs. Assume A |λ⟩^R = λ |λ⟩^R, where the superscript R denotes a right eigenvector. We can take the Hermitian conjugate of each side of the equation. Next, we can right-multiply by γ_5^L and take advantage of γ_5^{L/R} Hermiticity. This gives us ⟨λ^R| γ_5^L A = λ* ⟨λ^R| γ_5^L, that is, A also has an eigenvalue λ* with a left eigenvector ⟨λ*^L| ≡ ⟨λ^R| γ_5^L. This same exercise can be trivially repeated for left eigenvectors using γ_5^R to the same end.
Free Spectrum after Kähler-Dirac Preconditioning
For a detailed analysis of the spectrum, we introduce the flavor representation of the staggered operator [49,51,50], which is unitarily equivalent to a lattice Kähler-Dirac fermion in the free field [17,52]. Here each submatrix B is expressed in terms of the spin-taste gamma matrices which enumerate the components of a single continuum Kähler-Dirac fermion [30]. Its action is

S = b^d Σ_X q̄(X) [ Σ_µ ( (γ_µ ⊗ 1) D̂_µ + (b/2)(γ_5 ⊗ τ_µ τ_5) Δ̂_µ ) + m ] q(X),   (2.19)

where q(X) is the Kähler-Dirac field containing 2^d degrees of freedom, X is the Kähler-Dirac block index, b = 2a = 2 is the lattice spacing between Kähler-Dirac blocks, and the finite difference operators are defined as

D̂_µ q(X) = [ q(X+µ̂) − q(X−µ̂) ] / (2b),  Δ̂_µ q(X) = [ q(X+µ̂) − 2 q(X) + q(X−µ̂) ] / b².

In the language of staggered fermions, the γ_µ matrices generate the spin algebra, while the matrices τ_µ = γ_µ^† generate the so-called taste algebra. It should be noted that if these lattice fermions are gauged on the lattice with twice the lattice spacing, b = 2a, the resulting lattice theory of interacting Dirac-Kähler fermions [16,53] is no longer equivalent to the interacting staggered fermion and, of note, can generate a dynamical mass term. Likewise, on a continuum Riemann manifold, a Kähler-Dirac fermion admits a different gravitational gauging than Dirac fermions [54].
Our decomposition D = B + C now partitions Eq. 2.19 into local and nearest-neighbor contributions. The local block B is given by

B = Σ_µ γ_5 ⊗ τ_µ τ_5

in the massless case. The inverse is given by

B^{−1} = −(1/d) Σ_µ γ_5 ⊗ τ_µ τ_5.

The transformation D → A = D B^{−1} gives the kernel trivially diagonalized, and the imaginary part is prescribed by recalling the shifted unitary structure of the spectrum. This gives the spectrum

λ(θ) = 1 − e^{iθ}, θ ∈ [−π, π].

This spectrum is visualized on the left panel of Fig. 2.3. We note again that, up to a scaling, the low spectrum is similar to the Wilson, overlap, and staggered spectrum.
Non-zero Mass Term
The spectrum undergoes a minor change when the original staggered operator is massive. The local block now becomes Σ_µ γ_5 ⊗ τ_µ τ_5 + m, and the preconditioned spectrum becomes

λ(θ) = 1 − ρ e^{iθ},

which parameterizes the arc of a circle centered at (1, 0) with radius ρ = √( d/(d + m²) ). The arc is bounded by cos(θ) = ρ d^{−1}.

Naïk Term

Many modern LQCD simulations add a next-to-next-to-nearest neighbor improvement term known as the Naïk term [48]. Two common realizations of this improvement, equivalent in the free-field limit, are AsqTad [55] and HISQ [56] fermions. The free operator [52] is given by

D_{xy} = Σ_µ η_µ(x) [ (9/16) ( δ_{x+µ̂,y} − δ_{x−µ̂,y} ) − (1/48) ( δ_{x+3µ̂,y} − δ_{x−3µ̂,y} ) ] + m δ_{xy}.

The improved action admits the spectrum

λ(p) = m ± i √( Σ_µ [ (9/8) sin p_µ − (1/24) sin 3p_µ ]² ).   (2.28)

The effect of the Naïk term on the Kähler-Dirac action is to modify the nearest neighbor term and add a next-to-nearest neighbor term. The 2^d Kähler-Dirac block B_Naïk = (9/16) B is unchanged up to a trivial rescaling. The new contributions are confined to C, which is no longer unitary: C†_Naïk C_Naïk ≠ I. Likewise, the spectrum is no longer a shifted unitary spectrum. Indeed, in two dimensions, the massless free spectrum is given in terms of S_n = (1/2) Σ_{µ=x,y} cos(n p_µ) and x = (−1/48)/(9/16), the ratio of the improvement coefficients in the Naïk-improved action. The improved spectrum is shown on the left panel of Fig. 2.3, with the low modes emphasized in Fig. 2.4. We again make the critical observation that the spectrum is qualitatively similar to the original Kähler-Dirac spectrum, the Wilson spectrum, and the overlap spectrum.
Interacting Staggered Fermions in Kähler-Dirac Form: We are ultimately interested in performing this right-block-Jacobi preconditioning on the interacting staggered operator, not the free operator. Procedurally, this is done by first gauging the staggered operator, and then performing the same unitary blocking transformation between the staggered form and the Kähler-Dirac form. The local block no longer has a simple structure because of the gauge links [17]. In two dimensions, the Kähler-Dirac block B attached to a unit corner at 2n on the original staggered lattice is built from the plaquette of links attached to that corner. Like the free case, the block B is still anti-Hermitian plus a mass shift. However, unlike the free case, the interacting B and C are not unitary (up to rescaling), and as such the product C B^{−1} does not have a unitary spectrum. Nonetheless, the spectrum is still approximately circular and centered at 1, as can be seen later in Fig. 3.5. This is a desirable property for matrix preconditioning in general [57,58], and is essential for a successful MG algorithm. Importantly, this operator still maintains the γ_5^{L/R} Hermiticity defined in Eqs. 2.17 and 2.18. This is true because the proofs of γ_5^{L/R} Hermiticity depend solely on the ε(x) Hermiticity of D, which holds in the interacting case. By extension, the proofs of complex conjugate eigenpairing still hold. These comments carry over as appropriate when a Naïk term is also included.
Multigrid Coarse Operator
In forming the Galerkin projection of the staggered operator, we follow the methods of previous successful formulations of MG for the Wilson-Dirac discretization for LQCD [6]. Near-null vectors, or vectors which predominantly span the low right eigenspace of the Wilson operator, are constructed by relaxing on the homogeneous equation with a random initial guess, as discussed in detail in Sec. 4. Later in this section we will also consider exact low eigenvectors programmatically as near-null vectors. The resulting near-null vectors are chirally doubled and block-orthonormalized to construct the rows of the restrictor matrix R, which aggregates fine degrees of freedom to a single site on the coarse lattice, and the prolongator matrix P, which maps coarse degrees of freedom back to the fine lattice. Unless otherwise noted, R = P†. For the staggered operator, this implies that coarsening preserves the anti-Hermitian plus mass-shift structure. Block orthonormalization implies P† P = I. The prolongator and restrictor can be used to define the coarse operator,

D̂ = R D P = P† D P.   (3.1)

The hat notation refers to an operator one level coarser than the "unhatted" operator.
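As an illustration of this aggregation-based Galerkin setup (a generic sketch, not the paper's production code; the block sizes and the random stand-ins for near-null vectors are assumptions), the following builds a block-orthonormal prolongator from a set of candidate vectors and forms the coarse operator of Eq. 3.1.

```python
import numpy as np

def block_orthonormal_prolongator(vecs, n_blocks):
    """Aggregate fine dof into contiguous blocks; thin-QR orthonormalize per block."""
    N, nv = vecs.shape
    bs = N // n_blocks
    P = np.zeros((N, n_blocks * nv), dtype=vecs.dtype)
    for b in range(n_blocks):
        rows = slice(b * bs, (b + 1) * bs)
        Q, _ = np.linalg.qr(vecs[rows, :])           # block Gram-Schmidt
        P[rows, b * nv:(b + 1) * nv] = Q
    return P

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 64))                    # stand-in for the fine operator
vecs = rng.standard_normal((64, 4))                  # stand-ins for near-null vectors
P = block_orthonormal_prolongator(vecs, n_blocks=8)
R = P.conj().T                                       # R = P^dag
D_hat = R @ D @ P                                    # Galerkin coarse operator (Eq. 3.1)
print(np.allclose(R @ P, np.eye(P.shape[1])))        # block orthonormalization: P^dag P = I
```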
We will begin by reviewing the Wilson formulation, largely to establish notation. We will then extend this formulation to the staggered operator and show why this method fails to produce an effective recursive algorithm in this case. We will finally repeat this formulation for the Kähler-Dirac preconditioned operator and show that, in contrast to the original staggered case, this method succeeds.
Review of Wilson Dirac Coarse Operator
We begin with a basic restatement of the procedure for the adaptive geometric MG developed for the Wilson operator in QCD [5,6]. It is important to first note that the Wilson operator does obey γ_5 Hermiticity, that is, D_W† = γ_5 D_W γ_5. This γ_5 Hermiticity is sufficient to prove that the eigenvalues of D_W come in complex conjugate pairs, as the limiting case of γ_5^L = γ_5^R discussed in Sec. 2.1. Returning to MG, n_vec^1 near-null vectors are generated, where the "1" refers to coarsening the finest level, as discussed in Sec. 4. A key next step is chiral doubling: every near-null vector |ψ_i⟩ is "doubled," giving (1/2)(1 ± γ_5)|ψ_i⟩. For this reason, on the coarse operator, each coarse site has 2 n_vec^1 internal degrees of freedom (dof), or, alternatively, a dense structure of n_vec^1 "coarse color" dof times two "chirality" dof. A successful implementation of Wilson MG critically depends on the preservation of chirality.
After performing a chiral doubling of the near-null vectors, we pack the doubled vectors into the prolongator P̃ and again define R̃ = P̃†. The tilde convention here is an indication that we have not (yet) block orthonormalized the 2 n_vec^1 vectors on each block. The chiral doubling implies γ_5 P̃ = P̃ σ_3, where σ_3 = diag[1, · · · , 1, −1, · · · , −1] is a block Pauli matrix, or alternatively, the traditional σ_3 acting on the coarse chirality dof. It is easy to see that P̃† D_W P̃ is "σ_3" Hermitian:

( P̃† D_W P̃ )† = σ_3 ( P̃† D_W P̃ ) σ_3.

The essential property γ_5 P̃ = P̃ σ_3 is unchanged after we perform the last step, block orthonormalizing P̃ to get P, because we performed our chiral doubling with a bona fide projector: the top chiral components and the bottom chiral components are already mutually orthogonal. This gives the final essential properties γ_5 P = P σ_3, and D̂_W = P† D_W P is σ_3 Hermitian. This methodology can be trivially extended to a recursive coarsening.
Failure of Galerkin Projection of Staggered Operator
The prescription for (recursively) generating a coarse refinement of the Wilson operator D_W fails when naïvely translated to the staggered operator D, with the only change being the replacement of γ_5 with ε(x), as noted by Eq. 2.6. While the iterative inversion of the even/odd preconditioned system exhibits critical slowing down, it does converge. However, this attempt at a Galerkin MG on the staggered operator D stalls completely at large volumes, as illustrated in Fig. 3.1. We need to understand the cause of the failure of the Galerkin projection D̂ = P† D P as a preconditioner. An MG algorithm may fail because the coarse operator does not accurately reproduce the low eigenspace of the fine operator, or because the coarse error "correction" is ineffective. We will study each of these properties for the staggered operator to attempt to understand the issue.
As a spectral preconditioner, we expect the coarse operator to approximately preserve the low eigenmodes of the fine operator. In Fig. 3.2 we address this issue by comparing our failed staggered MG spectra with the successful Wilson MG spectra, using the parameters of Table 1. The eigenvalues come in complex conjugate pairs due to ε(x) Hermiticity on both the fine and coarse levels. The other four columns of the figure give the spectrum for a recursively-coarsened operator, constructing the prolongator/restrictor from exact low eigenvectors (left side) and near-null vectors (right side), where the near-null vectors are again generated as discussed in Sec. 4. Filled shapes correspond to the first coarse level; hollow shapes correspond to the operator from a recursive coarsening. The horizontal black lines trace the low modes of the fine operator across the coarsened operators. While these physical low modes are well preserved in all cases, there are many additional, spurious low eigenvalues in the coarse spectrum.
These spurious eigenvalues have a simple origin. Consider a normalized eigenvector of the coarse operator, D̂|λ̂⟩ = λ̂|λ̂⟩. We note that λ̂ = ⟨λ̂|D̂|λ̂⟩ = ⟨λ̂|P† D P|λ̂⟩. Since D is normal, this expectation value lies in the convex hull of the fine spectrum; if P|λ̂⟩ has support on a pair of fine eigenvectors with complex conjugate eigenvalues, the imaginary parts can cancel, producing a small coarse eigenvalue. This eigenvector may have nothing to do with the low modes of D.
This would be less of an issue if higher modes were gapped along the real axis. This is true of the Wilson operator, as can be seen for a representative case in Fig. 3.3.* For the fine operator, whose eigenvalues are given by red squares, high modes are gapped along the real axis. For the coarse operator, whose eigenvalues are blue triangles, low modes are well preserved. Higher modes "collapse" towards the complex origin but are still well gapped along the real axis. This could be why MG on the Wilson operator does not break down, and it similarly predicts success for the Kähler-Dirac preconditioned operator.

* In the interacting case, the Wilson operator is no longer normal, and our convex hull proof breaks down. It appears that it is still sufficiently true, perhaps because the free Wilson operator is exactly normal. In a perturbative sense, the interacting Wilson operator is then "approximately" normal.

Local Co-linearity versus the Oblique Projector

The Galerkin MG scheme involves two different projection operators:
• The projection operator, 𝒫 = P R, from the fine space into a coarse subspace. Using the right eigenvectors as a basis for the fine vector space, V = { |v_λ⟩, 0 < |λ| ≤ |λ_max| }, our goal is for eigenvectors with small (near-null) eigenvalues, |λ|/|λ_max| < ε, to be approximately represented within the span of the coarse subspace, 𝒱 = 𝒫 V, in a least-squares sense.
• The oblique (or Petrov-Galerkin) projector, 𝒫_ob = 1 − P (R D P)^{−1} R D, that defines the space of error components that are returned to the fine level after a complete solve in the coarse subspace. To not overburden the smoother, this should at least not unduly amplify large eigenvectors, |λ|/|λ_max| > ε.
Both are true projectors dividing the fine vector space V into disjoint subspaces, 𝒫(1 − 𝒫) = 0 and 𝒫_ob(1 − 𝒫_ob) = 0, though they do not define the same subspaces. The orthogonality 𝒫_ob 𝒫 = 0 is one-sided, since 𝒫 𝒫_ob ≠ 0; these identities can be checked directly, as in the short sketch below. (In Fig. 3.4, both panels are sorted by increasing magnitude of the eigenvalues.)
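The sketch (ours, with arbitrary random stand-ins for the operator and prolongator) confirms that both 𝒫 and 𝒫_ob are idempotent and that the orthogonality between them is one-sided.

```python
import numpy as np

rng = np.random.default_rng(5)
N, Nc = 24, 6
D = rng.standard_normal((N, N))                      # stand-in fine operator
P = np.linalg.qr(rng.standard_normal((N, Nc)))[0]    # orthonormal prolongator stand-in
R = P.T                                              # R = P^dag (real case)

Pc = P @ R                                           # coarse-space projector
Pob = np.eye(N) - P @ np.linalg.inv(R @ D @ P) @ (R @ D)      # oblique projector
print(np.allclose(Pc @ Pc, Pc), np.allclose(Pob @ Pob, Pob))  # both idempotent
print(np.allclose(Pob @ Pc, np.zeros((N, N))))       # P_ob P = 0
print(not np.allclose(Pc @ Pob, np.zeros((N, N))))   # ...but P P_ob != 0 (one-sided)
```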
Let us see how well the staggered MG handles these two requirements. In our construction, R = P†, so the coarse space projector is Hermitian. The statement of preserving the low eigenspace in the least-squares sense can be formulated as sufficiently minimizing

‖ (1 − P P†) |v_λ⟩ ‖

for small eigenvalues λ of the fine operator D. Since we generate our coarse space by geometric aggregation, this can be thought of as the local co-linearity of near-null vectors with low eigenmodes. In the top left panel of Fig. 3.4, we see that starting either with a block-orthonormalized basis of near-null vectors or of low eigenvectors results in a good coverage of the low spectrum. This is typical of MG methods. In the bottom left panel this is extended to the next coarsest level with similar results. This has important implications for eigenvector compression methods [59].
However, this is not sufficient for a successful coarse correction in an MG algorithm. The coarse correction should address the low modes of the fine operator without introducing large errors in the high mode subspace. The error after solving the coarse level is updated as e ← e − P (R D P)^{−1} R D e = 𝒫_ob e. This is quantified by the magnitude of each eigenvector acted on by the Petrov-Galerkin or so-called oblique projector,

‖ 𝒫_ob |v_λ⟩ ‖.   (3.5)

The oblique projection of the coarse error (𝒫 e) is zero: 𝒫_ob 𝒫 = [1 − P (R D P)^{−1} R D] P R = 0. However, the oblique projection is not Hermitian, so this does not imply that the error in the orthogonal complement space (e − P R e) vanishes. This is illustrated on the right side of Fig. 3.4.
A magnitude less than or greater than one corresponds to a reduction or enhancement of the complementary error component, respectively. A successful coarse operator should strongly reduce the error component for low eigenmodes. In the context of MG, the enhancement from higher modes is addressed by the smoother; a larger enhancement requires a more expensive smoother, otherwise the solve stalls. In the top right panel of Fig. 3.4, we see that, for high modes, there is a large error enhancement. This is worse for a prolongator generated from near-null vectors than for one generated from eigenvectors. In the lower right panel, we see the situation is even worse for a three-level algorithm. In all cases, an aggressive smoother is needed, increasingly so at coarser levels. This is why we saw the MG algorithm fail. We now turn to the same analysis for the Galerkin construction of the Kähler-Dirac preconditioned operator, which, in contrast, has minimal error enhancement, as is evident in Fig. 3.6.
Coarse Kähler-Dirac Staggered Operator: Â
We will coarsen the Kähler-Dirac preconditioned operator similarly to how we coarsened the staggered operator, still using (1/2)(1 ± ε(x)) (unitarily rotated into the flavor basis) as chiral projectors on the near-null vectors. Again, we will use R = P†. In Sec. 5, we will discuss an asymmetric coarsening where R ≠ P†. While it showed no merit in two dimensions, it may be an interesting point of investigation in four-dimensional QCD. In this section we will consider the spectrum, co-linearity, and oblique projector for a symmetric coarsening. Looking forward, in Sec. 4, we will demonstrate that symmetric coarsening produces a well-behaved and robust recursive algorithm independent of the volume and the mass for physically relevant values of β.
As we described previously, the Wilson operator has a well-behaved spectrum for MG, as the high modes are well gapped along the real axis. This is also true for the Kähler-Dirac preconditioned operator in Fig. 3.5. As we discussed at the end of Sec. 2.1, the interacting spectrum is no longer a perfect circle in the complex plane. This does not undermine the qualitative benefits of the spectrum. Additionally, in the interacting case, a mass term still gaps the spectrum. In the right panel of Fig. 3.5, where we zoom in on the origin of the complex plane, we see that low modes are well preserved under our coarsening prescription, and there are no spurious modes near the complex origin. Eigenvalues of the coarse operator do not come in exact complex conjugate pairs, a consequence of using ε(x) as the chiral projector. This is inescapable because, in general, (1/2)(1 ± γ_5^{L/R}) does not define a good projector. The eigenvalues are approximately paired, which is consistent with a general preservation of the low spectrum. This may also be consistent with (1/2)(1 ± ε(x)) becoming equivalent to (1/2)(1 ± γ_5^{L/R}), up to a unitary transformation, in the continuum limit, and as such preserving complex conjugate eigenpairs.
A careful study of the right panel of Fig. 3.5 shows that both the original operator and its coarsening feature eigenvalues with negative real part, that is, lying in the left-half plane. We refer to these eigenvalues as exceptional eigenvalues, borrowing the language from the Wilson-clover fermion literature [60]. The existence of modes in the left-half plane invalidates proofs which bound the convergence of Krylov solvers [61]. We will see in Sec. 4
that, because error components in these exceptional modes are well solved by the coarse error correction, a recursive MG algorithm can successfully address this problem. As we will see in Sec. 4, this stabilizes the MG solve, independent of mass and volume, and is consistent with the success of MG for the Wilson operator beyond the critical mass.
Local Co-linearity versus the Oblique Projector

The overall failure of MG for the staggered operator stemmed from the large error enhancement of the high modes from the coarse correction. A predictor of success for MG on the Kähler-Dirac preconditioned operator would be a significant reduction of this enhancement. We would also still need to see strong local co-linearity and a significant coarse error correction on low modes. In the left and right panels of Fig. 3.6, we consider the local co-linearity and the oblique projector, respectively, of the Kähler-Dirac preconditioned operator on a representative configuration. We explore using both near-null vectors and right eigenvectors to define the prolongator P and restrictor R = P†.
On the left, we see that the local co-linearity of low modes of the Kähler-Dirac operator is well maintained, similar to the original staggered operator. The benefit of coarsening the Kähler-Dirac preconditioned operator as opposed to the original staggered operator is most clearly seen in the action of the oblique projector, as displayed in the right panel of Fig. 3.6. The oblique projector reduces the error component on the fine level for roughly the lowest 15% of the spectrum. Above this threshold, the error component is enhanced, but only minimally.
MG Algorithm Numerical Results
The convergence rate of our new MG algorithm on the Kähler-Dirac preconditioned operator, illustrated in Fig. 4.1, is a dramatic improvement relative to the failed MG algorithm applied to the original staggered operator in Fig. 3.1. The only methodological difference is coarsening the Kähler-Dirac preconditioned operator instead of the original staggered operator. Moreover, as we scan in the quark mass as shown in Fig. 4.2, we see that our formulation has eliminated ill-conditioning due to critical slowing down: unlike using CG on the even/odd preconditioned system, an MG solve takes a roughly constant number of outer iterations as the chiral limit is approached.
Let us now describe in detail the new algorithm and the numerical analysis for MG applied to the Kähler-Dirac preconditioned staggered operator. The parameters we choose are summarized in Table 1. First, we consider a two-level algorithm. We construct a right near-null vector ψ by relaxing on the homogeneous normal system A A† ψ = 0, using a Gaussian distributed random vector ψ_0 as the initial guess†. In practice, this is performed in multiple steps.
• We convert the homogeneous system to the residual system A A† e = r ≡ −A A† ψ_0.
• We relax on the residual system using CG to a relative tolerance of 10^−4 or a maximum of 250 iterations.
• We reconstruct the near-null vector ψ = ψ_0 + e, where e is the result of relaxation.

† We remark that A† A generally works just as well. We have also explored relaxing on A directly using BiCGstab and BiCGstab(l), l = 6 [62], which in practice works well at small volumes but degrades for larger volumes. The use of the normal operator may be why we can effectively capture exceptional eigenvalues.

This is performed n_vec^1 times, and then we globally orthonormalize the full set of near-null vectors. We subsequently chirally double the near-null vectors using (1/2)(1 ± ε(x)), and form the second-level operator Â = P† A P from the block-orthonormalized, chirally-doubled null vectors. The coarse correction follows three steps: (1) relax on the current residual, a process known as the pre-smoother; (2) approximately solve the second-level system [R A P] ê = R r (or, equivalently, approximately solve Â ê = r̂), giving the prolonged error correction e = P ê; and (3) post-smooth on the error accumulated from steps 1 and 2. In step 2 we use a Krylov solver, and as such the MG preconditioner is not stationary. For this reason, we use the restarted generalized conjugate residual (GCR) [63] as a flexible outer solver, forming a K-cycle. We use a global MR for our pre- and post-smoother. The specific details of these steps are given in Table 1. In practice, we iterate on the even/odd preconditioned system on the fine level, with the prescription that we coarsen assuming the odd contributions are all zero, and we also ignore the odd contributions in the prolonged error. This technique proved successful for the Wilson operator [64].
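A schematic version of this setup phase is given below (a minimal sketch under our assumptions: a dense numpy operator, plain CG on the normal system, and global rather than block orthonormalization; names such as relax_near_null are ours).

```python
import numpy as np

def relax_near_null(A, n_vec, tol=1e-4, max_iter=250, seed=7):
    """Generate near-null vectors of A by relaxing on A A^dag e = r from random guesses."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    M = A @ A.conj().T                               # normal operator
    vecs = []
    for _ in range(n_vec):
        psi0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        r = -M @ psi0                                # residual of the homogeneous system
        r0 = np.linalg.norm(r)
        e = np.zeros(N, dtype=complex)
        p, rr = r.copy(), np.vdot(r, r).real
        for _ in range(max_iter):                    # plain CG on the Hermitian system
            Mp = M @ p
            alpha = rr / np.vdot(p, Mp).real
            e += alpha * p
            r -= alpha * Mp
            rr_new = np.vdot(r, r).real
            if np.sqrt(rr_new) < tol * r0:
                break
            p = r + (rr_new / rr) * p
            rr = rr_new
        vecs.append(psi0 + e)                        # reconstructed near-null vector
    Q, _ = np.linalg.qr(np.column_stack(vecs))       # global orthonormalization
    return Q
```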
A two-level algorithm does not fully eliminate critical slowing down, it just shifts it to the second level. We address this by generalizing to a recursive algorithm, where we perform a still coarser correction to the system in step (2) of the above description.
We generate a third level similarly to how we generated the second level: we generate near-null vectors with Â Â†, chirally double the near-null vectors using (1/2)(1 ± σ_3), and subsequently form the third-level operator.
This clearly generalizes to still coarser levels. For our numerical experiments in Sec. 4, we only study a three-level algorithm. Unlike on the fine level, the Krylov solve we perform on the intermediate level is an iteration directly on Â, as we found this was more stable in practice. We approximately solve the coarsest level via CG on the normal equations. Due to the exceptional eigenvalues which propagate to coarser levels, as noted in Fig. 3.5, numerical experiments with Krylov solvers acting directly on the coarsest operator were in general not successful. This was either due to stability reasons (using BiCGstab(l) [62]) or due to cost (using GCR). We believe using the normal operator is of critical importance.
Results
A successful, recursive MG algorithm will shift critical slowing down to the coarsest level. In the context of the Schwinger model, and four-dimensional QCD, this means we want consistent convergence independent of mass and volume. We are also interested in the MG algorithm being successful in all physically interesting regimes. In the case of our target problems, this means we need to study the behavior with the bare coupling β.

Table 1: The parameters we use for our K-cycle. For consistency, we use the same setup parameters throughout the procedures described in this paper.
The continuum limit is taking β → ∞ at constant physics, where the relevant region is l_{Mπ} > l_σ. When β is too small, close to the cutoff scale, we are no longer studying relevant physics. A breakdown of MG for very small β is acceptable. The values of β studied, 3.0, 6.0, and 10.0, correspond to l_σ ≈ 2.4, 3.5, and 4.5, respectively. The lowest value of β is becoming rather unphysical.
Elimination of critical slowing down: fine level

The indication of a successful two-level algorithm is the elimination of critical slowing down for the fine operator A, that is, constant iterations with respect to the mass and volume for each β. In Fig. 4.3, we present the number of applications of the fine operator A between the GCR algorithm and the MG preconditioner, which is proportional to the number of iterations for the outer GCR solve. On the left we consider the case of fixed β = 6.0 at varying volume. The number of A applications is roughly constant, independent of the volume and mass in the chiral limit. In the right panel, we consider our largest volume, 256², fixed for three different values of β. We see that at β = 10.0 and 6.0, critical slowing down has been essentially eliminated as a function of mass. At β = 3.0, where we are probing somewhat cutoff-scale physics, the number of iterations does not appear to diverge with power-law behavior, and as such critical slowing down has still been eliminated.
Elimination of critical slowing down: intermediate level

A successful recursive algorithm eliminates critical slowing down at each level. Thus, we consider the average number of iterations on the intermediate level, shown in Fig. 4.4. We remark that in a highly optimized and tuned implementation, it is important that we use a K-cycle at the second level. In such an implementation, the maximum number of iterations on the coarsest level may be capped to some reasonable amount. This would cause the number of iterations on the intermediate level to increase. Since in a K-cycle the second level is solved to a fixed residual as opposed to a fixed number of iterations, the number of iterations at the finest level remains stable.

Critical slowing down: coarsest level

The previous two paragraphs demonstrate an elimination of critical slowing down from the finer levels. Thus, there should be critical slowing down on the efficiently solvable coarsest level. In Fig. 4.5 we consider the average number of iterations for the coarsest solve via CGNE. In contrast to the previous two figures, these plots are on a log-log scale instead of a log-linear scale. In the left and right panels, we consider constant β = 6.0 and a constant volume of 256², respectively. The number of iterations diverges with power-law behavior. Critical slowing down has been shifted to the coarsest level.
Comparison with a direct solve

In looking at the outermost level, the intermediate level, and the coarsest level in a three-level solve, we see that we have formulated an MG algorithm which shifts critical slowing down to the coarsest level. Furthermore, the solve is stable: in Fig. 4.1, we saw that an MG-GCR solve converges smoothly at our most chiral point for β = 10. There are large reductions in the relative residual on each iteration. On the other hand, the traditional solve with CG on the even/odd operator, despite converging successfully, converges very slowly, an indication of critical slowing down.
This behavior persists independent of mass. In Fig. 4.2, we trace the number of iterations away from the chiral limit, seeing that it is roughly constant: critical slowing down has been eliminated. On the other hand, the number of iterations for a solve with the even/odd operator diverges with decreasing mass with power-law behavior. This is exactly the critical behavior that has been shifted to the coarsest level in Fig. 4.5. The benefit of our MG algorithm is drastic.
Continuum Limit
It should be emphasized that our fixed prescription is effective in the most relevant regime: towards the continuum, where the lattice spacing vanishes relative to fixed physics, and in the chiral limit, where l_{Mπ} diverges relative to l_σ.
For the two-dimensional Schwinger model, taking the continuum limit at constant physics corresponds to simultaneously doubling the length scale of the fine volume, halving the mass, and quadrupling β. In Table 2, we consider the use of MG while taking the continuum limit from two base configurations. First, we consider a base configuration of 64² at m = 0.01 and β = 3.0, where we discussed earlier that an MG algorithm is successful. On two successive refinements towards the continuum limit, we see that there is a reduction in the number of outer applications of A and in the average number of iterations on the intermediate level. In tandem, the average number of iterations on the coarsest level increases: there is more critical slowing down to shift to the coarsest level towards the continuum limit, which is to be expected. Our MG algorithm performs better as the continuum limit is taken. Next, we consider a base configuration of 64² at m = 0.004 and β = 0.75, an unphysically coarse configuration. In this case, an MG algorithm fails to converge. Again, on progressive refinements, the MG algorithm becomes convergent and becomes better behaved as the continuum limit is taken.
Preserving Complex Conjugate Pairs
A possible, if not necessary, generalization for four-dimensional QCD or other staggered fermion problems could be the exact preservation of complex conjugate pairs upon coarsening. Indeed, it is possible to develop a prolongator P and a restrictor R ≠ P†, abandoning chiral doubling with projectors, which preserves complex conjugate eigenpairs after coarsening the Kähler-Dirac preconditioned operator, or any operator satisfying γ_5^{L/R} Hermiticity with γ_5^L γ_5^R = I. The resulting formalism gives what we will call an asymmetric coarsening, with σ_1^{L/R} Hermiticity on the coarse level.
We consider a set of left and right vectors, ⟨ψ_i| and |ψ_i⟩, respectively, which can generally be arbitrary and unequal. We perform a chiral doubling which gives

P̃ = [ |ψ_i⟩, γ_5^R |ψ_i⟩ ], R̃ = [ ⟨ψ_i| ; ⟨ψ_i| γ_5^L ].   (5.1)

These prolongators and restrictors obey P̃ σ_1 = γ_5^R R̃† and σ_1 R̃ = P̃† γ_5^L. This is sufficient to prove that R̃ A P̃ is σ_1 Hermitian. The next step is to block bi-orthonormalize R̃ and P̃, enforcing R P = I, by-products of which give us σ_1^{L/R}. As a clarifying tangent, we will consider the case γ_5^L = γ_5^R and ⟨ψ_i| = (|ψ_i⟩)†, that is, R̃ = P̃†. This is true, for example, for the Kähler-Dirac preconditioned operator in the free field limit, or when considering the Wilson operator in general. The critical observation in this case is to recall that the process of (block) orthonormalization via Gram-Schmidt is equivalent to a thin-QR decomposition. We define the block-dense matrix M of block dimension (coarse dof) × (coarse dof) as

P̃† P̃ = M = Σ† Σ,   (5.2)

where in the last step we have performed a Cholesky decomposition. We can rearrange Eq. 5.2 as

Σ^{−†} P̃† P̃ Σ^{−1} = I.   (5.3)

By definition, P ≡ P̃ Σ^{−1} is block orthonormal. With the definition σ_1^Σ ≡ Σ σ_1 Σ^{−1}, we have γ_5 P = P σ_1^Σ, and P† A P is σ_1^Σ Hermitian.
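This Cholesky construction can be checked directly; the toy sketch below (ours; it treats a single aggregate and a diagonal ±1 stand-in for γ_5) verifies that P = P̃ Σ^{−1} is orthonormal and that γ_5 P = P σ_1^Σ.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 32, 3
g5 = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))   # diagonal stand-in for gamma_5
psi = rng.standard_normal((N, n))
Pt = np.hstack([psi, g5 @ psi])                      # chirally doubled, not yet orthonormal
sigma1 = np.block([[np.zeros((n, n)), np.eye(n)],
                   [np.eye(n), np.zeros((n, n))]])   # swaps the two chiral halves

M = Pt.T @ Pt                                        # Eq. 5.2: M = Sigma^dag Sigma
Sigma = np.linalg.cholesky(M).T                      # upper-triangular Cholesky factor
P = Pt @ np.linalg.inv(Sigma)                        # block-orthonormalized prolongator
sigma1_S = Sigma @ sigma1 @ np.linalg.inv(Sigma)     # sigma_1^Sigma = Sigma sigma_1 Sigma^-1

print(np.allclose(P.T @ P, np.eye(2 * n)))           # P^dag P = I
print(np.allclose(g5 @ P, P @ sigma1_S))             # gamma_5 P = P sigma_1^Sigma
```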
We return to the (block) bi-orthonormalization of R̃ and P̃. The above procedure generalizes to a "thin-LU" decomposition. Eq. 5.2 generalizes to

R̃ P̃ = M = L U,   (5.4)

where in the last step we have performed an LU decomposition. We can rearrange Eq. 5.4 as

L^{−1} R̃ P̃ U^{−1} = I,   (5.5)

so that R ≡ L^{−1} R̃ and P ≡ P̃ U^{−1} are block bi-orthonormal. We can show that Â ≡ R A P admits a σ_1^{L/R} Hermiticity condition by defining

σ_1^L ≡ U^{−†} σ_1 L, σ_1^R ≡ U σ_1 L^{−†},

and noting that Â† = σ_1^L Â σ_1^R. The pair σ_1^{L/R} obeys σ_1^L σ_1^R = I, as can be verified by explicit calculation, requiring the critical and subtle observation that R̃ P̃ is σ_1 Hermitian itself.
We emphasize that this construction is fully generic, whether or not ⟨ψ_i| = (|ψ_i⟩)†. We defer a discussion of numerical experiments with preserving complex conjugate eigenpairs to Appendix A. Our relegation of this material to an appendix reflects our observation that, in two dimensions, (recursively) preserving eigenpairing actually leads to a less effective, and sometimes unstable, algorithm. This method, or a further development thereof, may bear some fruit in four dimensions.
We make the additional remark that we can now make the algorithmic choice to right-block-Jacobi precondition Â, analogous to the transformation we made to the staggered operator in the Kähler-Dirac form in the first place, and continue to preserve complex conjugate eigenpairs if we coarsen again. Let us denote Â = B̂ + Ĉ, where B̂ is the block-local contribution. The resulting right-block-preconditioned operator Â B̂^{−1} obeys a σ_1^{rbj,L/R} Hermiticity condition with σ_1^{rbj,L} = σ_1^L B̂^{−1} and σ_1^{rbj,R} = B̂ σ_1^R. This recursive right-block-Jacobi preconditioning did not lead to an effective algorithm in two dimensions.
Exact Preservation of Eigenvectors
In the case of, for example, the Wilson operator, chiral doubling with (1/2)(1 ± γ_5) preserves complex conjugate eigenpairs. We can choose the vectors, with ⟨ψ_i| ≡ (|ψ_i⟩)†, to be right eigenvectors |λ_i^{+,R}⟩ with eigenvalues λ_i^+, where the + denotes that the eigenvalue has positive real part.
The coarse operator P† D_W P exactly preserves the eigenvalue λ_i^+, and |λ_i^{+,R}⟩ is exactly preserved on the coarse subspace, that is, P P† |λ_i^{+,R}⟩ = |λ_i^{+,R}⟩. However, even though chiral doubling guarantees that the eigenvalue λ_i^− is also preserved by the coarse operator, it is not because |λ_i^{−,R}⟩ is exactly preserved by the coarse subspace; that is,

P P† |λ_i^{−,R}⟩ ≠ |λ_i^{−,R}⟩.

We can use asymmetric coarsening to preserve |λ_i^{−,R}⟩. We can choose |ψ_i⟩ = |λ_i^{+,R}⟩ and ⟨ψ_i| = ⟨λ_i^{+,L}|, then chirally double using Eq. 5.1 and subsequently block bi-orthonormalize P and R. This operator preserves the eigenvalues λ_i^±, and additionally P R |λ_i^{±,R}⟩ = |λ_i^{±,R}⟩ and ⟨λ_i^{±,L}| P R = ⟨λ_i^{±,L}|.
Conclusion
The first successful MG algorithm in LQCD was constructed for the Wilson discretization of the Dirac operator nearly a decade ago [6,7]. This advance relied on what was, at the time, a novel approach in LQCD: adaptively discovering the near-null space and geometrically projecting onto coarse lattices. Remarkably, with the exception of the similar twisted-mass discretization, the basic method has not been easily generalized to two important formulations, staggered and domain wall fermions, each of which features improved chiral symmetry. A more fundamental understanding of MG methods in LQCD is clearly lacking. Here, we have taken a step towards this. For the staggered operator, we identified the spectral feature that was responsible for the failure of a straightforward generalization of Wilson MG and have overcome this problem by preconditioning with the Kähler-Dirac (spin-flavor) block structure. We demonstrated that this has a dramatic effect on the spectrum: in the singular, zero-mass limit, the purely imaginary spectrum of the anti-Hermitian operator maps to a unit circle of the form seen for the overlap operator.
The success of the resultant MG algorithm for this Kähler-Dirac preconditioned operator has been demonstrated numerically for the two-dimensional Schwinger model. Both the theoretical framework and the phenomenological features naturally generalize to the case of four-dimensional QCD. On this basis, we are optimistic that our staggered multigrid algorithm will have similar success in this application. Numerical tests of this conjecture are underway, extending the high performance MG framework of the QUDA library to coarsen staggered-like operators. These tests will be made on the largest available lattices to explore the scaling of the algorithm over a range similar to the two-dimensional tests presented here.
We have also made an effort to explore a range of projection methods that are capable of exactly preserving the complex conjugate pairs of eigenvalues present in the Kähler-Dirac preconditioned operator. We hope our emphasis on spectral analysis and transformations will provide some flexibility in adapting our algorithm not only to four-dimensional QCD but also to similar Dirac discretizations found in BSM theories [2], supersymmetric Yang-Mills theory [20], and quantum critical behavior in condensed matter [19].
A Studies of Preserving Complex Conjugate Eigenpairs
In Sec. 5, we developed a formalism to exactly preserve complex conjugate eigenpairs for a coarsened Kähler-Dirac preconditioned operator. This used an asymmetric coarsening which gave a σ_1^{L/R} Hermiticity on the coarse level. This formulation is largely successful; however, it can suffer from anomalously large real eigenvalues in the negative half plane, destabilizing the MG preconditioned solve, in cases where the symmetric coarsening proceeded without issues. If these stability issues can be addressed, it may lead to a better algorithm in two and four dimensions. As appropriate, this will be the topic of a future publication. This appendix will follow the structure of Sec. 3.3, where we studied the spectrum, local co-linearity, and oblique projector of the asymmetrically coarsened operator in the case where a recursive algorithm is successful. We will then scan the iteration counts as a function of mass, similar to Sec. 4.1, and identify cases where the algorithm breaks down. Last, we will investigate one of these cases.
In Sec. 3.3 we considered a representative spectrum of the Kähler-Dirac preconditioned operator and a symmetric coarsening. In the case of asymmetric coarsening, we again expect the low modes to be preserved well, but additionally to come in complex conjugate pairs. This is exactly the case in Fig. A.1, where we overlay the spectrum of the asymmetric coarse operator. We also see a "feature" of σ_1^{L/R} Hermiticity: there are pairs of purely real eigenvalues.
In the case of the Wilson or overlap operator, pairs of purely real eigenvalues have a significant physical interpretation. The smaller real eigenvalue corresponds to a physical chiral mode via the lattice index theorem [65], which thus needs to be well captured by an MG algorithm. The paired large real eigenvalue is merely a quirk of being on a finite lattice, and thus lives as an isolated large eigenvalue near the cutoff. On the other hand, the pairs of real eigenvalues for the coarsened Kähler-Dirac operator do not have an obvious physical interpretation, just as the naïve staggered fermion operator does not trivially correspond to an index theorem [43]. These purely real eigenvalues are a symptom of unstable solves at large volumes.

Returning to stable solves, we consider the local co-linearity and the oblique projector under an asymmetric coarsening. These are overlaid on the data for a symmetric coarsening in Fig. A.2. An asymmetric coarsening is roughly comparable in quality to a symmetric coarsening, indicative of a successful MG algorithm.¶

¶ In general, the local co-linearity is not bounded by 1 when R ≠ P†. This is because (1 − P R) is not a normal operator. Thus, for a normalized vector v, v† (1 − R† P†)(1 − P R) v is not bounded by 1. This can be realized by the bi-orthonormal basis p_1 = (1/2, 1/2, 1/2, 1/2), p_2 = (1/2, 1/2, 1/2, −3/2), r_1 = (1/2, 1/2, 1/2, 1/2), r_2 = (1/2, −1/2, 1/2, −1/2).

As a next task, we consider MG preconditioned solves with the asymmetrically coarsened operator. We will only present a subset of the cases considered in Sec. 4.1 and instead focus on the cases where the solve is unstable: large volumes. The number of fine operator applications and average intermediate applications are presented in Fig. A.3. In the cases where a data point is marked by a "×", the solve failed. The failures are largely confined to smaller masses, but not with a discernible pattern; indeed, for β = 6.0, the lowest masses had stable solves! We present the spectrum of the asymmetrically coarsened operator, for a case where an MG solve fails, in the left panel of Fig. A.4, where we see there are now large, real eigenvalues far into the right-half plane and also into the left-half plane. There is also a large negative real eigenvalue at approximately −26.75. These pathological real eigenvalues are not part of the low subspace and are therefore not well captured by our MG algorithm. However, in the right panel, we see that the low spectrum is still well behaved. It is a point of future research to see if these anomalously large, real eigenvalues can be addressed.
"Physics"
] |
Assessment of Mechanical Specific Energy Aimed at Improving Drilling Inefficiencies and Minimizing Wellbore Instability
Mechanical specific energy is the ratio of the total energy input to the bit to the output rate of penetration. This parameter can be used to optimize drilling performance and bit performance and to minimize wellbore instability. Analyzing these parameters reduces the cost of drilling operations by enabling higher rotational speeds of the drilling equipment and maximizing bit life. Numerous wells have been drilled throughout Iran's South Pars field, but no research has yet taken the major step of evaluating mechanical specific energy there to reduce inefficiencies during drilling operations. The purpose of this research is to use mechanical specific energy to analyze the performance of drilling parameters such as bit rotation speed and bit penetration rate in the formation, given the rock and fluid properties. Because of the high cost of hiring drilling rigs and the importance of mechanical specific energy for increasing drilling rate and reducing costs, several approaches to optimizing mechanical specific energy have been studied. In this work, statistical studies using SPSS software were carried out on the middle formations of one of the phases of Iran's South Pars field, which include the Hith, Surmeh, Neyriz, Dashtak, and Kangan formations.
Introduction
The concept and formula of mechanical specific energy were introduced by Teale in 1965. The calculation of mechanical specific energy depends on the torque, well diameter, rotational speed, rate of penetration into the formation, and weight on bit. The concept of mechanical specific energy is used to evaluate the performance of drilling operations and of the bit, as well as to diagnose inefficiencies in drilling operations. Mechanical specific energy describes the amount of energy input to a drilling system in terms of its mechanical description (weight on bit and torque), per specified unit (length, area, and mass), and the energy required to do the work (force, time, and distance) [1].
Specific energy (SE) was introduced as the energy required to drill a given volume of rock, although the concept has not been used extensively as an index in rock studies [2].
Excavating a given volume of rock theoretically requires a minimum energy, and this amount depends entirely on the nature and characteristics of the rock. Specific energy can also be used as an indicator of changes in lithology and to select the correct type of bit for the drilling operation. The specific energy technique is used as an aid alongside other methods, such as examining the records of previous bits and the drilling cost per unit length, to select the bit [3][4][5].
Rotary drilling does work in two parts: (a) the axial force (weight on bit, WOB) and (b) the rotational component (torque, TQ). Let N be the rotational speed, A_B the area of bit engagement, and ROP the rate of penetration. The total work done by the bit while advancing a distance Y, neglecting energy lost to non-drilling operations, can be written as

W_Total = WOB × Y + 2 × π × N × TQ × (Y / ROP).   (1)

Given that A_B × Y equals the volume of rock removed, the specific energy is

SE = WOB / A_B + (2 × π × N × TQ) / (A_B × ROP),   (2)

where A_B = π × D_b² / 4 and D_b is the bit diameter.
Specific energy has been used in two contemporary appraisal modes: while drilling (to determine when to replace the bit) and after drilling (to evaluate bit performance).
Using the principles described above, an equation for calculating mechanical specific energy (MSE) from the torsional and axial work done by the bit in removing a volume of rock is obtained. This equation, for atmospheric and hydrostatic conditions, is given by correlation 1-4.
In some cases, the efficiency factor is taken to be one. However, because the bit efficiency at best performance only reaches 30-40 percent, drilling operators often take this coefficient to be 0.35, regardless of bit type and weight on bit.
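A minimal field-unit calculator for this efficiency-adjusted MSE is sketched below (our own illustration, assuming the common oilfield-unit form of Teale's equation; the function name and input values are hypothetical).

```python
import math

def mse_psi(wob_lbf, torque_ftlb, rpm, rop_ft_hr, bit_diameter_in, efficiency=0.35):
    """Mechanical specific energy (psi) scaled by a bit efficiency factor."""
    area_in2 = math.pi * bit_diameter_in ** 2 / 4.0             # bit area A_B, in^2
    axial = wob_lbf / area_in2                                   # WOB term
    rotary = 120.0 * math.pi * rpm * torque_ftlb / (area_in2 * rop_ft_hr)
    return efficiency * (axial + rotary)

# hypothetical inputs for a 12.25-inch section
print(round(mse_psi(wob_lbf=35000.0, torque_ftlb=8000.0, rpm=120.0,
                    rop_ft_hr=60.0, bit_diameter_in=12.25)))
```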
When a mud pump is used in the well, correlation 1-5 gives the mechanical specific energy.

Statistics is the science of collecting, organizing, analyzing, and interpreting data to determine the validity and generalizability of results. Statistical analysis helps researchers process raw data, extract the required information and, if necessary, generalize the results. If the data volume is large, applying the various methods of statistical analysis by hand is tedious and difficult. Today, a variety of statistical software packages are available that can perform many kinds of statistical analysis; in this research we use SPSS, one of the oldest applications in the field of statistical analysis. SPSS stands for Statistical Package for the Social Sciences.
Statistics is divided into two branches, parametric and nonparametric; parametric statistics is further split into descriptive and inferential statistics [9,10].
Descriptive statistics
Descriptive statistics address the collection, summarization, display, and processing of statistical data. In this research, to characterize each data set used, the range, minimum, maximum, average, standard deviation, and coefficient of variation are provided. The number of samples collected can be evaluated to show the weight of the relationships extracted with their help; in this context, the minimum and maximum values are used to calculate the ranges mentioned.
The average is the simplest and most important index of the data center. A central index is a value that specifies the center of the data; researchers can use it to compare a vast number of variables. The standard deviation is an important index for measuring the dispersion of a variable's values. Dividing the standard deviation of a variable by its average gives the coefficient of variation, a relative index for comparing different variables: a higher coefficient of variation indicates greater dispersion.
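These descriptive summaries are straightforward to compute; a small pandas sketch (ours, with invented illustrative numbers, not the field data of this study) follows.

```python
import pandas as pd

# hypothetical drilling-parameter samples; values are illustrative only
df = pd.DataFrame({
    "WOB_klbf":  [12.0, 14.5, 13.2, 15.1, 11.8],
    "RPM":       [110.0, 125.0, 118.0, 130.0, 105.0],
    "ROP_ft_hr": [42.0, 55.3, 48.1, 60.2, 39.5],
})
stats = df.agg(["min", "max", "mean", "std"]).T
stats["range"] = stats["max"] - stats["min"]
stats["coef_var"] = stats["std"] / stats["mean"]   # coefficient of variation
print(stats)
```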
Inferential statistics
Inferential statistics provides methods to analyze observations, validate them and establish their results; among these methods are the various types of regression. In the inferential part of this work, linear relationships between mechanical specific energy and the other drilling parameters are first determined and displayed, both with and without an intercept. The selection criteria for the relations with and without intercept are R² and the F test (F value). R² (the coefficient of determination) shows how precisely mechanical specific energy is predicted from the other drilling parameters.

The correlation coefficient indicates the strength of the relationship between parameters: R > 0.8, strong relationship; 0.2 < R < 0.8, moderate relationship; R < 0.2, weak relationship.
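The fits with and without intercept described above can be reproduced with ordinary least squares. The sketch below uses the Python statsmodels package and the same hypothetical drilling-log columns as before; it reports R², the F statistic and the per-coefficient t and significance values analogous to the SPSS output discussed in Tables 1-3.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("drilling_log.csv")  # hypothetical export of the well data
y = df["MSE"]
X = df[["ROP", "N", "WOB", "Q", "dP", "TQ", "Drilled"]]

no_intercept = sm.OLS(y, X).fit()                     # regression through the origin
with_intercept = sm.OLS(y, sm.add_constant(X)).fit()  # regression with intercept

for name, fit in [("without intercept", no_intercept), ("with intercept", with_intercept)]:
    print(name, "R2 =", round(fit.rsquared, 3), "F =", round(fit.fvalue, 1))
print(no_intercept.summary())  # coefficients (B), std. errors, t and Sig columns
```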
Linear correlations
Examining the linear correlation without intercept between mechanical specific energy and the other drilling parameters: in Table 1, R (the correlation coefficient), which represents the relationship between the parameters, is 0.904, and R² (the coefficient of determination) is about 0.818. Mechanical specific energy can therefore be predicted from these seven independent parameters with 81.8% accuracy; in other words, parameters other than these seven account for the remaining 18.2% of the precision of mechanical specific energy. R² is thus both a measure of determination and a measure of accuracy.
Compressive strength
The compressive strength is the uniaxial compressive stress at which the test specimen fails completely. It is usually obtained experimentally by a compression test. The compression testing machine is the same as that used for tensile testing, except that a uniaxial compressive load is applied instead of a uniaxial tensile load, and the test specimen (usually cylindrical) is shorter and thicker. From the test results, the stress-strain curve shown in Figure 1 is plotted [6-8].
Uniaxial compressive strength (UCS)
Uniaxial compressive strength is the rock strength parameter that reflects the actual energy required to remove the rock; in other words, it is the highest axial stress the rock can tolerate. Mechanical specific energy characterizes the drilling parameters as the energy per unit volume of rock removed. The gap between mechanical specific energy and uniaxial compressive strength therefore represents energy spent beyond what the formation demands, and the aim of optimization is to minimize this gap. Mechanical specific energy alone can assess the relative efficiency of drilling operations, but measured against the compressive strength of the rock it becomes a good benchmark for drilling.
Working procedure
In this study, data from the 12 1/4-inch hole sections of all wells on a platform in one of the phases of Iran's South Pars field were collected and studied. Since the rock strength parameters are the same within the same formation, the energy required to drill the formation should be similar in all wells; where it is not, either the drilling parameters are not optimal or the bit is not suitable.
In the first part of the study, the optimum range of each drilling parameter, i.e., the range that increases drilling speed and reduces mechanical specific energy, was presented for this region.

In the second part, among the variety of statistical methods, the SPSS software was used. It should be noted that the proposed equations relate to the parameter values indicated in the first part; if the equations are applied in this area with parameters outside that range, extra caution is required.
In general, forecasting methods for engineering problems can be divided into two categories: data-driven methods and statistical methods. Data-driven methods include techniques such as neural networks; statistical methods include the various types of regression, for which an error measure can be defined. The mean forecast error of mechanical specific energy can fluctuate around 291.6665 (Table 1). Table 2 presents the analysis of variance (ANOVA) test: F is the test statistic for the significance of the regression coefficients, comparing the mean regression error with the mean actual error of the variable minus the value predicted by the regression equation; a high value of this statistic indicates a good relationship between the independent and dependent parameters. Sig (the significance level) quantifies the relationship between the independent and the dependent parameters. The total errors, mean errors and their comparison are also shown in Table 2. In Table 3, B gives the proposed coefficients of the independent variables in the formula; Std. Error is the standard error, which describes the dispersion of each parameter; Beta is the standardized coefficient offered by the software, which can be used in place of B; t indicates how significantly each variable is associated with the dependent variable; and Sig specifies the significance of each independent parameter's contribution to the correlation.
As a result, the proposed linear correlation without intercept is presented in Table 4. All variables have a significance level below 0.01, which indicates a relationship between the variables at the 99% confidence level and supports the validity of this correlation.
Examining the linear correlation with intercept between mechanical specific energy and the other drilling parameters:

The model summary, analysis of variance and coefficient tables for the linear correlation with intercept between mechanical specific energy and the drilling parameters were produced in the same way. The proposed linear correlation with intercept for the studied area has a coefficient of determination of 0.62 and a correlation coefficient of 0.788.
Examining the nonlinear correlations between mechanical specific energy and the other parameters when mud pumps are used:

Nonlinear correlations 1-9 and 1-10 apply while mud pumps are in use.

Table 3: Coefficients of the linear correlation without intercept between mechanical specific energy and the other drilling parameters.
Selecting the most appropriate correlation and determining the most effective parameters on mechanical specific energy. Most favorable linear correlation: among the linear correlations, the correlation without intercept (Tables 4 and 5) is superior to the correlation with intercept (Tables 1-5), because its test statistic, correlation coefficient and coefficient of determination are greater and its significance level is better (the significance level of all variables in the linear correlation without intercept is below 0.01, indicating a relationship between the variables at the 99% level).
Most favorable nonlinear correlation:
The proposed nonlinear correlation between mechanical specific energy and the other parameters when a mud pump is used in the well (correlation 1-9) has the largest coefficient of determination and correlation coefficient, indicating higher accuracy and a stronger relationship with the parameters than the other nonlinear correlations for the studied field.
Most favorable correlation for determining the most influential independent parameters on mechanical specific energy: among the seven proposed correlations, the linear correlation without intercept in Table 1, despite a slightly lower coefficient of determination than the proposed nonlinear correlations 1-7, 1-9 and 1-10, is considered the most appropriate, because the linear correlations underlie the nonlinear ones. Therefore, to identify the most influential independent parameters on mechanical specific energy, the linear correlation without intercept should be consulted [11-13].
How to determine the most influential independent parameters on mechanical specific energy: in Table 1, which corresponds to the linear correlation without intercept, the significance levels of the parameters are compared; the smaller a parameter's significance level, the more effective that parameter is. If two parameters have equal significance levels, we consider t: whichever has the higher absolute value of t is the more effective parameter. In the studied area, the most influential independent parameters on the dependent variable (mechanical specific energy) are, in order: torque, bit penetration rate in the formation, weight on bit, mud flow rate, rotational speed, pressure difference and drilled interval. Consequently, for better and more effective optimization of mechanical specific energy, it is recommended to optimize these parameters in the listed order of priority. It should be noted that, among these parameters, weight on bit, mud flow rate and rotational speed can be controlled by the drillers at the rig, whereas torque and pressure difference cannot be controlled independently.
On the other hand, torque is directly related to the rotational speed of the rig, which can be controlled (increasing rotational speed increases torque). Thus, by optimizing the controllable parameters (weight on bit, mud flow rate and rotational speed), the drilling speed can be increased.

Weight on bit is an independent, controllable parameter that is adjusted according to the formation properties and the hole angle: in a high-density formation, increasing weight on bit along with rotational speed is important; to build hole angle the weight on bit is increased, and to drop hole angle it is reduced.

Mud flow rate is a controllable parameter whose effect is to create pressure. The pressure difference, by contrast, is a parameter that cannot be controlled independently from the rig, but it can be changed through the mud flow rate (increasing the mud flow rate increases the pressure) [14,15].
Summary
The descriptive statistics, including the number of samples and the highest and lowest dispersions and their causes, were investigated. For the studied field, the standard deviation and coefficient of variation reveal large variations in parameters such as mechanical specific energy, pressure difference and drilled interval, expressing the high dispersion of these variables; the small values of these measures for torque, rotational speed and mud flow rate indicate the lower dispersion of those parameters.

Correlations to predict mechanical specific energy in the studied area were then derived by inferential statistics using the SPSS statistical software, and the most favorable linear and nonlinear correlations were investigated. Comparing the proposed correlations for the studied field gave the following results: correlation coefficients of 0.837 and 0.832 were obtained, higher than those of the other proposed nonlinear correlations, which indicates a stronger connection between the parameters and a more accurate correlation of mechanical specific energy with the other independent drilling parameters.
Of the seven correlations suggested for the studied area, the linear correlation without intercept,

MSE = -28.120 × ROP + 3.287 × N + 14.083 × WOB - 0.511 × Q + 0.532 × ΔP - 37.89 × TQ - 2.747 × Drilled,

is considered the most appropriate, its coefficient of determination differing only slightly from some of the nonlinear correlations (with linear correlations preferred over nonlinear ones). The significance level and t of this correlation were examined to obtain the most influential independent parameters on mechanical specific energy; as a result, the most effective independent parameters are, in order: torque, bit penetration rate, weight on bit, mud flow rate, rotational speed, pressure difference and drilled interval. For more suitable optimization of mechanical specific energy when drilling with other rigs in this region (in formations such as Hith, Surmeh, Neyriz, Dashtak and Kangan), it is recommended to keep the drilling parameters (in their order of priority of influence on mechanical specific energy) within the proposed optimum limits.
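For reference, the proposed correlation can be evaluated directly; the function below simply transcribes the coefficients quoted above and, as noted in the text, is only meaningful for parameter values inside the optimum ranges reported for this field.

```python
def mse_proposed(rop, n, wob, q, dp, tq, drilled):
    """Linear correlation without intercept proposed for the studied field."""
    return (-28.120 * rop + 3.287 * n + 14.083 * wob
            - 0.511 * q + 0.532 * dp - 37.89 * tq - 2.747 * drilled)
```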
It should be noted that, among these parameters, weight on bit, mud flow rate and rotational speed can be controlled by the drillers at the rig; torque and pressure difference cannot be controlled independently (torque is directly related to rotational speed, and pressure difference is directly related to mud flow rate). Changing the three parameters weight on bit, mud flow rate and rotational speed also changes the bit penetration rate in the formation. For the Kangan formation, the optimum ranges include rotational speed from 105 to 120 with an average of 114.33 RPM, weight on bit from 18 to 33 with an average of 25.7 klbs, flow rate from 800 to 900 with an average of 854.2 gal/min, pressure difference from 80 to 335 with an average of 157 psi, and torque from 9 to 29 with an average of 17.5 kft-lb.
Conclusion
Since the bit penetration rate affects mechanical specific energy more strongly than the controllable parameters (weight on bit, mud flow rate and rotational speed), changing these three parameters with the aim of accelerating the drilling speed is more important than merely keeping them within the proposed optimum range.

To use the results of this research, the parameters should be kept within the ranges given in the specification described by the normal curves and histograms of the drilling parameters. Among the variety of statistical methods, the SPSS software was used in this research.
As noted above, mechanical specific energy can be used to check mechanical drilling performance (the selection and optimization of drilling parameters), to check bit performance (and design bits more efficiently) and to help diagnose drilling failures. However, no appropriate statistical processing method for optimizing the drilling parameters and assessing their effect on mechanical specific energy had previously been applied in the studied area; given the important role of mechanical specific energy in reducing costs, this research used it to analyze and optimize the drilling parameters in part of Iran's South Pars field. It is therefore recommended to use mechanical specific energy to investigate drilling instabilities (which can include sticking of the bit and BHA in the mud, vibrations, improper hole cleaning, etc.) and to check the performance of the bit in the area. By selecting the appropriate bit type (fixed cutter, PDC, etc.), designing the bit appropriately (number of blades; size and density of cutters; side rake angle; number and size of nozzles) and accounting for depth of cut and confining pressure, the drilling speed should be increased and damage to the bit minimized.
"Engineering",
"Environmental Science"
] |
Regulation of Translation Initiation under Abiotic Stress Conditions in Plants: Is It a Conserved or Not so Conserved Process among Eukaryotes?
For years, the study of the regulation of gene expression in plants in response to stress conditions has focused mainly on the analysis of transcriptional changes. However, knowledge of translational regulation is very scarce in these organisms, even though in plants, as in the rest of the eukaryotes, translational regulation has been proven to play a pivotal role in the response to different stresses. Regulation of protein synthesis under abiotic stress was thought to be a conserved process since, in general, both the translation factors and the translation process are basically similar across eukaryotes. However, this conservation is not so clear in plants, where knowledge of the mechanisms that control translation is very poor. Indeed, some of the basic regulators of translation initiation, well characterised in other systems, are still to be identified in plants. In this paper we focus on both the regulation of different initiation factors and the mechanisms that cellular mRNAs use to bypass the translational repression established under abiotic stresses. For this purpose, we review the knowledge from different eukaryotes, paying special attention to the information that has recently been published in plants.
Introduction
One of the main responses of cells to stress conditions involves the partial or virtually total cessation of energetically expensive processes normally vital to homeostasis, including transcription and protein synthesis. Translation consumes a substantial amount of cellular energy and is therefore one of the main targets to be inhibited in response to most, if not all, types of cellular stress. However, under conditions where global protein synthesis is severely compromised, some proteins are still synthesised as part of the mechanisms of cell survival, as these proteins are able to mitigate the damage caused by the stress and enable cells to tolerate the stressful conditions more effectively [1]. The onset of abiotic stresses, as environmental conditions, is in many cases sudden; a quick response to stress must therefore be established to assure cell survival. In such a context, translational regulation of preexisting mRNAs provides a prompt and alternative way to control gene expression, as compared to slower cellular processes such as mRNA transcription, processing and transport to the cytoplasm [2].
In animals and yeast, there are many known examples of global translational inhibition accompanied by the preferential production of key proteins critical for survival under different abiotic insults [3-8]. This scenario is also beginning to be envisioned in plants, where several studies demonstrate that general mRNA translation inhibition and selective translation of some mRNAs are key points in the adaptation of plants to different abiotic stresses, including hypoxia, heat shock, water deficit, sucrose starvation and saline stress [9]. Thus, in Arabidopsis seedlings subjected to oxygen deprivation, mRNAs coding for proteins involved in glycolysis and alcoholic fermentation are efficiently translated, while the translation of other constitutively synthesised proteins is inhibited [10]. In a similar way, a decrease in de novo protein synthesis has been demonstrated in Brassica napus seedlings subjected to heat shock for several hours; under these conditions, in contrast to the proteins synthesised under normal conditions, only the translation of heat shock proteins is observed [11]. Furthermore, a reduction of protein synthesis with an increase in the synthesis of membrane proteins and of sulphur assimilation enzymes and transporters has been described in Arabidopsis cultured cells subjected to sublethal cadmium stress [12]. In addition, the translational repression of specific components of the translation machinery and of cell cycle-related mRNAs has been observed during sucrose starvation using the same system [13]. Other examples of rapid impairment of de novo protein synthesis by osmotic stress in Arabidopsis and rice have recently been published [14].
Initiation of Translation: Main Target of the Translation Regulation in Response to Abiotic Stress
To date, the different experiments carried out to unravel which translational phase is regulated under stress conditions point to a regulation mainly at the initiation step. In eukaryotes, under physiological conditions, the vast majority of mRNAs initiate translation via a canonical cap-dependent mechanism that begins with the recognition by the eIF4E factor of the cap structure (7-methyl guanosine) placed at the 5′ end of the mRNAs to be translated. eIF4E interacts with eIF4G and with eIF4A, forming the cap-binding complex called eIF4F. This complex allows the further recruitment of the 43S preinitiation complex, which consists of the small ribosomal subunit 40S, the ternary complex eIF2/GTP/tRNAi^Met and the factors eIF3, eIF1 and eIF1A. The resulting preinitiation complex scans the mRNAs in the 5′ → 3′ direction until an initiation codon is found. There the 60S ribosomal subunit is loaded and the elongation phase begins [15]. However, under abiotic stress conditions this canonical translation initiation is impeded by different mechanisms that affect mainly the activity of the initiation factors eIF2α, eIF4E and eIF4A [1,2,5,16-18].
Translation Regulation by eIF2α Phosphorylation.
In eukaryotes, one of the main mechanisms of translation inhibition in response to stress is the regulation of the α subunit of the eIF2 factor by phosphorylation. eIF2α phosphorylation is mediated by different kinases that are specifically activated in response to different stresses, promoting the inhibition of translation by hindering the formation of the eIF2/GTP/tRNAi^Met ternary complex [17]. eIF2α kinases and their activation by stress conditions differ among eukaryotes. In vertebrates, four different eIF2α kinases have been described, namely GCN2, PERK, PKR and HRI, which are activated by nutrient limitation [19], protein misfolding in the endoplasmic reticulum (ER) [20], virus infection [21] and heme group availability [22], respectively (Figure 1(a)). However, other eukaryotes have a different number of these enzymes. For instance, Schizosaccharomyces pombe has three eIF2α kinases (two distinct HRIs and a GCN2), Drosophila melanogaster and Caenorhabditis elegans have only two (PERK and GCN2), and Saccharomyces cerevisiae has only one (GCN2) [23].
A strong inhibition of protein synthesis by eIF2α phosphorylation under different stress conditions has also been reported in plants, demonstrating that this mechanism of translational regulation is conserved in these organisms [24]. Genome-wide searches for eIF2α kinases in Arabidopsis and rice suggest that higher plants contain only a GCN2-like eIF2α kinase [24]. In agreement with these in silico searches, so far only the eIF2α kinase GCN2 has been characterized in plants [24,25], although some reports also suggest the controversial existence in plants of an eIF2α kinase with the biochemical properties of the mammalian PKR [26-29]. Arabidopsis GCN2 is activated under different stress conditions, including amino acid and purine deprivation, cadmium, UV, cold shock and wounding (Figure 1(a)), or in response to different hormones involved in the activation of the defence response to insect herbivores [24,25]. Although AtGCN2 activity is linked to a strong reduction in global protein synthesis under the aforementioned conditions, the activity of this enzyme does not account for the general inhibition of translation under all stresses in plants, as treatments with NaCl or H2O2 do not actively promote the phosphorylation of eIF2α. Moreover, results in Arabidopsis demonstrate that heat shock does not lead to eIF2α phosphorylation either, confirming previous results obtained in wheat [30]. Interestingly, heat shock causes a striking inhibition of protein synthesis in plants, suggesting that different mechanisms might be involved in the global inhibition of protein synthesis observed under these conditions.
Translation Regulation by the Association of eIF4E with Interacting Proteins.

The regulation of mammalian eIF4E under abiotic stress conditions is by far the best studied mechanism. This regulation in mammals involves the interaction of eIF4E with the 4E-binding proteins (4E-BPs). 4E-BPs carry the same conserved eIF4E-binding domain as eIF4G, so their mechanism of action is based on their capability to compete out the eIF4G-eIF4E interaction, thereby inhibiting further recruitment of the ribosome to the mRNA cap structure. This mechanism is regulated by the phosphorylation status of the 4E-BPs. Under physiological conditions, the TOR (target of rapamycin) kinase phosphorylates 4E-BPs, rendering them unable to interact with eIF4E. In response to different stresses, TOR is inhibited and 4E-BPs become dephosphorylated. This hypophosphorylated state increases the affinity of 4E-BPs for eIF4E, inhibiting cap-dependent translation and setting up a switch in the translational initiation mechanism from cap-dependent to cap-independent [18] (Figure 1(b)).
Regulation of eIF4E activity in the budding yeast S. cerevisiae shares common features with that of mammals. In S. cerevisiae two functional homologs of the mammalian 4E-BPs, p20 and EAP1, have been described [31,32]. Both proteins block cap-dependent translation by interfering with the interaction of eIF4E with eIF4G, a mechanism analogous to that of the mammalian 4E-BPs [31,32]. In addition, the TOR signalling pathway also plays a critical role in yeast, as in higher eukaryotes, in the modulation of translation initiation via regulation of eIF4E activity. Indeed, disruption of the EAP1 gene confers partial resistance to the growth-inhibitory properties of rapamycin, implicating EAP1 in the TOR signalling pathway controlling cap-dependent translation in S. cerevisiae [32]. Cap-independent translation has also been observed in plants subjected to both abiotic and biotic stresses (Figure 1(b)). In maize, two cellular mRNAs, the alcohol dehydrogenase ADH1 and the heat shock protein HSP101, are translated in a cap-independent manner in oxygen-deprived roots [33] and during heat stress [34], respectively. These data, together with the fact that plant viruses use a cap-independent translation strategy to translate their mRNAs, which lack the cap structure, in host cells [35], demonstrate that the plant translational apparatus is able to support cap-independent translation under stress conditions. In addition, TOR also plays an important role in the regulation of protein synthesis in plants, as RNAi reduction of TOR results in a strong inhibition of translation initiation in Arabidopsis, while TOR-overexpressing lines show an increase in translation initiation efficiency [36]. Moreover, in these lines the expression levels of AtTOR correlate with the tolerance of Arabidopsis to osmotic stress, indicating that AtTOR, possibly through its role in protein synthesis, modulates the response to abiotic stress conditions [36].
Despite these striking parallelisms, the link between the role of TOR and the regulation of eIF4E activity under abiotic stress in plants, if it exists, is far from being understood (Figure 1(b)). Indeed, no homolog of the 4E-BPs has been found in the plant genomes available to date. In spite of that, it has been described that the β subunit of the nascent polypeptide-associated complex (NAC) and the plant lipoxygenase 2 (AtLOX2) could putatively act as 4E-BP analogs, since they interact with the Arabidopsis eIFiso4E in yeast two-hybrid assays and these interactions can be displaced by the addition of AtIF4G in vitro [37,38]. Moreover, AteIF4E has been proven to coimmunoprecipitate with AtLOX2 from Arabidopsis extracts [38]. However, their role in the regulation of protein translation has not been demonstrated, as no evidence for changes in translation mediated by these proteins, or for the regulation of their activities by TOR, has been described either in vitro or in vivo.
Translation Regulation by eIF4A.
Recently, new alternative mechanisms for the regulation of translation initiation under stress conditions, involving the regulation of the eIF4A RNA helicase, have been discovered. A clear example is found in yeast [5], where glucose depletion was shown to cause a global translation inhibition due to a reduction in the amount of eIF4A bound to eIF4G. Concomitant with this reduction, changes in the levels of eIF3 associated with eIF4G are observed, indicating that eIF4A could be required for the turnover of the eIF4G-eIF3 complex in a way that modulates translation initiation. Furthermore, the involvement of eIF4A regulation in the translational response to lithium stress in S. cerevisiae has also been described [39] (Figure 1(c)).
As shown for yeast eIF4A, plant eIF4A activity seems to be involved in the regulation of translation under abiotic stress in these organisms, as overexpression of the pea DNA helicase 45, which appears to be the eIF4A ortholog, has been proven to confer high-salinity tolerance in tobacco [40]. However, this observation should be studied further, as the exact mechanism underlying this stress tolerance is not yet fully understood (Figure 1(c)).
Differential Translation of mRNAs in Response to Abiotic Stress Conditions
Under the conditions of general translational inhibition induced by abiotic stresses, some mRNAs involved in triggering stress responses are still selectively and efficiently translated. These transcripts have special characteristics that allow them to bypass the different regulatory checkpoints of translational inhibition. In this section we focus on the features that allow these mRNAs to circumvent the downregulation of translation, paying special attention to the information available in plants.
Differential Translation Mediated by eIF2α Regulation.
Specific examples of mRNAs immune to eIF2α regulation under a variety of stress conditions, such as GCN4 and ATF4, have been characterized in yeast [41] and mammals [42] (Figure 1(a)). Both mRNAs are translated by a complex mechanism based on the fact that, when eIF2α is phosphorylated and the ternary complex is therefore scarce, the scanning ribosome fails to initiate translation at upstream open reading frames (uORFs), which terminate in premature stop codons. In this case, scanning continues downstream towards the functional initiation codon; this long scan allows enough time for ternary complex recruitment and thereby promotes the translation of the functional peptide [16,41]. In plants, eIF2α phosphorylation causes a drastic inhibition of protein synthesis during amino acid starvation that is correlated with a partial inhibition of mRNA association with polysomes [24], demonstrating that under eIF2α phosphorylation some transcripts are still translated. However, it is not yet known whether eIF2α phosphorylation leads to the stimulated translation of specific mRNAs, as reported for other systems (Figure 1(a)). In plants, no homolog of the GCN4 transcription factor has been characterized, and there is no evidence for the involvement of GCN2 in the transcriptional activation of Arabidopsis genes homologous to those regulated by GCN4 in yeast [25].
Differential Translation Mediated by IRESs and CITEs.
In the late 1980s, the study of viral gene expression led to the discovery of the most studied alternative mode of translation initiation, IRES-driven initiation. This mechanism allows the 40S ribosome to be directly recruited to sequences located within the 5′-UTR of viral RNAs, called Internal Ribosome Entry Sites (IRESs), without the need for cap recognition by eIF4E [43-45]. Since then, IRES activity has been described in an increasing number of cellular transcripts, including those coding for translation initiation factors, transcription factors, oncoproteins, growth factors, and homeotic and survival proteins. The presence of these cellular IRESs (cIRESs) allows the efficient translation of mRNAs under conditions where cap-dependent initiation is inhibited or seriously compromised, as is the case under abiotic stress or during physiological processes such as mitosis, apoptosis or cell differentiation [46,47].
In plants, three cIRESs have been characterized that support cap-independent translation in vitro. These cIRESs have been found within the 5′-leader sequences of the mRNAs coding for the Arabidopsis ribosomal protein S18 subunit C (RPS18C) [48], the maize heat shock protein 101 (HSP101) [34] and the maize alcohol dehydrogenase (ADH1) [33]. Two of these mRNAs, HSP101 and ADH1, are efficiently translated under heat shock and under hypoxia, respectively [33,34], suggesting an important role of cIRESs in the mechanism of selective translation under abiotic stress in plants. Indeed, the 5′-leader of ADH1 was able to drive efficient translation of a reporter gene in vivo in Nicotiana benthamiana cells both under oxygen shortage and under heat shock, while translation of the same construct lacking this sequence was significantly reduced [33]. Although promising, the examples of known plant cIRESs are scarce, so whether the use of cIRESs as translational enhancers of specific cellular mRNAs under abiotic stress is a generalized mechanism in these organisms remains an open question.
For years, the presence of cIRESs was considered the only possible mechanism underlying cap-independent translation of cellular mRNAs. Interestingly, new mechanisms of cap-independent translation have been proposed to explain the translation observed under conditions where eIF4E activity is reduced [49,50]. One of them is the translation of the mouse HSP70 mRNA under heat stress conditions [4]. In that work, Sun and collaborators demonstrated that the HSP70 5′-UTR is able to drive the translation of reporter genes under cap-independent conditions; however, the same sequence is unable to sustain cap-independent translation when placed in the intercistronic region of a bicistronic construct, ruling out the presence of an IRES within the sequence. Examples of such sequences have been described within plant viral mRNAs. The mRNAs of a large proportion of plant viruses lack the cap structure and are therefore forced to be translated in a cap-independent manner. To do so, in addition to viral IRESs, they use special elements termed cap-independent translational enhancers (CITEs). CITEs are able to recruit eIF4E and eIF4G cognates, or directly the 40S ribosomal subunit, to the proximity of the AUG initiation codon, licensing the mRNA to initiate translation in a cap-independent manner [35,51]. Although the existence of CITE-like elements is still considered exclusive to plant viral mRNAs, it would not be surprising if such elements were also discovered in plant cellular mRNAs. Cellular CITE-like elements, if present, might provide an alternative to cIRESs to drive translation of plant mRNAs [33].
Differential translation of some mRNAs under certain abiotic conditions could also be explained by the binding of specific RNA-binding proteins to certain sequences within the mRNAs, the proteins and sequences acting as cap-dependent translational enhancing factors and cap-dependent enhancer elements, respectively. Most abiotic stress conditions reduce cap-dependent initiation, so enhancers acting synergistically with the cap could selectively increase the translational rate of those transcripts that contain them. A good example of a cap-dependent enhancing factor is the protein disulfide isomerase (PDI), a key regulator of insulin translation in response to glucose in mammals [52]. PDI binds specifically to glucose-responsive mRNAs under glucose stimulation and recruits the poly(A)-binding protein (PABP) to unknown enhancer elements in their 5′-UTRs. Although how PABP binding could increase translation of such mRNAs is still unknown, it is reasonable to think that it does so through the interaction of PABP with eIF4G. Cap-dependent enhancers of translation have also been described in plant viruses [53-55], one of the best-known examples being the Ω sequence found in tobacco mosaic virus (TMV) [56]. This sequence is recognized by HSP101, which in turn, through its interaction with the Ω sequence, recruits the eIF4G subunit to the 5′-UTR of the viral RNA [55].
The Ω sequence has been used to promote translation of cellular mRNAs, enhancing both cap-dependent and cap-independent translation of the downstream gene by 2-10-fold. Such enhancers of cap-dependent translation could therefore facilitate cap-dependent translation and even sustain some cap-independent translation under low eIF4E activity. Whether these kinds of enhancers are also found in plant cellular mRNAs is a question that remains unanswered but should be studied.
Differential Translation Mediated by eIF4A Regulation.

Sequence analysis of polysome-bound mRNAs during glucose starvation in yeast, where a reduction of eIF4A association with the initiation complexes was observed, demonstrates that a common feature of these mRNAs is a low G+C content immediately upstream of the AUG [5]. These results suggest that the translation of mRNAs with little secondary structure could be selectively promoted under low eIF4A activity (Figure 1(c)). However, alternative explanations cannot be fully excluded, such as the activation of IRES-driven translation of unstructured mRNAs at low levels of helicase activity [6], or the possibility that other RNA helicases, with a substrate preference for poorly structured mRNAs, substitute for the function of eIF4A. In a similar way, a study in Arabidopsis demonstrated that ribosome loading of mRNAs with high G+C content is differentially reduced under mild dehydration conditions [57]. These results may reflect, as in the previous case, a higher requirement for RNA helicase activity to initiate translation under stress in plants, and may point to a low mRNA G+C content as a mechanism to bypass the restraint on eIF4A activity under abiotic stress (Figure 1(c)).
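The G+C feature discussed here is straightforward to compute. A minimal Python sketch for scoring the region immediately upstream of the start codon is shown below; the 50-nucleotide window is an arbitrary illustrative choice, not a value taken from the cited studies.

```python
def gc_fraction(seq: str) -> float:
    """Fraction of G or C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def upstream_gc(utr5: str, window: int = 50) -> float:
    """G+C content of the last `window` nucleotides of a 5'-leader,
    assuming the sequence ends immediately before the AUG."""
    return gc_fraction(utr5[-window:])

# Hypothetical 5'-leader sequence:
print(upstream_gc("AUUUAACGGAUUUAGCAAUUACGU"))
```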
Unique Features of Regulation of Translation Initiation in Plants
It is well known that plants have unique translational characteristics, such as the existence, in addition to the canonical eIF4E and eIF4G factors, of eIF(iso)4E and eIF(iso)4G isoforms. In Arabidopsis, one eIF(iso)4E and two eIF(iso)4Gs have been described, although the number of these isoforms varies between plant species. The eIF(iso)4E and eIF(iso)4G isoforms interact specifically with each other to form eIF(iso)4F complexes [58]. The ability of the eIF(iso)4F complexes to support translation initiation of specific mRNAs has been proven to differ from that of the eIF4F complexes, suggesting that certain mRNA features allow different transcripts to interact preferentially with either complex [59,60]. Indeed, Lellis and coworkers have recently demonstrated that the double mutant in the two Arabidopsis eIF(iso)4G factors displays strong growth and developmental phenotypes which, in the apparent absence of general protein synthesis inhibition, could be caused by the selective translation of specific genes [61]. Moreover, in maize it has been demonstrated that eIF(iso)4E is particularly required for the translation of stored mRNAs from dry seeds, and that eIF4E is unable to fully replace this eIF(iso)4E function [62]. If eIF4F and eIF(iso)4F complexes regulate the translation of different sets of mRNAs, this would represent a plant-specific layer of gene expression regulation that is worth studying in depth.
Conclusion
The conservation of mechanisms that globally inhibit protein synthesis while reprogramming mRNA translation under different stresses points to the fundamental importance of translational regulation in the response to abiotic stresses in all eukaryotes. Although we already know that multiple parallel mechanisms across eukaryotes modulate translation under abiotic stresses, we are still far from completely understanding this regulation, as new alternative mechanisms taking part in it are still being described. In plants, the study of translational regulation under stress is still in its infancy, and some of the most conserved regulators have not yet been found in these organisms. A considerable effort should be made in this respect, since understanding how plants respond to environmental conditions can only be achieved with a complete knowledge of how translation is regulated.
"Biology"
] |
Identification of SERPINE1, PLAU and ACTA1 as biomarkers of head and neck squamous cell carcinoma based on integrated bioinformatics analysis
Background: Head and neck squamous cell carcinoma (HNSCC) is the sixth leading cancer by incidence worldwide. The 5-year survival rate of HNSCC patients remains less than 65% due to the lack of symptoms in the early stage. Hence, biomarkers that improve the detection of HNSCC should improve clinical outcomes. Methods: Gene expression profiles (GSE6631, GSE58911) and the Cancer Genome Atlas (TCGA) HNSCC data were used for integrated bioinformatics analysis; the differentially expressed genes (DEGs) were then subjected to functional and pathway enrichment analysis and protein-protein interaction (PPI) network construction. Subsequently, module analysis of the PPI network was performed and overall survival (OS) analysis of the hub genes in each subnetwork was studied. Finally, immunohistochemistry was used to verify the selected markers. Results: A total of 52 up-regulated and 80 down-regulated DEGs were identified, which were mainly associated with the ECM-receptor interaction and focal adhesion signaling pathways. Importantly, a set of prognostic signatures including SERPINE1, PLAU and ACTA1 was screened from the DEGs, which could predict OS in HNSCC patients from the TCGA cohort. Experiments on clinical samples further validated that these three signature genes are aberrantly expressed in oral epithelial dysplasia and HNSCC, and correlate with the aggressiveness of HNSCC. Conclusions: SERPINE1, PLAU and ACTA1 play important roles in regulating the initiation and progression of HNSCC and can be regarded as key biomarkers for the precise diagnosis and prognosis of HNSCC, providing potential targets for clinical therapies. Electronic supplementary material: The online version of this article (10.1007/s10147-019-01435-9) contains supplementary material, which is available to authorized users.
Introduction
Head and neck squamous cell carcinoma (HNSCC), which arises from the oral cavity, larynx and pharynx, ranks as the sixth most common malignancy, with an estimated 835,000 new cases and 43,000 associated deaths worldwide in 2018 [1,2]. Unfortunately, diagnosis of HNSCC is usually made at advanced stages due to the lack of symptoms in the early stage of head and neck tumorigenesis, and the 5-year survival rate is still less than 65% [3]. It is widely believed that the accumulation of numerous genetic alterations in epithelial cells is the essential process driving the initiation and progression of HNSCC [4]. Therefore, investigation of potential key biomarkers may help to further uncover the biological basis of HNSCC and improve clinical therapy.
Recently, microarrays based on high-throughput platforms for the analysis of gene expression have been increasingly valued as a promising and efficient tool to screen significant genetic alterations in carcinogenesis and to identify biomarkers for the diagnosis and prognosis of cancer [5]. A number of gene expression profiling microarrays have been conducted to find differentially expressed genes (DEGs) in HNSCC [6]; however, considerable efforts at biomarker identification have met with limited success, primarily because the individual gene-profiling studies were analyzed independently. Now, by means of integrated bioinformatics analysis of the available microarray data, it is possible to obtain more reliable and precise screening results by overlapping the relevant datasets.
In the current study, microarray data from the gene expression profiles GSE6631 [7] and GSE58911 [8] and the Cancer Genome Atlas (TCGA) HNSCC data [9] were integrated and analyzed with a series of bioinformatics approaches; aberrant DEGs and pathways were identified in HNSCC, a protein-protein interaction (PPI) network was constructed and hub genes were revealed. Subsequently, we investigated the relationship between the hub genes of the subnetworks and overall survival (OS) in the TCGA database, and tested the expression status of these hub genes in clinical samples at different stages of tumorigenesis. In this way, we aim to bring to light the underlying mechanisms and identify potential candidate biomarkers for the diagnosis and prognosis of HNSCC.
Microarray data
In the present study, the gene expression profiles GSE6631 and GSE58911 were obtained from the Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/). GSE6631 consisted of 22 paired samples of HNSCC and normal tissue, based on the GPL8300 platform (Affymetrix Human Genome U95 chips). The GSE58911 dataset was deposited on the GPL6244 platform (Affymetrix Human Gene 1.0 ST Array) and includes 15 paired normal and HNSCC samples. Moreover, the TCGA HNSCC data (https://cancergenome.nih.gov/) were also downloaded, comprising 44 normal and 502 HNSCC tissues. We chose these 3 datasets for integrated analysis to identify common DEGs.
Data processing and identification of DEGs
The original raw array data were subjected to background correction and quantile normalization, and converted into gene expression values. Data were normalized using Bioconductor R packages (https://cran.r-project.org/mirrors.html). The DEGs between HNSCC samples and normal controls were then identified using the empirical Bayes approach in the linear models for microarray data (limma) package, with |logFC| > 1 and p < 0.05 as the cutoff criteria.
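Downstream of the limma fit (an R package, not reproduced here), applying the stated cutoffs to an exported results table is straightforward; the file and column names below are assumptions based on limma's customary topTable output.

```python
import pandas as pd

res = pd.read_csv("limma_toptable.csv")  # assumed columns: gene, logFC, P.Value
up = res[(res["logFC"] > 1) & (res["P.Value"] < 0.05)]
down = res[(res["logFC"] < -1) & (res["P.Value"] < 0.05)]
print(len(up), "up-regulated and", len(down), "down-regulated DEGs")
```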
Functional and pathway enrichment analysis of DEGs
To analyze the identified DEGs at the functional level, significant gene ontology (GO) biological process terms [10] and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses [11] were performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID, https://david.ncifcrf.gov/) with the thresholds of p < 0.05 and false discovery rate (FDR) < 0.01 [12].
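DAVID's enrichment p values are based on a variant of the Fisher/hypergeometric test; the sketch below shows the plain hypergeometric form for a single GO term or KEGG pathway, with made-up counts for illustration (only the DEG total of 132 comes from this study).

```python
from scipy.stats import hypergeom

background = 20000   # annotated genes in the background (assumed)
in_term = 300        # background genes annotated to the term (assumed)
deg_total = 132      # DEGs tested (from this study)
deg_in_term = 12     # DEGs annotated to the term (assumed)

# P(X >= deg_in_term) under the hypergeometric null of no enrichment.
p = hypergeom.sf(deg_in_term - 1, background, in_term, deg_total)
print(f"enrichment p = {p:.3g}")
```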
Modules from the PPI network
To evaluate the interactive relationships among the DEGs, we mapped the DEGs to the Search Tool for the Retrieval of Interacting Genes (STRING) database (https://string-db.org) [13]. The interacting DEGs were then selected to construct the PPI network (combined score ≥ 0.4), which was visualized using Cytoscape [14]. The Molecular Complex Detection (MCODE) plugin in Cytoscape was used to screen the modules of the PPI network with MCODE score > 3 and number of nodes > 4.
Survival analysis of the hub gene in TCGA database
The association between the expression of the genes in the top modules and 5-year patient OS was analyzed using the HNSCC samples from the TCGA data. All HNSCC patients were classified into high- or low-expression groups according to whether the Z-score of expression was above (high) or below (low) the median; log-rank analysis and Kaplan-Meier plots were produced using Bioconductor R packages.
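A median-split Kaplan-Meier analysis of this kind can be sketched in Python with the lifelines package (the authors used Bioconductor R); the clinical file and column names are assumptions for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

clin = pd.read_csv("tcga_hnscc_clinical.csv")  # assumed: time, event, SERPINE1_z
high = clin["SERPINE1_z"] > clin["SERPINE1_z"].median()

# Log-rank test between the high- and low-expression groups.
result = logrank_test(clin.loc[high, "time"], clin.loc[~high, "time"],
                      event_observed_A=clin.loc[high, "event"],
                      event_observed_B=clin.loc[~high, "event"])
print("log-rank p =", result.p_value)

kmf = KaplanMeierFitter()
for label, mask in [("high", high), ("low", ~high)]:
    kmf.fit(clin.loc[mask, "time"], clin.loc[mask, "event"], label=label)
    kmf.plot_survival_function()
```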
Clinical samples and clinical staging system
A total of 52 paraffin-embedded HNSCC (39) and oral epithelial dysplasia (OED, 13) samples were obtained from the archives of the Department of Pathology of Shandong Provincial Hospital, Jinan, China. Of the HNSCC samples, there were 23 (59%) well, 10 (26%) moderately and 6 (15%) poorly differentiated HNSCC tissues; 10 matched adjacent non-cancerous oral mucosa (NOM) tissues were selected from the above-mentioned patients, and detailed sample information is presented in the Supplementary material.
Statistical analysis
Comparison of two Kaplan-Meier curves was performed using the log-rank test of the R-package survival. The mean LS for HNSCC, OED and NOM samples was compared among the three groups by analysis of variance (ANOVA) using the SPSS 10.0 software package. p < 0.05 was considered statistically significant.
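The three-group comparison of mean LS described above can be reproduced with a one-way ANOVA; the score vectors below are hypothetical, chosen only to mirror the ordering reported later in the Results.

```python
from scipy.stats import f_oneway

# Hypothetical labeling-score (LS) values, one per sample.
ls_nom = [130.2, 141.8, 128.5, 144.0]
ls_oed = [80.1, 83.9, 79.6, 84.2]
ls_hnscc = [58.7, 62.4, 55.1, 66.3]

stat, p = f_oneway(ls_nom, ls_oed, ls_hnscc)
print(f"F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 indicates a group difference
```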
Identification of aberrantly DEGs in HNSCC
Data from each microarray were analyzed separately to screen DEGs. As represented in Fig. 1, our integrated bioinformatics analysis indicated that a total of 132 genes were consistently and significantly deregulated in the same direction across these datasets, comprising 52 overlapping up-regulated genes (out of 163 in GSE6631, 172 in GSE58911 and 5679 in TCGA) and 80 overlapping down-regulated genes (out of 128 in GSE6631, 427 in GSE58911 and 3436 in TCGA) in HNSCC tissues compared to normal epithelial tissues (Table 1).

Table 1: The 132 DEGs identified from the three profile datasets, including 52 up-regulated and 80 down-regulated genes in HNSCC tissues compared to normal tissue.
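The overlap itself is a simple set intersection across the three datasets; the tiny gene sets below are placeholders standing in for the per-dataset DEG calls.

```python
# Placeholder gene sets; in practice these come from the limma/TCGA filters.
up_gse6631 = {"FN1", "MMP9", "SPP1", "PLAU", "SERPINE1"}
up_gse58911 = {"FN1", "MMP9", "PLAU", "SERPINE1", "POSTN"}
up_tcga = {"FN1", "MMP9", "SPP1", "PLAU", "SERPINE1", "POSTN"}

common_up = up_gse6631 & up_gse58911 & up_tcga
print(sorted(common_up))  # the analogous intersections yield the 52 up / 80 down DEGs
```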
DEGs functional and pathway enrichment analysis
The top 5 significant terms of the GO analysis in DAVID are listed in Table 2. In the biological process (BP) category, the GO analysis showed that the up-regulated DEGs were significantly enriched in extracellular matrix organization, collagen catabolic process, extracellular matrix disassembly, cell adhesion and collagen fibril organization, while the down-regulated DEGs were mainly enriched in muscle contraction and muscle filament sliding. For molecular function (MF), the up-regulated DEGs were enriched in metalloendopeptidase activity, extracellular matrix structural constituent, serine-type endopeptidase activity, collagen binding and endopeptidase activity, and the down-regulated genes were enriched in structural constituent of muscle. In addition, the GO cellular component (CC) analysis indicated that the up-regulated DEGs were enriched predominantly in extracellular matrix, proteinaceous extracellular matrix, extracellular region, extracellular space and basement membrane, and the down-regulated DEGs were enriched in extracellular exosome, muscle myosin complex and Z disc.
We also determined the canonical signaling pathways associated with the common DEGs in the carcinogenesis of HNSCC by performing KEGG analysis. The activated pathways were enriched in ECM-receptor interaction, focal adhesion and the PI3K-Akt signaling pathway, while the suppressed pathways were mainly involved in drug metabolism-cytochrome P450, tyrosine metabolism and tight junction (Table 3).
PPI network construction and module analysis
Using the STRING online database and Cytoscape software, 90 of the 132 common DEGs (37 up-regulated and 53 down-regulated genes) were filtered into the PPI network complex, containing 131 nodes and 289 edges (Fig. 2); the top 10 genes by node degree were FN1, MMP9, SPP1, COL3A1, MMP13, POSTN, SPARC, COL4A1, ACTA1 and SERPINE1. Based on degree of importance, we chose the two most significant modules from the PPI network complex for further analysis using Cytoscape MCODE. Pathway enrichment analysis showed that module 1 consisted of 12 nodes and 40 edges, mainly associated with ECM-receptor interaction, focal adhesion and the PI3K-Akt signaling pathway, while module 2 included 12 nodes and 31 edges, which were also enriched in focal adhesion and ECM-receptor interaction (Fig. 3), suggesting that the ECM-receptor interaction and focal adhesion signaling pathways are essential in the carcinogenesis of HNSCC.
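Hub ranking by node degree can be sketched with networkx over a STRING edge list, as below; MCODE itself is a Cytoscape plugin and is not reimplemented here, and the edge-file format is an assumption.

```python
import networkx as nx
import pandas as pd

edges = pd.read_csv("string_edges.tsv", sep="\t")  # assumed: protein1, protein2, combined_score
edges = edges[edges["combined_score"] >= 0.4]      # same cutoff used for the PPI network

G = nx.Graph()
G.add_edges_from(zip(edges["protein1"], edges["protein2"]))

# Top 10 hub genes by degree, analogous to FN1, MMP9, SPP1, ... in this study.
hubs = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:10]
print(hubs)
```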
The validation of hub genes as independent predictors for OS in the TCGA database
We subsequently sought to assess the significance of the hub genes for HNSCC; the relationships between hub gene expression and OS were verified in the TCGA HNSCC cohort. Our results showed that poor OS was associated with high expression of SERPINE1 (p = 0.00054) or PLAU (p = 0.00289), as well as with low expression of ACTA1 (p = 0.04147), MYL1 (p = 0.01405), MYH2 (p = 0.04987) or MYLPF (p = 0.02122) (Fig. 4), suggesting that these candidate genes are associated with the clinical outcome of HNSCC patients.
SERPINE1, PLAU and ACTA1 are aberrantly expressed in the carcinogenesis of HNSCC.
To further clarify the potential biological roles of these prognosis-associated genes in HNSCC transformation, we next characterized the expression changes of the signature genes by microarray analysis (Supplementary Fig. 1). Among the hub genes with large expression changes in HNSCC samples, we compared their degrees in the highest-ranked modules, and two up-regulated (SERPINE1, PLAU) and one down-regulated (ACTA1) genes were selected to further test protein expression in NOM, OED and HNSCC tissues. As expected, we found that the expression of SERPINE1 and PLAU increased from NOM through OED to HNSCC. There was a significant difference in the mean LS of SERPINE1 and of PLAU between NOM and OED (p = 0.000) and between NOM and HNSCC samples (p = 0.000), respectively; however, the LS of SERPINE1 and of PLAU did not differ significantly between OED and HNSCC. Representative microphotographs of SERPINE1 and PLAU staining in NOM, OED and HNSCC are shown in Fig. 5. The level of ACTA1, in contrast, showed the opposite trend: the LS was reduced from NOM (136.10 ± 49.249%) through OED (81.77 ± 5.403%) to HNSCC samples (60.66 ± 9.089%).
There was a significant difference in the expression of ACTA1 among the 3 groups (Table 4). Thus, combined with the TCGA data analysis, these results suggested that SERPINE1, PLAU and ACTA1 are required for the initiation of head and neck tumorigenesis.
SERPINE1, PLAU and ACTA1 are correlated with clinical aggressiveness of HNSCC patients
As the expression levels of SERPINE1, PLAU and ACTA1 were validated as independent predictive factors for the OS of HNSCC patients, we went on to define the association between these genes and the clinical histological classification of the HNSCC samples. As shown in Fig. 6, there was a significant difference in the expression of these three hub genes between well and moderately or poorly differentiated HNSCC. The LS of SERPINE1 increased significantly from well (166.78 ± 13.426%) through moderately (234.60 ± 36.439%) to poorly differentiated HNSCC samples (282.00 ± 7.589%) (Table 5). A significant difference in the LS of PLAU was also found between poorly and well (p = 0.000) or moderately differentiated HNSCC tissues (p = 0.005). In contrast, the expression of ACTA1 showed a clear downward trend between well and moderately or poorly differentiated HNSCC samples: the mean LS of ACTA1 in well-differentiated HNSCC was 64.78 ± 9.400%, while the LS values in moderately and poorly differentiated HNSCC were all zero. Thus, our findings indicated that SERPINE1, PLAU and ACTA1 are correlated with the clinical malignancy of HNSCC.
Discussion
Identifying oncogenic biomarkers and elucidating the mechanisms underlying the initiation and development of HNSCC would greatly benefit early diagnosis and effective treatment for patients with highly malignant disease [17]. Emerging bioinformatics analysis has provided a powerful tool for the identification of biomarkers and therapeutic targets relevant to tumor progression and treatment response [18]. In the present study, we identified 52 up-regulated and 80 down-regulated DEGs by analyzing the available gene expression profile datasets (GSE6631, GSE58911 and TCGA) for HNSCC with multiple bioinformatics tools. Functional analysis demonstrated that these DEGs are mainly associated with activation of the ECM-receptor interaction and focal adhesion pathways and suppression of the drug metabolism-cytochrome P450 pathway. More importantly, based on the TCGA dataset, our clinical experiments proved that a set of prognostic signatures including SERPINE1, PLAU and ACTA1 can serve as biomarkers for the diagnosis and prognosis of HNSCC, which may provide novel insights for unraveling the pathogenesis of HNSCC.
Recently, several basic studies have been conducted to identify the DEGs in HNSCC [19,20]. For example, Yang et al. analyzed the gene expression profile of GSE6791 and identified 550 up-regulated and 261 down-regulated genes [21]. Similarly, Zhao found that PLAU, CLDN8 and CDKN2A could predict OS in oral squamous cell carcinoma using the gene expression profiles of GSE13601, GSE30784, GSE37991 and TCGA [22]. Our integrated bioinformatics analysis indicated that 132 genes were consistently and significantly deregulated in GSE6631, GSE58911 and TCGA. Interestingly, our results also revealed genes that did not overlap with those reports; the main reason for this discrepancy may be that we used 3 different profiles, which could greatly minimize the effects of intra-tumoral heterogeneity and the diversity of anatomical sites of HNSCC.
As suggested by the DAVID analysis, the up-regulated DEGs were mainly involved in extracellular matrix organization, collagen catabolic process, extracellular matrix disassembly, cell adhesion and collagen fibril organization at the BP level. The extracellular matrix (ECM), as a crucial component of the cancer cell niche, provides mechanical support for the tissue and mediates cell-microenvironment interactions [23]. Significantly, collagens are among the major proteins found within the ECM and have themselves been implicated in many aspects of neoplastic transformation. This is consistent with the finding that the activation of these cellular processes through the ECM is a main driver of tumor development, progression and metastasis [24]. The down-regulated DEGs in HNSCC, by contrast, were mainly enriched in actin-mediated cell contraction and filament sliding, which are associated with decreased muscle function-mediated cytoskeleton remodeling in cancer development and progression [25]. Furthermore, the enriched KEGG pathways of the up-regulated DEGs mainly involved ECM-receptor interaction, focal adhesion and the PI3K-Akt signaling pathway. Significantly, 12 overlapping genes, including ITGA6, SPP1 and FN1, were identified as functionally involved in interactions between the ECM and cells through activation of these three signaling pathways, leading to direct or indirect control of cellular activities such as cell migration, differentiation and proliferation [26-28]. By contrast, the down-regulated DEGs were related to drug metabolism-cytochrome P450; a recent study reported that the slow-metabolizing cytochrome P450 variants CYP2C9*2 and CYP2C9*3 could directly regulate tumorigenesis via reduced epoxyeicosatrienoic acid production [29]. Together, these data suggest that deregulated pathways may be a major factor in HNSCC tumorigenesis, and that detecting these aberrant signaling pathways could precisely predict tumor progression [30].
After constructing the PPI network with the DEGs and ranking hub genes by degree, the two most significant modules were filtered from the PPI network; subsequent functional analysis showed that most of the corresponding genes were associated with ECM-receptor interaction and focal adhesion. Furthermore, survival analysis of the hub genes in these two modules identified SERPINE1, PLAU, ACTA1, MYL1, MYH2 and MYLPF as prognostic markers for clinical outcome in the TCGA cohort. Among the up-regulated hub genes, PLAU, one of the major proteolytic enzymes involved in degradation of the extracellular matrix, has been demonstrated to play critical roles in tissue remodeling and migration during development as well as tumorigenesis, whereas SERPINE1, the most important physiological inhibitor of PLAU, can in turn reverse this process and regulate the adhesion/de-adhesion balance of cells to the ECM [31]. However, it has been reported that SERPINE1 can induce the EMT process and promote tumor cell survival in breast and ovarian cancers [32,33]. In our study, the bioinformatics analysis revealed significantly increased expression of PLAU and SERPINE1 in HNSCC tissues, which was associated with poor clinical outcome. In contrast, among the down-regulated actin-family genes, ACTA1 encodes a protein with functions in cell motility, structure and integrity. Consistent with our observation, ACTA1 is also down-regulated in colorectal cancer [34]. In addition, our results showed that the other three specifically down-regulated genes (MYL1, MYH2 and MYLPF) were involved in the muscle contraction process, which might play a regulatory role in the remodeling of muscle function in HNSCC tissues; however, the specific roles of these genes in cancers remain to be elucidated. [Fig. 6: Immunohistochemical staining of SERPINE1, PLAU and ACTA1 in tumor nests of HNSCC tissues; the expression levels of the three genes were associated with tumor differentiation (×100).] Of note, in view of the prognostic potential of these hub genes for HNSCC in the TCGA database, and after validating their high degree in the network and their mRNA changes in the microarrays, we selected SERPINE1, PLAU and ACTA1 for further detection at the protein level by immunostaining. Our clinical analysis showed that SERPINE1, PLAU and ACTA1 changed significantly during the progression of HNSCC. They were aberrantly expressed in the epithelium of OED and HNSCC and correlated with the aggressiveness of HNSCC, implying that these signature genes are possibly involved not only in the initiation of tumorigenesis but also in late stages of cancer. Therefore, SERPINE1, PLAU and ACTA1 could potentially be utilized as diagnostic and prognostic biomarkers for HNSCC. More importantly, comparing the extent of the protein changes, the overexpressed SERPINE1 and PLAU are the most promising markers, and their detection could help to identify tumor cells in tissues.
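A hedged sketch of the kind of survival analysis described here, splitting a TCGA-style cohort by the median expression of one hub gene, might look like the following; the merged clinical/expression table and its column names are assumptions, and this uses the `lifelines` package rather than the authors' tooling:

```python
# Sketch: median-split survival analysis for one hub gene (e.g. SERPINE1) in a
# TCGA-style cohort. Column names are hypothetical; requires `lifelines`.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

clin = pd.read_csv("tcga_hnsc_clinical_expr.csv")  # hypothetical merged table
high = clin["SERPINE1"] >= clin["SERPINE1"].median()

res = logrank_test(
    clin.loc[high, "os_months"], clin.loc[~high, "os_months"],
    event_observed_A=clin.loc[high, "os_event"],
    event_observed_B=clin.loc[~high, "os_event"],
)
print(f"log-rank p = {res.p_value:.4f}")

kmf = KaplanMeierFitter()
for label, mask in [("high", high), ("low", ~high)]:
    kmf.fit(clin.loc[mask, "os_months"], clin.loc[mask, "os_event"], label=label)
    kmf.plot_survival_function()
```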
In conclusion, the current study aimed to identify DEGs through comprehensive bioinformatics analysis in order to find potential biomarkers and predict the progression of HNSCC. We found that SERPINE1, PLAU and ACTA1 might be exploited as diagnostic and prognostic indicators for HNSCC. Finally, the results also suggested that ECM-receptor interaction and focal adhesion may be essential signaling pathways in the development of HNSCC. Hence, our findings could significantly improve our understanding of the causes and underlying molecular events of HNSCC, and provide potential targets for anticancer therapies. | 4,519.2 | 2019-04-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Synopsis of the Strophariaceae ( Basidiomycota , Agaricales ) from Floresta Nacional de São Francisco de Paula , Rio Grande do Sul State , Brazil
(Synopsis of the Strophariaceae (Basidiomycota, Agaricales) from Floresta Nacional de São Francisco de Paula, Rio Grande do Sul State, Brazil). In a survey of the Strophariaceae from a natural reserve in Southern Brazil, a total of 16 species were studied: Deconica coprophila, D. horizontalis, Hypholoma ericaeum, H. subviride, Leratiomyces ceres, Pholiota limonella, P. spumosa, Psilocybe caeruleoannulata, P. wrightii, P. zapotecorum, Stropharia acanthocystis, S. agaricoides, S. araucariae, S. earlei, S. rugosoannulata and S. venusta. Full descriptions and illustrations of Pholiota limonella and Psilocybe zapotecorum are presented owing to the lack of detailed descriptions for the State of Rio Grande do Sul.
Introduction
The agaric family Strophariaceae Singer & A.H. Sm. includes dark-spored mushrooms inhabiting a wide variety of substrates, including litter, decaying wood, mosses, dung, fields, pastures, gardens and swamps (Singer 1986). Current systematic studies have modified the circumscription of the family and its generic composition, with the incorporation of several secotioid and gasteroid forms (Bridge et al. 2008, Noordeloos 2011), as well as the inclusion of some genera previously placed in other agaric families (Walther & Weiβ 2008). With the progress of molecular investigations, generic limits are under revision and discussion, including some well-defined genera such as Psilocybe (Fr.) Quél. (Noordeloos 2009, Norvell 2010, Redhead et al. 2007).
In the present paper, a synopsis of the Strophariaceae from the National Forest of São Francisco de Paula is discussed.
Material and methods
Fifty-two specimens were studied; the majority were collected by the authors at the National Forest of São Francisco de Paula (abbreviated hereafter as FLONA), Rio Grande do Sul State, Southern Brazil, from May 2006 to July 2007, or selected in the ICN Herbarium (Institute of Biosciences, Universidade Federal do Rio Grande do Sul). For detailed information on the study area, see Dobrovolski et al. (2006) and Silva et al. (2009). Microscopic observations were made from thin free-hand sections of the pileus and stipe of dried specimens, mounted in 5% KOH (potassium hydroxide) alone or with 1% Congo red solution. At least 25 measurements of each microstructure were taken and drawn under a light tube. For the basidiospore descriptions, Q is the ratio of length to width, Qm is the mean value of Q, and n is the number of measured basidiospores. All collected material is deposited in the ICN herbarium.
Deconica coprophila
A coprophilous and small mushroom, with a distinctly striate pileus, distributed worldwide, growing on cow and horse dung. It is known from Northeastern (Wartchow et al. 2007) to Southern Brazil (Cortez & Coelho 2004, Silva et al. 2006).
Deconica horizontalis
Deconica horizontalis can be recognized by the crepidotoid habit, beige color and lilaceous gray lamellae. The species was placed in the genus Melanotus Pat., which was later reduced to a subgenus of Psilocybe (Noordeloos 1999) and more recently treated as a synonym of Deconica (W.G. Sm.) P. Karst. (Noordeloos 2009), since Psilocybe has been applied to psilocybin-containing taxa. Deconica horizontalis was previously recorded from Rio Grande do Sul as Melanotus proteus (Kalchbr.) Singer, also growing on Araucaria angustifolia wood (Cortez & Coelho 2004).
Habitat: among grasses in native plateau meadows.
Specimens examined: BRAZIL. Rio Grande do Sul: São Francisco de Paula, FLONA, 14-V-2005, V.G. This is a common mushroom in native meadows of the Rio Grande do Sul highlands, recognized macroscopically by the slender exannulate stipe and the convex to umbonate, slightly viscid pileus (Cortez & Silveira 2007b). It was recently noticed, in the same region as the present study, that males of the orchid bee Eufriesea violacea are attracted by volatile substances produced by this mushroom, evidencing an interesting ecological association between fungi and bees (Capellari & Harter-Marques 2010). This species produces numerous caespitose to gregarious basidiomata and is diagnosed by the greenish yellow color of the pileus and gills, which become dark violaceous as the basidiospores mature. Although similar to the northern temperate H. fasciculare (Huds.) P. Kumm., the smaller stature and little-developed veil separate the two taxa well. A detailed description of material from Rio Grande do Sul is presented by Cortez & Silveira (2007b).
Pholiota limonella (Peck) Sacc., Syll. Fung. 5: 753.
Habitat: very common on the bark and fallen trunks of coniferous and dicotyledonous trees. This species typically occurs on coniferous wood (Pinus, Araucaria), fruiting in the autumn months in the studied area. It is distributed in North America (Smith & Hesler 1968) and Europe (Noordeloos 2011), and has been reported from the states of São Paulo and Rio Grande do Sul in Brazil (Cortez & Coelho 2003).
Psilocybe caeruleoannulata Guzmán, Mycotaxon 7: 235. 1978.
Habitat: On soil and herbivorous dung, in grasslands.
Specimens examined: BRAZIL. Rio Grande do Sul: São Francisco de Paula, FLONA, 14-V-2005, V.G. This is one of the most common bluing species of Psilocybe and has been collected on soil or manure in grasslands in the studied area. A description of the material from Southern Brazil is presented by Silva et al. (2006).
Psilocybe wrightii Guzmán, Mycotaxon 7: 251. 1978.
Specimens examined: BRAZIL. Rio Grande do Sul: São Francisco de Paula, FLONA, 14-V-2005, V.G. Cortez 048/05 (ICN).
The distribution of P. wrightii seems to be restricted to Northern Argentina and Southern Brazil, based on the currently available data (Guzmán & Cortez 2004). Rossato et al. (2009) presented a full description of material from central Rio Grande do Sul and provided chemical data on its psychotropic compounds, showing high concentrations of psilocybin and psilocin.
This bluing species presents wide variation in pileus and stipe morphology, as well as in color, which may cause confusion about its concept (Guzmán 1983). It is a hallucinogenic mushroom, very important among the Mexican Zapotec and Mazatec indigenous peoples, who know it by many popular names (Guzmán 1983). The species Stropharia acanthocystis is known only from the region of FLONA, but it is expected to occur in the ombrophilous mixed forests of Southern Brazil. Cortez & Silveira (2007a) described the species based on a single collection, but it has since been frequently collected in the area. The presence of hymenial acanthocytes is undoubtedly the most important feature of the species, together with the lack of a membranous annulus and other microscopic features.
Sixteen specific taxa representing six genera of Strophariaceae are briefly discussed.
(Cortez & Silveira 2007b, Silva et al. 2006), white velar remnants and a fibrillose, non-persistent annulus. It was previously reported from Rio Grande do Sul as Hypholoma aurantiacum (Cooke) Faus (Cortez & Silveira 2007b, Silva et al. 2006); however, a recent revision by Bridge et al. (2008) elucidated the species complex to which the taxon belonged, proposing the correct epithet and the formal transfer to the genus Leratiomyces Bridge, Spooner, Beever & D.-C. Park. Redhead & McNeill (2008) discussed the nomenclatural problems regarding this name, and the genus was finally typified by L. similis (Sacc. & Trotter) Redhead & McNeill. Based on these results (Bridge et al. 2008, Redhead & McNeill 2008), we concluded that L. ceres is the appropriate name for the southern Brazilian specimens previously reported as H. aurantiacum.
It was recently reported from Rio Grande do Sul by Sobestiansky (2005). | 1,598.8 | 2012-09-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Identification of miRs-143 and -145 that Are Associated with Bone Metastasis of Prostate Cancer and Involved in the Regulation of EMT
The principal problem arising from prostate cancer (PCa) is its propensity to metastasize to bone. MicroRNAs (miRNAs) play a crucial role in many tumor metastases, but their importance in bone metastasis of PCa has not been elucidated to date. We investigated whether the expression of certain miRNAs was associated with bone metastasis of PCa. We examined the miRNA expression profiles of 6 primary and 7 bone metastatic PCa samples by miRNA microarray analysis. The expression of 5 miRNAs, including miRs-508-5p, -145, -143, -33a and -100, was significantly decreased in bone metastasis compared with primary PCa. We further examined additional samples of 16 primary PCa and 13 bone metastases using real-time PCR analysis, which verified that the expression of miRs-143 and -145 was significantly down-regulated in metastasis samples. By investigating the relationship between the levels of miRs-143 and -145 and the clinicopathological features of PCa patients, we found that the levels of miRs-143 and -145 were negatively correlated with bone metastasis, the Gleason score and the free PSA level in primary PCa. Over-expression of miRs-143 and -145 by retroviral transfection reduced the migration and invasion in vitro, and the tumor development and bone invasion in vivo, of PC-3 cells, a human PCa cell line originating from a bone metastatic PCa specimen. Their upregulation also increased E-cadherin expression and reduced fibronectin expression in PC-3 cells, which displayed a less invasive morphologic phenotype. These findings indicate that miRs-143 and -145 are associated with bone metastasis of PCa and suggest that they may play important roles in bone metastasis and be involved in the regulation of EMT. Both of them may also be used clinically as novel biomarkers for discriminating different stages of human PCa and predicting bone metastasis.
Introduction
Prostate cancer (PCa) is the most frequently diagnosed malignant tumor and the second leading cause of cancer deaths in Western countries [1]. The principal problem arising from PCa is its propensity to metastasize to bone. Skeletal metastases occur in as many as 90% of patients with advanced PCa. Importantly, once tumors metastasize to bone, they are virtually incurable and result in significant morbidity prior to a patient's death [2,3]. Understanding the mechanism of metastasis formation is therefore very important for preventing metastasis and developing anti-metastatic therapies that may further reduce the morbidity and mortality of PCa patients.
Skeletal metastasis of tumors is a complicated multi-step process that includes cellular disengagement and motility away from the local microenvironment, degradation of the surrounding extracellular matrix, cellular movement, arrest at distal capillaries, extravasation and, finally, proliferation to form distant secondary bone tumors. All of these processes are regulated by multiple factors and molecular pathways [4]. Although basic knowledge related to this structured process has increased recently, many of the key elements are still poorly understood.
MicroRNAs (miRNAs) are a class of small noncoding regulatory RNAs (19-25 nucleotides) expressed by plants and animals and involved in the regulation of gene expression. They exert their function by binding to the 3′-untranslated region of a subset of mRNAs, resulting in their degradation or repression of translation [5]. Bioinformatic analyses have predicted that a single miRNA has multiple targets, and thus miRNAs could mediate the regulation of a great number of protein-coding genes. Recent estimates suggest that one-third of human mRNAs may be regulated by miRNAs [6,7]. miRNAs have been shown to interfere with cellular functions such as cell proliferation, cell differentiation, and apoptosis [8].
In PCa, several miRNAs have been identified as mediators of metastasis. It was demonstrated that the deregulation of miR-221 and miR-222 was associated with PCa progression, poor prognosis, and the development of metastasis [28]. miR-21 was also over-expressed in PCa and acts as a key oncogenic regulator that contributes to tumor growth, invasiveness and metastasis [29,30,31]. A study has revealed that miR-146a targets ROCK1, and elevated ROCK1 levels promote cell proliferation, invasion and metastasis in the PCa cells [32]. In addition, the genomic loss of miR-101 in human PCa, involved in cancer progression, leads to over-expression of EZH2 [33,34]. However, the importance of miRNAs in bone metastasis of PCa has not been elucidated to date.
Epithelial-mesenchymal transition (EMT) describes one key step in the progression of tumor cell metastasis, comprising the consecutive processes of cell detachment, migration, invasion, dispersal and final residence [35]. It has been identified as a hallmark of metastasis in multiple tumors and is connected to numerous transcription factors [36,37,38,39]. miRNAs are also components of the cellular signaling circuitry that regulates the EMT program [40]. Recent work has demonstrated that several miRNAs, including the miR-200 family and miR-205, play critical roles in EMT [41,42]. Until now, the precise role of miRNAs in regulating EMT has remained unclear.
To investigate the role of miRNAs in bone metastasis of PCa and their relationship with EMT, it is first necessary to know the miRNA expression profiles in primary and bone metastatic PCa. In the present study, we compared miRNA expression profiles in primary and bone metastatic human PCa and identified miRs-143 and -145 as related to bone metastasis. Furthermore, we demonstrated that upregulation of miRs-143 and -145 repressed migration and invasion in vitro; tumor development and bone invasion in vivo; and EMT of PC-3 cells, a human PCa cell line originating from a bone metastatic PCa specimen.
Tissue samples
Tissue samples from two groups of PCa patients were studied. Primary PCa tissues were obtained from prostatectomy or transurethral resection performed to treat local prostate carcinoma. Skeletal metastatic tissues of PCa were obtained from operations performed to treat bone metastasis. All samples were formalin-fixed and paraffin-embedded (FFPE) following standard procedures. Regions of tissue specimens with >70% cancerous tissue were used for the extraction of total RNA. The histological diagnosis was made by a pathologist and re-confirmed by a second pathologist (D.H.). Bone metastasis was diagnosed according to clinical symptoms and signs, bone scan, radiography, computed tomography, and MRI. None of the patients had received neoadjuvant hormone therapy, radiation, or chemotherapy before the tumor tissues were obtained. The clinical information reviewed included age, bone metastasis, total PSA level, free PSA level and the Gleason score of primary PCa patients. The study was approved by the Institutional Review Board (IRB) of the First Affiliated Hospital of Sun Yat-sen University, and informed consent was obtained from the patients involved.
RNA extraction
All samples were sent to CapitalBio Corp., and total RNA from FFPE tissue samples was isolated as previously described [43]. In brief, tissue samples were cut into slices from paraffin blocks and placed in 1.5 mL nuclease-free microcentrifuge tubes (Eppendorf), then deparaffinized three times in 1 mL limonene, followed by two washes with 1 mL 100% ethanol and air drying at room temperature. Samples were then incubated with digestion buffer (20 mM Tris-HCl, 10 mM EDTA, 1% SDS) and proteinase K (Merck) at 55 °C overnight to obtain complete digestion. Subsequently, TRIzol reagent (Invitrogen) was added, and the remainder of the protocol was carried out according to the manufacturer's instructions. RNA samples were resuspended in RNase-free water after the final precipitation step. RNA quality and quantity were assessed using a biophotometer (Eppendorf).
Microarray analysis
Total RNA samples were analyzed by CapitalBio (CapitalBio Corp.) for miRNA microarray experiments. Each miRNA microarray chip contained 924 probes in triplicate, corresponding to 677 human (including 122 predicted miRNAs), 461 mouse, and 292 rat miRNAs found in the miRNA Registry (http://microrna.sanger.ac.uk; miRBase Release 10.0, 2007). Procedures were performed as described in detail on the website of CapitalBio (http://www.capitalbio.com). Briefly, the low-molecular-weight RNA (LMW-RNA) was isolated using a PEG solution precipitation method according to a previous protocol [44]. The LMW-RNA was first dephosphorylated with alkaline phosphatase (NEB) following the protocol of Wang H, et al. [45]. The dephosphorylated LMW-RNA was then labeled with 500 ng 5′-phosphate-cytidyl-uridyl-Cy3-3′ (Dharmacon) using 2 units of T4 RNA ligase (NEB) [44]. Labeled RNA was precipitated with 0.3 M sodium acetate and 2.5 volumes of ethanol, and resuspended in 20 µL of hybridization buffer containing 3×SSC, 0.2% SDS and 15% formamide. The array was hybridized at 42 °C overnight and washed with two consecutive washing solutions (0.2% SDS, 2×SSC at 42 °C for 4 min, and 0.2×SSC for 4 min at room temperature). Arrays were scanned with a double-channel laser scanner (LuxScan 10K/A, CapitalBio). The scanning setting was adjusted to obtain visually equal intensity of U6 spots across arrays. Data were extracted from the TIFF images using LuxScan™ 3.0 software (CapitalBio Corp.). Raw data were normalized and analyzed using the Significance Analysis of Microarrays (SAM, version 2.1, Stanford University, CA, USA) software. Clustering analysis was performed with Cluster 3.0 [46]. All data are MIAME compliant, and the raw data have been deposited in a MIAME-compliant database (GEO, accession ID: GSE26964). cDNA was obtained using the TaqMan miRNA Q-PCR Detection Kit (GeneCopoeia). Briefly, miRNA was reverse transcribed using sequence-specific stem-loop primers (Invitrogen) for the following miRNAs: hsa-miR-125b, hsa-miR-145, hsa-miR-153, hsa-miR-210, hsa-miR-143, hsa-miR-100, hsa-miR-363, hsa-miR-451, hsa-miR-572 and hsa-miR-508-5p, selected based on the microarray analysis and their predicted target genes. The reaction was performed with the following parameters: 15 min at 37 °C, 10 min at 65 °C, 5 min at 85 °C, then held at −20 °C until use. Real-time PCR analysis was performed on an iQ5 Real-Time PCR Detection System (Bio-Rad) in a 20 µL reaction volume containing 2 µL reverse transcription product, 10 µL 2×All-in-One™ Q-PCR Mix, 2 µL PCR forward primer (2 µM), 2 µL universal adaptor PCR primer (2 µM) and 4 µL ddH2O. The reactions were incubated in 96-well plates at 95 °C for 10 min, followed by 40 cycles, and then ramped from 66 °C to 95 °C to obtain the melting curve. Each sample was analyzed in triplicate. No-template and no-reverse-transcription reactions were included as negative controls. U6 snRNA was used as the normalization control. Relative expression values from three independent experiments were calculated using the 2^(−ΔΔCt) method of Schmittgen and Livak [47].
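The 2^(−ΔΔCt) relative-quantification step can be illustrated with a small sketch; the Ct values below are invented for illustration and are not data from this study:

```python
# Sketch of the 2^(-ΔΔCt) relative-quantification step (Schmittgen & Livak).
def rel_expression(ct_target, ct_u6, ct_target_cal, ct_u6_cal):
    """Fold change of a miRNA vs. a calibrator sample, normalized to U6 snRNA."""
    d_ct = ct_target - ct_u6              # ΔCt for the sample of interest
    d_ct_cal = ct_target_cal - ct_u6_cal  # ΔCt for the calibrator sample
    dd_ct = d_ct - d_ct_cal               # ΔΔCt
    return 2.0 ** (-dd_ct)

# e.g. miR-145 in a metastatic sample vs. a primary-tumor calibrator (toy values)
print(rel_expression(ct_target=27.1, ct_u6=20.0, ct_target_cal=23.0, ct_u6_cal=20.2))
```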
Cell Culture
The metastatic PCa cell lines used in the present study were PC-3 and LNCaP. PC-3 was purchased from the American Type Culture Collection (ATCC) and maintained in F-12 culture medium (Hyclone) supplemented with 10% fetal bovine serum (Hyclone). LNCaP was purchased from the Shanghai Cell Bank, Chinese Academy of Sciences, and maintained in RPMI-1640 culture medium (Gibco, Invitrogen) supplemented with 10% fetal bovine serum (Hyclone). Stably transfected cells were maintained in media containing puromycin (Sigma-Aldrich). Cells were grown in a humidified atmosphere of 5% CO2 at 37 °C.
Generation of Stably Transfected Cell Lines
The sequences of pri-miR-143 and pri-miR-145 were cloned into the pMSCV-puromycin plasmid using the restriction enzymes Bgl II and EcoR I (New England Biolabs). 293FT cells were then transfected with the constructed plasmids combined with the PIK vector, or with the blank pMSCV vector as a control, using the calcium phosphate method as described previously [49]. Six hours after transfection, the media were changed and the cells were incubated overnight. To produce virus, the media were collected three times a day until the 293FT cells reached total confluence. The viruses were used to infect PC-3 and LNCaP cells. Twenty-four hours after the addition of virus, infected cells were selected by adding puromycin to the growth medium. Stable cell lines were verified by qRT-PCR. Both the pMSCV and PIK plasmids were generously provided by Prof. Song LB, Sun Yat-Sen University Cancer Center, Guangzhou, China.
Wound healing assay
One day before scratching, stable PC-3 and LNCaP cell lines were trypsinized and seeded equally into 6-well tissue culture plates, and grown to near-total confluence within 24 h. After the cell monolayer formed, the cells were kept under serum-free starvation for 24 h, and then an artificial homogeneous wound was created in the monolayer with a sterile 100 µL pipette tip. After scratching, the cells were washed with serum-free medium. Images of cells migrating into the wound were captured at 0 h, 6 h, 12 h and 24 h with an inverted microscope (40×).
In vitro invasion assay
The invasion assay was performed using Transwell chambers consisting of 8 µm pore membrane filter inserts (Corning) coated with Matrigel (BD Biosciences) as previously described [50]. Briefly, cells were trypsinized and suspended in serum-free medium. Then 1.5×10^5 cells were added to the upper chamber, whereas the lower chamber was filled with medium containing 10% FBS. After 48 h of incubation, the cells that had invaded through the coated membrane to the lower surface were fixed with 4% paraformaldehyde and stained with hematoxylin. Cells were counted under the microscope (100×).
Adhesion assay
The adhesion assay was performed as described previously [51]. Briefly, 96-well plates were coated with 50 µL fibronectin (50 µg/mL) in original media in a cell incubator for 1 h. After washing with warm media, the plates were blocked with 1% BSA at 37 °C for 1 h and washed twice. After trypsinization, suspended cells were seeded into each well in serum-free media at a density of 1.5×10^4 cells per well. After the plates were incubated for 30 min, non-adherent cells were removed and the plates were gently washed twice with PBS. Adherent cells were fixed in 4% paraformaldehyde for 20 min at room temperature, then stained with hematoxylin and counted under an inverted microscope (100×).
Western blotting
For the expression analysis of EMT-related proteins, an immunoblotting assay was carried out. All the stable cell lines, including PC-3/vector, PC-3/miR-143, PC-3/miR-145, LNCaP/vector, LNCaP/miR-143 and LNCaP/miR-145, were seeded in 100 mm tissue culture dishes. After 24 h, cells were washed with pre-chilled PBS when the confluence reached 60-70%, and then harvested in sample buffer [62.5 mmol/L Tris-HCl (pH 6.8), 2% SDS, 10% glycerol, and 5% 2-β-mercaptoethanol]. Equal amounts of protein from the supernatant were loaded per lane and resolved by SDS-polyacrylamide gel electrophoresis. The protein was then transferred onto a PVDF membrane (Millipore), blocked with 5% nonfat milk for 1 h at room temperature, and probed with primary antibodies (1:1000) for 3 h, including mouse anti-E-Cadherin (BD Biosciences), mouse anti-Fibronectin (BD Biosciences) and mouse anti-Vimentin (BD Biosciences). Membranes were washed three times (10 min each) in TBS-T buffer and incubated for 40 min at room temperature with horseradish peroxidase-conjugated anti-mouse secondary antibodies. Blots were washed three times (10 min each) in TBS-T and developed using the ECL system. Protein loading was normalized by reprobing the blots with a mouse anti-α-Tubulin antibody (Abcam).
In vivo models of prostate cancer bone metastasis
An intra-tibial injection model was used. Ten male severe combined immunodeficient (SCID) mice, 3-4 weeks old, were purchased from HFK Bio-Technology Co., Ltd (Beijing, China). Before inoculation, PC-3 cells were resuspended in 40 µL serum-free F-12 medium at a density of 2×10^5 cells per 40 µL, and injected into the tibia with a 26-gauge needle using a drilling motion. Animals were randomized equally into two groups, in which 5 animals each were injected with PC-3/miR-143 or PC-3/miR-145 in the right tibia. All 10 mice were injected with PC-3/vector in the left tibia as a self-control. Mice were monitored weekly for tumor growth. At week 5, the hindlimbs were radiographed using a Faxitron X-ray machine (Faxitron X-ray Corp, USA) to detect bone lesions. The mice were then sacrificed, and the tibias were collected, decalcified and fixed in formalin for further histological analysis. Bone lesions were evaluated and scored as previously described [52]: grade 0 for no lesion, 1 for minor lesions, 2 for small lesions, 3 for significant lesions with minor breaks of the margins, and 4 for significant lesions with major breaks in peripheral lesions.
Statistical analysis
To determine the differential expression of miRNAs in the microarray, Significance Analysis of Microarrays (SAM, version 2.1) was performed using a two-class unpaired comparison in the SAM procedure. Significantly differentially expressed miRNAs were selected according to the following criteria: |Score(d)| ≥ 2 [Score(d) = Numerator(r)/Denominator(s+s0)], Fold Change ≥ 2 or ≤ 0.5, and q-value (%) ≤ 5 (false discovery rate, FDR) in bone metastasis of PCa compared with primary PCa.
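Assuming SAM's per-miRNA statistics were exported to a table, the selection criteria above could be applied as in the following sketch; the file and column names are hypothetical:

```python
# Sketch: apply the SAM-style selection thresholds described above to a table
# of per-miRNA statistics. Column names are assumptions, not SAM's actual output.
import pandas as pd

stats = pd.read_csv("sam_output.csv")  # hypothetical SAM export
selected = stats[
    (stats["score_d"].abs() >= 2)
    & ((stats["fold_change"] >= 2) | (stats["fold_change"] <= 0.5))
    & (stats["q_value_pct"] <= 5)
]
print(selected[["mirna", "fold_change", "q_value_pct"]])
```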
Data are expressed as mean ± standard deviation (SD). Statistics were assessed using SPSS 17.0 (SPSS, Inc., Chicago, IL, USA). In real-time PCR and animal experiments, data were compared by Student's t-test. The relationship between down-regulated miRNA expression and clinicopathological features in primary and bone metastatic PCa was analyzed using the Spearman rank correlation test. In the metastasis assay-based experiments, the data were analyzed by one-way ANOVA. The relationship between the miRNAs was assessed using the Kendall rank correlation test. p-values of <0.05 were considered significant.
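For illustration, the tests named above have direct SciPy equivalents of the SPSS procedures used here; the arrays below are placeholders, not study data:

```python
# Sketch: SciPy equivalents of the statistical tests described above.
import numpy as np
from scipy import stats

expr = np.array([1.2, 0.8, 0.5, 0.3, 0.9, 0.4])  # placeholder miRNA levels
gleason = np.array([6, 7, 8, 9, 7, 9])           # placeholder Gleason scores

rho, p_rho = stats.spearmanr(expr, gleason)      # clinicopathological correlation
tau, p_tau = stats.kendalltau(expr, gleason)     # miR-143 vs. miR-145 style test
t, p_t = stats.ttest_ind(expr[:3], expr[3:])     # two-group comparison
print(rho, tau, t)
```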
miRNA expression profiling between primary PCa and bone metastasis by microarray analysis
To investigate whether miRNAs are differentially expressed in primary PCa and bone metastatic tissues, we collected six matched pairs of primary and metastatic tissues (from the same patients) and compared their expression profiles using a miRNA microarray. Because the total RNA in five pairs of samples was insufficient for a microarray experiment, only one matched pair of samples was successfully analyzed by microarray. We observed an obviously increased expression of 18 miRNAs in bone metastasis compared with primary PCa, including miRs-451, -210, -141, -19b, -29b, -16, -20a, -30b, -193a-3p, -15a, -181a, -26b, -200a, -106b, -20b, -486-5p, -15b and -363. The expression of three miRNAs (miRs-145, -143 and -612) was obviously decreased in bone metastasis, especially miR-145 and miR-143, with reductions of 5.4-fold and 2.7-fold, respectively.
To further determine whether miRNA expression differed statistically between primary PCa and bone metastatic tissues, we compared miRNA expression in 6 primary PCa samples and 7 bone metastatic samples using a miRNA microarray. We found that the expression of 5 miRNAs was significantly decreased in bone metastasis compared with primary PCa, including miRs-508-5p, -145, -143, -33a and -100, with reductions of 4.1-fold, 8.1-fold, 5.7-fold, 3.2-fold and 5.3-fold, respectively. No miRNA expression was significantly increased (Table 1).
Verification of miRNA microarray data by real-time PCR analysis in primary PCa and bone metastasis
To confirm our microarray data, real-time PCR was performed to analyze the expression of the most significantly regulated miRNAs, including miRs-508-5p, -143, -145, -33a and -100. We examined their expression in independent samples of 16 primary PCa and 13 bone metastases, which had not been used for the microarray analysis. After the level of each miRNA in each sample was quantified and normalized to U6 expression, the real-time PCR data confirmed reductions of miRs-145, -143, -33a and -100 of 17.3-fold, 12.9-fold, 1.7-fold and 1.7-fold in the bone metastatic tissues, respectively. The expression levels of miRs-143 and -145 were significantly down-regulated in metastasis samples versus primary PCa (p = 0.012 and p = 0.014, respectively) (Figure 1B). However, the changes in miRs-33a and -100 were not statistically significant (p = 0.236 and p = 0.448, respectively), and miR-508-5p was not detected in any of the primary PCa or bone metastatic samples. Although the expression of miRs-125b, -153, -210, -363, -451 and -572 changed over 2-fold in bone metastasis compared with primary PCa in the microarray analysis, by real-time PCR only miR-125b, with a 3-fold reduction in the bone metastatic tissues, showed a statistically significant difference (p = 0.012) (Figure 1B). Thus, the results indicated a significant down-regulation of miRs-145, -143 and -125b when PCa tumors metastasize to bone.
To further identify the major sources of expression in primary PCa samples, the LNA-ISH technique was applied. The results showed that miRs-143 and -145 were mainly expressed in cancer cells, and their expression in stromal cells was lower or absent (Figure 2).
Relative expression of miRs-143 and -145 in the same sample
To further investigate whether the expression tendencies of miR-145 and miR-143 were identical in the same sample, their relative expression in the same sample was plotted from the real-time PCR data for all 22 samples of primary PCa (including the 6 microarray samples) (Figure 3A) and 20 samples of bone metastases (including the 7 microarray samples) (Figure 3B). Significant correlations between miR-145 and miR-143 were found in primary PCa (Kendall correlation = 0.850, p < 0.001) and bone metastases (Kendall correlation = 0.765, p < 0.001).
Downregulation of miRs-143 and -145 is negatively correlated to bone metastasis, serum PSA level and the Gleason score in primary PCa
Since we found that miRs-143 and -145 were down-regulated in bone metastasis, we postulated that their down-regulation might also be associated with the clinicopathological features of PCa patients. First, we performed a retrospective investigation of 22 patients with primary PCa: 12 patients without bone metastasis and 10 patients with bone metastasis. The age distributions of the patients with and without bone metastases showed no significant difference. The expression of miRs-143 and -145 in the 10 patients with bone metastases was significantly lower than that in the 12 patients without bone metastases (p = 0.039 and p = 0.041, Figure 4, A and D). Second, we assessed whether the expression of miRs-143 and -145 was related to the total serum prostate-specific antigen (PSA) level and the free PSA level in primary PCa. The results showed significant inverse correlations between the expression of miRs-143 and -145 and the free PSA level (Spearman correlation = −0.501, p = 0.018; Spearman correlation = −0.536, p = 0.010; Figure 4, B and E), and a significant inverse correlation between miR-145 expression and the total PSA level (Spearman correlation = −0.456, p = 0.033, Figure 4F), whereas there was no correlation between miR-143 expression and the total PSA level (Spearman correlation = −0.403, p = 0.063). Finally, we investigated whether the expression of miRs-143 and -145 was related to the Gleason score in primary PCa. There were also statistically significant inverse correlations between the expression of miRs-143 and -145 and the Gleason score (Spearman correlation = −0.574, p = 0.005; Spearman correlation = −0.546, p = 0.009, Figure 4, C and G). These results indicated that down-regulation of miRs-143 and -145 was associated with tumor progression and bone metastasis. Down-regulation of miR-125b was not correlated with bone metastasis, PSA level or the Gleason score in primary PCa (data not shown).
Upregulation of miRs-143 and -145 reduced the skeletal aggressiveness of PC-3 cells in vitro and in vivo
To investigate the role of miRs-143 and -145 in the development and progression of PCa metastasis, miR-143- and miR-145-over-expressing cell lines (PC-3/miR-143, PC-3/miR-145, LNCaP/miR-143 and LNCaP/miR-145) were established by retroviral transfection. Blank-plasmid-transfected cells, PC-3/vector and LNCaP/vector, were used as control groups. As shown in Figure 5, A and B, the relative expression of miRs-143 and -145 in the transfected PC-3 and LNCaP cell lines was much higher than in the vector-transfected cells (p < 0.01). Migration, invasion and adhesion assays were then performed in vitro. Interestingly, the wound healing assay showed that the migration of PC-3 cells transfected with miRs-143 and -145 was much slower than that of vector-transfected PC-3 cells, in a time-dependent manner (Figure 6A). The invasive property of PC-3 cells was examined by the Transwell-Matrigel penetration assay, in which far fewer miR-143- and miR-145-transfected PC-3 cells penetrated through the gel-membrane section than vector-transfected PC-3 cells (Figure 6B, p < 0.01). The invasive property of PC-3 cells was significantly inhibited by miRs-143 and -145, and even more markedly by miR-145.
We also examined the effects of miRs-143 and -145 on the adhesion ability of PC-3 cells in order to understand how they affect PCa cells residing at secondary sites. The results showed that miR-145 significantly enhanced the adhesive ability of PC-3 cells compared with vector-transfected PC-3 cells (Figure 6C, p < 0.05). PC-3 cells transfected with miR-143 also presented a higher adhesive ability, but the difference was not statistically significant. However, LNCaP/miR-143, LNCaP/miR-145 and LNCaP/vector cells did not show significant differences in cell migration, invasion or adhesion (data not shown). Moreover, these results indicated that ectopic miR-145 repressed aggressiveness more significantly than miR-143. To further investigate the role of miRs-143 and -145 in the development and progression of PCa metastasis in vivo, an intra-tibial injection mouse model was used. Five weeks after intra-tibial inoculation, the skeletal lesions of all animals in the left tibias were remarkably larger than those in the right tibias (Figure 7, upper panel), indicating that PC-3/miR-143 and PC-3/miR-145 had less skeletal invasive ability than PC-3/vector. Histological confirmation was made by H&E staining (Figure 7, middle panel). The extent and area of the skeletal lesions were assessed by X-ray scores (Figure 7, lower panel), from which PC-3/miR-143 and PC-3/miR-145 were shown to form significantly smaller tumors and less bone invasion than PC-3/vector (p = 0.035 and p = 0.014, respectively). These results suggested that miRs-143 and -145 can also repress the development and aggressiveness of PCa in bone.
Upregulation of miRs-143 and -145 repressed EMT of PC-3 cells
To investigate whether miRs-143 and -145 regulate bone metastasis by repressing EMT, western blotting analysis was performed to detect the protein expression of E-cadherin, fibronectin and vimentin, which are characteristic markers of PC-3 and LNCaP cell lines during EMT. The results illustrated that E-cadherin, an epithelial marker expected to be down-regulated during EMT, was increased in PC-3 cells transfected with miR-143 or miR-145. Moreover, fibronectin, a mesenchymal marker expected to be up-regulated during EMT, was repressed in PC-3 cells stably expressing miR-143 or miR-145 compared with vector-transfected PC-3 cells. Vimentin, another mesenchymal marker, was down-regulated only in PC-3 cells transfected with miR-143 (Figure 8A). Nevertheless, none of these proteins exhibited significant differences in LNCaP cells transfected with either miR-143 or -145 (Figure 8B).
We tested the ability of miRs-143 and -145 to reverse the mesenchymal phenotype of metastatic PCa cells. PC-3/vector cells are highly invasive and displayed a typical fibroblastic morphology, consistent with their very low level of E-cadherin expression. Over-expression of miRs-143 and -145 produced a dramatic shift in morphology, from a stick-like or long spindle-shaped mesenchymal population to a short spindle-shaped or round and flat epithelial population (Figure 8C). This suggested that miRs-143 and -145 have negative effects on EMT in PCa and can drive mesenchymal cells to transdifferentiate toward epithelial cells.
In recent studies, miRs-143 and -145 have also been implicated in metastasis. In the microvasculature, miR-145 is expressed in pericytes and represses the migration of microvascular cells by directly targeting Fli-1 [70]. In breast cancer, miR-145 was identified as a suppressor of cell invasion and metastasis that directly targets MUC1 [27]. By directly down-regulating FSCN1, miR-145 inhibited the invasion of esophageal squamous cell carcinoma cells [71]. Furthermore, Sachdeva M, et al. found that miR-145 could target multiple metastasis-related genes, including MMP-11 and ADAM-17 [72]. miR-143 was also demonstrated to abrogate PCa progression in mice by interfering with ERK5 signaling, which is involved in the EMT pathway [62,73]. In our study, the in vitro and in vivo results both supported the idea that deregulation of miRs-143 and -145 might promote bone metastasis of PCa.
Our study demonstrated that upregulation of miR-143 in PC-3 cells repressed the mesenchymal markers fibronectin and vimentin, and increased the epithelial marker E-cadherin. Moreover, up-regulation of miR-145 in PC-3 cells exhibited the same effects on these proteins except for vimentin. Re-expression of miR-143 in SW620 colorectal cancer cells also increased E-cadherin expression, and the cells showed a transition to a more epithelial-like phenotype [60]. These findings indicated that miRs-143 and -145 may act as suppressors of the transition to a more mesenchymal-like phenotype. EMT is considered one of the critical steps in tumor invasion and metastasis, allowing cancer cells to acquire mesenchymal features that permit escape from the primary tumor [74], and E-cadherin plays a critical role as a regulator of signaling complexes, with loss of E-cadherin function being a clinical indicator of poor prognosis and metastasis [75]. We can therefore expect that miRs-143 and -145 may inhibit the migration and invasion of PC-3 cells by repressing EMT.
Although upregulation of miRs-143 and -145 was able to repress the aggressiveness and EMT of PC-3 cells, which derive from a bone metastasis, it could not reverse the metastatic characteristics or regulate the EMT markers of LNCaP cells, which derive from a lymph node metastasis. Notably, deregulation of miRs-143 and -145 was not found in lymph node metastasis compared with primary PCa tumors by microarray analysis [76]. These findings suggest that miR-143 or -145 may have a cell type-specific function and inhibit only bone metastasis rather than lymph node metastasis, or that loss of miRs-143 and -145 could promote bone metastasis but not lymph node metastasis, which might be regulated by other miRNAs such as miR-221 [76].
Our results also showed a similar expression pattern of down-regulated miRs-143 and -145 within the same primary PCa tumor or bone metastasis. Because their DNA loci lie within approximately 2.0 kb of each other at chromosome 5q32 [77] and both precursors might originate from the same primary miRNA [78], we speculate that miRs-143 and -145 could be regulated by similar mechanisms. Moreover, whether one controls the expression of the other remains an open question, as no study has yet examined the interaction between miR-143 and miR-145; the underlying mechanism should be explored further.
miRs-143 and -145 were down-regulated to much lower levels in primary PCa patients with bone metastasis than in those without bone metastasis. Furthermore, the expression levels of miRs-143 and -145 in primary PCa patients were inversely correlated with the Gleason score, one of the strongest conventional predictors of tumor recurrence [79], suggesting that higher miR-143 and -145 expression might indicate a lower possibility of bone metastasis and a better clinical condition, and vice versa. The same relationship holds between miR-143 and -145 expression and the free PSA level, a predictor of pathologic stage alongside clinical stage and biopsy Gleason score [80] and a direct predictor of biochemical progression for PCa-specific mortality [81]. Given these facts, we expect that the levels of miRs-143 and -145 could serve as novel biomarkers for discriminating different clinical stages of human PCa and predicting bone metastasis.
A recent study showed that chemically modified miR-143 could be a candidate RNA medicine for the treatment of colorectal tumors [82], functioning as an anti-cancer drug in the future. This offers a fresh perspective on miRs-143 and -145 as clinical therapeutic targets in bone metastasis of PCa.
In summary, our findings suggest that miRs-143 and -145 may play important roles in the bone metastasis of PCa and be involved in the regulation of EMT. Both of them may also be clinically used as novel biomarkers in discriminating different stages of human PCa and predicting the possibility of metastasis or even as therapeutic targets in bone metastasis of PCa. | 7,158.2 | 2011-05-27T00:00:00.000 | [
"Biology",
"Medicine"
] |
An SVM fall recognition algorithm based on a gravity acceleration sensor
ABSTRACT To address the increasing health care needs of an ageing population, this paper proposes a method of detecting human movements using smartphones to decrease the risk of accidents in the elderly. The proposed method uses a mobile phone with an embedded acceleration sensor to record human motion information, which is divided into daily activities (walking, running, going up stairs, going down stairs, and standing still) and falling down. Because the acquired motion data contain interference noise, a median filter is employed to de-noise and smooth them. Moreover, we extract representative multi-group features and reduce their dimensionality by principal component analysis and singular value decomposition. Through experimental comparisons with various classifiers, the support vector machine classifier is selected to classify the extracted features. The accuracy of fall detection reached 96.072%, demonstrating the effectiveness of our proposed method.
Introduction
With the development of our society and the improvement of our living standards, fall detection, as a fundamental research topic in activity sensing, has attracted a great deal of attention from researchers in the past few years. One out of every three people over the age of 65 has fallen (Salva, Bolibar, Pera, & Arias, 2004), which seriously affects the physical and mental health of the elderly and their ability to care for themselves. If the elderly cannot get prompt help after falling, they will have to lie on the ground for a long time (Nouty, Fleury, & Rumeau, 2007). Falling is a serious threat to the health and safety of the elderly, and timely medical assistance will help reduce morbidity and mortality (Chen, Zhang, Feng, & Li, 2012).
In recent years, researchers have made some important advances in fall detection. However, due to the complexity of human movements and the influence of other uncertain factors, the human body falls in different ways, which eventually leads to false positives in detection. A fall generally results from interactions between many factors, and a study by Skelton et al. identified more than 400 factors that cause falls (Chaccour, Darazi, Hassani, & Andres, 2017). Wei et al. reviewed gait analysis with wearable systems and briefly studied the types and working principles of the sensors used in such systems (Tao, Liu, Zheng, & Feng, 2012). The principles and methods of fall detection were investigated by Mubashir, Shao, and Seed (2013), who point out that existing fall detection techniques can be divided into three categories. The first type of method is based on machine vision (Panahi & Ghods, 2018) (Khawandi, Ballit, & Daya, 2013), in which images are captured using the Microsoft Kinect camera and processed to extract features using a detection algorithm; an SVM classifier is then used to distinguish falls from normal motion. Rougier et al. proposed a new method for detecting falls by analysing the deformation of the human body in a video sequence (Rougier, Meunier, St-Arnaud, & Rousseau, 2011). Agrawal et al. used real-time video surveillance to detect human fall events at home, matching the human contours generated in the video against a human template to determine whether the fallen object in the video was a person (Agrawal, Tripathi, & Jalal, 2017). However, such systems have many limitations, such as high environmental requirements, elevated system costs from complicated algorithmic processing, and the potential to expose a user's personal privacy. The second type of method is based on acoustic fall detection systems, whose principles are similar to those of a stethoscope: the motion state of the human body is classified by capturing the sound waves reflected from the floor (Principi, Droghini, Squartini, Olivetti, & Piazza, 2016). However, the sound signal in such systems contains many interfering signals, leading to a decreased recognition rate. The third type of method is based on wearable devices that generally collect acceleration sensor signals from the human body and identify falls with a proposed algorithm. The advantage of using a wearable sensor is that no additional equipment needs to be installed, so the area of operation is not limited by space. Ailisto et al. (Ailisto & Makela, 2005) first proposed a method using acceleration sensors to measure acceleration data of the human body for gait recognition. Lee proposed a vertical-velocity-based pre-collision fall detection method using wearable inertial sensors (Lee, Robinovitch, & Park, 2015). Harris et al. used wearable technology and machine learning algorithms to study fall recognition, including fall detection and fall direction recognition (Harris, True, Zhen, & Jin, 2017). Wu developed a new fall detection system based on wearable devices, in which a fall is identified by an effective quaternion-based algorithm and a help request is automatically sent with the patient's location (Wu, Zhao, Zhao, & Zhong, 2015). However, this system has some shortcomings: a user lying down or suddenly sitting down may trigger false alarms.
Wearable detection systems are convenient to carry and are not restricted by the environment, but because of the complexity of human behaviours, the recognition results can vary. With the development of information technology, various mobile devices have rapidly emerged, and their performance and embedded sensors have been enhanced as well. These sensors can measure real-time motion information of users, which can be used not only for predicting users' locations but also for identifying users' behaviours (Li, Xie, Zhou, Gou, & Bie, 2016). Sensors embedded in smartphones have been used to acquire data that are analysed to design fall detection algorithms (Hakim, Huq, Shanta, & Ibrahim, 2017). Pinky Paul et al. focused on activity recognition using accelerometers embedded in smartphones (Paul & George, 2015). Rakhman et al. developed a fall detection system that detects the fall state by setting a threshold; however, this method is only suited to one type of forward fall (Rakhman, Nugroho, Widyawan, & Kurnianingsih, 2014). Tolkiehn used a 3D accelerometer and an air pressure sensor to detect the fall state and fall direction (Tolkiehn, Atallah, Lo, & Yang, 2011).
By using embedded sensors with computational ability, personal devices are able to detect physical activities. The advantage of this solution is that no additional devices need to be deployed, so the designed system is simple and easy to use (Sun, Zhang, Li, Guo, & Li, 2010). For data analysis, the main approaches are thresholding and machine learning. Harris et al. compared four algorithms, including the support vector machine, random forest, logistic regression and k-nearest neighbours (K-NN), and ultimately demonstrated the accuracy of their proposed fall recognition system; however, few features were selected in their experiments and the recognition accuracy was not high (2017). Paul used a clustering K-NN classification algorithm, which is superior to K-NN in accuracy, but this algorithm is susceptible to abnormal values, which ultimately leads to misjudgments (2015). Khawandi et al. used a decision tree approach to classify each feature, but it can over-fit due to its multiple scans and data set types (2013). In this paper, we process the data collected from the sensor and adopt an SVM to detect the fall state.
Data collection and feature extraction
The acceleration information changes relatively smoothly during normal movement, whereas a sharp impact force occurs during a fall. The experiment takes into account that the acceleration signal will deviate somewhat depending on the position of the mobile phone; thus, to better reflect the state of the human body, the mobile phone is placed at the volunteer's waist, the centre of gravity of the human body. Figure 1 presents photos of a fall taken every 0.2 s. It can be seen in the figure that the X, Y, and Z axes all change considerably when the human body falls. Therefore, we used the gravity acceleration sensor signal to detect falls.
Data collection and processing
The collected sensor data include not only the human motion acceleration signals but also gravity acceleration signals, and both can be disturbed by noise during motion. Therefore, it was necessary to de-noise and smooth the collected data. The median filter is a nonlinear smoothing technique whose basic principle is to replace the value of a point in a sequence with the median value of all the points in its neighbourhood. The median filter has a good filtering effect on impulse noise and can preserve signal edges during filtering. Therefore, this paper uses a median filter to process the signal.
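As a minimal sketch of this smoothing step (assuming the samples arrive as a 1-D array; the window length is an assumption, not a parameter reported by the paper):

```python
# Sketch: median-filter smoothing of a raw acceleration trace.
import numpy as np
from scipy.signal import medfilt

raw = np.array([9.8, 9.7, 15.2, 9.9, 9.8, 9.6, 9.7])  # spike = impulse noise
smoothed = medfilt(raw, kernel_size=3)                  # odd-length window
print(smoothed)  # the 15.2 spike is replaced by a neighborhood median
```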
Feature extraction and feature selection
For each sample collected, a number of factors related to human motion recognition must be identified, each of which becomes a feature of the research. The quality of feature selection can greatly impact the classification results.
Feature extraction is a method of extracting representative features of a pattern by transforming the measured values, and therefore plays an important role in pattern recognition. In a human motion pattern recognition system, the feature vector is extracted and selected from the time-domain acceleration signal, and the extraction process is relatively simple. A fall is a short and strenuous movement in an unconscious state. The resultant acceleration is analysed as only one of the features, and the remaining features are extracted from the uniaxial accelerations. The resultant acceleration is defined in equation (1):

a = √(a_x² + a_y² + a_z²)    (1)

where a_x, a_y and a_z are the acceleration in the X, Y and Z directions, respectively. The resultant acceleration reflects the severity of the body movement and avoids the error caused by analysing human motion along a single axis.
The time-domain signals change obviously during daily movements, while the frequency-domain signals change little. Therefore, only the time-domain features of the acceleration are extracted in the experiment. The most commonly used time-domain features are the mean (Wang, Yang, Chen, & Chen, 2005) (Ling & Intille, 2004) (Ravi, Dandekar, Mysore, & Littman, 2005), the variance (2005) (2004), the correlation between axes (2005) (2004) (2005), the skewness and the kurtosis. The acceleration signal is almost constant when a person stays still. Skewness measures the direction and degree of skew of the acceleration distribution and can effectively distinguish a downward movement from other states of motion. Kurtosis is the peak value of an acceleration curve at the mean value and can distinguish running from other states. During a fall, the angle of inclination of the human body towards the ground changes greatly, and Figure 1 shows that the Y axis of the mobile phone changes greatly in the vertical direction. To distinguish falls from daily activities, the rotation angle ∂ between the Y axis and the gravity acceleration is also taken into consideration in this paper, defined as:

∂ = arccos(a_y / G)

where G is the gravity acceleration.
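The resultant acceleration of equation (1) and the tilt angle ∂ can be computed per sample as in the following sketch, taking G = 9.81 m/s² as an assumption:

```python
# Sketch: the two fall-sensitive features defined above, computed with NumPy.
import numpy as np

G = 9.81  # assumed gravity magnitude, m/s^2

def resultant(ax, ay, az):
    """Resultant acceleration, equation (1)."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def tilt_angle(ay):
    """Angle between the phone's Y axis and the gravity vector, in degrees."""
    return np.degrees(np.arccos(np.clip(ay / G, -1.0, 1.0)))

print(resultant(1.2, 9.6, 0.8), tilt_angle(9.6))
```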
As the number of features increases, the dimensions of the feature space expand, and irrelevant features may reduce the recognition rate. Principal component analysis (PCA) is a widely used statistical method for identifying patterns in high-dimensional datasets (Mastylo, 2016). To eliminate feature correlation and information redundancy, we used PCA to reduce the dimensions of the extracted features.
In the experiment, 21-dimensional time-domain features were extracted: the inclination angle, the resultant acceleration, and the mean, variance, inter-axis correlation, kurtosis and skewness of the X, Y and Z axes. Given training samples $X^{(1)}, X^{(2)}, \cdots, X^{(m)}$, each sample is described by a feature vector $x = [x_1, x_2, \cdots, x_n]^T$. The process is as follows:

Step 1: Normalize the selected training-sample features to obtain the processed sample matrix.

Step 2: Calculate the covariance matrix of the sample features,

$$\mathrm{Cov} = \frac{1}{m}\sum_{i=1}^{m} x^{(i)} \left(x^{(i)}\right)^{T}.$$

Step 3: Compute the eigenvalues and eigenvectors of the covariance matrix via the singular value decomposition algorithm (in MATLAB, [eigenvectors, eigenvalues] = eig(cov)).

Step 4: Arrange the eigenvalues in descending order and retain components until the cumulative contribution rate reaches the set threshold of 0.90.

The reduced feature matrix is denoted $X_k$. After PCA dimensionality reduction, the original 21-dimensional feature matrix is reduced to a 7-dimensional matrix. An effective classification method can then substantially improve the system's ultimate recognition performance.
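A self-contained Python sketch of Steps 1-4, retaining components up to a 0.90 cumulative contribution rate, is shown below; the random input matrix is a stand-in for the real 21-dimensional feature data.

```python
# Minimal PCA sketch mirroring Steps 1-4 above (names are illustrative).
import numpy as np

def pca_reduce(X: np.ndarray, threshold: float = 0.90) -> np.ndarray:
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)     # Step 1: normalize
    cov = np.cov(Xc, rowvar=False)                # Step 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # Step 3: eigendecomposition
    order = np.argsort(eigvals)[::-1]             # Step 4: sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()    # cumulative contribution rate
    k = int(np.searchsorted(ratio, threshold)) + 1
    return Xc @ eigvecs[:, :k]                    # X_k, the reduced features

X = np.random.randn(500, 21)   # stand-in for the 21-D feature matrix
X_k = pca_reduce(X)            # ~7 columns for the data reported in the paper
```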
Classification and recognition
Artificial neural networks and SVM classifiers have been widely used in the field of pattern recognition (Talele, Shirsat, Uplenchwar, & Tuckley, 2016; Li, Pang, Liu, & Wang, 2017). The SVM classifier is generally adopted to solve classification and regression problems. Class labels 1-6 represent going up stairs, walking, going down stairs, running, standing and falling, respectively. Figure 2 shows the flowchart of the fall detection system. The training set is modelled by neural networks (generalized regression neural networks and probabilistic neural networks) and support vector machines.
Neural network
A generalized regression neural network (GRNN) has a summation layer, which removes the weight connections between the hidden and output layers. The summation layer contains two types of neurons. The first type arithmetically sums the outputs of all neurons in the pattern layer, with the connection weight between the pattern layer and each neuron set to 1. The second type computes the weighted sum over all pattern neurons. The output Y of the network is obtained with the following calculation:

$$Y(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left[-\dfrac{(X - X_i)^T (X - X_i)}{2\sigma^2}\right]}{\sum_{i=1}^{n} \exp\left[-\dfrac{(X - X_i)^T (X - X_i)}{2\sigma^2}\right]}$$

where X is the network input variable, $X_i$ is the learning sample corresponding to the ith neuron, $Y_i$ is the sample observation of the random variable y, n is the sample size, and σ is the smoothing factor.
The GRNN training process does not need to be iterated, and it is much faster than the back propagation (BP) neural network. In addition, the GRNN learning algorithm does not need to adjust the connection weights among neurons in the training process. Instead, the smoothing factors are changed to adjust the transfer function among units.
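The following Python sketch implements the kernel-weighted prediction formula above; the value of the smoothing factor σ is an assumption for illustration.

```python
# Minimal GRNN prediction sketch (Nadaraya-Watson form of the output above).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Return the GRNN output for each query point."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distances to samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
        preds.append(np.dot(w, y_train) / w.sum()) # weighted sum / simple sum
    return np.array(preds)

X_tr, y_tr = np.random.randn(200, 7), np.random.randn(200)
print(grnn_predict(X_tr, y_tr, np.random.randn(3, 7)))
```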
Probabilistic neural networks (PNNs) have many advantages, such as a simple learning process, fast training, accurate classification and good fault tolerance. Essentially, a PNN is a supervised network classifier based on the Bayesian minimum-risk criterion, and its structure is similar to that of the generalized regression neural network. The probability density function in the PNN model is estimated as

$$f(\vec{x} \mid w_i) = \frac{1}{(2\pi)^{l/2} \sigma^{l} N_i} \sum_{k=1}^{N_i} \exp\left[-\frac{(\vec{x} - \vec{x}_{ik})^T (\vec{x} - \vec{x}_{ik})}{2\sigma^2}\right]$$

where $w_i$ is the class of the sample, $\vec{x}_{ik}$ is the kth training sample belonging to class $w_i$, l is the dimension of the sample vector, σ is the smoothing parameter, and $N_i$ is the total number of training samples of class $w_i$. The only parameter to be tuned in the PNN model is σ, which can be taken as half the average distance between feature vectors in the same group. Several experiments showed that finding the optimal value of σ is not difficult in practice: a slight change in σ produces no significant change in the misclassification ratio.
Support vector machine
The SVM classifier was originally applied to dichotomous problems; multi-class problems are generally decomposed into several binary problems (He & Jin, 2008). SVMs have given good results in various pattern recognition areas and are a natural choice for human motion recognition (Ma, Zhang, Yang, Liu, & Chen, 2016). The given data contain m indicators ($x \in R^m$) and l training points. The SVM classifier separates the classes with the optimal hyperplane

$$\omega^{T} x + b = 0$$

where ω is the hyperplane normal vector and b is a constant term. If the training set is not linearly separable, a relaxation variable $\xi_i$ ($\xi_i \geq 0$) and a penalty parameter C (C > 0) are introduced for each training point $(x_i, y_i)$. The optimization objective and constraints of the classification problem in the linearly non-separable case are then

$$\min_{\omega,\, b,\, \xi} \; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{l} \xi_i \quad \text{s.t.} \quad y_i\!\left(\omega^{T} x_i + b\right) \geq 1 - \xi_i, \;\; \xi_i \geq 0.$$

For non-linear cases, the SVM approach chooses a kernel function that resolves the linear inseparability of the original space by mapping the data into a high-dimensional space. This paper uses the radial basis kernel function

$$K(x_i, x_j) = \exp\left(-g \, \|x_i - x_j\|^2\right).$$

We adopted cross-validation (CV) and a grid search to find the optimal parameters. Cross-validation is a statistical assessment method whose basic idea is to divide the raw data into two groups, a training set and a validation set. The training set is used to train the classifier and obtain the optimal model parameters; the model is then validated on the validation set, taking classification accuracy as the performance index of the classifier.
CV effectively avoids over-fitting and under-fitting. Experiments show that an SVM trained with CV-selected parameters is more effective than one trained with randomly selected parameters. The grid search method is used to find the globally optimal parameters and improve classification accuracy. If several (C, g) pairs achieve the same accuracy, the pair with the smallest penalty parameter C should be chosen; if several values of g remain tied, the first (C, g) pair encountered is selected.
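A minimal Python sketch of this training procedure, using scikit-learn's RBF-kernel SVM with a cross-validated grid search, is shown below. The parameter grid, the 5-fold split and the random stand-in data are assumptions; the paper does not state its exact grid.

```python
# Hedged sketch: RBF-SVM with CV grid search over (C, gamma).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X = np.random.randn(600, 7)             # stand-in for PCA-reduced features
y = np.random.randint(1, 7, size=600)   # labels 1-6 (6 = falling)

grid = {"C": [2 ** k for k in range(-2, 9)],
        "gamma": [2 ** k for k in range(-4, 9)]}   # 'gamma' plays the role of g
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```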
Experiment results
Based on many experiments, the best model prediction results for GRNN and PNN are shown in Figure 3, and the best results for SVM in Figure 4. The comparison shows that the SVM is the best choice for classifying and identifying human motion states.
To predict the daily movements of a human body, the category labels 1-5 represent going up stairs, walking, going down stairs, running and standing, respectively. Figure 5 shows the final prediction results, with an accuracy of 92%; the CV method yielded a penalty parameter C of 64 and a kernel parameter g of 128. Figure 5 also shows a tendency to confuse going up and down stairs, as the variation trends of the resultant accelerations are similar in both cases. For fall detection, all daily movements are grouped into one class and falling into the other; the prediction results are shown in Figure 6. In this experiment, the best parameters were C = 4 and g = 8, and the classification accuracy under daily conditions reached 96.072%.
One of the challenges in fall detection is distinguishing real falls from similar daily activities, such as lying down, sitting down and squatting, which often lead to misjudgment.
Conclusion
In this paper, a fall detection algorithm based on the SVM classifier is proposed. A median filter is used to reduce the noise of the sensor signal, and representative features are extracted from the acceleration signal. The experimental results show that the SVM classifier is more accurate than the neural networks, with a prediction accuracy of up to 96.072%. Our future work will use multiple support vector machines to reduce misjudged and missed cases.
"Computer Science"
] |
Theoretical investigation of graphene-based photonic modulators
Integration of electronics and photonics for future applications requires an efficient conversion of electrical to optical signals. The excellent electronic and photonic properties of graphene make it a suitable material for integrated systems with extremely wide operational bandwidth. In this paper, we analyze a novel modulator geometry based on the rib photonic waveguide configuration, with a double-layer graphene placed between the slab and the ridge. A theoretical analysis of the graphene-based electro-absorption modulator shows that 3 dB modulation is possible with a ~600 nm-long waveguide, resulting in energy per bit below 1 fJ/bit. The optical bandwidth of such modulators exceeds 12 THz, with an operation speed ranging from 160 GHz to 850 GHz, limited only by graphene resistance. The performance of the modulators was evaluated using a figure of merit defined as the ratio between extinction ratio and insertion losses, which was found to exceed 220.
In recent years it has become evident that bandwidth-limited electrical interconnects can no longer meet the growing demand of data processing and telecommunication traffic. Besides their limited capacity, electrical wires suffer from large energy consumption, signal attenuation and significant operational costs as interconnect densities rise. Photonics, by contrast, enables huge amounts of data to be moved at very high speeds with extremely low power over very small optical waveguides. Among the key components of future telecommunication networks are optical modulators, which serve as the gateway from the electrical to the optical domain. These are particularly attractive for low-energy transmitters because they do not have a threshold that could limit the minimum operating energy, and they may be easier to integrate with the available silicon platform [1][2][3]. An optical modulator can modify the properties of light, such as its phase, amplitude or polarization, by thermo-optic 4, electro-optic 5, or electro-absorption modulation 6, and modulators are usually based on interference (Mach-Zehnder interferometers) 7, resonance (ring resonators) 8 or bandgap absorption (germanium-based electro-absorption modulators) 9. However, they suffer either from slow switching times (thermo-optic switches), narrow operating bandwidth, or large footprint (electro-optic modulators). There is therefore a need for a new technology enabling high-performance operation over a small active region. The unique properties of graphene, such as strong coupling with light, high-speed operation, and gate-variable optical conductivity 10, make it a very promising material for realizing novel modulators 11,12. Graphene offers the highest intrinsic mobility and the largest current density of any material, as well as an extraordinary thermal conductivity. These features make graphene ideal for use in nanoelectronics. A single atomic layer of graphene provides the highest saturable absorption for a given amount of material, a phenomenon which enables highly efficient electro-absorption modulators 13, photodetectors 14 and power monitors 15 to be realized.
Recently, a broadband electro-absorption modulator based on interband absorption in graphene was demonstrated, with an overall length of 40 µm and a modulation depth of 0.16 dB/µm, achieved by placing a double-layer graphene on top of a silicon waveguide 16. In this configuration, the effect of the graphene conductivity change is not very pronounced, as the graphene is placed far from the electric-field maximum of the propagating mode. In order to increase the effect of changes in graphene's conductivity on the propagating mode, the graphene should be placed at the maximum of the electric field 17. It has previously been shown that plasmonic ridge waveguides 18 are ideal candidates for graphene-assisted optical modulators, since the electric field reaches its maximum at the interface between metal and dielectric 13. Consequently, placing a double-layer graphene between metal and dielectric strongly affects the propagating mode. Another concept is based on a ridge-type photonic modulator consisting of dual graphene layers separated by thin hexagonal boron nitride (hBN) spacers placed at the center of the waveguide, where the light intensity is maximal 11. Although the concept is straightforward, the fabrication of such structures poses many challenges in terms of alignment, and technological imperfections might influence the resulting propagation characteristics of the mode. Compared to this configuration, the graphene-based modulator presented in this paper (Fig. 1(a)-(b)) is based on the rib waveguide platform, with the graphene sheets and spacers placed between the waveguide and the slab; thus, the difficulties of aligning the top and bottom ridge regions are avoided. Additionally, the rib waveguide approach provides significant advantages over strip waveguides, as it allows enhanced flexibility and compatibility with other processing modules such as photodiodes and multiplexers.
Results
Geometry and gate-variable dielectric permittivity of graphene. The proposed double-layer graphene optical modulator is based on the rib waveguide configuration, with the Si rib waveguide deposited on a buried-oxide layer. To maximize the influence of the graphene on the modulator performance, the double-layer graphene was placed between the Si slab and the Si ridge (Fig. 1(b)). The double-layer graphene separated by a thin dielectric forms a simple parallel-plate capacitor whose properties are controlled by the chemical potential μ, tunable by electrical gating. Owing to graphene's zero bandgap and symmetric valence and conduction bands, one graphene layer is doped with holes and the other with electrons at the same doping level. The applied gate voltage changes the charge-carrier density in graphene, n = α(V + V₀), and accordingly shifts the Fermi level:

$$\mu = \hbar v_F \sqrt{\pi n}$$

where V₀ is the offset voltage caused by natural doping, α is estimated from a simple capacitor model (α = ε₀ε_d/(d·e)), ħ is the reduced Planck constant, v_F is the Fermi velocity, and n is the electron/hole doping concentration.
The gate-dependent complex dielectric function of graphene, ε(ω), was obtained from the complex optical conductivity of graphene, σ(ω) = σ₁(ω) + iσ₂(ω), consisting of interband and intraband contributions (Fig. 1(c)).
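A hedged Python sketch of this calculation is given below, using the common T ≈ 0 K intraband (Drude) plus interband conductivity approximation rather than the paper's exact model; the scattering time and the effective graphene thickness used to convert the sheet conductivity into a volume-equivalent ε(ω) are assumed values.

```python
# Illustrative gate-variable dielectric function of graphene (T ~ 0 K).
import numpy as np

e, hbar, eps0 = 1.602e-19, 1.055e-34, 8.854e-12   # SI constants
tau, t_g = 1e-13, 0.34e-9   # assumed scattering time (s), graphene thickness (m)

def graphene_eps(omega, mu_ev):
    """Complex in-plane dielectric function at angular frequency omega."""
    mu = mu_ev * e
    # intraband (Drude) contribution
    sigma_intra = 1j * e**2 * abs(mu) / (np.pi * hbar**2 * (omega + 1j / tau))
    # interband contribution: step-like real part plus logarithmic imaginary part
    step = np.heaviside(hbar * omega - 2 * abs(mu), 0.5)
    log_term = np.log(np.abs((hbar * omega - 2 * abs(mu))
                             / (hbar * omega + 2 * abs(mu))))
    sigma_inter = (e**2 / (4 * hbar)) * (step + (1j / np.pi) * log_term)
    sigma = sigma_intra + sigma_inter
    return 1.0 + 1j * sigma / (eps0 * omega * t_g)   # volume-equivalent epsilon

omega = 2 * np.pi * 3e8 / 1550e-9                    # 1550 nm
for mu_ev in (0.404, 0.512):                          # values discussed in the text
    print(mu_ev, graphene_eps(omega, mu_ev))
```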
Position of graphene sheets. To investigate the performance of the modulator, different waveguide configurations were numerically analyzed, with a double-layer graphene and dielectric spacer placed between the slab and the ridge (Fig. 2). In all calculations the ridge width was kept constant at w = 400 nm and the sum of the ridge height and slab height at h = 340 nm, while only the ratio of ridge height to slab height was varied. We started with the strip waveguide (ridge height h = 340 nm, slab height t = 0 nm) with the double-layer graphene and spacers placed on top of the waveguide (Fig. 2(a)). For this configuration, modulation depths of 0.18 dB/µm (TM) and 0.037 dB/µm (TE) were calculated, in good agreement with the experimental value of 0.16 dB/µm 16 obtained for the same configuration, with extinction-ratio-to-insertion-loss (ER-IL) ratios of 14.3 and 3.5 for the TM and TE modes, respectively. Note that the insertion losses were attributed to the propagation losses, as the coupling losses from the photonic waveguide to the modulator can be considered close to 0 dB. For the same strip waveguide configuration, an improvement was observed with the graphene and spacers placed below the ridge, i.e., between the buried oxide and the Si ridge (Fig. 2(b)): the mode power attenuation (MPA) was calculated to be 0.34 dB/µm (TM) and 0.033 dB/µm (TE), with ER-IL ratios of 25.5 (TM) and 2.9 (TE). Introducing a slab and reducing the ridge thickness moves the mode down, i.e., the maximum of the mode electric field moves closer to the graphene sheet, strengthening their interaction. With only a 40 nm-thick slab and the ridge thickness reduced from 340 nm to 300 nm, the mode attenuation reaches 2.32 dB/µm for the TM mode and 0.18 dB/µm for the TE mode, which translates to a significant improvement of the ER-IL ratio to 142 and 10.9 for the TM and TE modes, respectively (Fig. 2(f)). Even better performance was obtained for an 80 nm-thick slab and a 260 nm-thick ridge, where the MPA was 5.05 dB/µm for the TM mode and 0.29 dB/µm for the TE mode, with calculated ER-IL ratios of 230 (TM) and 12.8 (TE) (Fig. 2(c)-(d)). Further increasing the slab thickness while decreasing the ridge height pushes the mode further into the slab, resulting in weaker mode confinement as the mode spreads into the slab. Beyond a cutoff slab thickness of 100-110 nm, the photonic mode is mostly supported in the slab layer.
It has to be emphasized that conventional GeSi electro-absorption modulators typically have ER-IL ratios that do not exceed 3.5. Optical modulators based on Si ridge waveguides integrated with graphene provide ER-IL ratios two orders of magnitude higher, offering a tremendous improvement over state-of-the-art modulators.
TM/TE mode versus chemical potential. Based on the results of the previous section, a detailed analysis was performed for a Si rib photonic waveguide with a slab thickness of 80 nm, a 260 nm-thick and 400 nm-wide ridge, and a double-layer graphene and spacers placed between the slab and the ridge. In the first step, the change in refractive index and MPA was analyzed as a function of chemical potential for different spacer dielectrics, keeping the spacer thickness constant at 5 nm. It must be emphasized that finding an appropriate dielectric spacer is another key issue in obtaining efficient electro-absorption modulators. Firstly, good-quality graphene sheets with high carrier mobility form on spacers with a small lattice mismatch with graphene. Secondly, very thin spacers with a high dielectric constant are needed, as they reduce the energy per bit and the power consumption. Accordingly, analyses were performed for two spacers with refractive indices of n = 1.98 and n = 3.47, corresponding to hexagonal boron nitride (hBN) and a high-k dielectric, respectively (Fig. 3(a)).
As shown in Fig. 3(a), similar behavior of the mode effective index and MPA is observed for both dielectric spacers. However, some differences can still be observed in the mode effective index, which strongly affects the realization of electro-refractive modulators. Firstly, a higher spacer refractive index results in a slightly larger mode effective index: for μ = 0 eV it increases from n_eff = 2.353 for the hBN spacer to n_eff = 2.450 for the high-k dielectric spacer. Secondly, for the modulator with the low-k dielectric spacer (hBN) the minimum mode effective index occurs at μ = 0.495 eV, whereas for the high-k dielectric spacer it shifts to μ = 0.500 eV. The maximum mode effective index is not affected by the spacer and remains at μ = 0.530 eV. Moreover, the difference between the maximum and minimum mode effective index increases from Δn = 0.118 for the low-k dielectric spacer (hBN) to Δn = 0.144 for the high-k dielectric spacer. In terms of chemical potential, the change between the minimum and maximum mode effective index is Δμ = 0.035 eV for the hBN spacer and Δμ = 0.030 eV for the high-k dielectric spacer. However, to achieve the same charge-carrier density in graphene, a smaller voltage change is required for the high-k dielectric spacer: the required voltage change was calculated to be ΔV = 0.505 V for the hBN spacer, while for the high-k dielectric spacer it drops to ΔV = 0.141 V. Since the optical absorption in graphene can be controlled through electrical gating, graphene can be used as the active medium in an optical electro-absorption modulator. The conductivity change induced by an applied gate voltage therefore has a strong effect on the propagating mode (Fig. 3(a)). Regardless of the spacer material, the minimum absorption of ~0.02 dB/µm was observed at μ = 0.404 eV, while at μ = 0.512 eV the absorption rises to maxima of ~4.172 dB/µm and ~5.052 dB/µm for the hBN and high-k dielectric spacers, respectively. Thus, to achieve a 3 dB modulation depth, a 720 nm-long waveguide is required for the structure with the hBN spacer, and only a 595 nm-long waveguide for the structure with the high-k dielectric spacer. Furthermore, the efficiency of the electro-absorption modulator, estimated by the figure of merit Δα/α, shows slightly better performance with the high-k dielectric spacer, where it was calculated to be 229, compared with 224 for the low-k dielectric spacer. Apart from the optical properties, the investigated graphene-based electro-absorption modulators possess excellent electrical properties, such as low energy-per-bit consumption. The voltage required to switch the modulator from its minimum to its maximum absorption state was evaluated to be ΔV = 1.394 V for the hBN spacer and ΔV = 0.453 V for the high-k dielectric spacer, corresponding to E_bit = 0.96 fJ/bit and E_bit = 0.26 fJ/bit, respectively.
One way to increase the modulator attenuation is to push the mode down into the slab so that the graphene resides closer to the mode-field maximum, maximizing the interaction between the mode field and the graphene. This can be achieved either by increasing the slab thickness or by decreasing the ridge dimensions. In our calculations the slab thickness and ridge width were kept constant while the ridge height was decreased from 260 nm to 200 nm (Fig. 3(b)). The analysis was performed for a high-k dielectric spacer and for both supported modes, TM and TE. Although the attenuation curves for both modes exhibit similar behavior, the attenuation of the TM mode is significantly larger than that of the TE mode, and the change of chemical potential has a stronger effect on the TM mode. Most notably, at μ = 0.51 eV there is a transition from ''dielectric graphene'' to ''metallic graphene'', corresponding to a dip in the curve of the dielectric constant (Fig. 1(c)). Specifically, in the absence of applied voltage (μ = 0 eV), the attenuation losses of the two modes are similar, 0.34 dB/µm for the TE mode and 0.46 dB/µm for the TM mode, and this trend persists up to μ ≈ 0.4 eV. This is consistent with observed trends in silicon photonics, where TM polarization is less commonly used because propagation losses tend to be higher. At μ ≈ 0.404 eV, the attenuation losses of both modes reach a minimum of 0.025 dB/µm, which is manifested by a decrease in the imaginary part of graphene's dielectric constant and an increase in its real part (Fig. 1(c)). As μ is further increased, the losses associated with polarizing the graphene remain at a low level, but the polarization strength induced by the mode, described by the real part of the dielectric constant, drops very quickly, which increases the MPA of both modes; the attenuation of the TM mode remains larger than that of the TE mode. At μ = 0.51 eV, the real part of the dielectric constant becomes negative, and a plasmonic mode associated with TM-polarized light, propagating at the interface between graphene and dielectric, emerges. As the real part of the dielectric constant becomes increasingly negative, the absorption losses and MPA decrease. It should be emphasized that the maximum absorption of the mode corresponds to the minimum value of the graphene dielectric constant, i.e., the dip in the curve of the dielectric-constant magnitude 17. This ''epsilon-near-zero'' effect can be seen in almost any material at its plasma frequency; however, the uniqueness of graphene lies in the fact that its plasma frequency can be tuned by electrical gating. Furthermore, the magnitude of the dielectric constant varies by more than 30 times between μ = 0.4 eV (maximum magnitude) and μ = 0.51 eV (minimum magnitude), which explains the high modulation depth achieved with graphene-based modulators.
Wavelength dependence of MPA, optical bandwidth. Apart from high modulation speed, small footprint and high modulation strength (efficiency), novel integrated modulators require a large optical bandwidth for applications in on-chip optical interconnects. However, due to the poor electro-optic properties of conventional materials, conventional electro-optic modulators suffer either from a very large footprint or from narrow bandwidth. In comparison with compound semiconductors, the ultrahigh carrier mobility and wavelength-independent optical absorption of graphene enable new optical modulators with ultra-broad optical bandwidth.
To evaluate the bandwidth of the presented rib waveguide photonic modulator, the dependence of graphene's dielectric constant on the chemical potential, and in consequence the mode effective index and MPA, was studied for different wavelengths covering the entire telecommunications band. As shown in Fig. 4(a)-(b), graphene's dielectric constant, and consequently the photonic mode, vary with wavelength. As the chemical potential increases, the peak in the real part of the dielectric constant (Fig. 4(a)), corresponding to the minimum MPA, and the dip in the magnitude of the dielectric constant (the transformation of ''dielectric'' graphene into ''metallic'' graphene), corresponding to the maximum MPA, move to shorter wavelengths, which directly shifts the MPA spectrum as well (Fig. 4(b)).
Thus, for a chemical potential of 0.512 eV, the dip in the magnitude of the dielectric constant is observed at a wavelength of 1550 nm, which fulfills the requirement of maximum modulator loss at that wavelength. Away from the central wavelength, the MPA decreases; consequently, spanning the wavelength from 1520 nm to 1580 nm only reduces the modulation depth by ~1.5 dB/µm.
Conversely, for a lower potential, μ = 0.46 eV, the dip in the real part of the dielectric constant moves towards longer wavelengths, with the maximum MPA corresponding to λ ≈ 1710 nm. For a 1 µm-long modulator, the 3 dB optical bandwidth was calculated to be 14.1 THz near 1550 nm, 12.3 THz near 1480 nm and 13.6 THz near 1710 nm for chemical potentials of 0.512 eV, 0.54 eV and 0.46 eV, respectively. For a lower chemical potential, μ = 0.42 eV, a 3 dB modulation requires at least a 1.5 µm-long waveguide, with the optical bandwidth calculated for a 2 µm-long modulator exceeding 15.1 THz.
Apart from the chemical potential, the wavelength dependence of the MPA and mode effective index was studied for different spacer materials and for both supported modes, TE and TM (Fig. 4(c)-(d)). For a high-k dielectric spacer and a modulator length of 1 µm, a mode attenuation of 5.08 dB can be achieved, while for a low-k dielectric spacer the same attenuation requires at least a 1.2 µm-long interaction length. The optical bandwidth of both modulators was calculated to be 14.1 THz, with a central wavelength at 1550 nm. It can therefore be concluded that the spacer does not affect the optical bandwidth of the modulator, but it does impact the modulator length (Fig. 4(d)).
For the TE mode (Fig. 4(c)), a mode attenuation of 5.08 dB requires an interaction length of 16.2 µm for the structure with a high-k dielectric spacer and 21.2 µm for a low-k dielectric spacer. Additionally, the central wavelength corresponding to the maximum mode attenuation shifts for the TE mode from 1550 nm to 1560 nm, with a 3 dB bandwidth calculated to be 16.5 THz for both dielectric spacers.
This modulator therefore offers broadband operation, so hundreds of channels from different systems can be processed in the same device thanks to its weak wavelength dependence.
Modulation speed. Due to the exceptionally high carrier mobility and high saturation velocity, the operation bandwidth is not likely to be limited by the carrier transit time. The relaxation time is inversely proportional to the degree of crystalline disorder in the graphene, so high-quality graphene can operate on the timescale of picoseconds, implying that graphene-based electronics may operate at 500 GHz. In practice, the maximum operating bandwidth is limited by the RC constant of the device.
As the calculated capacitance is very low for both dielectric spacers, the main contributor to the capacitive delay is the graphene resistance, with a graphene sheet resistance of 23.5 Ω/□ at μ = 0 eV. For a modulator working in the low-loss regime, i.e., with small losses in the OFF voltage state, obtaining a 3 dB modulation requires driving the modulator into the high-loss regime by increasing the chemical potential. For a 0.72 µm-long modulator with a low-k dielectric spacer (ε = 3.9), a 3 dB modulation is achieved with an operation speed of 0.51 THz while keeping the energy per bit at a low level of 0.96 fJ/bit. Conversely, for the modulator with a high-k dielectric spacer, an active length of around 0.595 µm is sufficient to achieve a 3 dB modulation with an operation speed of 0.16 THz and E_bit = 0.26 fJ/bit. This confirms a fundamental energy-speed tradeoff. Additionally, adopting longer modulators will not affect the modulation speed but will improve the modulation depth, although at the cost of E_bit. In summary, low-k dielectric spacers offer a higher operation speed, but at the cost of energy and footprint (Fig. 3(a)).
To go beyond a ~1 THz operation speed, the modulator should be operated in the high-loss regime, with the OFF voltage state corresponding to the maximum modulation depth at μ = 0.512 eV (Fig. 3(a)). A 3 dB modulation for a 0.72 µm-long waveguide with a low-k dielectric spacer then requires a significant increase in the chemical potential, realized by increasing the bias voltage. At the same time, as the bias voltage increases, the graphene conductivity rises and the graphene sheet resistance decreases, resulting in an increased operation speed. Thus, for μ = 1.0 eV, corresponding to a voltage increase to 14 V, the bandwidth increases to 3.5 THz at the cost of the energy per bit, which increases to 69 fJ/bit. In comparison, with a high-k dielectric spacer and a modulator length of 0.595 µm, the bandwidth reaches 1.14 THz with E_bit = 18 fJ/bit for μ = 1.0 eV, corresponding to a voltage of 4.5 V.
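The RC-limit reasoning above can be summarized with a short Python sketch. All device values here (capacitor area, resistance, voltage swing) are placeholders for illustration, not the parameters computed in the paper.

```python
# Illustrative RC-limited speed and energy-per-bit estimate.
import numpy as np

def parallel_plate_C(eps_r, d, area):
    """Capacitance of the graphene-spacer-graphene stack (simple model)."""
    return 8.854e-12 * eps_r * area / d

# Placeholder values: 5 nm hBN-like spacer over a 0.4 x 0.72 um overlap
C = parallel_plate_C(eps_r=3.9, d=5e-9, area=0.4e-6 * 0.72e-6)
R = 100.0                            # assumed graphene + contact resistance (ohm)
V = 0.5                              # assumed drive-voltage swing (V)

f_3dB = 1.0 / (2 * np.pi * R * C)    # RC-limited operation speed
E_bit = C * V**2 / 4                 # energy per bit for an NRZ drive
print(f"{f_3dB / 1e12:.2f} THz, {E_bit * 1e15:.3f} fJ/bit")
```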
Discussion
In summary, we propose novel graphene-based photonic electro-absorption modulators based on a rib waveguide configuration. To maximize the influence of graphene on the modulator performance, the double-layer graphene was placed between the rib slab and the ridge, close to the mode-field maximum. In this configuration, the conductivity of graphene was dynamically tuned by a gate voltage, with a strong effect on the propagating mode. Regardless of the graphene position, the TM mode was found to give better modulation than the TE mode, with modulation depths of 0.18 dB/µm and 0.037 dB/µm for the TM and TE modes, respectively, in the configuration with graphene placed on top of the waveguide; the figure of merit in this configuration was 14.3 for the TM mode and 3.5 for the TE mode. A significant improvement was observed with the graphene placed between an 80 nm-thick slab and a 260 nm-thick ridge, with the figure of merit rising to 230 for the TM mode and 12.8 for the TE mode. Additionally, the influence of the spacer on the overall performance of the modulator was considered, showing that with high-refractive-index spacers the modulator length can be reduced from 720 nm for the hBN spacer to 595 nm for the high-k dielectric spacer. As optical interconnects require a large optical bandwidth, the wavelength dependence of the mode attenuation was studied for different chemical potentials, showing an optical bandwidth exceeding 14 THz.
Apart from the optical properties, the electrical properties of the modulator were studied in terms of the energy per bit and operation speed. It was confirmed that there is a fundamental tradeoff between energy consumption and speed. For the hBN spacer, a 3 dB modulation was achieved with an operation speed of 0.51 THz and with energy consumption of 0.96 fJ/bit whereas for a high-k dielectric spacer it was found to be 0.16 THz and 0.26 fJ/bit respectively.
As shown, the presented configuration enables the realization of very efficient modulators with a nanoscale footprint, small losses and a huge optical/electrical bandwidth for future on-chip optical interconnects.
Methods
The proposed modulator geometry was investigated using two-dimensional finite element method (FEM) simulations at telecom wavelengths with the commercial software COMSOL. The FEM is a well-known technique for the numerical solution of partial differential equations or integral equations, in which the region of interest is subdivided into small segments and the partial differential equation is replaced with a corresponding functional. In the calculations, the refractive indices of the Si rib, the SiO₂ buffer and the spacers were n₁ = 3.47, n₂ = 1.444, and n₃ = 1.98 for the low-k dielectric spacer and n₄ = 3.47 for the high-k dielectric spacer, respectively. To evaluate the mode effective index and mode power attenuation of the considered structure, the gate-dependent complex dielectric constant of graphene has to be calculated. The complex dielectric function ε(ω) can be obtained from the complex optical conductivity of graphene, consisting of interband and intraband contributions.
"Engineering",
"Physics"
] |
Numerical Analysis of Flow and Heat Transfer Characteristics of CO2 at Vapour and Supercritical Phases in Micro-Channels
Supercritical carbon dioxide (CO2) has special thermal properties with better heat transfer and flow characteristics. For this reason, supercritical CO2 has recently been used in air-conditioning and refrigeration systems to replace refrigerants that are not environmentally friendly. Even though many studies have been carried out, there is little literature on the heat transfer and flow characteristics of supercritical CO2. Therefore, the main purpose of this study is to develop flow and heat transfer CFD models for two different phases of CO2, vapour and supercritical, to investigate the heat transfer characteristics and pressure drop in micro-channels. CO2 is considered in different phases with different flow pressures but at the same temperature. For the simulation, the CO2 flow was assumed to be turbulent, non-isothermal and Newtonian. The numerical results for both phases are compared. From the numerical analysis, for both the vapour and supercritical phases, heat energy from the CO2 gas is transferred to the water to attain thermal equilibrium. The temperature of CO2 in the vapour phase decreased by 1.78%, compared to the supercritical phase, which decreased by 0.56% from the inlet temperature. There was a drastic increase of 72% in the average Nu when the phase changed from vapour to supercritical, and the average Nu decreased rapidly by about 41% above a total pressure of 9.0 MPa. The pressure drop (ΔP) increased together with the Reynolds number (Re) for both the vapour and supercritical phases; when the phase changed from vapour to supercritical, ΔP increased by about 26%. The results obtained from this study can provide information for further investigations of supercritical CO2.
Introduction
Carbon dioxide (CO2) gas, which has zero ozone depletion potential (ODP) and zero effective global warming potential (GWP), has been reintroduced as an environmentally friendly gas and used as the working fluid in refrigerators and air-conditioning systems. Moreover, CO2 has several further advantages: it is non-toxic and safe for humans, abundant, and non-combustible. As supercritical CO2 approaches its critical point, its physical properties show extremely rapid variations with changes in temperature and pressure, which is its most important characteristic [1]. The current refrigerants used in air-conditioning and refrigeration systems, such as chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), have high ozone depletion and effective global warming potentials. Hence, with suitable thermo-fluid properties and an appropriate design, supercritical CO2 can be the ideal replacement for these refrigerants.
Furthermore, the density and dynamic viscosity of CO2 at the supercritical phase undergo a significant, almost vertical change within a very narrow temperature range, while the enthalpy increases sharply near the critical point [2]. As the temperature of supercritical CO2 increases in the near-critical region, the pressure drop and heat transfer coefficient increase as well [3]. At larger Reynolds numbers, the heat transfer coefficient increases as the heat transfer rate increases [4]. Therefore, to understand the underlying physics, an appropriate implementation of fluid flow and heat transfer correlations for these systems is needed. Even though a few researchers have studied the cooling heat transfer and flow of supercritical CO2 in micro-channels, the issue remains unresolved. The thermophysical properties and variables of supercritical CO2 were obtained from the NIST Refrigerants Database REFPROP [5]. The density (ρ), thermal conductivity (λ), viscosity (µ) and specific heat (Cp) of supercritical CO2 vary with pressure as temperature increases [6].
Many researchers have studied the flow and heat transfer characteristics of supercritical CO2 using numerical and experimental methods. The geometry often used for the mathematical model is the circular tube-in-tube heat exchanger, in which supercritical CO2 flows in the inner tube and water flows in the annular space [7]. Most numerical analyses use the Renormalization Group (RNG) k-ε and Low-Reynolds-Number (LRN) k-ε turbulence models with ANSYS FLUENT CFD codes [8][9][10][11]. In addition, the flow domain is divided into two parts, CO2 and water, for the cooling process [12,13].
The main purpose of this study is to develop flow and heat transfer mathematical models for CO2 at the vapour and supercritical phases in micro-channels, and to compare both phases to determine which gives the better heat transfer and flow characteristics. This study is expected to provide better knowledge for reducing ozone depletion and global warming potentials by replacing existing non-environmentally friendly refrigerants with supercritical CO2.
Mathematical formulations
Governing equations
In this study, the flow field is assumed to be incompressible, steady, non-isothermal and two-dimensional (2D). The governing equations for continuity, momentum and energy can therefore be expressed in standard form as [14]:

Continuity equation:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{V}) = 0$$

Momentum equation:
$$\rho \left( \frac{\partial \mathbf{V}}{\partial t} + \mathbf{V} \cdot \nabla \mathbf{V} \right) = -\nabla p + \mu \nabla^2 \mathbf{V} + \rho \mathbf{g}$$

Energy equation:
$$\rho c_p \left( \frac{\partial T}{\partial t} + \mathbf{V} \cdot \nabla T \right) = \nabla \cdot (\lambda \nabla T) + \Phi$$

where ρ is the density of the fluid (kg/m³), V is the velocity vector of the fluid (m/s), t is time (s), p is pressure, g is the gravitational acceleration (m/s²), µ is the fluid viscosity (kg/m·s), cp is the specific heat, λ is the thermal conductivity, T is the temperature, and Φ is the viscous dissipation (shear-stress work) term.
Pressure drop equations
A pressure drop (ΔP) occurs due to pressure losses caused by friction in the system. The ΔP equation relates the friction factor, the length-to-diameter ratio of the tube, and the density and velocity of the fluid [14]. The general equation for ΔP is

$$\Delta P = f \, \frac{L}{D} \, \frac{\rho V^2}{2} \quad (5)$$

where ΔP is the pressure drop (MPa), f is the friction factor, L is the length of the tube (m) and D is the diameter of the tube (m).
Meanwhile, the Reynolds number is calculated with the following formula:

$$Re = \frac{\rho V D}{\mu} \quad (6)$$

where Re is the Reynolds number and ρ is the density of the fluid (kg/m³). The friction factor in equation (5) is calculated from the Reynolds number obtained from equation (6). For laminar flow, the friction factor is

$$f = \frac{64}{Re} \quad (7)$$

For turbulent flow, the Colebrook equation is used to calculate the friction factor:

$$\frac{1}{\sqrt{f}} = -2 \log_{10}\left( \frac{H/D}{3.7} + \frac{2.51}{Re\sqrt{f}} \right) \quad (8)$$

where H is the pipe roughness.
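A minimal Python sketch of equations (5)-(8) is shown below, solving the implicit Colebrook equation by fixed-point iteration; the fluid property values used in the example call are placeholders, not the CO2 states from Table 2.

```python
# Sketch of eqs. (5)-(8): Re, friction factor and Darcy pressure drop.
import math

def friction_factor(Re, rough_ratio=0.0):
    if Re < 2300:                        # laminar regime, eq. (7)
        return 64.0 / Re
    f = 0.02                             # initial guess for Colebrook, eq. (8)
    for _ in range(50):                  # fixed-point iteration
        f = (-2.0 * math.log10(rough_ratio / 3.7
                               + 2.51 / (Re * math.sqrt(f)))) ** -2
    return f

def pressure_drop(rho, V, D, L, mu, rough_ratio=0.0):
    Re = rho * V * D / mu                # eq. (6)
    f = friction_factor(Re, rough_ratio)
    return f * (L / D) * 0.5 * rho * V**2   # eq. (5), in Pa

# Example with assumed CO2-like properties at one operating state
print(pressure_drop(rho=150.0, V=5.0, D=1e-3, L=1.0, mu=2e-5))
```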
Heat transfer rate equations
The convective heat transfer rate (Q̇_conv) is the amount of heat transferred per unit time. The rate of convection heat transfer is expressed by Newton's law of cooling [15]:

$$\dot{Q}_{conv} = h A_s (T_s - T_\infty) \quad (9)$$

where Q̇_conv is the convection heat transfer rate (W), h is the convection heat transfer coefficient (W/m²·°C), A_s is the heat transfer surface area (m²), T_s is the surface temperature (°C) and T_∞ is the temperature of the fluid sufficiently far from the surface (°C).
Besides, the Nusselt number (Nu) is calculated using the following equation:

$$Nu = \frac{h L_c}{\lambda} \quad (10)$$

where λ is the thermal conductivity (W/m·K) and L_c is the characteristic length (m).
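The short Python sketch below applies equations (9) and (10), recovering h from a Nusselt number and evaluating the convective heat rate; all numerical values are illustrative placeholders.

```python
# Sketch of eqs. (9)-(10): convective heat rate from a Nusselt number.
lam, L_c = 0.05, 1e-3                 # thermal conductivity (W/m.K), char. length (m)
Nu = 120.0                            # e.g. from CFD post-processing (assumed)
h = Nu * lam / L_c                    # eq. (10) rearranged for h

A_s, T_s, T_inf = 3.14e-3, 313.0, 300.0   # area (m^2) and temperatures (K)
Q_conv = h * A_s * (T_s - T_inf)      # eq. (9), Newton's law of cooling
print(h, Q_conv)
```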
Computational domain
The flow domains of CO2 and water were designed in the Design Modeler software according to the tube-in-tube heat exchanger concept. The diameter of the inner tube was taken from the smallest tube available on the market, and the cooling length was the minimum value found in previous studies. The model was designed in 2D, as shown in Figure 1, with the dimensions stated in Table 1; the 2D model represents half of the whole tube-in-tube heat exchanger. As mentioned above, the fluids used for the numerical analysis are water as the cooling fluid and CO2 as the main fluid. The thermophysical properties of water at room temperature are available in the ANSYS FLUENT material database; however, the CO2 properties available in the database cover room temperature only. Therefore, the thermophysical properties of CO2 at the vapour and supercritical phases were obtained from the National Institute of Standards and Technology (NIST) web book [6]. The density, thermal conductivity, viscosity and specific heat of supercritical CO2 vary with pressure as temperature increases. The thermophysical properties of CO2 in both the vapour and supercritical phases, for pressures from 3.5 MPa to 10.0 MPa at a 313 K inlet temperature, are tabulated in Table 2.
Meshing
The mesh size was set to fine, and the mesh at the interface between the inner tube and the annular part was set finer to obtain accurate heat transfer data between the CO2 and the water. A mesh-independence test was conducted at 3.5 MPa to verify that the numerical results were the same for all mesh sizes. The inlet and outlet faces for CO2 and water, the interfaces, the symmetries and the walls were renamed. The same mesh of 15,000 elements was used for both the vapour-phase and supercritical-phase models.
Boundary conditions
In this study, the CO2 flow was assumed to be incompressible for both the vapour and supercritical phases. The CO2 and water flow in the inner tube and outer tube, respectively. The parameters used in the numerical analysis are tabulated in Table 3.
Heat transfer
The temperature of CO2 for both the vapour and supercritical phases decreased along the 1 m tube, as shown in Figure 2. The temperature of CO2 is constant up to 0.2 m and starts decreasing thereafter. However, the temperature of CO2 in the vapour phase decreased by 1.78%, compared to the supercritical phase, which decreased by 0.56% from the inlet temperature. This is because, at high total pressure, the tube length was not sufficient for CO2 in the supercritical phase to transfer enough heat energy to attain thermal equilibrium; in the vapour phase, with its low total pressure, thermal equilibrium was attained faster. In Figure 3, the average Nusselt number (Nu) of CO2 for both the vapour and supercritical phases varies with total pressure. In the vapour phase, the Nu of CO2 increased slowly as the total pressure increased from 3.5 MPa to 6.0 MPa; in the supercritical phase, the average Nu likewise increased with total pressure from 7.5 MPa to 9.0 MPa. There was a drastic increase of 72% in the average Nu when the phase changed from vapour to supercritical. However, above a total pressure of 9.0 MPa the average Nu decreased rapidly, by 41%, because the tube length is not sufficient to transfer the thermal energy from the CO2 to the water. Furthermore, as shown in Figure 4, the temperature of CO2 along the tube decreased linearly at all supercritical pressures. At total pressures of 7.5 MPa, 8.0 MPa, 9.0 MPa and 10.0 MPa, the temperature decreased by 0.56%, 0.4%, 0.44% and 0.53% from the inlet temperature, respectively. At 7.5 MPa, which is closest to the critical pressure, the heat loss was highest. Increasing the pressure decreased the heat transfer coefficient near the pseudocritical temperature [16]; as the total pressure increased beyond the critical pressure, the heat transfer rate was reduced. In addition, the tube length was not sufficient for the supercritical CO2 to heat the water fully. From the numerical analysis, for both the vapour and supercritical phases, heat energy from the CO2 gas was transferred to the water to attain thermal equilibrium, as shown in Figures 2 and 4: thermal energy from the CO2, at 313 K, was transferred to the water, at 300 K. It has been shown that as the total pressure increases, the heat transfer coefficient decreases [3], which matches the situation in this study. Moreover, for the supercritical phase, the tube length was not sufficient for the heat energy of the CO2 to be transferred fully to the water; the tube length should be increased to obtain better heat transfer results. The dynamic pressure and velocity were also analyzed and compared for both the vapour and supercritical phases along the 1 m cooling length. For all total pressures, the dynamic pressure and velocity were directly proportional to each other: as the total pressure of the CO2 flowing in the tube decreased due to heat loss to the water, the velocity of the CO2 increased along the tube.
Pressure drop
Moreover, the pressure drop, ΔP, of CO2 was calculated from the dynamic pressure data of the numerical analysis. ΔP increased linearly as the total pressure increased from 3.5 MPa to 10.0 MPa. As shown in Figure 5, ΔP increased together with Re in the vapour phase as the pressure approached the critical point. When the phase changed from vapour to supercritical, ΔP increased by 26%. In the supercritical phase, ΔP increased along with Re up to 8.0 MPa; although ΔP kept increasing beyond 8.0 MPa, Re kept decreasing up to 10.0 MPa. Hence, the pressure drop of CO2 increased in both the vapour and supercritical phases, as demonstrated. The decrease in Re is due to the drastic increase in the density and viscosity of CO2 at total pressures from 8 MPa to 10 MPa. C.H. Son and S.J. Park [17] found that the variation in the density of CO2 caused ΔP to decrease with increasing gas-cooler inlet pressure in the supercritical phase, which contradicts the results in Figure 5.
Conclusion
In this study, mathematical models were developed to investigate the flow and heat transfer characteristics of CO2 at the vapour and supercritical phases in micro-channels. The flow domains of CO2 and water were designed in 2D using the Design Modeler software, according to the pipe-in-pipe heat exchanger concept, with a length of 1000 mm. The thermophysical properties of CO2 in both the vapour and supercritical phases, for pressures from 3.5 MPa to 10.0 MPa at a 313 K inlet temperature, were obtained from NIST. The water temperature was 300 K at a velocity of 10 m/s. From the numerical analysis, for both the vapour and supercritical phases, heat energy from the CO2 gas was transferred to the water to attain thermal equilibrium. The temperature of CO2 in the vapour phase decreased by 1.78%, compared to the supercritical phase, which decreased by 0.56% from the inlet temperature. According to the average Nusselt number (Nu), the heat transfer rate of CO2 increased as the total pressure increased, while the heat transfer coefficient decreased with increasing total pressure, in agreement with previous findings. There was a drastic increase of 72% in the average Nu when the phase changed from vapour to supercritical. However, above a total pressure of 9.0 MPa, the average Nu decreased rapidly, by 41%, because the tube length is not sufficient to transfer the thermal energy from the CO2 to the water.
The pressure drop (ΔP) increased together with the Reynolds number (Re) for both the vapour and supercritical phases; when the phase changed from vapour to supercritical, ΔP increased by about 26%. Heat transfer between the CO2 and the water was faster at low pressure than at high pressure. The results obtained from this study can provide information for further investigations of supercritical CO2.
Figure 1. 2D pipe-in-pipe heat exchanger design in Design Modeler.
Figure 2. Temperature versus length of carbon dioxide and water for both vapour and supercritical phases.
Figure 3. Average Nusselt number (Nu) versus total pressure for both vapour and supercritical phases.
Figure 4. Temperature versus length of CO2 at supercritical pressures.
Figure 5. Pressure drop versus Reynolds number of CO2 at both vapour and supercritical phases.
The heat transfer and pressure drop were obtained from the dynamic pressure, velocity, temperature, Nusselt number and Reynolds number data of the numerical analysis; these data were analysed and compared.
"Engineering",
"Environmental Science"
] |
Functionalization of Single and Multi-Walled Carbon Nanotubes with Polypropylene Glycol Decorated Pyrrole for the Development of Doxorubicin Nano-Conveyors for Cancer Drug Delivery
A recently reported functionalization of single and multi-walled carbon nanotubes, based on a cycloaddition reaction between carbon nanotubes and a pyrrole derived compound, was exploited for the formation of a doxorubicin (DOX) stacked drug delivery system. The obtained supramolecular nano-conveyors were characterized by wide-angle X-ray diffraction (WAXD), thermogravimetric analysis (TGA), high-resolution transmission electron microscopy (HR-TEM), and Fourier transform infrared (FT-IR) spectroscopy. The supramolecular interactions were studied by molecular dynamics simulations and by monitoring the emission and the absorption spectra of DOX. Biological studies revealed that two of the synthesized nano-vectors are effectively able to get the drug into the studied cell lines and also to enhance the cell mortality of DOX at a much lower effective dose. This work reports the facile functionalization of carbon nanotubes exploiting the “pyrrole methodology” for the development of novel technological carbon-based drug delivery systems.
Preparation of CNT/PPGP Supramolecular Adduct (CNT/PPGPs)
CNT (500 mg) and acetone (20 mL) were added in sequence to a 250 mL round-bottomed flask. The system was sonicated for 30 min, and then the pyrrole derivative (150 mg) was added to the flask. The suspension was sonicated again for 30 min. After solvent removal under reduced pressure, the mixture was quantitatively transferred to a funnel with a sintered glass disc, washed with acetone (100 mL), and then recovered and weighed.
The degree of functionalization was estimated by TGA, determining the amount of pyrrole compound in the adduct after washing (mass losses for CNT and CNT/PPGP adducts are given in Table 1); see Table 2: SWCNT/PPGPs 5.48 wt%; MWCNT/PPGPs 3.00 wt%.
Preparation of CNT/PPGP Covalent Adduct (CNT/PPGPc)
CNT (500 mg) and acetone (20 mL) were added in sequence to a 250 mL round-bottomed flask equipped with a magnetic stirrer. The system was sonicated for 30 min, and then the pyrrole derivative (150 mg) was added to the flask. The suspension was sonicated again for 30 min. After solvent removal under reduced pressure, the CNT/PPGP mixture was poured into a round-bottomed flask and heated at 150 °C for 2 h. The mixture was then quantitatively transferred to a funnel with a sintered glass disc, washed with acetone (100 mL), and then recovered and weighed.
The degree of functionalization was estimated by TGA, determining the amount of pyrrole compound in the adduct after washing (mass losses for CNT and CNT/PPGP adducts are given in Table 1); see Table 2: SWCNT/PPGPc 5.00 wt%; MWCNT/PPGPc 7.00 wt%.
Preparation of Carbon Nanotube/Pyrrole Polypropylene Glycol/Doxorubicin CNT/PPGP/DOX Ternary Nano Complexes
General Procedure
DOX hydrochloride (9 mg) was added to the selected CNT/PPGP adduct (3 mg) dispersed in a pH 7.4 phosphate-buffered saline (PBS) solution (6 mL) and stirred for 16 h at room temperature. The product was collected by ultracentrifugation with PBS until the supernatant became colorless. The amount of unbound DOX was determined by measuring the absorbance at 490 nm of the supernatant after centrifugation (see Table S1). The CNT/PPGP/DOX nano-complex dispersions in PBS were also analyzed by fluorescence spectrophotometry: dispersions were placed with a Pasteur pipette (Colaver s.r.l., Vimodrone, Italy) in a triangular quartz cuvette. A Jasco FP-6600 spectrofluorometer (JASCO Corporation, Tokyo, Japan) was used for the fluorescence measurements, with the detector set at an excitation wavelength of 480 nm; fluorescence spectra were collected in the range of 500-700 nm.
DOX Calibration Curve by UV-Vis Spectroscopy
A stock PBS solution of DOX hydrochloride (1 mg/mL) at pH 7.4 was prepared. The solution was then serially diluted, and UV-Vis measurements were performed. The absorbance of these solutions was measured at the absorption maximum of 490 nm, using a 1 cm quartz cuvette.
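The following Python sketch illustrates how such a Beer-Lambert calibration curve can be used to quantify the unbound DOX in the supernatant and the resulting loading efficiency; all standard concentrations and absorbance readings below are illustrative placeholders, not measured values.

```python
# Hedged sketch: DOX quantification at 490 nm via a linear calibration curve.
import numpy as np

conc = np.array([0.005, 0.01, 0.02, 0.05, 0.1])    # mg/mL standards (assumed)
A490 = np.array([0.11, 0.22, 0.43, 1.05, 2.10])    # absorbances (assumed)
slope = np.polyfit(conc, A490, 1)[0]               # calibration slope, ~zero intercept

A_supernatant, V_mL, dox_in_mg = 0.85, 6.0, 9.0    # placeholder readings
unbound_mg = (A_supernatant / slope) * V_mL        # unbound DOX in the supernatant
loading_pct = 100.0 * (dox_in_mg - unbound_mg) / dox_in_mg
print(f"DOX loading efficiency: {loading_pct:.1f}%")
```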
DOX Release From CNT Nano Complexes
CNT/DOX and CNT/PPGP/DOX in acetate buffer (pH 5.5) were loaded into Spectra/Por® dialysis membranes (10K MWCO, nominal flat width 24 mm, diameter 15 mm, wet in 0.1% sodium azide) (Thermo Fisher Scientific Inc., Waltham, MA, USA). Each dialysis bag was then left to stand for 72 h in acetate buffer solution. The release of DOX was checked after 24, 48 and 72 h by UV-Vis spectroscopy. The same experiment conducted in PBS showed that DOX remained bound to the CNT, owing to the stability of the drug at pH 7.4.
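A short Python sketch of the cumulative-release bookkeeping for such a dialysis experiment is given below; the released masses and the initially loaded amount are illustrative placeholders.

```python
# Hedged sketch: cumulative DOX release from dialysis readings.
released_mg = {24: 0.9, 48: 1.6, 72: 2.1}   # DOX outside the bag (from A490), assumed
loaded_mg = 3.0                             # DOX initially bound to the carrier, assumed
for t, m in sorted(released_mg.items()):
    print(f"{t} h: {100.0 * m / loaded_mg:.0f}% cumulative release")
```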
Fourier Transform Infrared Spectroscopy (FT-IR)
The IR spectra were recorded in transmission mode (128 scans and 4 cm −1 resolution) using a Thermo Electron Continuum IR microscope coupled with an FTIR Nicolet Nexus spectrometer. A small portion of the dry solid material was placed in a diamond anvil cell (DAC) and analyzed in transmission mode.
Thermogravimetric Analysis (TGA)
TGA tests under flowing N2 (60 mL/min) were performed with a Mettler TGA SDTA/851 instrument according to the ISO 9924-1 standard method. Samples (10 mg) were heated from 30 to 300 °C at 10 °C/min, kept at 300 °C for 10 min, and then heated to 550 °C at 20 °C/min. After being held at 550 °C for 15 min, they were further heated to 700 °C and kept at 700 °C for 30 min under flowing air (60 mL/min).
High-Resolution Transmission Electron Microscopy (HR-TEM)
HR-TEM investigations of CNT and CNT/PPGP adducts were carried out with a Philips CM 200 field emission gun microscope operating at 200 kV. A few drops of the water suspensions were deposited on a 200-mesh lacey-carbon-coated copper grid and air-dried for several hours before analysis. During the acquisition of HR-TEM images, performed with low beam current densities and short acquisition times, the samples did not undergo structural transformation. The Gatan Digital Micrograph software (GMS 3, Gatan, Inc., Pleasanton, CA, USA) was used to estimate the number of stacked graphene layers and the dimensions of the stacks in the HR-TEM micrographs.
Wide-Angle X-Ray Diffraction
Wide-angle X-ray diffraction patterns were obtained in reflection with an automatic Bruker D8 Advance diffractometer (Bruker Corporation, Billerica, MA, USA) using nickel-filtered Cu-Kα radiation. Patterns were recorded over the 2θ range 10-80°, where 2θ is the diffraction angle. Details can be found in Section S1.
Preparation of Water Dispersions of CNT/PPGP Adducts
General procedure: water dispersions of the CNT/PPGP adducts were prepared at different concentrations (1, 0.5, 0.1, 0.05, 0.01, 0.005, and 0.001 mg/mL). Each dispersion was sonicated for 1 min using an ultrasonic bath (260 W). Ten milliliters of each dispersion were placed in a Falcon™ 15 mL conical centrifuge tube and centrifuged at 6000 rpm for 30 min. UV-Vis measurements were performed immediately after sonication or centrifugation, and again after 3 days. A Hewlett Packard 8452A diode array spectrophotometer (Hewlett-Packard, Palo Alto, CA, USA) was used for the absorption measurements. The dispersions were placed using a Pasteur pipette in cuvettes with an optical path of 1 cm (about 3 mL per cuvette). The UV-visible spectra report absorption as a function of radiation wavelength in the range 200-750 nm.
Calculation of the Hansen Solubility Sphere and Hansen Solubility Parameters
The Hansen solubility parameters (HSP) of the CNT were obtained by applying the Hansen solubility sphere representation of miscibility. The idea at the basis of this geometrical approach is to express the cohesive energy density (U_T/V) of a compound as the sum of three interaction contributions: non-polar van der Waals (dispersive) forces (δD), polar forces (δP), and hydrogen bonding (δH). Details can be found in Section S2.
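A minimal sketch of the Hansen-sphere classification (Python; the solvent parameters are standard literature values, the sphere center uses the values quoted in the Results for the MWCNT/PPGPc adduct, and the interaction radius is a hypothetical placeholder for the fitted value):

import math

def hansen_distance(s1, s2):
    """Hansen distance Ra: Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
    return math.sqrt(4 * (s1[0] - s2[0]) ** 2
                     + (s1[1] - s2[1]) ** 2
                     + (s1[2] - s2[2]) ** 2)

# (dD, dP, dH) in MPa^0.5; the sphere is centered on the adduct's parameters
center = (11.48, 15.40, 18.0)   # MWCNT/PPGPc adduct (values quoted in the text)
radius = 12.0                   # hypothetical interaction radius R0

solvents = {"methanol": (15.1, 12.3, 22.3), "toluene": (18.0, 1.4, 2.0),
            "hexane": (14.9, 0.0, 0.0)}
for name, hsp in solvents.items():
    red = hansen_distance(hsp, center) / radius  # RED < 1 means "good" solvent
    print(f"{name}: RED = {red:.2f}")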
Cell Cultures
The M14 human melanoma and A549 human lung adenocarcinoma cell lines were used in the present investigation. Both cell lines were routinely maintained as previously described [40,41] in a humidified atmosphere of 5% CO2 in a water-jacketed incubator at 37 °C. The base media were Roswell Park Memorial Institute (RPMI) 1640 for M14 cells and Dulbecco's modified Eagle's medium (DMEM) for A549 cells. Complete growth media were obtained by supplementation with 10% (by volume) heat-inactivated fetal bovine serum, 2 mM glutamine, 100 units/mL penicillin, and 100 µg/mL streptomycin (Thermo Fisher Scientific, Milan, Italy). The cells were subcultured before reaching confluence using a 0.25% trypsin-EDTA solution (Carlo Erba, Milan, Italy).
Cell Viability Assay
Cells were seeded in 96-well cell culture plates at a density of 2 × 10^4 cells/well (200 µL/well) for the in vitro cell viability assay. After overnight incubation to allow cell attachment, the resulting monolayers were incubated for 48 h with free DOX hydrochloride or with the unloaded and DOX-loaded CNT.
In these experiments, all CNT samples were bath-sonicated for 5 min in culture medium in order to obtain a homogeneous 1 mg/mL dispersion. The stock dispersions were diluted in culture medium to the desired concentrations, which were normalized to and reported as the amount of DOX hydrochloride loaded into each sample. After the incubation period, the cells were carefully washed to remove non-internalized nanotubes, and cytotoxicity was determined by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay according to a previously published protocol [42]. The formazan was solubilized in DMSO (Sigma-Aldrich, Italy) and quantified spectrophotometrically at λ = 550 nm with a microplate reader (Titertek Multiscan, DAS, Milan, Italy).
Statistical Analysis
The results were expressed as mean ± s.e.m. based on data derived from three independent experiments run in triplicate. Statistical analysis of the results was performed using Student's t-test and one-way ANOVA followed by Dunnett's test, with the statistical software package SYSTAT, version 11 (Systat Inc., Evanston, IL, USA).
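In outline, the same statistical workflow can be reproduced with standard open-source tools (Python; the viability values below are invented placeholders, not data from this study):

import numpy as np
from scipy import stats

# Hypothetical % viability, mean of triplicates from three independent experiments
control = np.array([100.0, 98.5, 101.2])
free_dox = np.array([55.3, 58.1, 52.9])
cnt_dox = np.array([48.7, 51.2, 47.5])

# Student's t-test between two treatments
t, p = stats.ttest_ind(free_dox, cnt_dox)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA across all groups
f, p_anova = stats.f_oneway(control, free_dox, cnt_dox)
print(f"ANOVA: F = {f:.1f}, p = {p_anova:.4f}")
# Dunnett's many-to-one comparison against the control group is available in
# recent SciPy releases as scipy.stats.dunnett(free_dox, cnt_dox, control=control).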
Models Preparation
SWCNT and MWCNT models were generated with the Nanotube Modeler package [43] (v. 1.8, JCrystalSoft) in the armchair arrangement, all open-ended and hydrogen-terminated, keeping the diameter and number of walls faithful to the values reported in the literature [34]. The SWCNT model had a length of 50 Å and chiral vector (13,13), giving a diameter of 17.640 Å; the MWCNT model had a length of 10 Å and chiral vectors (25,25) and (70,70), giving an inner diameter of 33.924 Å and an outer diameter of 94.987 Å. These structures were used as models for the CNT/PPGP/DOX drug carrier systems. The initial structure of DOX was obtained from the DrugBank server [44], whereas that of PPGP was built manually.
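The quoted diameters follow directly from the chiral indices via the standard relation d = a·sqrt(n² + nm + m²)/π, with a ≈ 2.461 Å the graphene lattice constant, as the following check shows (Python):

import math

def cnt_diameter(n, m, a=2.461):
    """Diameter (angstroms) of an (n,m) nanotube; a = graphene lattice constant."""
    return a * math.sqrt(n * n + n * m + m * m) / math.pi

for chirality in [(13, 13), (25, 25), (70, 70)]:
    print(chirality, f"{cnt_diameter(*chirality):.3f} A")
# -> 17.640, 33.924, 94.987 A, matching the model diameters quoted above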
Molecular Dynamics Simulations
The molecular dynamics simulations of the CNT/PPGP/DOX supramolecular systems (SSy) were performed with the YASARA Structure package (v. 19.10.23). A periodic simulation cell with boundaries extending 10 Å [45] from the surface of the CNT or CNT/PPGPc was employed. For the two SWCNT systems (1066 carbon atoms), we used 1 PPGP and 45 DOX molecules (SSy1,2; see paragraph 3.4), whereas for the two MWCNT systems (8550 carbon atoms), we used 12 PPGP and 369 DOX molecules for the PPGPs arrangement and 5 PPGP and 354 DOX molecules for the PPGPc one (SSy3,4; see paragraph 3.4). For the SWCNT/PPGPc system, the single PPGP molecule was covalently linked approximately at the center of the CNT structure, whereas for the MWCNT/PPGPc system, the five PPGP molecules were covalently linked, approximately regularly spaced, around the outer circumference of the CNT structure. Details can be found in Section S3.
Modification of the CNT with a Pyrrole Decorated Polypropylene Glycol
Functionalization of carbon nanotubes, both single- (SWCNT) and multi-walled (MWCNT), was realized by using a pyrrole-terminated polymer (pyrrole polypropylene glycol, PPGP) as the modifying agent. PPGP was synthesized through the Paal-Knorr reaction [34,39] by reacting the amino-terminated polypropylene glycol (PPGA) with 2,5-hexanedione (HD). The reaction was performed in the absence of catalysts, by adopting experimental conditions inspired by the basic principles of green chemistry [36]. In particular, the acidic catalysts and toxic solvents traditionally used for the reactions of primary amines with carbonyl compounds were avoided. In previous works [34,37], it was shown that the reaction of 2-amino-1,3-propanediol (serinol) with 2,5-hexanedione led to a 1,3-bis-oxazolidine compound, which was then converted into a pyrrole compound simply by heating. The reaction of PPGA with HD was performed by heating at 100 °C. Water was the only co-product of the reaction, which was characterized by a high atom economy of 88.2% and a yield of 64%. The chemical structure of PPGP was confirmed by 1H NMR spectroscopy (Figure S1). PPGP appears as a slightly viscous, light-yellow liquid at room temperature and normal pressure. PPGP contains a long polyether chain that could favor compatibility with polar surroundings such as water, alcohols, and polar polymer matrices, as well as a pyrrole ring that could favor π-π interactions with the aromatic rings of sp2 carbon allotropes.
As stressed in the introduction, pyrrole compounds (PyC) have been shown to form stable adducts with graphitic substrates [37,38]. In this work, reactions between CNT and PPGP were performed by adopting the experimental scheme summarized in Figure 2. The functionalization of CNT using PPGP as the modifying agent is described extensively in the experimental part. In brief, CNT were initially sonicated in the presence of the pyrrole-terminated polymer (PPGP) in acetone, leading to the formation of the supramolecular adducts CNT/PPGPs after removal of the solvent and washing with acetone. Alternatively, in a one-pot process involving direct thermal treatment of the physical mixture (150 °C, 2 h), the covalent adduct CNT/PPGPc was easily obtained. No further optimization of the reaction conditions was performed.
In order to establish the efficiency of the functionalization, the CNT/PPGP powder taken from the flask was extracted in a Soxhlet apparatus with acetone until PPGP was undetectable in the washing solvent. TGA was performed on CNT and on the CNT/PPGP adducts before and after washing with acetone. As described in detail in the experimental part, the TGA was carried out under nitrogen up to 700 °C and under oxygen up to 800 °C. Thermograms of both single- and multi-walled CNT and of the CNT/PPGP adducts are shown in Figure 3, and the corresponding mass losses are reported in Table 1. The mass loss at T < 150 °C was attributed to absorbed low-molar-mass molecules, mainly water. The decomposition profiles of all samples, pristine CNT and CNT/PPGP adducts alike, reveal two main steps in the temperature range from 150 °C to 700 °C, which can be attributed to the decomposition of alkenyl-, oxygen-, and nitrogen-containing functional groups. A final decomposition occurs at T > 700 °C, due to the reaction of the graphitic structure with oxygen. The amount of PPGP in each adduct was estimated by evaluating the mass loss in the temperature range from 150 °C to 700 °C.
Comparison of the data for CNT and the CNT/PPGP adducts shows that the mass loss is larger for the latter. It can also be observed that the CNT/PPGP adducts, which contain the polymeric chain, give rise to a more substantial mass loss below 150 °C; this could be due to a larger amount of absorbed water.
The degree of functionalization and the functionalization yield were calculated through Equations (1) and (2), and their values are reported in Table 2.
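Written out from the verbal definition given in the text, Equation (1) for the degree of functionalization takes the form:

$$\text{Degree of functionalization (w\%)} = \text{mass loss \% (CNT/PPGP adduct)} - \text{mass loss \% (CNT)}, \qquad (1)$$

where both mass losses refer to the studied temperature range.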
$$\text{Functionalization yield (\%)} = 100 \cdot \frac{\text{PPGP mass \% in CNT/PPGP adduct (after acetone washing)}}{\text{PPGP mass \% in CNT/PPGP adduct (before acetone washing)}} \qquad (2)$$

Although the reaction conditions were not optimized, the functionalization yield was high for all the CNT adducts except the MWCNT/PPGPs adduct, for which it was 56%. This significant difference in functionalization efficiency between the covalent and the supramolecular MWCNT adducts arises because commercial MWCNT are usually highly entangled. Covalent functionalization helps to disentangle the MWCNT, increasing the available surface and thus allowing a better interaction between the functionalizing molecule and the MWCNT. In the MWCNT supramolecular adducts there is no such effect, and the carbon nanotubes remain highly entangled.
Mass-loss values in the range 150-900 °C were used to determine the degree of functionalization of each adduct, defined as the percent difference between the mass loss of the CNT/PPGP adduct and that of the pristine CNT over the studied temperature range.
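As a numerical sketch of Equations (1) and (2) (Python; the mass-loss values below are placeholders in the spirit of Table 1, not the measured data):

# Degree of functionalization (Eq. 1) and functionalization yield (Eq. 2) from TGA.
mass_loss_cnt = 2.0           # % mass loss of pristine MWCNT (hypothetical)
mass_loss_before_wash = 14.0  # % mass loss of MWCNT/PPGP before acetone washing
mass_loss_after_wash = 9.0    # % mass loss of MWCNT/PPGP after acetone washing

ppgp_before = mass_loss_before_wash - mass_loss_cnt  # PPGP mass % before washing
ppgp_after = mass_loss_after_wash - mass_loss_cnt    # Eq. (1): degree of functionalization
yield_pct = 100 * ppgp_after / ppgp_before           # Eq. (2)
print(f"degree of functionalization: {ppgp_after:.1f} w%")
print(f"functionalization yield: {yield_pct:.0f}%")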
Thermogravimetric experiments showed that the functionalization of the considered adducts with PPGP was successful and that an appropriate amount of modifier was introduced. The CNT/PPGP adducts were further characterized by FT-IR spectroscopy and WAXD.
Infrared spectroscopy allowed a qualitative check of the chemical nature of the attached molecule and was performed on pristine CNT and on the CNT/PPGP adducts after acetone extraction. The IR spectra of CNT and CNT/PPGP adducts are reported in Figure 4A,B. As explained in detail in the experimental part, the IR spectra were recorded in transmission mode using a diamond anvil cell (DAC) in order to avoid the absorption peaks of water molecules. The spectra were obtained from the absorption of very thin films of CNT powder, which are not transparent to the IR beam; indeed, the G-band absorption observed in the spectra at 1590 cm−1 is mostly due to reflection from the graphitic planes. The intense light diffusion from the high surface area graphite (HSAG) particles is responsible for the increase of the absorbance toward higher wavenumbers. The absorption spectra recorded on the DAC are presented in the region 700-3900 cm−1 (Figure 4A) and in the fingerprint region 700-1800 cm−1 (Figure 4B) after baseline correction, to ease the comparison of the weak spectroscopic features. The low, broad vibrational signals floating on a rather steep background absorption, together with the chemical complexity of the samples, severely limit a structural diagnosis through a detailed assignment of the spectral features. The vibrational analysis was therefore based on the recognition of functional groups using correlative spectroscopic criteria [38]. In particular, Figure 4A shows the spectra of MWCNT, MWCNT/PPGPs, and MWCNT/PPGPc recorded in the region 700-3900 cm−1, while Figure 4B displays the same spectra in the fingerprint region after baseline correction.

As reported in the introduction, another aim of this study was to investigate the ability of PPGP to modify the solubility parameter of carbon nanotubes, in order to preliminarily understand whether the projected carrier would be able to interact with different surroundings. Dispersions of the CNT/PPGP adducts listed in Table 1 were prepared in solvents having different solubility parameters: water, methanol, 2-propanol, acetone, ethyl acetate, propylene glycol, dichloromethane, xylene, toluene, and hexane. The stability of these dispersions was studied as described in the experimental part. Visual inspection of the dispersions was carried out immediately after sonication. In Table S2, the results of these observations are qualitatively summarized as 'good' (a homogeneous dispersion was observed) or 'bad' (the adduct either settled or floated on the solvent).
The introduction of PPGP on the surface of the carbon allotropes allows dispersion in polar environments thanks to the long polyether chain. The solubility sphere shown in Figure 5 was generated, as explained in the experimental part, so as to encompass the points of the suitable solvents and to exclude the wrong ones, and is centered on the solubility parameters of the MWCNT/PPGPc adduct. The δD, δP, and δH values of the MWCNT/PPGPc adduct were estimated to be 11.48, 15.40, and 18 MPa^0.5, respectively.

As expected, considering the presence of such functional groups and of a long polyether chain, water dispersions of the CNT/PPGP adducts were easily prepared. The preparation and the UV-Vis absorption data of the water dispersions of the CNT/PPGP adducts are reported and discussed below.

CNT/PPGP water dispersions with the following concentrations were prepared: 1, 0.5, 0.1, 0.05, 0.01, 0.005, and 0.001 mg/mL. The dispersions were then analyzed through their absorption in the UV-Vis range; details are reported in the experimental section for each CNT/PPGP adduct, and the adopted procedure is summarized in the block diagram of Figure 6.

Adduct dispersions at different concentrations were prepared to investigate whether the Lambert-Beer law is obeyed. If so, the CNT/PPGP adduct can be assumed to form a "solution-like" system with water, thanks to the polyether chains added to its surface. Figure 7 shows the dependence of the UV-Vis absorbance on the concentration of the MWCNT/PPGPc adduct in water after sonication (A) and the linear relationship between the absorbance at 260 nm and the concentration for MWCNT/PPGPc (B). Absorbance values at 260 nm of the MWCNT/PPGPc adduct curves were plotted as a function of concentration; the resulting linear correlation is reported in Figure 7B.
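The linearity check of Figure 7B amounts to a simple regression (Python; the absorbance values are hypothetical placeholders):

import numpy as np

conc = np.array([0.001, 0.005, 0.01, 0.05, 0.1])      # mg/mL (after sonication)
a260 = np.array([0.012, 0.058, 0.119, 0.590, 1.175])  # absorbance at 260 nm

slope, intercept = np.polyfit(conc, a260, 1)
r = np.corrcoef(conc, a260)[0, 1]
print(f"A260 = {slope:.1f} * c + {intercept:.3f}, R^2 = {r**2:.4f}")
# A high R^2 indicates Lambert-Beer behavior, i.e., a "solution-like" dispersion.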
Structure and morphology of the CNT and of the ensuing CNT/PPGP adducts were investigated through WAXD and HR-TEM analysis. Figure S4 shows WAXD patterns for powders of MWCNT, MWCNT/PPGPs, and MWCNT/PPGPc. In pristine MWCNT (Figure S4a), crystalline order in the direction orthogonal to the structural layers is revealed by the (002) reflection at 26.6°, which corresponds to an interlayer distance of 0.35 nm. By applying the Scherrer equation (Equation (S2)) to the (002) reflection, the out-of-plane correlation length D⊥ was calculated. From the values of D⊥ and of the interlayer distance, the number of stacked layers was estimated to be about 12 for the CNT. The reflections in the patterns of the adduct samples (Figure S4b,c) remain at the same 2θ values. MWCNT and MWCNT/PPGP adducts present distances between the structural layers slightly larger than those of ordered graphite samples (d002 = 0.335 nm). In all samples, the (112) reflection is negligible.
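The Scherrer estimate above can be reproduced in a few lines (Python; the peak width is a hypothetical input chosen to match the quoted result, since the measured FWHM is not reported here):

import math

wavelength = 0.15406   # nm, Cu-K alpha radiation
two_theta = 26.6       # degrees, position of the (002) reflection
fwhm_deg = 2.0         # degrees 2-theta, hypothetical peak width (FWHM)
K = 0.9                # Scherrer shape factor
d002 = 0.35            # nm, interlayer distance reported above

theta = math.radians(two_theta / 2)
beta = math.radians(fwhm_deg)
d_perp = K * wavelength / (beta * math.cos(theta))  # out-of-plane correlation length
print(f"D_perp = {d_perp:.2f} nm -> ~{d_perp / d002:.0f} stacked layers")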
High-resolution transmission electron microscopy (HR-TEM) was exploited to study the morphology of CNT/PPGP adducts. Various magnifications were adopted. Micrographs at a lower magnification of each CNT/PPGP adduct in Figure 8 reveal that the length of CNT/PPGP adducts is of the same order of magnitude in samples isolated before and after PPGP treatments. This indicates that the chemical interaction with PPGP and the heating step for the preparation of the covalent adducts do not cause appreciable breaking of the nanotubes.
In the case of the MWCNT/PPGPs and SWCNT/PPGPs adducts in water dispersions, micrographs at lower magnification (Figure 8A,C) reveal carbon aggregates made of pseudo-spherical particles with an average size of about 5-10 nm. Figure 8B,b shows the MWCNT/PPGPc covalent adduct: a layer of organic substance (indicated by the light blue arrow), probably made of unreacted PPGP, adheres to and covers the surface of the carbon allotrope. In the case of MWCNT/PPGPc, a small quantity of spherical organic aggregates (indicated by the red arrow; average dimension ≈ 5-20 nm) was also detected, again probably consisting of unreacted PPGP. It is known from the literature that macromolecules such as polyethers terminated with cationic or anionic functional groups can generate micelles or, more generally, supramolecular assemblies [46].

Micrographs at higher magnification (Figure 8a-d) allow visualization of the nanotube walls. In the case of SWCNT, the covalent treatment with PPGP led to substantial CNT disentanglement, as shown by the lower number of micrometric CNT bundles and by the presence of individual tubes in a defined space, observed in many HR-TEM images and represented in Figure 8. The micrograph in Figure 8d shows that the SWCNT skeleton remained intact after the treatment with the PPGP oligomer. The CNT surface was thus decorated with PPGP chains, which form condensed polymer layers adhering to the CNT external surface with a thickness from about 3 to about 10 nm.

Furthermore, a comparison of TEM micrographs at low magnification of pristine CNT and of the PPGP adducts is reported in Figure S5. Figure S5a,b shows that pristine MWCNT (a) and SWCNT (b) are characterized by a high degree of entanglement. Functionalization with PPGP improves the disentanglement of the CNT, allowing better processability of the carbon allotrope dispersions (Figure S5c-f).
Preparation and Characterization of the Ternary Nano Complex CNT/PPGP/DOX
HR-TEM microscopy (Figure 8) also allows checking that both supramolecular adducts show on their surface micelle-like structures and adhered polymer chains, whereas the covalent adducts show only a regular adherent polymeric layer. In both cases, drug loading via adsorption on the CNT surface appears possible. It has previously been shown that mixing SWCNT with DOX leads to the adsorption of DOX onto the outer sides of the SWCNT via π-π stacking interactions, and suitably functionalized SWCNT and MWCNT have been found to be non-toxic in mice and to be gradually excreted via the biliary pathway [28]. We therefore explored the possibility of using supramolecular π-π stacking to load the cancer chemotherapy agent DOX onto CNT/PPGP adducts for drug delivery applications.
In this work, we describe a previously unreported non-covalent CNT/PPGP/DOX supramolecular nano complex that can be developed for cancer therapy. We investigated the ability of DOX to interact non-covalently and covalently with CNT functionalized with PPGP. Figure 9 shows schemes suggesting the structures of both the covalent and the supramolecular CNT/PPGP adducts (Panel A) and the hypothesized ternary nano complex CNT/PPGP/DOX (Panel B).
The ternary nano complex was prepared as described in Figure 9B. In brief, DOX hydrochloride was stirred for 16 h at room temperature with the modified nanotubes dispersed in pH 7.4 PBS. The CNT/PPGP/DOX nano complexes were isolated by repeated ultracentrifugation with PBS until the supernatant became colorless. Free, unbound DOX in the supernatant was analyzed by UV-Vis spectroscopy, where the characteristic DOX absorbance peak at 490 nm was detected (Figure S2). The amount of unbound DOX was estimated by measuring the absorbance at 490 nm against a calibration curve recorded under the same conditions (Figure S2). The amount of loaded DOX for pristine and functionalized CNT is reported in Table S1. After DOX loading on the PPGP-modified carbon nanotubes, two different scenarios were observed for MWCNT and SWCNT: (i) in the MWCNT/PPGP/DOX UV-Vis spectrum, the absorption band at 490 nm was not detected; (ii) in SWCNT/PPGP/DOX, the absorption band at 490 nm was slightly red-shifted (Figure S3).
The interaction between DOX and the CNT/PPGP adducts was studied by monitoring the emission spectrum of DOX by fluorescence spectrophotometry (Figure 10). As can be seen from Figure 10, fluorescence quenching of DOX was evident for all the nano complexes.
Release profiles of DOX (Figure 11) from all the modified and unmodified CNT at 37 °C were evaluated for up to 72 h at pH 7.4 in PBS (Figure 11A,C), which mimics the physiological pH of the cytoplasm, and in pH 5.5 acetate buffer (Figure 11B,D), which mimics the acidic conditions of lysosomes, endosomes, and cancerous tissues [47]. A slow release with a pH-sensitive profile was observed for all the investigated samples, in line with previous studies [48]. At physiological pH, DOX tended to remain bound to the CNT or CNT/PPGP adducts, whereas at acidic pH the increased protonation of DOX changes both its solubility and hydrophilicity, leading to a higher release of the drug from the complexes [49]. No initial burst effect was observed under either condition. After 72 h at pH 7.4, the amount of released DOX was in the range of 10-11% for the MWCNT/DOX and MWCNT/PPGP/DOX complexes (Figure 11A) and 10-20% for SWCNT/DOX and SWCNT/PPGP/DOX (Figure 11C). At pH 5.5, a different behavior was observed for the CNT/DOX and CNT/PPGP/DOX complexes. For the MWCNT/DOX and MWCNT/PPGP/DOX complexes (Figure 11B), DOX release over the 72-h period was faster for MWCNT/DOX and MWCNT/PPGPc/DOX, while MWCNT/PPGPs/DOX showed a controlled, linear release. In contrast, for the SWCNT/DOX and SWCNT/PPGP/DOX complexes, DOX release at pH 5.5 (Figure 11D) was faster for SWCNT/PPGPs/DOX and slower for SWCNT/DOX and SWCNT/PPGPc/DOX. The different CNT/PPGP/DOX complexes could thus satisfy different release-rate requirements, depending on the type of CNT and the nature of the interaction with PPGP and DOX.
Cell Viability Assay
In order to assess the potential of our CNT drug delivery systems, the cytotoxicity of MWCNT, MWCNT/PPGPs, MWCNT/PPGPc, MWCNT/DOX, MWCNT/PPGPs/DOX, and MWCNT/PPGPc/DOX toward the A549 and M14 cell lines was evaluated. A stock dispersion of each CNT sample was prepared in culture medium at 1 mg/mL and sonicated. This procedure was ineffective for the SWCNT, SWCNT/PPGPs, SWCNT/PPGPc, SWCNT/DOX, SWCNT/PPGPs/DOX, and SWCNT/PPGPc/DOX samples, because the persistence of carbon aggregates and agglomerates prevented the formation of a homogeneous dispersion suitable for evaluation in cell culture assays. The stock dispersions of MWCNT, MWCNT/PPGPs, MWCNT/PPGPc, MWCNT/DOX, MWCNT/PPGPs/DOX, and MWCNT/PPGPc/DOX were further diluted according to the amount of DOX loaded into each sample, as reported in Table S1; the concentrations were thus normalized to and reported as the DOX amount.
The cytotoxic properties of the loaded DOX were compared with those of free DOX (Figures 12 and 13) at four different concentrations (8.25, 16.5, 33, and 66 µg/mL) after 48 h of treatment. Our preliminary results indicated that DOX maintains its inhibitory effect in all the investigated cases (free DOX, covalently loaded DOX, or complexed DOX) on both the A549 and M14 cell lines. Moreover, our data reveal a different, cell-dependent behavior of the CNT drug delivery systems: MWCNT/DOX showed lower activity than free DOX on the M14 cell line; MWCNT/PPGPc/DOX and free DOX displayed a similar effect on both cell lines at all the investigated concentrations; and MWCNT/PPGPs/DOX appeared more efficient than free DOX in A549 cells and less efficient in M14 cells. Notably, these comparable cytotoxic effects between free DOX and loaded DOX were discussed without considering the delayed release of DOX from the carrier. Considering the amount of DOX released at 48 h (about 35-43% at pH 5.5, Figure 11B, and 8-9% at pH 7.4, Figure 11A), a greater cytotoxic effect can be ascribed to the DOX loaded on CNT compared with free DOX, presumably due to a more efficient internalization route. Further investigation will be devoted to studies of the cellular uptake and intracellular trafficking of our nanocarrier, to verify whether the use of this DDS can be extended.
Computational Studies
Although some molecular dynamics (MD) studies of DOX/SWCNT systems have already been performed [50-55], to the best of our knowledge this is the first one that also considers a DOX/MWCNT system and, in particular, uses suitably reduced systems that reflect the actual diameters of the CNT employed in the reported experiments and the CNT/PPGP/DOX ratios obtained from the experimentally observed weight percentages. We therefore built four supramolecular systems, named SSy1-4, corresponding to the complexes SWCNT/PPGPs/DOX, SWCNT/PPGPc/DOX, MWCNT/PPGPs/DOX, and MWCNT/PPGPc/DOX, respectively, and submitted each of them to 100 ns MD simulations. For each experiment, we used three different systems in which the starting positions of the PPGP and/or DOX molecules were randomly varied.
The studied systems reached their equilibrium states after about 10 and 15 ns of simulation time for SSy1,2 and SSy3,4, respectively, as revealed by the root-mean-square displacements (RMSD) of the DOX molecules, reported for SSy2 and SSy4 in Figures 14 and 15; it is evident that the fluctuations in the RMSDs are significantly reduced after these periods.
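The plateau criterion used here can be sketched generically as follows (Python; a NumPy stand-in with a random placeholder trajectory, independent of the YASARA analysis actually used):

import numpy as np

def rmsd(coords, reference):
    """Root-mean-square displacement of a set of atoms from a reference frame.

    coords, reference: (n_atoms, 3) arrays of Cartesian positions (angstroms).
    """
    diff = coords - reference
    return np.sqrt((diff ** 2).sum() / len(coords))

# trajectory: (n_frames, n_atoms, 3) array of DOX atom positions per frame
trajectory = np.random.rand(100, 50, 3) * 10     # placeholder trajectory
reference = trajectory[0]
series = np.array([rmsd(frame, reference) for frame in trajectory])

# Scan for the first window whose RMSD scatter drops below 5% of the mean RMSD
# (np.argmax returns 0 if that threshold is never reached).
window = 10
fluct = np.array([series[i:i + window].std() for i in range(len(series) - window)])
print("first 'flat' frame:", int(np.argmax(fluct < 0.05 * series.mean())))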
In particular, Figure 14 shows the run of SSy2, in which there are two DOX molecules within the CNT at the start (0 ns) and six DOX molecules at the end (100 ns). The entrance of the other four molecules, in pairs, takes place at 41.9 and 78.5 ns, respectively, as evidenced by the fluctuations registered in the RMSD graph for the DOX molecules. Interestingly, the six molecules within the CNT cavity were paired with four chloride ions, which coordinate some of the ammonium groups, some of which face each other (Figure 16, left); no sodium ion is present. The remaining DOX molecules are strongly anchored to the external wall of the SWCNT through π-π interactions and, in some cases, two or even three of them are stacked together; the PPG pendant remains, most of the time, close to the external wall of the CNT (Figure 16, right). As regards the SSy4 system, at the end of the 100 ns of MD simulation there are eight DOX molecules within the inner CNT cavity, whereas almost all the others are adsorbed on the external CNT surface of the outermost wall, stacked up to five units. Due to the short length of the tube (10 Å), the DOX molecules are scattered along the two ends, usually with their longest axis parallel to that of the CNT and stacked on one another.
The different accommodation of the DOX molecules in the two systems SSy2 and SSy4 is in accord with their respective side-surface extensions, 1385 Å² and 1492 Å², excluding the surface occupied by the covalently bound PPGP molecules. Considering that the semi-surface of a DOX molecule corresponds to about 75 Å², about 18 of the 45 DOX molecules for the SSy2 system and about 20 of the 354 for the SSy4 system should be able to cover the entire surface of each CNT. This also means that, for equal surface area, the loading efficiency of the MWCNT is approximately 7.8 times higher than that of the SWCNT. Finally, the ratio of about 1:5 of PPGP molecules per equal surface makes the MWCNT much more soluble than the SWCNT.
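The arithmetic of this paragraph can be checked directly (Python; only numbers quoted in the text are used, and the per-area ratio comes out near, though not exactly at, the quoted factor of 7.8):

dox_semi_surface = 75.0      # A^2, approximate semi-surface of one DOX molecule

side_surface = {"SSy2 (SWCNT)": 1385.0, "SSy4 (MWCNT)": 1492.0}   # A^2
n_dox = {"SSy2 (SWCNT)": 45, "SSy4 (MWCNT)": 354}

for name, area in side_surface.items():
    to_cover = area / dox_semi_surface
    print(f"{name}: ~{to_cover:.0f} DOX molecules cover the surface "
          f"(of {n_dox[name]} available)")

ratio = (n_dox["SSy4 (MWCNT)"] / side_surface["SSy4 (MWCNT)"]) / \
        (n_dox["SSy2 (SWCNT)"] / side_surface["SSy2 (SWCNT)"])
print(f"DOX per unit area, MWCNT vs SWCNT: ~{ratio:.1f}x")   # text quotes ~7.8x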
For the other two systems, SSy1 and SSy3, the trend is almost superimposable on that of the corresponding systems with covalently linked PPGP moieties. In both cases, the PPGP molecules are found to adhere closely to the surface of the CNT, mostly through the PPG moiety; this is in accord with the TEM micrographs C,c and A,a of Figure 8, which highlight the presence of a polymeric layer adhered to the CNT surface. Moreover, for the SSy3 complex, the presence of 12 PPGP molecules strongly adsorbed on its surface makes it even more soluble, like its SSy4 parent.
Conclusions
Nanomedicine and technological nano delivery systems constitute a rather new but rapidly developing field in which nanoscale materials are employed as diagnostic tools or to deliver therapeutics to specifically targeted sites in a highly controlled fashion [56-59]. CNT are emerging nanomaterials with massive potential in the diagnostic and therapeutic fields, and an effective way to make CNT more biocompatible and to broaden their application in medicine is to functionalize the nanotubes. In this paper, we reported the functionalization of single- and multi-walled carbon nanotubes with a pyrrole polypropylene glycol-derived compound, exploiting a Diels-Alder reaction. Thermogravimetric analysis and FT-IR spectroscopy showed that the functionalization with PPGP was successful and that a proper amount of modifier was introduced. WAXD showed that the functionalization procedures do not substantially alter the bulk structure of the carbon nanotubes. The functionalized CNT were then exploited to prepare a non-covalent CNT/PPGP/DOX supramolecular nano complex. The ability of DOX to interact with the non-covalently and covalently PPGP-modified CNT was investigated by experimental and computational techniques. HR-TEM microscopy confirmed that the covalent adducts show a regular adherent polymeric layer, whereas the supramolecular adducts carry on their surface both micelle-like structures and adherent polymer chains. MD simulations showed that DOX molecules can be adsorbed on the external wall of the nanotubes or included in their cavity.
Biological studies revealed that the in vitro activities of MWCNT/PPGPs/DOX and MWCNT/PPGPc/DOX are similar to that of free DOX in A549 and M14 cell lines, although these activities are actually attributable to the release, at 48 h, of only approximately 8% (at pH 7.4) or 40% (at pH 5.5) of the DOX.
Moreover, our studies show a different biological behavior between pyrrole-functionalized SWCNT and pyrrole-functionalized MWCNT, although a similar degree of chemical functionalization was detected for both materials. The formation of carbon aggregates and agglomerates in biological media for pyrrole-functionalized SWCNT prevented their evaluation, whereas the better dispersibility of pyrrole-functionalized MWCNT allowed the evaluation of their cytotoxicity in cell culture assays.
The use of carbon nanotubes in the drug delivery field seems promising due to the ability of CNT to cross biological barriers. This work paves the way for the facile functionalization of carbon nanotubes exploiting the "pyrrole methodology" for the development of novel technological carbon-based drug delivery systems.
Although the preliminary biological studies were satisfactory, more mechanistic work is needed to investigate the capability of the novel "pyrrole functionalized" CNT to translocate into cells.
Moreover, the intracellular trafficking of MWCNT/PPGPs/DOX, MWCNT/PPGPc/DOX, and the released DOX, which determines the drug efficacy and the related side effects, also needs to be studied.

Supplementary Materials: Figure S3: UV-Vis absorbance spectra of PBS solutions of free DOX (pink), MWCNT/PPGP/DOX covalent (yellow) and supramolecular (black) adducts, and SWCNT/PPGP/DOX covalent (purple) and supramolecular (light blue) adducts; Figure S4: WAXD patterns of MWCNT (a), MWCNT/PPGPs (b), and MWCNT/PPGPc (c); Figure S5: TEM micrographs at low magnification of pristine CNT and PPGP adducts; Table S1: Drug loading of the pristine and modified CNT; Table S2: Results of the inspections performed on the 1 mg/mL dispersions of the reported sp2 carbon allotropes (CA) in the listed solvents ('good' indicates a stable dispersion, 'bad' indicates separation between the CA and the solvent); Section S1: Wide-Angle X-ray Diffraction Details; Section S2: Hansen Solubility Parameters (HSP) Details; Section S3: Molecular Dynamics Details.
"Chemistry"
] |
Particles, fields and a canonical distance form
We examine a notion of an elementary particle in classical physics and suggest that its existence requires non-trivial homotopy of space-time. We show that non-trivial homotopy may naturally arise for space-times in which metric relations are generated by a canonical distance form factorized by a Weyl field. Some consequences of the presence of a Weyl field are discussed.
I. Introduction.
Classical physics describes the motion of particles under the action of classical fields. Classical particles are usually assumed to be structureless material points. Classical fields are produced by charges that attract or repel each other. It is also conventionally assumed that the elementary charges (or simply elementary particles) of classical physics are point-like and have vanishing spatial sizes. (This follows from the fact that classical solutions with charge distributed over some area of space are normally not stable; hence unknown additional forces would be needed to stabilise elementary particles if they were to occupy some finite region of space.) The classical picture therefore contains a space filled with delta-like charges and fields described by field potentials everywhere except at the points of charge singularities [1].
It is also widely accepted that classical fields represent connections in a fibre bundle associated with a particle representation transforming under the Lorentz group and a local symmetry group of particle interactions [2]. There exists an asymmetry in dealing with particle representations and connections in classical physics: connections enter the scheme of classical physics (as field potentials), while particle representations (the fibre co-ordinates on which these connections, in an appropriate associated form, act) often do not. For example, the electromagnetic 4-potential (which represents a connection in the space of complex particle representations) is an element of classical physics, while complex particle representations are not. As a result, fields lose their geometrical meaning in classical physics and appear to be ad-hoc assumptions of classical dynamics. In this light, it seems natural to eliminate the asymmetry and restore the geometrical meaning of classical fields by adding an internal structure to the classical particle.
Recently we have discussed classical dynamics containing particle representations that transform under the Lorentz group and the local symmetry groups of particle interactions [1]. We have assumed that every point of our space-time has its own copy of the additional particle co-ordinates (describing a state of the particle) and treated the space of classical physics as a fibre bundle. The local co-ordinates of the (associated) fibre bundle were (x, ψ), where x are the usual space-time co-ordinates and ψ are the fibre co-ordinates (which transform under representations of the Lorentz and local symmetry groups).
We assumed that locally physics is simple, and the fibre space of one point of space-time can be connected to that of an adjacent point by a linear connection. As a result, the field potential plays the role of a connection in the world fibre bundle, while a classical particle appears as a non-trivial state of the world fibre bundle described by a globally non-trivial connection.
Since any non-trivial state of a world fibre bundle is accompanied by a non-trivial connection, a classical particle is surrounded by fields and has some sort of singularity which is localised in space.
In Ref. 1 we "simplified" a classical particle to one point and assigned a particle representation vector to the point of its field singularity. The first conclusion of this approach was that the conventional definition of geodesics should be modified when applied to classical particles with an internal structure. We managed to reformulate geodesics in terms of the parallel transport of the particle state vector (instead of the parallel transport of a tangent vector), at the price that the distance on a manifold, ds, should be determined by an eigenvalue of some operator-valued distance one-form \hat{\Gamma}(dx) (instead of the conventional metric two-form). The new definition of geodesics is as follows: a geodesic is a curve such that the parallel transport of the initial representation vector to any point along the curve yields an eigenvector of the operator-valued distance form \hat{\Gamma}(x) taken at this point (in accordance with the original definition, where a geodesic is defined as a curve such that the parallel transport of a tangent vector along the curve gives a displacement on the manifold via the canonical forms), see [1,3].
It turned out that the conventional metric two-form can be replaced by a linear operator one-form defining the same metrical relations. This linear operator was referred to as the canonical distance form (or simply the distance form) and is analogous to a Finsler metric. The action principle based on the distance form readily gives a description of classical particles with spin subject to Yang-Mills forces. The particle state plays the role of the classical particle momentum in this description. We have shown that the motion of spinor particles in this formulation of classical physics is affected by the space-time curvature.
In the case of the four-dimensional Lorentz space-time (which is the domain of low-energy particles and fields), the canonical distance form should be built from left and right spinor components that transform under different representations of the SU(2) group (L_α is an SU(2) doublet and R is an SU(2) singlet, α = 1, 2); a direct pairing of the two is forbidden by Schur's lemma [4].
In order to remedy the situation by the simplest means, we have introduced an additional scalar field φ_α which transforms as an SU(2) doublet and glues the spaces of the left and right components of the orthogonal representations. Then the simplest canonical distance form of our space-time is the block off-diagonal operator (1) built from the tetrad and the field φ. The eigenvalues of the distance form (1) yield the length element (3), in which the modulus of φ multiplies the conventional interval: d\ell = |φ| ds. It is clear that the field φ scales the distance measured with the help of the particle that transforms as L. The idea of introducing a scaling factor into the length interval is not new and was proposed some years ago by Hermann Weyl in a brilliant conjecture later transformed into the modern gauge theories [5]. The scalar doublet φ_α will be referred to as the Weyl field. This gives an action for a classical electron containing the Yang-Mills connections for the left and right spinor components together with the Weyl form [1].
The purpose of this paper is to provide two additional arguments in favour of the proposed distance form (1). Namely, we show that the distance (3) helps to solve the problems of particle existence and singularities, discussed in Section II, and of particle energy and divergences, discussed in Section III. We briefly discuss the properties of the Weyl field in Section IV. Finally, a conclusion is given.
II. Particle existence and singularities.
Created by the works of Weyl, Einstein, Cartan, Yang and Mills, gauge theories form the basis of modern physics. They appeal to the natural knowledge that locally physics is simple (there exists a local trivialisation of the world fibre bundle). Mathematically, they reflect the fact that any reasonable dynamics produces a flow which can be parallelised in appropriate co-ordinates (e.g., see the Darboux theorem [6]). Gauge theories describe the behaviour of fields and the motion of test particles extremely well. However, they lack one important ingredient, namely, a source of fields. A source of the field (a particle) cannot be described in the framework of a gauge theory in four-dimensional space-time. This follows from the following theorem: any principal fibre bundle with a homotopically trivial base M (or a homotopically trivial structure group G) is trivial.
Mathematical details connected with this "triviality" theorem are given in Ref. 7. Indeed, by removing a point from R^4 (R^4 is topologically identical to space-time), we obtain a manifold R^4\R^0 which is topologically equivalent to the product S^3 × R^1 and which allows non-trivial fibre bundles (π_3(S^3 × R^1) = π_3(S^3) = Z, where π_3 is the third homotopy group [7]). These fibre bundles could be associated with "photons" of the field, because they would have only one singular point in space-time: the point of a "photon" creation or absorption. Analogously, by removing a singular line from R^4, we create a manifold R^4\R^1 topologically equivalent to S^2 × R^2, which is also non-trivial (π_2(S^2 × R^2) = π_2(S^2) = Z). Fibre bundles over this manifold could be related to a particle, and the singularity line R^1 can be regarded as the particle world line.
Finally, by removing a singular plane from R^4, we construct a manifold R^4\R^2 which is equivalent to S^1 × R^3, with π_1(S^1 × R^3) = π_1(S^1) = Z. Fibre bundles associated with this manifold could be linked to the Dirac monopole or to vortices, since they would have a singular line in three-dimensional space.
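The three constructions above follow a single pattern; as a compact restatement (using the standard deformation retraction R^4\R^k ≃ S^{3−k} × R^{k+1}, a textbook fact rather than anything specific to this paper):

    \begin{aligned}
    \pi_3(\mathbb{R}^4\setminus\mathbb{R}^0) &= \pi_3(S^3\times\mathbb{R}^1) = \pi_3(S^3) = \mathbb{Z} && \text{("photons": a point singularity)},\\
    \pi_2(\mathbb{R}^4\setminus\mathbb{R}^1) &= \pi_2(S^2\times\mathbb{R}^2) = \pi_2(S^2) = \mathbb{Z} && \text{(particles: a world-line singularity)},\\
    \pi_1(\mathbb{R}^4\setminus\mathbb{R}^2) &= \pi_1(S^1\times\mathbb{R}^3) = \pi_1(S^1) = \mathbb{Z} && \text{(monopoles/vortices: a singular plane)}.
    \end{aligned}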
The triviality theorem is a generalisation of the well-known physical fact that the charge density associated with an elementary particle is usually singular and that its charge is normally quantized. Hence an elementary particle cannot be described by a trivial fibre bundle that generates a finite charge density. Physicists realised this problem a long time ago, and various attempts have been made to develop a singularity-free theory of matter [8]. These attempts did not lead to a consistent and self-contained theory. As a result, several different approaches are now used to deal with singularities.
The most common is a positivistic approach which admits that something is wrong with the definition of an elementary particle but sets all the difficulties aside. The physicist-positivist states that the main task of science is to predict the results of measurements; thus, scientists should not be interested in the detailed structure of nature as long as every measured quantity can be calculated. The theory of renormalization (developed by positivists) deals with infinities and singularities in exactly this vein. This is a consistent and successful approach shared by many.
Another approach (proposed by Kaluza and Klein [9]) is based on additional dimensions.
This approach assumes that the base of our world fibre bundle is not equivalent to simple R 4 and introduces additional dimensions which are hiding from our observations. Then, the base of the world fibre bundle could be topologically non-trivial and hence non-trivial fibre bundles describing particles are possible. This attractive view has its advocates in a number of modern string theories. However, there is a difficulty connected with such an approach.
Namely, using the same covering-homotopy arguments, one can prove [7] that the reduction of any such fibre bundle to the observed four-dimensional base is trivial. Thus, no matter how complex and non-trivial the fibre bundle is in the world with additional dimensions, it will be trivial after a reduction to R^4, which is the domain of low-energy particles and fields. This implies that additional compactified dimensions would have to show themselves in the observed space-time (in order to form a classical particle), for which we simply do not have enough experimental evidence at present.
A third common approach consists in ignoring the problems connected with sources of fields, on the basis that classical theory is not satisfactory anyway and quantum physics is needed for an adequate description of our world. However, this approach just moves the problem of singularities deeper and deeper into quantum physics (from non-relativistic quantum mechanics to the relativistic one, then to the quantum theory of fields and then to a string theory). The distance form (1), in contrast, is good enough to ensure non-trivial particle-like principal fibre bundles, because π_2(G/U_em(1)) = π_1(U_em(1)) = Z. These bundles even have a topological charge.
It is worth stressing that particles in this picture appear at places where the magnitude of the Weyl field goes to zero. This is in stark contrast with standard Higgs-based models, where the mass of elementary particles is produced by a non-zero value of the magnitude of the Higgs field. The contribution of the Weyl field to the particle energy is always non-zero, which means that elementary particles described by a line where the Weyl field is zero should have non-zero masses. A close physical analogy to the proposed model of a particle is a vortex in a type-II superconductor. The "universe" of the superconductor is described by the wave-function of the Cooper-pair condensate. This universe allows non-trivial fibre bundles with the structure group U(1) whenever the condensate density is zero in some region of the superconductor. Due to symmetry, these fibre bundles are topologically stable when the region of vanishing condensate density is a line in R^3 (or a plane in R^4).
III. Particle energy divergences.
The distance form (1) and the length element (3) also ensure that the energy connected with particle singularities is finite. Indeed, according to (3) the volume element is proportional to |φ|^4 and hence compensates the apparent divergence of the energy of the fields generated by the particle at the places of singularities, where φ → 0.
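In symbols, and under the additional assumption (ours, for illustration) that the field energy density near a singularity grows no faster than the inverse fourth power of the Weyl field modulus:

    d\ell = |\varphi|\,ds \;\Longrightarrow\; dV_4 \propto |\varphi|^4\, d^4x,
    \qquad
    E = \int \rho\, dV_4 \propto \int \rho\,|\varphi|^4\, d^4x < \infty
    \quad\text{if}\quad \rho = O\!\left(|\varphi|^{-4}\right)\ \text{as}\ \varphi \to 0.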
It is necessary to note that the tetrad defined by the distance form (1) is orthogonal but is not orthonormal. We can rewrite (1) in terms of the Weyl field modulus and an orthonormal tetrad, and the length element correspondingly as (5), where the orthonormal tetrad depends on the Weyl field. This is the canonical form of the length element of classical physics, except for our knowledge that the tetrad is also defined by some scalar field. If we assume that this dependence is absent and the field is constant, we return to the case where particle fibre bundles are impossible and the energies connected with "manufactured" particles are infinite.
There exists a good reason for moving the Weyl field into the "geometry" part of the action and writing the length element in the form (5) conventional for classical physics. Let us consider a generic example of a scalar field φ that defines the length element and is coupled to a gauge field with connection A (we assume for simplicity that the contribution from fermionic fields can be neglected). The field action for this system could be written as

    S = ∫ [ κ (Dφ)† ∧ *(Dφ) + V(φ) dΩ + F ∧ *F ],    (6)

where κ is a constant, V(φ) is a potential of the field φ, dΩ is the volume element, F = DA is the field strength (curvature) and the star denotes the Hodge operator. From (6) we get the following set of Maxwell equations:

    D*F = *j,    (7)

where j is the current. Suppose that we know the solution of the system (7). Introducing the decomposition φ = |φ| T(g) φ_0 (T(g) is an element of the local symmetry group and φ_0 is a fixed normalised vector), we find that the current associated with φ is distributed in space, since in general |φ| is not constant.
We note, however, that there is no clear way of separating the contribution of the Weyl field to an experimentally measured length interval. Also, this contribution could be different at different points of space-time, reflecting a different choice of units for the Weyl field at different points. Hence, we have reasons to believe that the action for the Weyl field should be scale invariant and allow local conformal symmetry. Arguments in favour of the conformal invariance of the underlying physics were suggested already by Weyl himself [5]; see other works in the review [11]. Let us, therefore, consider a generic conformally invariant action of the type given in [11], which couples the Weyl field to the curvature (Ricci) tensor and contains the kinetic term (Dφ)† ∧ *(Dφ) for the Weyl field. In this conformal case, we can use a "geometry" trick and move the problem of variation into the co-ordinate part of the action: instead of varying φ, one varies the metric relations, with the Hodge operator of the new metric relations produced by a conformal transformation under which the transformed Weyl field acquires a constant modulus. In a more complicated scenario, a separate dilaton field could be added to the theory (or the modulus of the Weyl field may be regarded as a dilaton field), see references in [14]. The presence of an additional Higgs field (that would generate the masses of particles and should have a much higher expectation value than the Weyl/dilaton fields) may be unnecessary, as the masses of particles could be generated by the self-interaction term, i.e., by the energy of the field produced by the particle, a point of view shared by Poincare. It is worth noting that the process of particle-antiparticle annihilation provides a strong indication in favour of this hypothesis. By performing a Lorentz transformation of a particle-like solution in which the Weyl field equals zero along the world line (t, 0), and using the Lorentz invariance of the theory, we can easily check that the relativistic energy-momentum relation holds both globally (as integrals over space) and locally (as a property of the particle world-lines). The mass of the particle-like solution is proportional to the total energy of the system. Different masses of different particles could correspond to different structures of the nodes of the Weyl field.
The fermion part of the action has some subtlety. The conformally invariant fermion action can be written in the standard first-order form together with the corresponding fermion current. When space-time has a non-trivial topology produced by regions of zeros of the Weyl field, non-trivial associated fibre bundles of spinor fields are possible. Here, the geometry trick can be used to make the charge density constant in space (in agreement with the Dirac "sea of electrons"). As a result, the energy of a fermion will be produced by the energy of the self-interacting fields plus a small additional energy connected with the spin degree of freedom (which corresponds to a deviation from the uniformly charged space). The field contribution to the total energy in this case can still be written in the simple form (12).
IV. Discussion of Weyl field properties.
Here we briefly discuss some general properties of the Weyl field in a conformally invariant theory. First, we note that the Weyl field is a bosonic field, which follows from the fact that (1) is a scalar with respect to Lorentz transformations. It is worth noting that normally it is gauge fields that are bosonic (which follows from the Lorentz invariance of the tetrad forms), as has been noticed by many authors [11].
Second, the kinetic term of the Weyl field and its coupling to the curvature, combined together, could yield spontaneous symmetry breaking [14] and a non-zero mass for an elementary particle (that is not a gauge particle) in the presence of non-trivial curvature. Third, the curvature of space-time is connected to the presence of matter and hence to the density of regions with zero modulus of the Weyl field. Particle creation and annihilation and their dynamics are an evolution of the zero-Weyl-field regions.
V. Conclusions.
We showed that the canonical distance form factorized by the Weyl field suggests a way to solve the problem of particle existence in gauge theories. In this approach, elementary particles represent non-trivial associated fibre bundles realised around regions of space-time where the modulus of the Weyl field is zero and the metric relations are not defined. We discussed how a conformally invariant theory of the Weyl field provides a defence of the framework of classical physics. | 4,401.6 | 2014-04-17T00:00:00.000 | [
"Physics"
] |
Mix2SFL: Two-Way Mixup for Scalable, Accurate, and Communication-Efficient Split Federated Learning
In recent years, split learning (SL) has emerged as a promising distributed learning framework that can utilize big data in parallel without privacy leakage while reducing client-side computing resources. In the initial implementation of SL, however, the server serves multiple clients sequentially, incurring high latency. Parallel implementation of SL can alleviate this latency problem, but existing Parallel SL algorithms compromise scalability due to a fundamental structural problem. To this end, our previous works have proposed two scalable Parallel SL algorithms, dubbed SGLR and LocFedMix-SL, by solving the aforementioned fundamental problem of the Parallel SL structure. In this article, we propose a novel Parallel SL framework, coined Mix2SFL, that can improve both accuracy and communication efficiency while still ensuring scalability. Mix2SFL first supplies more samples to the server through a manifold mixup between the smashed data uploaded to the server, as in the SmashMix of LocFedMix-SL, and then averages the split-layer gradient, as in the GradMix of SGLR, followed by local model aggregation as in SFL. Numerical evaluation corroborates that Mix2SFL achieves improved performance in both accuracy and latency compared to the state-of-the-art SL algorithms with scalability guarantees. Moreover, its convergence speed as well as privacy guarantee are validated through the experimental results.
I. INTRODUCTION
Utilizing large amounts of data through large-scale parallel computing power is instrumental in high-quality deep learning [1], [2], [3], [4]. In this respect, federated learning (FL) first explored a way to utilize distributed learning for harnessing the data and computing resources scattered across multiple clients [5], [6]. In FL, each client trains a local model using its own data and uploads it to a parameter server. The server averages the uploaded models from the clients and constructs a global model that is downloaded by each client. By iterating this process, FL allows each client to reflect other clients' data without raw data exchanges that may induce privacy leakage. However, FL is ill-suited for large-sized models, particularly when clients cannot store and transmit such large models given small memory as well as limited computing and communication energy.
On the other hand, split learning (SL), first proposed in [3], [4], is an alternative to FL that copes with large-sized models in a resource-efficient way [7], [8], [9]. A typical SL architecture is constructed by splitting an entire deep neural network (DNN) model into two partitions, such that the upper model segment above the split-layer is stored at the server while the lower model segment is stored at each client. In contrast to FL, which exchanges model parameters, each client in SL uploads the split-layer output, also known as smashed data, and downloads the gradient from the server to update the entire model. As its first instantiation, Vanilla SL shows accuracy on par with FL while guaranteeing scalability in the sense that accuracy increases with the number of clients. Notwithstanding, Vanilla SL operates from client to client in a sequential way, suffering from large latency particularly with many clients. Such a limitation calls for Parallel SL, which can utilize the clients' parallel computing power to reduce latency.
In Parallel SL, the smashed data and gradients from multiple clients are exchanged in parallel, reducing the latency. However, Parallel SL fails to increase accuracy with the number of clients, questioning its scalability [10]. Indeed, due to the simultaneous connections of multiple clients to the server, Parallel SL has the innate problems of server-client update imbalance in the forward propagation (FP) of smashed data and the backward propagation (BP) of gradients, and inter-client update imbalance in the model parameters (i.e., model weights) of the clients. To be precise, in the FP, Fig. 1 shows that while a single data sample propagates through the lower model segment of each client, multiple smashed data propagate through the upper model segment at the server. Likewise, for every single lower model segment update in the BP, the upper model segment is updated as many times as the number of clients. Lastly, the clients' weights are updated separately even with the same gradient at the server, since the gradient backpropagates through the non-identical smashed data of different clients according to the BP chain rule.
In fact, split federated learning (SFL), one of the representative frameworks of Parallel SL, already addresses the inter-client update imbalance by applying FL across clients [11], [12], i.e., across the lower model segments, which however fails to achieve scalability. Meanwhile, our prior works [10] and [13] focus primarily on addressing the server-client update imbalance by averaging smashed data across clients and by splitting the learning rates of the upper and lower model segments, respectively, thereby reinstating scalability. Nevertheless, these two works lack a deep understanding of the aforementioned two imbalance problems, making the method in [10] compromise communication efficiency and the method in [13] compromise accuracy.
Motivated by these preceding works, in this article we aim to achieve the scalability, high accuracy, and communication efficiency of Parallel SL by proposing a unified framework, coined split federated learning with two-way mixup (Mix2SFL). Mix2SFL resolves the two imbalance problems, of which the inter-client imbalance can be solved trivially by applying FL as in SFL. In the server-client imbalance, however, the FP and BP problems are intertwined, and jointly addressing them is non-trivial. For instance, a naïve solution to the FP and BP problems is averaging all the smashed data uploaded from all clients, which however becomes nothing but white noise, particularly with many clients. Instead, inspired by [10], Mix2SFL in the FP averages a small number of smashed data in a combinatorial way, hereafter referred to as Smashed Mixup (SmashMix). Meanwhile, following the method in [13], Mix2SFL in the BP averages the gradients at the split-layer, henceforth referred to as GradMix. In doing so, both the FP and BP flows as well as the weight updates become aligned, thereby ensuring scalability with high accuracy. Furthermore, GradMix in the BP yields a common gradient that can be broadcast to all clients using the same downlink (DL) bandwidth, improving the communication efficiency and latency. Numerical simulations demonstrate that Mix2SFL outperforms other SL baselines, including Vanilla SL and SFL, in terms of scalability and accuracy. The results also show that Mix2SFL excels in terms of communication efficiency, convergence speed, and privacy guarantees.
Contributions: The major contributions of this article are summarized as follows:
- We point to the mismatch between FP and BP and the lack of lower model segment integration as the causes of Parallel SL's unscalability.
- In order to solve this problem while improving both accuracy and communication efficiency, we design Mix2SFL by carefully combining SmashMix, GradMix, and SFL, and describe its detailed operation.
- The simulation results validate that Mix2SFL outperforms state-of-the-art SL algorithms in convergence speed and privacy guarantee, as well as in the tri-fold goal of scalability, accuracy, and communication efficiency.

Related Works: FL and SL are two promising distributed learning frameworks, each having its own advantages and disadvantages [14]. With a large number of clients, FL achieves scalable accuracy [2], [15]. Due to the limitations of computation, memory, and communication resources on the client side, however, it can only handle small models. SL, on the other hand, can run big models by separating them [4], [16], [17], obtaining even quicker convergence with less communication overhead than FL [18], [19], [20]. However, as the first of its kind, Vanilla SL incurs large latency as the number of clients grows, due to its sequential operation.
SFL [11], [12] inherits the advantages of FL and SL by combining both techniques, and also marks the beginning of the parallel implementation of SL. In SFL, FL is applied to the lower model segment after the SL BP is finished. The scalability of SFL, however, is debatable, which is especially important in Internet-of-Things (IoT) or Web-of-Things (WoT) scenarios where global data and computing capacity are scattered among a large number of clients.
To resolve this problem, one of our prior works [10] proposes LocFedMix-SL by jointly manipulating local parallel techniques [21], [22], data augmentation techniques such as mixup [23] and CutMix [24], and federated averaging. By doing so, LocFedMix-SL successfully achieves improved performance in terms of both scalability and convergence speed, while compromising latency.
Another prior work of ours [13] proposes SplitLr, which separates and adjusts the server-side learning rate and the client-side learning rate to ensure the scalability of Parallel SL, and GradMix, which can obtain a bandwidth gain by averaging the gradient at the split-layer. SGLR, composed of SplitLr and GradMix, guarantees scalability and low latency, but is less accurate.
II. SL ARCHITECTURE FUNDAMENTALS
Our main framework, SL, can be largely divided into Vanilla and Parallel SL according to the server's weight update process. All these SL methods aim to train the same neural network composed of layers. These layers are divided into two chunks, where the network architecture of the lower one (the lower model segment) is allocated to each of multiple clients. The server stores the remaining upper layers (the upper model segment), generally occupying more of the entire network than the lower model segment.
In this architecture, clients, whose set is denoted as C, participate in the training process, and they store the network up to the (k−1)th layer, such that the ith client has the ith lower model segment denoted as w_{c,i} = [w^1_{c,i}, ..., w^{k−1}_{c,i}]^T for all i. Accordingly, a pair of a client and the connected server composes an entire network, w = [w_{c,i}, w_s]^T, where the weight w^{k−1}_{c,i} of the split-layer connects the clients and the server. Here, we denote F as a representation of running the forward path with the owned weight parameters and data set. The training data set D is distributed to the clients; therefore each client i contains |D_i| samples, where ∪_{i∈C} D_i = D, and D_i is composed of raw data x_i and its corresponding ground truth y_i randomly selected in batches of size |B_i|. The detailed training process of each method is described below.
A. Vanilla SL
Vanilla SL is the original form of SL, also known as sequential SL [3]. In this architecture, only one client is active at a time to train the entire network in connection with the server.
FP: Each ith client is selected sequentially from the set of all clients; at the bth iteration, it generates a mini-batch B_{i,b} from its own data and produces smashed data from the input data, s_{i,b} = F(x_{i,b}; w_{c,i}). Then the client sends a tuple of the smashed data and one-hot encoded labels, (s_{i,b}, y_{i,b}), to the server, and the server runs FP through the upper model segment and produces the output via activation as ŷ_{i,b} = F(s_{i,b}; w_s). Usually in classification tasks, the softmax function is used for activation. Finally, the server calculates the cross-entropy loss of the final prediction with the corresponding label, where CEloss(p, q) = −Σ p log q.
BP & Model Transition: To minimize the loss L_i, the server updates itself with the calculated gradient of the upper model segment and backpropagates down to the gradient of the split-layer; the gradient of each layer on the server side follows the BP chain rule, as given in (1). The server sends the split-layer gradient g^k_s to the ith client, while updating its upper model segment as w_s ← w_s − η G^{VSL}_s, where η denotes the learning rate. Since the clients act one by one in order, the server also updates at every sequential step, jointly with each client.
After receiving the split-layer gradient from the server, the ith client backpropagates through its lower model segment by calculating gradients starting from g^{k−1}_{c,i}, as given in (2). Then the client updates its lower model segment with its learning rate, w_{c,i} ← w_{c,i} − η G^{VSL}_{c,i}. After the update of the ith client finishes, its model weights w_{c,i} are sent to the (i + 1)th client in the next order, as expressed in (3). Repeatedly, the (i + 1)th client can update the received lower model segment by iterating training with its own data set D_{i+1}.
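To make the round concrete, the following is a minimal PyTorch-style sketch of one Vanilla SL iteration (client FP, server FP and BP, client BP, model transition); the split point, the layer sizes, and names such as client_net and server_net are illustrative assumptions rather than the paper's exact configuration:

    import torch
    import torch.nn as nn

    # Toy split of a small network: lower segment w_{c,i} on the client,
    # upper segment w_s on the server (sizes are illustrative).
    client_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
    server_net = nn.Sequential(nn.Linear(128, 10))
    opt_c = torch.optim.SGD(client_net.parameters(), lr=0.004, momentum=0.9)
    opt_s = torch.optim.SGD(server_net.parameters(), lr=0.004, momentum=0.9)

    def vanilla_sl_round(x, y):
        # Client FP: produce smashed data s_{i,b} = F(x_{i,b}; w_{c,i}) and "upload" it.
        smashed = client_net(x)
        s_upload = smashed.detach().requires_grad_(True)  # crosses the client-server boundary
        # Server FP + BP: cross-entropy loss, upper-segment update, split-layer gradient.
        loss = nn.functional.cross_entropy(server_net(s_upload), y)
        opt_s.zero_grad(); loss.backward(); opt_s.step()
        # Client BP: backpropagate the downloaded split-layer gradient.
        opt_c.zero_grad(); smashed.backward(s_upload.grad); opt_c.step()
        return loss.item()

    # Model transition: after client i finishes, client_net.state_dict() is sent
    # to client i+1, which loads it and repeats the round on its own data.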
Limitations of Vanilla SL:
In general, Vanilla SL results in high accuracy and a fast convergence rate, since the algorithm works as if one entire neural network were trained with all the distributed data, with the help of its model transition. However, such a sequential architecture incurs considerable latency, since each client must wait for its turn until the former clients finish their updates. This delay increases proportionally with the number of clients. Moreover, additional communication overhead occurs, as each client sends the lower model segment to the next client after every training iteration finishes. This is inevitable whether the clients are directly connected or relayed through the server or a secondary client. These drawbacks, due to its own structural problem, motivate the need for a parallel implementation.
B. Parallel SL
In this structure, the clients connect to the server at the same time. At the bth iteration, all clients run the forward pass with their own mini-batch of data and upload the smashed data along with the ground-truth labels, ∪_{i∈C} (s_{i,b}, y_{i,b}), to the server simultaneously.
BP With Global Gradient: After the server receives the smashed data from all clients in parallel, it runs the forward pass and computes the cross-entropy loss L_i for all i ∈ C. Then the server generates the ith gradient using each loss, as in (4), and sends the split-layer gradient g^k_{s,i} to the corresponding client. These steps up to this point are the same as in Vanilla SL. However, in Parallel SL, the server updates the upper model segment with the global gradient, which is obtained by weighted averaging of the ith gradients for all i, as in (5). Next, all clients backpropagate through their own lower model segments using the received gradient, which is actually the same operation as in Vanilla SL, as in (7).

Update Imbalance Problem of Parallel SL: Compared to Vanilla SL, Parallel SL can reduce latency and energy via efficient pipelining of computing and communication resources. However, the scalability of Parallel SL is questionable in the sense that accuracy does not always increase with the number of participating clients. This limited scalability is due to the update imbalance problem, which is divided into server-client imbalance and inter-client imbalance, as elaborated next. To illustrate the server-client update imbalance, we first recall Vanilla SL, where a single flow of a gradient is backpropagated through the server to each client, as seen in (1) and (2), respectively. In stark contrast, in Parallel SL, while each client experiences a single gradient flow in (7), the server encounters |C| gradients in (5), one for each client. In other words, due to the parallel structure sharing a single server among multiple clients, the model parameter update rates of the server and each client differ under Parallel SL. For the same reason as in such BP, the number of FP flows at the server differs from that of each client. Next, inter-client update imbalance is incurred by the non-identical model parameters of clients in Parallel SL, as opposed to the synchronized model parameters across clients in Vanilla SL shown in (3). Given different model parameters, even when the same split-layer gradient is backpropagated from the server to multiple clients, non-identical gradients are backpropagated through different clients. In short, Parallel SL has great potential for latency and energy reduction, yet is not scalable, warranting a redesign of both server-side and client-side operations to address the update imbalance problem inherent in Parallel SL.
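A short sketch of the Parallel SL server step may also help; the uniform weighting of the per-client gradients and all names are illustrative assumptions, not the paper's exact weighted average in (5):

    import torch
    import torch.nn as nn

    # Parallel SL server step: per-client losses yield per-client gradients;
    # the upper segment is updated with their (here uniform) average, and each
    # client gets back its own split-layer gradient g^k_{s,i}.
    def parallel_sl_server_step(server_net, opt_s, smashed_list, label_list):
        split_grads = []
        opt_s.zero_grad()
        n = len(smashed_list)
        for s_i, y_i in zip(smashed_list, label_list):
            s_i = s_i.detach().requires_grad_(True)
            loss_i = nn.functional.cross_entropy(server_net(s_i), y_i)
            (loss_i / n).backward()           # grad accumulation realizes the average
            split_grads.append(s_i.grad * n)  # undo the 1/n for the per-client gradient
        opt_s.step()                          # w_s <- w_s - eta * (global gradient)
        return split_grads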
III. MIX2SFL: A SOLUTION TO THE UPDATE IMBALANCE PROBLEM OF PARALLEL SL
To solve the update imbalance problem of Parallel SL, in this section we propose split federated learning with two-way mixup (Mix2SFL), which consists of SmashMix and GradMix for server-client imbalance and SFL for inter-client imbalance as shown in Fig. 2.
A. Balancing Server-Client Updates Via Mixing Smashed Data and Gradients
After each client uploads the smashed data and its label as in Parallel SL, Mix2SFL feeds the averaged smashed data to the server through SmashMix elaborated as follows.
1) SmashMix: At the bth iteration, the tuple of the ith client is mixed up with the tuple of the jth client (j ∈ C − {i}) as shown in (8):

    s̄_{i,j,b} = λ s_{i,b} + (1 − λ) s_{j,b},    l̄_{i,j,b} = λ y_{i,b} + (1 − λ) y_{j,b},    (8)

where λ denotes the mixing ratio that follows a uniform distribution (λ ∼ U(0, 1)).
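As a compact sketch of the SmashMix rule (8), including the sampling of mixup partners (their number, n_s, is introduced just below), the following assumes one-hot label tensors and is not the authors' implementation:

    import torch

    # SmashMix: manifold mixup between smashed data uploaded by different clients.
    def smashmix(smashed, labels, n_s):
        # smashed: list of per-client activation tensors; labels: one-hot tensors
        mixed = []
        n_clients = len(smashed)
        for i in range(n_clients):
            perm = torch.randperm(n_clients).tolist()
            partners = [j for j in perm if j != i][:n_s]   # sample from C - {i}
            for j in partners:
                lam = torch.rand(1).item()                 # lambda ~ U(0, 1)
                s_mix = lam * smashed[i] + (1 - lam) * smashed[j]
                l_mix = lam * labels[i] + (1 - lam) * labels[j]
                # detached from the client graph, so the extra gradients stay server-side
                mixed.append((s_mix.detach(), l_mix))
        return mixed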
For each ith client, this process can be repeated up to |C| − 1 times, which implies that a total of |C| gradients can be generated via server-side BP for SmashMix: the gradient, denoted G^{SMU}_{s,i} (= G^{PSL}_{s,i}), obtained through the client's own activation s_{i,b}, and |C| − 1 gradients, denoted G^{SMU}_{s,i,j}, obtained via the 1-to-1 manifold mixup of (8) with the jth client. Hereafter, the sample obtained through SmashMix, denoted by s̄_{i,j,b}, is coined mixed-up smashed data, whose corresponding label is l̄_{i,j,b}, and the number of them generated per client is denoted by n_s. When the set acquired by sampling n_s elements without replacement from C − {i} is denoted by C^{n_s}_i, the server-side weight update formula for Mix2SFL through SmashMix is given in (9). Then, for a client set C′ ⊆ C, Mix2SFL provides the averaged split-layer gradient through GradMix, detailed as follows. Note that the additional gradients through SmashMix, denoted by G^{SMU}_{s,i,j} in (9), are detached from the lower model segment and thereby do not affect the split-layer gradient or the client-side weight update.
2) GradMix: Before sending the gradient, the server averages the split-layer gradients only over the clients belonging to the set C′, thereby enabling them to download the averaged split-layer gradient ḡ^{k−1}_{c,i}, which is backpropagated as in (10). The clients corresponding to the remaining C − C′ download the same gradient as in Parallel SL, given in (7).
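A minimal sketch of GradMix follows, assuming the split-layer gradients have already been computed per client (the dictionary layout is our assumption):

    import torch

    # GradMix: clients in C' download one averaged split-layer gradient (broadcast);
    # the remaining clients receive their individual gradients as in Parallel SL.
    def gradmix(split_grads, c_prime):
        # split_grads: {client_id: g^k_{s,i}}; c_prime: set of ids sharing the average
        g_bar = torch.stack([split_grads[i] for i in c_prime]).mean(dim=0)
        return {i: (g_bar if i in c_prime else g) for i, g in split_grads.items()}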
By doing so, Mix2SFL successfully supplies the averaged smashed data and the averaged split-layer gradient, through SmashMix and GradMix, to the server and the clients, respectively, solving the server-client update imbalance rooted in the imbalance of the FP and BP flows in Parallel SL. In addition, the effect of the SmashMix and GradMix hyperparameters on accuracy and communication efficiency is specified as follows.
Impact of n_s: Fig. 3 indicates the top-1 accuracy of SmashMix according to n_s as well as |C|. The first thing to note is that accuracy is not always an increasing function of n_s. While increasing n_s has a similar effect to increasing the batch size, accuracy can also have a concave curve with respect to batch size, as in [25]. Next, we can see the benefits of increasing |C|. Since the upper bound of n_s is |C| − 1, a larger |C| leads to a potential improvement in accuracy by allowing n_s to be searched in a wider range. Finally, the optimal value of n_s that maximizes accuracy varies for each |C|, which could be an interesting topic, but it is left for future work.
Impact of φ: Let φ, named the fraction, be the proportion of clients to which GradMix is applied among all clients: φ = |C′|/|C|. Then, Fig. 4 demonstrates how the performance of GradMix varies in terms of top-1 accuracy and the average downlink (DL) rate according to φ. Here, the average DL rate measures the per-client average of the data rate of the DL transmission in which the server transmits the gradient to the client. For simplicity, we ideally assume that the DL data rate R_DL is equal to its theoretical upper bound, the capacity C_DL, given by the Shannon formula C_DL = W log2(1 + SNR), where W and SNR represent the channel bandwidth and the signal-to-noise ratio (SNR) in the DL, respectively. First, in Fig. 4, the accuracy hardly fluctuates when φ changes from 0 to 1. On the other hand, the average DL rate increases exponentially as φ increases. Since the clients in C′ share the averaged split-layer gradient with the same value, GradMix enables broadcasting for the clients within C′ in the DL transmission. Consequently, the number of clients broadcasting through the shared bandwidth increases while the number of clients unicasting through the allocated orthogonal bandwidth decreases, resulting in an increase in the average DL rate. Combining these observations, a near-optimal strategy for φ is to fix it at 1, considering both accuracy and the DL transmission rate.
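The broadcast-versus-unicast effect on the average DL rate can be reproduced with a toy calculation; the bandwidth, SNR, and equal-split values below are our illustrative assumptions:

    import math

    # Average DL rate: broadcast clients (the fraction phi, i.e. C') share the full
    # band W, while unicast clients split the band equally among themselves.
    def average_dl_rate(phi, n_clients, W=10e6, snr=10.0):
        n_bc = round(phi * n_clients)           # clients receiving the broadcast
        n_uc = n_clients - n_bc                 # clients receiving unicast gradients
        rate_bc = W * math.log2(1 + snr)        # full-band Shannon capacity
        rate_uc = (W / max(n_uc, 1)) * math.log2(1 + snr)
        return (n_bc * rate_bc + n_uc * rate_uc) / n_clients

    # e.g. average_dl_rate(0.0, 100) ~ 0.35 Mb/s vs. average_dl_rate(1.0, 100) ~ 34.6 Mb/s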
B. Balancing Inter-Client Updates Via Averaging Client Models
After FP and BP, Mix2SFL unifies the lower model segments of the clients through SFL.
3) SFL: After the clients perform training for T_s iterations, the server receives all lower model segments from the clients and computes their weighted average to generate a global lower model segment w̄_c for the clients. Then the server broadcasts the global model to the clients, completing the aggregation process of Mix2SFL.
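A minimal sketch of this aggregation step follows, assuming FedAvg-style weighting by local data size (our reading of the weighted average):

    import torch

    # SFL aggregation: weighted average of the clients' lower model segments.
    def sfl_aggregate(client_states, n_samples):
        # client_states: list of state_dicts w_{c,i}; n_samples: list of |D_i|
        total = sum(n_samples)
        global_state = {}
        for key in client_states[0]:
            global_state[key] = sum(
                (n / total) * state[key] for state, n in zip(client_states, n_samples)
            )
        return global_state  # broadcast back to every client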
When recalling and re-expressing ḡ^{k−1}_{c,i} in (10) as (12), the last term implies the summation of the client-side gradients.
Assuming that w^{k−1}_{c,j} is the same for all j ∈ C (i.e., SFL) in (12), GradMix has the effect of taking a mixup of the entire gradient that is backpropagated to the client, whereas on its own it mixes up only the gradient at the split-layer. In other words, GradMix unifies the gradient that is backpropagated to the client when the SFL step is additionally applied, and as a result successfully solves the inter-client update imbalance.

Impact of T_s: While this aggregation step of SFL allows clients to share their information, melted into the model weights, with others, it also becomes a communication burden on the client side depending on the model size or the location of the split-layer. As a result, in Fig. 5, SFL with T_s = 1 achieves the highest accuracy while compromising communication efficiency due to the increase in the uplink (UL) communication cost B_UL, which is measured as the accumulated number of bits in the UL transmission.
By combining SmashMix, GradMix, and SFL in the Parallel SL framework, we design Mix2SFL, whose detailed operation is given in Algorithm 1. To resolve the update imbalance problem inherent in Parallel SL, in Mix2SFL, SmashMix in the FP supplies an averaged FP flow to the server in a combinatorial manner, and GradMix in the BP provides an averaged split-layer BP flow to the clients, aiming to alleviate the imbalance between the FP/BP flows. Furthermore, SFL averages the client-side BP flow together with GradMix, as in (12), thereby achieving synchronization of the FP and BP flows throughout the entire model.
In addition, to improve the communication efficiency of Mix2SFL, in the DL, GradMix increases the communication efficiency by allowing full-band utilization, resulting in a high data rate R_DL, and in the UL, SFL reduces the communication cost B_UL by increasing the model uploading period. Further, the high communication efficiency of Mix2SFL is experimentally verified through latency measurements under limited resources in Section IV.
IV. NUMERICAL EVALUATION
Starting from 1-by-1 combinations of the component technologies, this section evaluates the performance of Mix2SFL step by step compared to existing SL algorithms, including Vanilla SL and SFL. Additionally, for comparison, we use SplitLr from [13], whose core operation is to scale the server's learning rate separately from the client's. This behavior of SplitLr is similar to that of SmashMix in terms of propagating the same FP flow to the server multiple times instead of the mixed FP flow, and is detailed in Appendix A, available online. As performance metrics, we utilize top-1 accuracy, total communication rounds, latency, and information leakage.
There are 20 to 100 clients in our evaluation environment [11], [15], [26], [27], [28], and they store the distributed CIFAR-10 data set [29] and Fashion-MNIST data set [30]. We assume an IID data set environment, where each client contains 10% of the total data set: 5,000 samples for CIFAR-10 and 6,000 for Fashion-MNIST. Meanwhile, the neural network used as a basis is the LeNet-5 model [31]. The split-layer is located after two convolutional layers, each followed by a max-pooling layer. Therefore, the clients each act as a feature extractor containing 2,872 parameters, and the server stores three fully connected layers with ReLU activation, a total of 59,134 parameters. We train the network with the SGD optimizer with learning rate η = 0.004, momentum 0.9, and weight decay of 5e−4. We iterate training for 10,000 communication rounds, with batch size |B_i| = 64 for all i. The hyperparameters in Table I and Fig. 6 are as follows: n_s = |C|/5, φ = 0.5, T_s = 1, and α = 0.5. In addition, a list of notations is given in Appendix B, available online.
A. Scalability, Accuracy, and Convergence Speed
As a comparison group for Parallel SL in Tables I and II as well as Fig. 6, we utilize SFL as the baseline and compose one-to-one combinations with the other component techniques (SmashMix, SplitLr, GradMix). This is because SFL is the only existing Parallel SL algorithm that can solve, by default, the inter-client imbalance of the two imbalance problems. Table I shows the evaluation of top-1 accuracy as well as the total communication rounds. Here, the total communication rounds indicate the number of iterations until achieving the corresponding top-1 accuracy, which can be considered the convergence speed.
First, in terms of scalability, except for Vanilla SL, only the two Parallel SL combinations including SmashMix and SplitLr satisfy scalability, in the sense that top-1 accuracy increases with the number of clients |C|. The thing to note is that the case with SplitLr performs better than the case with SmashMix especially when the number of clients is small (less than or equal to 60); when the number of clients exceeds 60, the opposite holds. In the case of SmashMix, whose performance gain is heavily dependent on the number of mixups between smashed data uploaded by the clients, the gain may be marginal with a small number of clients, since the upper bound of n_s is determined by |C| (n_s ≤ |C| − 1). Therefore, the accuracy gain through SmashMix overtakes that of SplitLr when the number of clients increases. Regarding convergence speed, the combination SmashMix + SFL is superior to the other SL algorithms. While the lower model segment aggregation of SFL improves the convergence speed of the lower model segment only, SmashMix may improve the convergence speed of the upper model segment by providing more samples to the server side, and accordingly this speeds up the entire model's convergence. Furthermore, the larger the number of clients, the faster the model reaches its top performance.
For brief validation of SmashMix + SFL on different data sets, Fig. 6(b) displays the learning curve for the Fashion-MNIST data set when |C| = 100, while Fig. 6(a) shows it for the CIFAR-10 data set. Moreover, to figure out the impact of the data set's non-IIDness, we set the data distribution in Fig. 6(c) and (d) as a Dirichlet distribution [32] with Dirichlet concentration parameters α_D of 2 and 6, respectively. As a result, Fig. 6 demonstrates that the superiority of SmashMix + SFL in terms of accuracy and convergence speed holds for different data set types as well as data distributions. In addition, it is noteworthy that the small accuracy drop of Parallel SL compared to Vanilla SL shows the robustness of its parallelization effect against non-IIDness.
Going one step further, to assess the accuracy when communication efficiency is improved, Table II measures the accuracy of the two scalable and highly accurate combinations in Table I (SmashMix + SFL and SplitLr + SFL) by increasing T_s (φ = 0 by default) or by applying GradMix with φ > 0 while maintaining T_s = 1. First, both solutions show a drastic accuracy drop as T_s increases, exhibiting the accuracy-communication efficiency tradeoff. When combining GradMix as an alternative, the accuracy improves as φ increases, while the communication-efficiency improvement with increasing φ was already shown in Fig. 4. This improved accuracy-communication efficiency, as well as scalability, is particularly evident in SmashMix + GradMix + SFL (Mix2SFL), achieving the tri-fold goal, even with fast convergence, in the Parallel SL architecture by successfully matching the FP-BP flows under model synchronization.
B. Latency Analysis
In this subsection, we theoretically analyze the resulting latency of Mix2SFL by comparing it with various SL algorithms. Note that among the hyperparameters, n_s and α have little or no effect on latency. This means that the latency of Mix2SFL is determined by φ or T_s, among which T_s is fixed to 1 considering the accuracy drop shown in Fig. 5. Moreover, we add LocSFL [33] as a comparator for the latency analysis. In brief, LocSFL replaces the global gradient with the local gradient obtained via an auxiliary network with weight w_{a,i} attached to the client, so as to enable omitting DL communication.
Then, Table III derives the computation-communication latency of Mix2SFL as well as of Vanilla SL, SFL, and LocSFL. For simplicity, the computation time T_comp considers only the computing time of FP or BP on the client side. Moreover, when measuring the communication time, it is assumed that bandwidth is equally distributed to all clients, and channel fluctuations are not considered. Further, T_UL and T_DL are the unit times of UL and DL communication under full band utilization, respectively. The parameters used to measure the latency in Fig. 7 follow these definitions. In Fig. 7, minimum and maximum values exist only for Mix2SFL and SFL, whose latency varies according to the hyperparameters. First, it can be seen that the overall latency is especially large when T_comp : T_UL is 1 : 10. With the help of the parallelization of computing resources, the computational latency does not vary significantly even if the number of clients increases.
However, the communication latency can become a bottleneck, since all clients share limited bandwidth. This highlights the specialty of Mix2SFL: it can maintain low latency in a poor communication environment thanks to its use of the averaged split-layer gradient, which enables bandwidth sharing between clients. As a result, this leads to a low mean and variance in the latency of Mix2SFL. On the other hand, SFL has strengths in terms of minimum latency. In SFL, adjusting the weight aggregation period T_s can greatly reduce the model aggregation latency occupying a large portion of the communication latency, while compromising accuracy as in Fig. 5. Taken together, Mix2SFL is the best SL technique in terms of latency, considering its mean and variance simultaneously.
C. Information Leakage
Another important aspect of SL is its ability to preserve the privacy of client-side input data against the leakage that results from sending smashed data. In this subsection, we experimentally measure the privacy guarantee against the reconstruction attack for Mix2SFL's component technologies: SmashMix, GradMix, and SFL. For the measurement, we use a decoder, an upper model segment of a ResNet AutoEncoder pre-trained on the CIFAR-10 data set from PyTorch Lightning Bolts [34]. Here, the mean squared error (MSE) is used to measure the reconstruction loss between the input data and the output of the decoder, referred to as the reconstructed data. A large reconstruction loss in Table IV means that privacy is well protected, and vice versa.
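A sketch of how such a measurement can be set up follows; the decoder interface and names are illustrative, not the exact PyTorch Lightning Bolts pipeline:

    import torch
    import torch.nn as nn

    # Reconstruction attack probe: a decoder tries to invert observed smashed
    # data back to the input; a higher MSE means better-protected privacy.
    def reconstruction_loss(client_net, decoder, x):
        with torch.no_grad():
            smashed = client_net(x)      # what an eavesdropper would observe
            x_hat = decoder(smashed)     # attempted reconstruction of the input
        return nn.functional.mse_loss(x_hat, x).item()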
In Table IV, the reconstruction loss of GradMix increases as the fraction φ increases. That is to say, the decoder has more difficulty in restoring input data from the smashed data of clients as more participants share the averaged split-layer gradient. As the number of participants increases, the proportion of one's own gradient reflected in generating the averaged split-layer gradient is relatively reduced, and thus the privacy of that gradient is naturally protected. SmashMix shows a tendency similar to GradMix. This is because, although SmashMix occurs in the FP while GradMix occurs in the BP, both techniques are commonly centered on exchanging information (i.e., smashed data or split-layer gradients) between clients. Similar to the phenomenon in which GradMix's privacy is better preserved thanks to the "Hiding in the Crowd" effect [35] as φ increases, enlarging n_s strengthens SmashMix's privacy guarantee. In addition, Table IV demonstrates that indirectly influencing the distribution of smashed data by touching the averaging of weights has a weaker impact than directly averaging smashed data or gradients, leading to inconsistent results for T_s. Comparing all the algorithms above, GradMix with φ = 1 performs best in terms of privacy guarantee.
Furthermore, Fig. 8 shows examples of input data as well as the corresponding reconstructed data with respect to φ and T_s, visually confirming the aforementioned tendency of the reconstruction loss with respect to the hyperparameters. Moreover, the measurement of Mix2SFL's privacy guarantee under joint changes of the parameters is not explored yet and is left as future work.
Impacts of Mix2SFL and its Hyperparameters: Mix2SFL's SmashMix and GradMix correct the mismatch of the FP and BP flows between server and client, respectively, and SFL then solves the inter-client update imbalance. With respect to the hyperparameters, except for n_s, whose optimization is deferred to future work, fixing T_s = 1 and increasing φ in Mix2SFL is the optimal solution in all respects.
V. CONCLUSION
We investigated the parallel implementation of SL, which can simultaneously solve the excessive computation-communication load of FL and the large latency of Vanilla SL. Existing Parallel SL algorithms, however, had a limitation in that scalability is not guaranteed, and we pointed out that the unscalability of Parallel SL comes from its own structural problem, dubbed the update imbalance problem. We divided this problem into two parts, one for the imbalance of FP-BP flows between client and server and the other for the absence of model integration. We combinatorially utilized the existing component technologies of Parallel SL and proposed a novel SL framework coined Mix2SFL. In Mix2SFL, SmashMix and GradMix supply averaged FP and BP flows to the server and clients, respectively, and SFL unifies the gradient that is backpropagated to the clients. Numerical evaluation corroborated that Mix2SFL succeeds in ensuring scalability by resolving the update imbalance problem. Besides, the simulation results proved that the proposed algorithm outperforms existing SL techniques in terms of accuracy, convergence speed, latency, and privacy guarantee. As future work, in relation to the communication-efficiency aspect, we consider a study on a Parallel SL structure that adaptively controls the batch size in an environment in which the channel gain fluctuates. Research on the optimization of various hyperparameters and a convergence analysis of Mix2SFL could also be interesting topics. Lastly, as in [36], the robustness measurement for various privacy attacks other than the reconstruction attack is deferred to future work.
Fig. 2. Demonstration of the three component technologies of Mix2SFL.
Fig. 3. Top-1 accuracy of SmashMix w.r.t. the number n_s of mixed-up smashed data and the number |C| of participating clients.
Fig. 5. Top-1 accuracy and cumulative UL cost (in Mbit) of SFL w.r.t. the lower model aggregation interval T_s (in log scale).
Fig. 8. Visualization of input data and corresponding reconstructed data of GradMix (top) and SFL (bottom) depending on the hyperparameters φ and T_s, respectively.
TABLE I: Top-1 accuracy and total communication rounds of multiple SL algorithms.
TABLE II: Top-1 accuracy of combinations derived from the component technologies w.r.t. |C| and hyperparameters.
TABLE III: Latency comparison during T communication rounds of SL algorithms with n clients.
TABLE IV: Reconstruction loss of component technologies of Mix2SFL w.r.t. hyperparameters when |C| = 10. | 8,100.4 | 2024-06-01T00:00:00.000 | [
"Computer Science"
] |
Plant-parasitic nematodes respond to root exudate signals with host-specific gene expression patterns
Plant parasitic nematodes must be able to locate and feed from their host in order to survive. Here we show that Pratylenchus coffeae regulates the expression of selected cell-wall degrading enzyme genes relative to the abundance of substrate in root exudates, thereby tailoring gene expression for root entry into the immediate host. The concentration of cellulose or xylan within the exudate determined the level of β-1,4-endoglucanase (Pc-eng-1) and β-1,4-endoxylanase (Pc-xyl) upregulation, respectively. Treatment of P. coffeae with cellulose or xylan, or with root exudates deficient in cellulose or xylan, conferred a specific gene expression response of Pc-eng-1 or Pc-xyl, respectively, with no effect on the expression of another cell wall degrading enzyme gene, a pectate lyase (Pc-pel). RNA interference confirmed the importance of regulating these genes, as lowered transcript levels reduced root penetration by the nematode. Gene expression in this plant parasitic nematode is therefore influenced, in a host-specific manner, by cell wall components that are either secreted by the plant or released by degradation of root tissue. Transcriptional plasticity may have evolved as an adaptation for host recognition and increased root invasion by this polyphagous species.
Introduction

Plant pathogens must recognise and respond to host signals in order to survive, with root exudates particularly important for those that are soil-borne [1,2,3]. Root exudates contain up to 20% of the plant's photosynthetically fixed carbon in the form of sugars, amino acids, organic acids, proteins and carbohydrates, with the composition varying between plant species [4,5]. Plant parasitic nematodes are amongst the four most economically important groups of plant pathogens, causing >$80 billion of damage to crops globally each year [6]. They are principally root parasites, and root exudates play an important role in host-nematode interactions, inducing hatching and thrusting of the anterior, hollow stylet that nematodes use to penetrate plant cell walls and to feed [7,8,9]. Nematodes orientate to plant roots in response to chemical gradients (e.g. monosaccharides, carbon dioxide, volatile organic compounds and amino acids) provided by root exudates [7]. Plant hormones such as ethylene and auxin, and their signalling pathways, have been implicated in the attractiveness of roots to nematodes [10,11,12].
This group of pathogens contains species that are host specialists and others that are capable of parasitising many plant species [13]. Pratylenchus coffeae is a polyphagous, migratory endoparasitic nematode that uses its stylet to disrupt root tissue mechanically and invade host roots [14]. This process is facilitated by secretion of a range of enzymes, produced in the nematode's pharyngeal gland cells, which weaken the cell wall [15,16,17]. The nematode feeds from a cell by ingesting nutrient rich cytoplasm through its stylet, before entering that cell and proceeding to the next [18]. Intracellular migration of the nematode through the cortical tissue results in root necrosis and nutrient deficiency throughout the host whilst increasing susceptibility to secondary root pathogens [19].
The sedentary plant-parasitic nematode Meloidogyne incognita differentially expresses genes in response to Arabidopsis thaliana roots and root exudates compared to when not exposed to a host [20]. Likewise, expression of a β-1,4-endoglucanase in the foliar nematode Aphelenchoides fragariae decreases after a change of food source from plant to fungal culture [21]. This indicates that nematodes show a degree of transcriptional alteration when presented with a host; however, the effect of different host plants and their exudates on nematode responses is unexplored. There is a precedent for some other plant pathogens and pests altering gene expression in response to a particular host. Expression of genes responsible for metabolism, chemotaxis and protein secretion in Pseudomonas aeruginosa is significantly altered post-exposure to exudates from different varieties of Beta vulgaris [22]. Transcriptional plasticity of multigene clusters also underpins the ability of the peach potato aphid Myzus persicae to colonise diverse plant species rapidly [23]. Non-pathogenic symbionts can also respond to particular plant partners. For instance, arbuscular mycorrhizal fungi exhibit host-dependent expression of secreted proteins that alters symbiotic efficiency in Medicago truncatula, Nicotiana benthamiana and Allium schoenoprasum [24]. The fact that such transcriptional variability has been linked to the success of generalist pathogens led us to investigate the hypothesis that polyphagous plant-parasitic nematodes such as P. coffeae also have this ability.
RNA interference (RNAi) has been used to show that cell wall degrading enzymes have important roles in root penetration by migratory nematode species [17,25,26]. The inability to degrade one or more cell wall components, such as cellulose or xylan, results in failure to enter the root and death of the nematode. Consequently, as cell wall composition is known to vary between plant families, we focused on determining if expression of these enzymes in P. coffeae is modulated in response to different host species. Substantial differential expression of both a β-1,4-endoglucanase (Pc-eng-1) gene and a β-1,4-endoxylanase (Pc-xyl) gene occurred in P. coffeae post-host invasion and in response to exposure to exudates from host roots. Importantly, the magnitude of this response differed significantly between hosts. A positive correlation occurred between expression levels of these two genes and the level of cellulose and xylan in root exudates. RNAi-mediated loss of transcripts of either gene reduced establishment of this nematode in both potato and maize roots, thereby indicating that the correct regulation of these genes is important for parasitism.
Results

Host root exudates stimulate a stylet thrusting response
Exposure to root exudate is known to stimulate thrusting of the stylet in some plant parasitic nematodes [20], although the response of Pratylenchus spp. has not been reported. Pratylenchus coffeae has a wide host range encompassing both monocot and dicot plants from >30 different genera [14]. We therefore first tested if root exudates from five representative monocot and dicot plants from its host spectrum (coffee, banana, maize, carrot and potato) induce stylet thrusting. Incubation in root exudates of all five hosts stimulated a significant increase in stylet thrusting of mixed stages of P. coffeae, by 6-9 fold or more, relative to a very low frequency in water (Fig 1A). The proportion of nematodes showing any frequency of stylet thrusting also significantly increased and almost doubled for all root exudate treatments relative to water (Fig 1B; One-way ANOVA, SNK, P<0.001). All root exudates induced the same response regardless of plant identity. As a positive control, P. coffeae were incubated in 5 mM 5-hydroxytryptamine (5-HT), which is known to stimulate stylet thrusting in plant parasitic nematodes, including the related species Pratylenchus penetrans [27]. Exposure to this concentration of 5-HT induced an even higher rate of thrusting (Fig 1A; One-way ANOVA, SNK, P<0.001) but not a significant further increase in the high proportion of nematodes responding to root exudate (Fig 1B).
Pc-eng-1 and Pc-xyl are expressed in the pharyngeal gland cells throughout nematode development
Nematode stylet activity is an essential component of both invasion of and migration through roots. In order to determine if exposure to host root exudate also affected expression of P. coffeae genes involved in the invasion and migration process, we first characterised the spatial and temporal expression of two genes that encode plant cell wall degrading enzymes. A xylanase-encoding sequence (Pc-xyl) was identified by homology searching following de novo assembly of publicly available transcript data for P. coffeae. Expression of both this gene and a previously reported β-1,4-endoglucanase gene [16] (Pc-eng-1) was localised by in situ hybridisation to the secretory pharyngeal glands of mixed life stages (Fig 2B and 2C). No corresponding staining occurred when the negative control probes were used (Fig 2D and 2E).
Analysis by qRT-PCR confirmed that Pc-eng-1 and Pc-xyl are both expressed by all the nematode life stages studied. Both genes showed significantly increased expression as the nematode developed on carrot discs. The fold increase in Pc-eng-1 transcript relative to eggs was 6.22 ± 0.63 for combined adult males and females and 4.29 ± 1.11 for juveniles (Fig 2F). The pattern of expression for Pc-xyl was similar to that for Pc-eng-1 (Fig 2F). The level of expression did not differ significantly between males and females for either gene.
Expression of Pc-eng-1 and Pc-xyl is influenced by the host plant
We next tested if the host plant species influenced the expression of the two genes. Microscopic examination of nematode pools prior to RNA extraction revealed no gross differences in the relative abundance of life stages recovered from the different host roots. These populations of mixed life stages of P. coffeae showed significantly higher expression of Pc-eng-1 and Pc-xyl when extracted from roots of banana, carrot, coffee and maize than from roots of potato. Additional differences in expression were associated with host identity. Relative to nematodes from roots of potato, Pc-eng-1 expression was higher when recovered from banana and maize (14.76 ± 6.34 fold, 26.06 ± 7.09 fold) and highest when recovered from roots of carrot and coffee (242.86 ± 50.01 fold, 158.19 ± 43.02 fold) (Fig 3A). Pc-xyl expression for nematodes parasitising roots of maize was 165.09 ± 28.75 fold that for individuals from roots of potato. This increase was greater than the grand mean increase in Pc-xyl expression for individuals extracted from banana, carrot and coffee (11.76 ± 3.67 fold) compared to nematodes from roots of potato. Differential induction of gene expression was not dependent on contact of the nematodes with host roots. Exposure of nematodes cultured on the same host (carrot) to different root exudates after 48 h in water induced broadly similar expression profiles of Pc-eng-1 and Pc-xyl as root parasitism. Pc-eng-1 expression in mixed life stages of P. coffeae was significantly higher after incubation in any of the host exudates than in water (Fig 3B). The magnitude of this increased expression was highest for nematodes exposed to exudates of carrot or coffee. Those values were greater than for individuals in either banana or maize exudates, which in turn were significantly higher than that for nematodes exposed to potato root exudate (Fig 3B). This pattern of relative expression mirrored that observed for Pc-eng-1 when nematodes were extracted from the different host roots. The expression of Pc-xyl was also significantly higher for nematodes incubated in root exudates than in water, but the pattern of expression among host exudates did not match that for Pc-eng-1. Expression of Pc-xyl was highest for P. coffeae in maize exudate, similar but less upregulated for those in carrot or coffee exudates, lower when incubated in that of banana and least for nematodes in potato root exudate. This lowest level of induction of Pc-xyl by a host root exudate was still higher than for nematodes incubated in water only (Fig 3B).
Pc-eng-1 and Pc-xyl expression correlates with cellulose and xylan quantities exuded by plant roots
We hypothesised that the host-related expression levels of Pc-eng-1 and Pc-xyl might reflect relevant differences in root exudate composition. Therefore cellulose (the substrate for β-1,4-endoglucanase) and xylan (the substrate for β-1,4-endoxylanase) were quantified in each exudate. Overall, there was a significant linear relationship between the quantity of cellulose in the exudates and the expression level of Pc-eng-1 (P<0.05, R² = 0.89; Fig 4A). A linear relationship was also evident in the parallel experiment, with increasing amounts of xylan inducing significantly increased expression of Pc-xyl (P<0.01, R² = 0.988; Fig 4B). The rank correlation coefficient (Spearman test) for the comparison of the increased expression of the two genes in response to plant species was not significant, because carrot and maize exudates had different ranks for the concentrations of the two substrates (Fig 4A and 4B).
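A relationship of this kind can be checked with an ordinary least-squares fit. The sketch below pairs the Pc-eng-1 fold-changes quoted above with purely hypothetical cellulose concentrations (one per host), so the numerical output is illustrative only.

```python
import numpy as np
from scipy.stats import linregress

# Fold-changes for Pc-eng-1 quoted in the text (potato set to 1 as the reference);
# the cellulose concentrations are assumed placeholder values, one per host.
cellulose_ug_ml = np.array([0.5, 3.0, 5.0, 14.0, 18.0])     # hypothetical
eng1_fold = np.array([1.0, 14.76, 26.06, 158.19, 242.86])   # potato, banana, maize, coffee, carrot

fit = linregress(cellulose_ug_ml, eng1_fold)
print(f"R^2 = {fit.rvalue ** 2:.3f}, P = {fit.pvalue:.3g}")
```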
Mutant Arabidopsis plants deficient in either cellulose or xylan were used to provide further evidence that concentrations of these cell wall components in root exudate specifically regulate expression of Pc-eng-1 and Pc-xyl. Initial analysis confirmed that the reported cellulose deficiency of rsw1 mutant plant tissue [28] and the xylan deficiency of glz1 plants [29] were reflected in reduced accumulation of these molecules in root exudates (Fig 4C and 4D). Transcript abundance of both Pc-eng-1 and Pc-xyl increased relative to expression in water after exposure of nematodes to root exudate from wildtype Arabidopsis plants (Fig 4E). Expression of Pc-eng-1 was reduced significantly from this level (P<0.01; One-way ANOVA) only when the nematodes were exposed to exudates of the cellulose-deficient rsw1 mutant line. A similar specific effect was obtained for Pc-xyl expression when the root exudate was obtained from xylan-deficient glz1 mutant plants (P<0.01; One-way ANOVA). The specificity of the response was confirmed by analysing a pectate lyase gene (Pc-pel) and an additional Arabidopsis mutant (mur3). Pc-pel does not degrade or modify cellulose or xylan and was therefore predicted to be unresponsive to varying abundance of these two polysaccharides. The expression of Pc-pel was upregulated in response to Arabidopsis root exudate, but no differential responses were detected among the exudates from wildtype and mutant plants. The mur3 mutant is deficient in fucose and galactose side chains on the hemicellulose xyloglucan, thereby providing no changes to cellulose or xylan abundance but altering the cell wall structure. The expression of all three nematode genes studied was unaffected, compared to wildtype, when the exudate was from roots of mur3 plants.
Cellulose and xylan specifically upregulate expression of Pc-eng-1 and Pc-xyl, respectively
We next determined if pure solutions of cellulose or xylan could induce expression of the nematode genes. Exposure of batches of mixed stages of P. coffeae to a range of cellulose solutions resulted in a significant linear increase in expression of Pc-eng-1 but not of Pc-xyl or Pc-pel (Fig 5; P<0.05, R² = 0.975). A parallel experiment with a range of xylan concentrations resulted in a linear increase in Pc-xyl expression (P<0.05, R² = 0.955) but not of the other two genes.
RNAi of Pc-eng-1 and Pc-xyl reduces root invasion by P. coffeae
RNA interference was used to establish whether the induced expression of the P. coffeae endoglucanase and endoxylanase genes is important for successful invasion of host roots (Fig 6). Two different hosts were tested: potato was selected as its exudates contain the least cellulose and xylan, whilst maize exudate has the highest xylan content. In the absence of RNAi, maize roots proved to be the more readily invaded host; a significantly greater number of nematodes were present in maize than in potato roots after a 72 h period allowed for root invasion (Fig 6) (P<0.01; One-way ANOVA). Treatment of mixed stages of P. coffeae with a dsRNA solution specifically targeting Pc-eng-1 reduced expression of this gene by 76% (Fig 6A). The number of dsRNA-treated P. coffeae detected in maize or potato plants after access to their roots for 72 h was reduced significantly, by 62.4 ± 5.1 and 54.4 ± 4.8% respectively, relative to nematodes pre-incubated in buffer only (Fig 6C) (P<0.001 in both cases; SNK, One-way ANOVA). A dsRNA targeting Pc-xyl reduced expression of this gene by 79% (Fig 6B). Targeting this gene reduced nematode numbers in maize and potato roots by 68.2 ± 6.1 and 41.1 ± 6.7% respectively (Fig 6C) (P<0.001 in both cases; SNK, One-way ANOVA). A control dsRNA treatment targeting a gfp sequence not present in the nematodes had no effect on root invasion or on expression of either gene.
Discussion
The results establish that expression of two genes encoding cell wall modifying enzymes is upregulated in P. coffeae both post-invasion of roots and post-exposure to exudates from host roots. Furthermore, we demonstrate that this is a host-specific response; exposure to root exudates from different host plants confers a differential gene response in this plant-parasitic nematode. The five host plants tested could be divided into three significant groups with respect to their induction of expression of the β-1,4-endoglucanase gene Pc-eng-1: (coffee and carrot) > (maize and banana) > potato. These groupings were the same for expression of Pc-eng-1 in nematodes recovered from the host roots and nematodes exposed to the root exudates. Three groups were also established for Pc-xyl expression but with a different rank of maize > (banana, carrot and coffee) > potato. The host-specific abundance of both transcripts in response to root exudate was linearly related to the levels of cellulose and xylan respectively, with potato exudates having the lowest concentrations of both complex carbohydrates. The responses were established as specific, as expression of Pc-eng-1 increased with the concentration of cellulose but not xylan, with the converse effect when Pc-xyl expression was measured. The expression of a pectate lyase (Pc-pel) was unaltered by exposure to either of these two complex carbohydrates, presumably because the gene product exhibits no activity on either substrate. Pectate lyase catalyses the cleavage of unmethylated pectin, and the nematode enzymes are predicted to aid in softening of the cell wall middle lamella, so facilitating migration [30]. This gene does respond to the presence of a host and so may be regulated by other specific or general components of root exudates. Determining whether or not individual exudate components have specific or broad effects on nematode gene expression is relevant to understanding the mechanism behind the response in not just P. coffeae but other plant-parasitic nematodes that upregulate genes in response to root exudates [20]. There is no indication that these enzymes are released in sequential order by the nematode, as observed in fungi, which usually secrete pectin degrading enzymes first [31].

Fig 6. The dsRNA molecules reduced expression of their respective targets whilst gfp dsRNA (a gene that is absent from the nematode) had no effect (A and B). Expression is presented relative to that for control nematodes incubated in buffer only. Pc-eng-1 dsRNA and Pc-xyl dsRNA reduced the infection of P. coffeae in both potato and maize root systems (C). Values are means ± SEM (n = 6 pools of mixed stage P. coffeae) with different letters denoting significant differences between treatments at P<0.05 (One-way ANOVA, SNK test).
Use of mutant lines of A. thaliana, which is also a host for P. coffeae [32,33], confirmed the specificity of the effect as expression of Pc-eng-1 was reduced only when the nematode was exposed to exudate from a mutant deficient in cellulose [34] and Pc-xyl only when exposed to exudate from a mutant deficient in xylan [29]. Tissue from these mutants contains <50% of cellulose and xylan wild-type levels respectively [34,35]. A third mutant deficient in fucose and galactose sidechains of xyloglucan had no effect on the expression level of either gene, confirming that a variation in cell wall composition per se does not necessarily incur changes in expression in the nematode. Although Arabidopsis is a host, we and others have found that the rate of infection and reproduction of P. coffeae in Arabidopsis is unreliable and highly variable [33]. Concerns about potential pleiotropic effects of Arabidopsis cell wall mutants further discouraged inquiry regarding the capacity of P. coffeae to invade and reproduce in the mutant lines. The use of root exudates from additional mutant lines, coupled with a panel of nematode genes, provides the opportunity to further analyse exudate components that elicit specific or broad responses in the nematode. RNAi of either Pc-eng-1 or Pc-xyl reduced nematode invasion of maize and potato roots and established the importance of both these gene products for the penetration of both hosts.
The polysaccharide components of exudates form a mucilaginous layer along the root, often accumulating at the root tip [1]. These components are either secreted by root epidermal cells or released through their degradation [2,3]. Transfer of polysaccharides across the cell membrane has been suggested to occur through vesicular trafficking and/or ATP-binding cassette transporter proteins [3,36]. Although costly for the plant, continual communication with the rhizosphere is considered important in order to detect and respond to the presence of pathogens, symbionts and beneficial soil micro-organisms [1]. However, in this study we found that these compounds may also be reliable indicators of close proximity of roots and so induce preparation of P. coffeae for root invasion. Expression of Pc-eng-1 and Pc-xyl by all life stages studied is also appropriate as all stages of this nematode invade roots during development. The host-specific concentration of cellulose and xylan is detected by the nematode resulting in appropriate transcriptional shifts. This distinctive reaction to non-host specific complex carbohydrates may be an adaptation for plant invasion due to the polyphagous nature of P. coffeae. Transcriptional plasticity in response to different plant hosts is known to occur for the generalist aphid Myzus persicae [23] and may also be important for other polyphagous plant-feeding arthropods [37]. Our results extend this response to a generalist plant-feeding nematode, suggesting it may be a common adaptation to tailor gene expression to a particular host plant. It would be interesting to investigate which nematode species and genera are capable of such perception and determine if this relates to host generalisation rather than specialisation. Mitotic asexual species of Meloidogyne, root-knot nematodes, which cause complex adaptive changes in root cells to form feeding sites, also have wide host ranges. Their polyphagy is considered to relate to the plasticity afforded by their large, duplicated genomes [38]. This approach is an interesting contrast with that of Pratylenchus. This genus does not modify plant cells but achieves a wide host range while having the smallest genome of any nematode studied to date [39]. Attempting to establish in any plant root encountered seems to be a beneficial adaptation given the nematode's limited locomotory range in soil. The optimised expression of Pc-eng-1 and Pc-xyl seems likely to contribute to the success of generalist P. coffeae nematodes although environment-adjusted regulation of these genes may not play a dominant role in determining the actual host status of different plant types. Root invasion rates and subsequent success in feeding and reproduction are influenced by many factors associated with both the root and the nematode. The recent sequencing of the P. coffeae genome may enable genes involved in those aspects of the host/parasite interaction to be defined [39].
Direct chemoreception of carbohydrate polymers is not a commonly reported ability. Intracellular fluctuations in transcription factor binding due to external cellulose or xylan have been reported for filamentous fungi [40]. Transcription factors CLR-1 and XLR-1 in Neurospora crassa bind to promoters of genes encoding cellulases and xylanases, respectively, with binding enrichment observed for both when grown in cellulose or xylan conditions [41]. Similar responses may occur in plant-parasitic nematodes that regulate the differential expression of cell wall-degrading enzyme genes. However, it is unclear from the fungal studies whether or not the polymer itself is detected, or if the observed effects occur in response to the presence of breakdown products.
Plants can perceive cellulose-derived oligomers as damage-associated molecular patterns (DAMPs) as a means to survey cell wall integrity and then respond by activating a signalling cascade that leads to induction of defence-related genes [42]. Monosaccharides are also known to induce the upregulation of several genes, including an endoxylanase, in fungi [43,44]. Chemosensory detection of these breakdown products by P. coffeae in root exudates may result in the host-specific expression of Pc-eng-1 and Pc-xyl. Monosaccharides are present in root exudates [45] and influence nematode chemo-attraction and stylet activity [46]. In the field, such breakdown products could arise from the activities of soil microbes or be generated in proximity to the nematode through the action of its own secreted enzymes. Given that sterile solutions of cellulose and xylan elicited similar induction of gene expression as root exudate containing equivalent concentrations, it seems likely that soil microbes are not playing an important role in this case. Both Pc-eng-1 and Pc-xyl were expressed at detectable basal levels when P. coffeae was maintained in water, and the nematodes exhibited a low rate of stylet thrusting in these conditions. This activity might supply sufficient amounts of the enzymes to release soluble inducers from the carbohydrate polymers associated with the roots, as proposed for fungi [47]. As for other typical β-1,4-endoglucanases, Meloidogyne incognita ENG-1 has been shown to cleave cellulose into glucose dimers/trimers rather than monosaccharides [48]. The conservation of GH5 cellulases suggests that the enzymes of other Clade 12 nematode species, such as P. coffeae, likely have similar activity [49]. The breakdown of these small glucose chains requires β-glucosidase, which we have not identified as being encoded in the genome or transcriptome data for P. coffeae. This suggests a system based on detection of either the cellulose polysaccharide or the short chain cellobiose/cellotriose breakdown products, rather than the monosaccharides. A parallel effect is known for filamentous fungi, which do not require the breakdown of oligosaccharides into glucose monomers for induction of β-1,4-endoglucanase genes [50]. These data present new insights into pathogen detection of carbohydrate polymers and its importance in the parasitism of an economically important nematode species.
Materials and methods

Plant culture and exudate collection
Banana (Musa acuminata), coffee (Coffea arabica) and maize (Zea mays) plants were grown in 50:50 sand/loam mix in a glasshouse at 23-25˚C with supplementary lighting to provide 16:8 h light:dark conditions. Carrot (Daucus carota) and potato (Solanum tuberosum var. Désirée) were grown similarly at 19-22˚C. Arabidopsis thaliana (Col-0, rsw1-1 (NASC ID: N6554), glz1 (NASC ID: N16279) and mur3 (NASC ID: N8566)) were grown at 28˚C on ½ strength Murashige and Skoog medium containing 1% sucrose. For exudate collection, roots were washed, separated intact from above ground tissue and soaked in water (80 g/L) in darkness for 24 h at 4˚C. Root exudates were then filter sterilised (0.22 μm) and stored at 4˚C. For RNAi experiments maize and potato plants were grown in CYG growth pouches (Mega International, USA) for 8 days at 22˚C before infection.
Nematode culture and plant infection
A population of P. coffeae was maintained on sterile carrot discs at 26˚C for use in assays described below and to provide inoculum for infection of different host plants. Mixed life stages were collected by washing the discs with sterile tap water. Batches of 500 nematodes were introduced into the soil to a depth of 2 cm around the stem of each host plant for infection. Roots were harvested after eight weeks and washed to remove soil. Roots were then immediately placed in a misting chamber where the spray of water stimulated the movement of nematodes out of the root [51]. After six hours the nematodes were collected in water from the chamber.
Stylet thrusting assay
Groups of 100 mixed life stage P. coffeae nematodes collected from carrot discs were soaked in 100 μl of either sterile water, 5 mM 5-hydroxytryptamine (5-HT) or a root exudate for 1 h [20]. Ten nematodes were observed at a magnification of 250x and the stylet thrusts of each nematode were counted for 30 s, in triplicate. This protocol was replicated with three independently collected exudates to account for possible variation between collections.
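For the statistical comparison used throughout this study (one-way ANOVA followed by a Student-Newman-Keuls post hoc test), a minimal sketch with invented thrust counts might look as follows. SNK itself is not available in SciPy, so a dedicated post hoc package would be needed for that step.

```python
from scipy.stats import f_oneway

# Invented thrust counts per 30 s for ten nematodes per treatment (illustrative only)
water = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]
root_exudate = [8, 11, 9, 7, 10, 12, 8, 9, 11, 10]
serotonin_5ht = [15, 18, 14, 17, 16, 19, 15, 16, 18, 17]

f_stat, p_value = f_oneway(water, root_exudate, serotonin_5ht)
print(f"F = {f_stat:.1f}, P = {p_value:.2e}")
# A Student-Newman-Keuls test would then rank the treatment means pairwise.
```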
Gene expression analysis
Nematodes extracted from roots. Nematodes were collected from the roots of banana, carrot, coffee, maize and potato plants using a misting chamber, as described above. Nematodes were frozen and total RNA was then extracted, as described later. This was replicated eight times for nematodes from each host plant.
Post-exposure to host root exudates. Mixed stage nematodes of P. coffeae were removed from carrot disc cultures and washed multiple times. Samples of 500 mixed stage nematodes were incubated in tap water for 48 h before exposure to 500 μl of either host root exudate or fresh tap water for 6 h. Total RNA was extracted immediately, as described later. This was carried out four times for each treatment.
Throughout nematode development. Groups of 100 eggs, juveniles, females and males were selected individually from a mixed nematode population reared on carrot discs, and total RNA was extracted to determine expression of genes at different nematode life stages. This was repeated in triplicate.
Treatment of nematodes with cellulose and xylan. Sets of 500 mixed stage P. coffeae were removed from carrot disc cultures and washed multiple times before incubating in tap water for 48 h. The nematodes were then treated with a range of cellulose (0-18 μg/ml) or xylan (0-1.2 μg/ml) solutions (Sigma-Aldrich, US) for 6 h. Concentration ranges were chosen based on respective polysaccharide concentrations detected in root exudates, as described later. This was carried out four times per concentration treatment and then total RNA was extracted.
RNA extraction, cDNA synthesis and gene expression analysis
Total RNA was prepared from nematode samples using an RNeasy Plant Mini Kit, including DNase treatment, according to the manufacturer's protocol (Qiagen, UK). First-strand cDNA was synthesised from 750 ng RNA using SuperScript II reverse transcriptase (Invitrogen, UK) and an Oligo(dT)17 primer (500 μg/ml) following the manufacturer's protocol. Analysis of gene expression was carried out using quantitative reverse transcription (qRT) PCR with Brilliant III Ultra-Fast SYBR Green Master Mix (Agilent Technologies, CA, USA). Cycle conditions were 95˚C for 30 s followed by 40 cycles of 5 s at 95˚C and 10 s at 60˚C. The sequence of Pc-eng-1 was obtained from GenBank (EU176871.1 [16]), whereas Pc-xyl (endoxylanase), Pc-pel (pectate lyase) and Pc-ef (elongation factor, a reference gene) sequences were obtained from genome and transcriptome sequence reads (PRJNA276478 [39], PRJNA79895 [52]) that were assembled using GS De Novo Assembler Software (Roche) (sequence data are given in S1). The genomic data were used to identify regions of Pc-eng-1, Pc-xyl, Pc-pel and Pc-ef suitable for the design of sequence-specific qRT-PCR primers (see S2 for primer sequences). Each primer pair had an amplification efficiency of 95-100%. The expression of Pc-ef was confirmed to be stable across treatments and life stages, validating its use as a reference gene (S3). The 2^(−ΔΔCt) method was used to calculate relative expression between control and experimental samples for at least three biological replicates, each with three technical replicates. One-way ANOVA with a Student-Newman-Keuls post hoc test was used to determine significant differences between means unless otherwise stated.
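As a worked example of the 2^(−ΔΔCt) calculation, the sketch below normalises a target gene to the Pc-ef reference and to a control (e.g. water-incubated) sample; the Ct values are invented for illustration.

```python
def fold_change_2ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ΔΔCt) method."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Invented Ct values: a target gene vs Pc-ef, in treated and control nematodes
print(fold_change_2ddct(22.1, 18.0, 26.5, 18.2))  # ~18-fold upregulation
```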
In situ hybridisation
Single-strand digoxigenin-labelled anti-sense DNA probes for Pc-eng-1 and Pc-xyl were synthesised with DIG DNA labelling mix (Roche, Germany) from cDNA fragments amplified using the qPCR primers for Pc-eng-1 and Pc-xyl (see S2 for primer sequences). Sense probe controls were constructed in separate reactions. These probes were used for in situ hybridisation to determine the spatial expression patterns of both genes [53]. Approximately 2000 P. coffeae were fixed in 2% paraformaldehyde in M9 buffer for 18 h at 4˚C followed by 4 h at 22˚C. Fixed nematodes were cut with a razor blade before washing with M9 buffer and proteinase-K treatment (0.5 mg/ml for 30 min at 22˚C). Nematodes were frozen and treated with methanol for 1 min followed by acetone for 1 min before rehydration in RNase-free water. Treated nematodes were hybridised with the probes overnight at 50˚C and then washed three times with 4x saline sodium citrate (SSC) and three times with 0.1x SSC/0.1% SDS at 50˚C. Nematodes were incubated at 22˚C in 1% blocking reagent in maleic acid buffer (Roche, Germany) for 30 min and labelled for 2 h with anti-digoxigenin-AP Fab fragments diluted 1:1000 in 1% blocking reagent. The nematodes were stained overnight at 4˚C with 337 μg/ml nitroblue tetrazolium and 175 μg/ml 5-bromo-4-chloro-3-indolyl phosphate. Stained nematodes were washed in 0.01% Tween-20 before viewing under a compound microscope (Olympus BH2). Images were captured with a QIcam camera (QImaging) and Q-Capture software.
Quantification of cellulose and xylan in root exudates
Cellulose was quantified in root exudates by a colorimetric assay [54]. One millilitre of root exudate was centrifuged at 10,000 rpm for 5 min and the supernatant removed. 0.3 ml acetic/nitric reagent (8:1:2, acetic acid:nitric acid:water) was added. Samples were incubated for 30 min at 90˚C and then centrifuged again before washing with 0.5 ml water. Samples were vortexed in 0.5 ml sulfuric acid (67%) before mixing with 1 ml of cold anthrone reagent (0.2% anthrone (Sigma-Aldrich, US) in sulfuric acid) and incubated at 90˚C for 16 min. Samples were then left to stand at 22˚C for 10 min before reading absorbance at 620 nm. Five biological replicates were measured in technical triplicate and the optical densities were used to generate cellulose equivalents using a standard curve.
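Converting A620 readings into cellulose equivalents via a standard curve amounts to inverting a linear fit; a sketch with invented calibration points is shown below.

```python
import numpy as np

# Invented calibration points: absorbance at 620 nm for known amounts of cellulose
std_ug = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])        # μg cellulose
std_a620 = np.array([0.02, 0.11, 0.26, 0.50, 0.74, 0.98])   # measured OD620

slope, intercept = np.polyfit(std_ug, std_a620, 1)           # A620 = slope*μg + intercept

def cellulose_equivalents(a620):
    return (a620 - intercept) / slope                        # invert the standard curve

print(f"{cellulose_equivalents(0.43):.1f} ug")               # ≈ 8.5 μg for an OD of 0.43
```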
The antibody LM11 was used in an enzyme-linked immunosorbent assay to detect xylan in the root exudates [55]. Each exudate was incubated overnight at 4˚C with PBS (Severn Biotech) to ensure efficient coating of microtitre plate wells. Samples were diluted fivefold to ensure final absorbance readings in the range 0.1-1.0 optical density. Three technical and four biological replicates of each exudate were assayed. Plates were then blocked with 5% w/v milk powder (Marvel) in PBS before incubation with LM11 in 5% blocking solution at 22˚C for 1 h. LM11 was then detected with an HRP-conjugated anti-rat secondary antibody (A9552; Sigma-Aldrich, US) at a 1:1000 dilution. Substrate (0.1 M sodium acetate buffer pH 6, 1% (v/v) tetramethylbenzidine, 0.006% (v/v) H2O2) was added to each well to detect antibody binding. The optical densities were used to generate xylan equivalents using a standard curve.
RNAi of Pc-eng-1 and Pc-xyl
cDNA from P. coffeae was used to amplify templates for production of dsRNA complementary to Pc-eng-1 and Pc-xyl (see S2 for primer sequences). The dsRNAs were designed to be sequence-specific, using the assembled genome data to avoid suppression of non-target sequences. A GFP sequence [56] was amplified to provide a control for a non-nematode gene [57]. The DNA fragments were cloned between the XbaI and XhoI sites of the vector L4440 (pPD129.36 [58]). Complementary single-stranded RNAs (ssRNAs) were synthesised from the T7 promoters in the L4440 constructs following independent digestion with XbaI and XhoI. The synthesis of ssRNAs and subsequent production of double-stranded RNA (dsRNA) used a MEGAscript T7 RNAi kit (Invitrogen), according to the manufacturer's instructions. A total of 500 mixed stage P. coffeae nematodes were treated with 100 μg/ml dsRNA in M9 buffer for 16 h at 25˚C. Control nematodes were treated with buffer only. Three hundred individuals were used for RNA extraction and cDNA synthesis, as described above, to assess the reduction in target gene expression by qPCR. One hundred of the treated nematodes were inoculated onto the roots of eight-day-old maize or potato plants grown in soil-free pouches, as previously described [59]. The 100 nematodes were distributed between five root tips on the root system. Each treatment was replicated six times. After 72 h, root tissue was stained with acid fuchsin [60] to visualise and count nematodes.
"Biology",
"Environmental Science"
] |
Polyol specificity of recombinant Arabidopsis thaliana sorbitol dehydrogenase studied by enzyme kinetics and in silico modeling
Polyols are enzymatically produced plant compounds which can act as compatible solutes during periods of abiotic stress. NAD+-dependent SORBITOL DEHYDROGENASE (SDH, E.C. 1.1.1.14) from Arabidopsis thaliana L. (AtSDH) is capable of oxidizing several polyols, including sorbitol, ribitol, and xylitol. In the present study, enzymatic assays using recombinant AtSDH demonstrated a higher specificity constant for xylitol than for sorbitol and ribitol, all of which are C2 (S) and C4 (R) polyols. Enzyme activity was reduced by preincubation with ethylenediaminetetraacetic acid, indicating a requirement for zinc ions. In humans, it has been proposed that sorbitol becomes part of a pentahedric coordination sphere of the catalytic zinc during the reaction mechanism. In order to determine the validity of this pentahedric coordination model in a plant SDH, homology modeling and Molecular Dynamics simulations of AtSDH ternary complexes with the three polyols were performed using crystal structures of human and Bemisia argentifolii (Genn.) (Hemiptera: Aleyrodidae) SDHs as scaffolds. The results indicate that differences in interaction with structural water molecules correlate very well with the observed enzymatic parameters, validate the proposed pentahedric coordination of the catalytic zinc ion in a plant SDH, and provide an explanation for why AtSDH shows a preference for polyols with C2 (S) and C4 (R) chirality.
INTRODUCTION
NAD+-dependent SORBITOL DEHYDROGENASE (SDH, E.C. 1.1.1.14) is an enzyme required for the oxidation of inert sorbitol into metabolically accessible fructose. Most SDH enzymes possess two zinc ions, one structural and the other catalytic. The mechanism proposed for the reaction of SDH with sorbitol requires that the oxygen atoms of C1 and C2 are coordinated by the catalytic zinc ion, which thereby becomes penta-coordinated. The C2 hydroxyl group is thus brought into close proximity with C4 of the nicotinamide, leading to a chain of events which ultimately results in the reduction of NAD+ to NADH and the formation of a C2 keto group in the fructose product (Pauly et al., 2003).
Sorbitol dehydrogenase activity has also been identified in non-sorbitol translocating species including soybean (Glycine max, Fabaceae; Kuo et al., 1990) and maize (Zea mays, Poaceae; Doehlert, 1987). As in the case of SDHs characterized from the Rosaceae family, purified maize SDH and a recombinant LeSDH from tomato (Solanum lycopersicum, Solanaceae) were also capable of oxidizing other polyols, albeit with lesser efficiency (Doehlert, 1987; Ohta et al., 2005). Recently, an SDH in the non-sorbitol translocating species Arabidopsis thaliana (Brassicaceae) has been identified and characterized (AtSDH, At5g51970; Nosarzewski et al., 2012; Aguayo et al., 2013). The use of mutants is enabling the physiological role of SDH to be elucidated in these species. For example, atsdh mutants suffered reduced growth when supplemented with sorbitol (Aguayo et al., 2013). Additionally, under short day conditions, soil-grown mutants withstood drought stress better than wild-type plants, as shown by their enhanced relative water content and greater survival rates once rewatering had been resumed (Aguayo et al., 2013). These observations suggest that AtSDH is involved in metabolizing polyols which act as osmoprotectants and accumulate during drought stress. Although sucrose and raffinose are the main phloem-translocated carbon sources in Arabidopsis (Haritatos et al., 2000), metabolic profiling studies have detected many different polyols such as glycerol, erythritol, xylitol, ribitol, mannitol, and sorbitol in this species (Fiehn et al., 2000; Kaplan et al., 2004; Rizhsky et al., 2004; Bais et al., 2010; Ebert et al., 2010). Of the polyols tested in enzyme assays, those oxidized preferentially by recombinant His-AtSDH were sorbitol (100%), ribitol (98%), and xylitol (80%; Aguayo et al., 2013). These three polyols all possess the same S and R configurations at C-2 (S) and C-4 (R), and it is the C2 hydroxyl group which is oxidized during their conversion to fructose, ribulose, and xylulose, respectively. Molecules with different configurations at these two C-atoms were oxidized by recombinant His-AtSDH at a lower rate [L-arabitol (C-2 (S), C-4 (S); 59%) and D-mannitol (C-2 (R), C-4 (R); 32%)], suggesting that this configuration is key for optimal catalytic activity (Oura et al., 2000; Aguayo et al., 2013). Interestingly, SDHs biochemically characterized from non-sorbitol translocating species share the preference for sorbitol, whilst ribitol is oxidized at >60% of the efficiency of sorbitol (purified maize SDH, Doehlert, 1987; recombinant tomato LeSDH, Ohta et al., 2005). However, SDHs from Rosaceae species have a significantly lower ability to metabolize ribitol (<15% compared to sorbitol) in apple (Negm and Loescher, 1979; Yamaguchi et al., 1994), pear (Oura et al., 2000) and recombinant plum (Guo et al., 2012).
In animals, SDH forms part of the polyol pathway, a means of converting glucose to fructose via sorbitol (Jeffrey and Jornvall, 1983). Human SDH (HsSDH) oxidizes several polyols with similar relative efficiency, including sorbitol, xylitol, and ribitol (Maret and Auld, 1988). Several crystal structures of SDHs from different non-plant sources have been obtained. These include recombinant SDHs from silverleaf whitefly [Bemisia argentifolii (Genn.; Hemiptera: Aleyrodidae), PDB 1E3J, Banfield et al., 2001], human (PDB 1PL6, 1PL7, and 1PL8, Pauly et al., 2003) and Rhodobacter sphaeroides (PDB 1K2W, Philippsen et al., 2005). Of these structures, the one obtained from R. sphaeroides lacks zinc ions and substrates, BaSDH from whitefly contains both catalytic and structural zinc ions but no substrates, and HsSDH (1PL6) was crystallized in the presence of NAD+, the catalytic zinc, and the inhibitor CP-166,572, showing interactions expected to resemble those achieved by sorbitol. Thus, a catalytic mechanism was proposed whereby the catalytic zinc changes from a tetrahedric coordination in the absence of substrates to a pentahedric geometry in which hydroxyls 1 and 2 of sorbitol become part of the coordination sphere (Pauly et al., 2003). More recently, an SDH from the liver of sheep (Ovis aries) has been crystallized (PDB 3QE3, Yennawar et al., 2011) in the presence of catalytic zinc. In this structure, an acetate molecule is observed close to the coordination sphere of the zinc atom, and a glycerol molecule is bound through hydrogen bonds with arginine, tyrosine, and glutamic acid residues of the binding pocket. In sheep SDH, it was proposed that only hydroxyl 1 of sorbitol contributes to the penta-coordination of zinc, establishing hydrogen bonds with the above-mentioned residues. Of note is that no crystal structure has yet been reported for the complex of an SDH with sorbitol that would clarify the role of zinc coordination and the interactions with specific residues.
Homology modeling and Molecular Dynamic studies of the Arabidopsis enzyme could help to identify the key amino acid residues involved in substrate binding, and provide an explanation for the preference of the C-2 (S) and C-4 (R) configuration. Therefore, in order to understand the structural determinants of substrate specificity of AtSDH toward sorbitol, ribitol, and xylitol, the aim of this work was to correlate the kinetic performance of recombinant AtSDH toward these three substrates, with the dynamic behavior of their respective interactions observed in Molecular Dynamics simulations.
MATERIALS AND METHODS

EXPRESSION AND PURIFICATION OF RECOMBINANT HIS-AtSDH AND AtSDH
Arabidopsis thaliana sorbitol dehydrogenase fused at its N-terminus to a 6xHis tag (His-AtSDH; Aguayo et al., 2013) was expressed in vitro from the pEXP5-NT/TOPO vector using the Expressway Cell-Free expression system (Invitrogen) according to the manufacturer's instructions with minor modifications (1.5 μg of plasmid DNA per reaction; expression at 30°C for 6 h). For purification, 10 parallel in vitro reactions (250 μl each) were resuspended in binding buffer (50 mM Tris-HCl pH 8.2, 500 mM NaCl, 10 mM imidazole, 10% glycerol) and loaded onto a His-Spin Protein Miniprep (Zymo Research), and the column was washed with four volumes of binding buffer containing 50 mM imidazole. Bound proteins were eluted with binding buffer containing 250 mM imidazole, as described previously (Aguayo et al., 2013). The N-terminal His tag was then removed by adding 1 mg TEV protease to 5 mg His-AtSDH and incubating at 25°C for 2 h in three volumes of 50 mM Tris-HCl pH 8.2, 1 mM TCEP and 10% glycerol. The imidazole was removed from the TEV protease buffer at the end of the incubation by two cycles of dilution in four volumes of 50 mM Tris-HCl pH 8.2, 500 mM NaCl and 10% glycerol, followed by concentration through a Centricon column (10 kDa, Millipore; 15 min, 4500 × g). The concentrated protein mix (40-80 ng/μl in buffer without imidazole) was then loaded onto a His-Spin Protein Miniprep and the flow-through fraction (containing recombinant AtSDH) collected for further experiments. In multiple experiments, the recovery of recombinant AtSDH (1 mg) was approximately 20% of the recombinant His-AtSDH (5 mg) originally synthesized. The recombinant proteins were separated by SDS-PAGE, visualized by Coomassie staining and detected by immunoblot analysis using monoclonal anti-His (Sigma; to detect His-AtSDH) and anti-mouse alkaline phosphatase-conjugated secondary (Sigma) antisera.
ENZYMATIC ANALYSIS OF RECOMBINANT AtSDH
Dehydrogenase activity was determined spectrophotometrically by measuring the rate of change in absorbance at 340 nm due to NAD+ reduction at 25°C, using a Unicam spectrophotometer (model UV2). Reactions were initiated by adding purified recombinant His-AtSDH or AtSDH (1.2-1.5 μg) to a standard reaction mixture containing 100 mM Tris-HCl pH 9, 20 mM polyol and 1.36 mM NAD+ (as determined by enzymatic titration). In separate experiments, sorbitol, ribitol, xylitol, and NAD+ concentrations were varied in order to determine the respective kinetic parameters, using enzyme collected from at least three independent in vitro expression reactions and purifications. The initial velocity (v) was determined at the different substrate concentrations, [S]. In the case of sorbitol and ribitol, the Km was calculated by fitting to the Michaelis-Menten hyperbolic function, whereas a substrate inhibition term was included when fitting the xylitol data (Cornish-Bowden, 2012). All data were fitted using SigmaPlot (Systat Software, San Jose, CA, USA), which uses nonlinear regression with an iterative least squares algorithm for parameter estimation.
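A fit of this kind can be reproduced with nonlinear least squares. The sketch below applies the Michaelis-Menten equation to synthetic data built from the Km reported later in the text (0.96 mM for sorbitol) with an arbitrary Vmax; the substrate-inhibition form shown is one common variant and is assumed here, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, vmax, km):
    return vmax * S / (km + S)

def substrate_inhibition(S, vmax, km, ki):
    # v = Vmax*S / (Km + S*(1 + S/Ki)); would be used for a case such as xylitol
    return vmax * S / (km + S * (1.0 + S / ki))

# Synthetic sorbitol-like data: Km = 0.96 mM (reported), Vmax arbitrary, 3% noise
S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # mM
rng = np.random.default_rng(1)
v = michaelis_menten(S, 2.13, 0.96) * (1 + 0.03 * rng.standard_normal(S.size))

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, S, v, p0=[2.0, 1.0])
print(f"Vmax = {vmax_fit:.2f}, Km = {km_fit:.2f} mM")
```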
MOLECULAR DOCKING
The PDB file of the HsSDH structure (PDB 1PL6, Pauly et al., 2003) was modified by changing selenomethionine residues to methionine and selecting the highest-occupancy conformation of those methionine residues with multiple conformations. Additional preparation of the structure was performed using AutoDock Tools (Morris et al., 2009). A +2 charge was assigned to the structural and catalytic zinc atoms, and sorbitol, ribitol, and xylitol were positioned for flexible docking using the ideal conformations for these ligands from Ligand Depot (Feng et al., 2004). The chirality of the carbon atoms was confirmed against the ChEBI database (Hastings et al., 2013). The docking calculations were performed using AutoDock Vina with an exhaustiveness parameter of 250. The docking area was defined by a box (17 Å × 15 Å × 15 Å) centered on the location of the CP-166,572 inhibitor molecule present in the HsSDH structure (Pauly et al., 2003). Twenty conformations were generated for each polyol and ranked according to their binding energy; the best-ranked conformation consistent with the expected penta-coordination of the catalytic zinc and with proximity between C2 of the polyol and C4 of the nicotinamide moiety of NAD+ (where the hydride is transferred) was selected as the template for further modeling.
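In practice, a Vina run with the parameters given here can be scripted as below. The file names and box centre are placeholders (the true centre is the location of the CP-166,572 inhibitor in 1PL6); only the box size, exhaustiveness, and number of modes are taken from the text.

```python
import subprocess
import textwrap

# Placeholder coordinates and file names; the real box is centred on the
# CP-166,572 inhibitor of the prepared HsSDH structure (PDB 1PL6).
config = textwrap.dedent("""\
    receptor = hssdh_1pl6.pdbqt
    ligand = sorbitol.pdbqt
    center_x = 0.0
    center_y = 0.0
    center_z = 0.0
    size_x = 17
    size_y = 15
    size_z = 15
    exhaustiveness = 250
    num_modes = 20
""")
with open("vina.conf", "w") as fh:
    fh.write(config)

subprocess.run(["vina", "--config", "vina.conf",
                "--out", "sorbitol_docked.pdbqt"], check=True)
```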
IN SILICO MODELING
Two types of model were generated: (i) NAD+-bound AtSDH (without polyols) with two zinc atoms, one associated with the catalytic site and the other structural; and (ii) NAD+-bound AtSDH with the same two zinc atoms plus a polyol substrate (sorbitol, xylitol, or ribitol).
For the first model, the structure of HsSDH was used as the template for the active site containing NAD+ and the tetra-coordinated catalytic zinc (PDB 1PL8; Pauly et al., 2003). The second template was the structure of BaSDH (PDB 1E3J; Banfield et al., 2001), which contributed the structural zinc. In the case of the second model type, the polyol-bound form generated by docking of HsSDH (see Molecular Docking) was the template contributing the substrates and the catalytic zinc, whilst the structure of BaSDH contributed the structural zinc.
The alignment of the amino acid sequences of AtSDH, HsSDH, and BaSDH was performed using ClustalX 2.1 (Larkin et al., 2007), and the result served as the input for the generation of three-dimensional models using Modeller 9.11 (Sali and Blundell, 1993; Eswar et al., 2008). The resulting alignment showed that the first 18 amino acids at the N-terminus of AtSDH have no equivalent in the templates. Ten models were generated for each complex, employing the conjugate gradients and simulated-annealing molecular dynamics methods implemented in Modeller. The quality of the best model was assessed by determining its energy (ProsaII; Sippl, 1993) and its local sequence-structure correlation (Verify3D; Eisenberg et al., 1997). In the case of NAD+-bound AtSDH, the structure of the 18 amino acids at the N-terminus was predicted by Jpred3 (Cole et al., 2008) and then subjected to ab initio modeling using the GalaxyWeb server (Ko et al., 2012). The GalaxyLoop procedure (Park et al., 2011) was employed to refine the region between amino acids 1 and 18, using the PS1tbm scoring method. Five models were obtained and the best one was chosen according to the same evaluation criteria mentioned above. In the case of the enzyme-substrate complexes, the first 18 amino acids were not included in the final models (see Modeling NAD+-Bound AtSDH).
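A minimal Modeller script corresponding to this comparative-modelling step might look as follows (lowercase 9.x API). The alignment file name and sequence code are assumptions; this is a sketch, not the authors' exact script.

```python
# Sketch of the comparative-modelling step, assuming a PIR alignment 'atsdh.ali'
# derived from the ClustalX output.
from modeller import environ
from modeller.automodel import automodel

env = environ()
env.io.hetatm = True                      # keep NAD+ and zinc HETATM records
a = automodel(env,
              alnfile='atsdh.ali',        # alignment of AtSDH with both templates
              knowns=('1pl8', '1e3j'),    # HsSDH and BaSDH template codes
              sequence='atsdh')
a.starting_model = 1
a.ending_model = 10                       # ten models per complex, as in the text
a.make()
```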
MOLECULAR DYNAMICS SIMULATION
Ten nanosecond trajectories were simulated for the generated models of NAD + -bound AtSDH, and NAD + -bound AtSDH in complex with sorbitol, ribitol, or xylitol, using NAMD 2.8 (Phillips et al., 2005) and the force field AMBERff99SB (Hornak et al., 2006). Systems were prepared with Ambertools 1.5 (Case et al., 2010). In the case of the polyols, the parameters and topologies were generated by homology using Antechamber (Wang et al., 2006), whereas previously-described parameters and topologies were used in the case of NAD + (Ryde, 1995). Each system was simulated in a box of TIP3P waters with a pad of 13 Å in all directions and the overall charge of the system was neutralized using three Na + ions. Integration steps of 1 fs were used, and non-bound interactions were considered within a radius of 9 Å, with a switching function over 11 Å. For long range interactions, the Particle-Mesh Ewald model was employed (Darden et al., 1993). For each system, 100,000 steps of energy minimization were applied, followed by a gradual temperature increase to 300 K.
Unlike the tetrahedric coordination of zinc, to the best of our knowledge the parameters needed to simulate the pentahedric coordination of zinc are not defined in the force field used, or elsewhere. Therefore, harmonic restraints on the distances and angles between the ligand atoms around the catalytic (Cys36, His61, HO−, O1, and O2 of the polyols) and structural (Cys91, Cys94, Cys97, Cys105) zinc atoms were applied, according to the regular distances and coordination angles observed in crystallographic structures (Alberts et al., 1998). In addition, the Glu62 residue was set to its protonated form in order to prevent its tendency to interact with the zinc atom, which in turn disturbs the coordination geometry. In the case of NAD+-bound AtSDH, both the catalytic and structural zinc atoms were modeled with tetrahedric coordination. For these simulations, harmonic distance and angle restraints were applied to the four ligands involved in their coordination (Cys36, His61, HO−, and Glu62 for the catalytic zinc; Cys91, Cys94, Cys97, and Cys105 for the structural zinc). VMD 1.9 (Humphrey et al., 1996) was used to analyze trajectories. Hydrogen bonds were quantified using a cut-off distance of 4 Å, with an acceptor-hydrogen-donor angle greater than 120°. The radial pair distribution function was calculated between the polyols and the oxygen atoms of the water molecules within 3 Å of the protein. The Stamp tool from the MultiSeq package (Roberts et al., 2006) was used to perform structural superpositions between the docking complexes of HsSDH and the models of the AtSDH complexes after 100,000 minimization steps.
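The hydrogen-bond criterion stated here (4 Å cut-off, angle at the hydrogen greater than 120°) can also be applied outside VMD; a sketch using MDAnalysis is shown below, with file and residue names as placeholder assumptions.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

# Placeholder topology/trajectory files and polyol residue name
u = mda.Universe("atsdh.prmtop", "atsdh_10ns.dcd")
hbonds = HydrogenBondAnalysis(
    u,
    between=["resname SOR", "protein or resname WAT"],  # polyol vs protein/water
    d_a_cutoff=4.0,            # donor-acceptor distance cut-off (Å), as in the text
    d_h_a_angle_cutoff=120.0,  # angle criterion; MDAnalysis measures D-H-A at the H
)
hbonds.run()
print(len(hbonds.results.hbonds), "hydrogen bonds found over the trajectory")
```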
RESULTS AND DISCUSSION

EXPRESSION AND PURIFICATION OF RECOMBINANT HIS-AtSDH AND AtSDH
Previously, we showed that recombinant His-AtSDH is capable of oxidizing a variety of linear polyols, exhibiting the greatest specific activities with sorbitol, ribitol, and xylitol (Aguayo et al., 2013). In the present study, we chose to continue working with His-AtSDH expressed in vitro, as recombinant tagged versions expressed in Escherichia coli or Saccharomyces cerevisiae formed inclusion bodies with no enzymatic activity post-solubilization, or lost activity during the purification process, respectively (unpublished results). In order to determine the kinetic parameters of the recombinant form of the enzyme with these three substrates, recombinant His-AtSDH and AtSDH were expressed and purified as described in Materials and Methods. The increased migration in SDS-PAGE, coupled with the lack of cross-reactivity with the anti-His antisera in immunoblot assays, indicated that the His-tag had been completely removed from AtSDH by TEV protease treatment (Figures 1A,B; Supplementary Figure S1). Using sorbitol as substrate at a saturating NAD+ concentration (1.36 mM), the Km remained relatively unchanged by excision of the His-tag [His-AtSDH: 1.20 ± 0.16 mM, three replicates (Aguayo et al., 2013); AtSDH: 0.96 ± 0.07 mM, three replicates]. Such a minor effect on the affinity for sorbitol was also observed on removal of a Maltose Binding Protein tag from the N-terminus of purified recombinant SDH from plum (Guo et al., 2012). However, the turnover number of the Arabidopsis enzyme increased more than sixfold when the His-tag was removed [His-AtSDH: 0.33 ± 0.01 s⁻¹, three replicates (Aguayo et al., 2013); AtSDH: 2.13 ± 0.03 s⁻¹, three replicates]. The difference between these two versions is discussed below, and we proceeded with the kinetic characterization using recombinant AtSDH, as this form is a truer representation of the enzymatic parameters of the enzyme in planta.
POLYOL SPECIFICITY OF RECOMBINANT AtSDH
The Km values of recombinant AtSDH with sorbitol and ribitol as substrates were similar (Figure 2A; Table 1), as were the turnover numbers for all three substrates (Table 1) at 1.36 mM NAD+. These results confirm that AtSDH also acts as a ribitol dehydrogenase and a xylitol dehydrogenase, and are consistent with the observation that in long day conditions atsdh mutants possess elevated levels of sorbitol and ribitol (Nosarzewski et al., 2012; Aguayo et al., 2013). The apparent affinity of recombinant AtSDH for sorbitol (Km 0.96 mM) was very similar to that determined for HsSDH (Km 0.62 mM; Maret and Auld, 1988), lower than that observed in other non-Rosaceae species such as recombinant tomato SDH (Km 2.39 mM; Ohta et al., 2005) and purified maize SDH (Km 8.45 mM; Doehlert, 1987), and several orders of magnitude lower than that of recombinant or partially purified SDHs studied in Rosaceae species such as plum (Km 111.8 mM; Guo et al., 2012), Japanese pear (Km 96.4 mM; Oura et al., 2000), apple (Km 86 mM; Negm and Loescher, 1979) and peach (S0.5 43 mM; Hartman et al., 2014). Of the three polyols evaluated, recombinant AtSDH exhibits the highest specificity constant with xylitol, yet the specific activity with this substrate was lower than with sorbitol and ribitol when recombinant His-AtSDH was used (Aguayo et al., 2013). We believe that this can be attributed to the fact that the latter assays were performed at 2 mM NAD+ and 50 mM xylitol, concentrations which produce substrate inhibition, as shown in Figure 2B. Nevertheless, the Km of recombinant AtSDH with xylitol (0.27 mM) is very similar to that of HsSDH (0.22 mM; Maret and Auld, 1988) and substantially lower than with the other substrates, as also found in purified apple SDH (37 mM; Negm and Loescher, 1979), translating into a higher specificity constant with this 5-carbon molecule than with either sorbitol or ribitol (Figure 2A; Table 1). At xylitol concentrations greater than 5 mM in the presence of 1.36 mM NAD+, the specific activity was significantly inhibited, a phenomenon not observed with the other two polyol substrates. However, when substrate inhibition was considered in the fitting of the initial velocity data (not shown), the Ki was far beyond physiological concentrations of NAD+ (more than 100 mM). This property of recombinant AtSDH with xylitol was reduced at a lower NAD+ concentration, and absent at 34 μM NAD+ (Figure 2B). The phenomenon of substrate inhibition in oligomeric enzymes can originate from negative interaction between the active sites or the presence of allosteric sites (Johnson and Reinhart, 1992; Cabrera et al., 2008). In order to determine the quaternary structure of the plant enzyme, repeated attempts were made to obtain sufficiently concentrated recombinant AtSDH for gel filtration chromatography. However, these efforts were unsuccessful due to the propensity of the purified enzyme to precipitate when concentrated under the experimental conditions employed. Therefore, although it is not known whether AtSDH functions as an oligomer, other SDHs have a tetrameric quaternary structure (e.g., HsSDH, Pauly et al., 2003; sheep SDH, Yennawar et al., 2011). If AtSDH does function in a quaternary state, then the binding of NAD+/xylitol at one site could negatively affect the affinity for these substrates at others through long-range coupling across the oligomeric packing, thus leading to the inhibition of xylitol oxidation at higher NAD+ concentrations.
However, we consider that this phenomenon is more likely to be a side effect of the elevated NAD+ concentrations involved, which are not physiologically relevant in planta.
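One common way to model the behavior described above is the classic substrate-inhibition rate law, v = Vmax[S]/(Km + [S] + [S]²/Ki). The sketch below fits this form to placeholder rise-then-fall velocity data; it is not the exact fitting procedure or data used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def substrate_inhibition(s, vmax, km, ki):
    """Classic substrate-inhibition rate law:
    v = Vmax*[S] / (Km + [S] + [S]^2/Ki)."""
    return vmax * s / (km + s + s**2 / ki)

# Placeholder xylitol-like data showing a rise-then-fall velocity profile.
s = np.array([0.1, 0.3, 1.0, 3.0, 5.0, 10.0, 25.0, 50.0])
v = np.array([0.7, 1.6, 2.8, 3.2, 3.0, 2.4, 1.6, 1.0])

(vmax, km, ki), _ = curve_fit(substrate_inhibition, s, v,
                              p0=[4.0, 0.3, 10.0], maxfev=10000)
print(f"Km = {km:.2f} mM, Ki = {ki:.1f} mM")
# Comparing this fit against a plain Michaelis-Menten fit (e.g., by AIC or
# residual analysis) indicates whether the inhibition term is justified.
```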
Sorbitol dehydrogenase enzymes from most species harbor two zinc atoms, one structural and the other at the active site. In line with this, it has been shown in plants that the activity of a recombinant peach SDH increases dramatically if the bacterial cells are grown in the presence of zinc chloride (Hartman et al., 2014). Therefore, in order to determine whether AtSDH also possesses coordinated zinc ions, the enzyme was preincubated with varying concentrations of the divalent-ion chelator EDTA and then assayed for activity. The specific activity of AtSDH with sorbitol (20 mM) and NAD+ (0.34 mM) fell by 48% after preincubation with 1 mM EDTA for just 60 min, compared to the enzyme preincubated in the absence of EDTA, strongly indicating that zinc does indeed play a key role in the enzymatic activity of the plant enzyme.
MODELING NAD+-BOUND AtSDH
The full 364-amino acid sequence of AtSDH was used to perform a BLAST search (Altschul et al., 1990) of the PDB in order to obtain suitable templates. The first five hits obtained correspond to SDHs, as shown in Supplementary Table S1. In order to generate a structure of NAD+-bound AtSDH, two templates were chosen, both of which share >46% amino acid identity with the plant enzyme. Specifically, we used the structure of BaSDH (PDB 1E3J; Banfield et al., 2001), because it is the only SDH template present in the PDB that contains the structural zinc atom. We also used the HsSDH structure (PDB 1PL8; Pauly et al., 2003), due to the presence of NAD+ and the catalytic zinc. Sequence alignments of plant SDHs (Nosarzewski et al., 2012; Aguayo et al., 2013; Hartman et al., 2014) show that the AtSDH protein sequence possesses the four conserved Cys residues involved in binding the structural zinc. The first 18 amino acids of the N-terminus of AtSDH do not align with the sequences of either template. In an updated phylogenetic analysis of more than 40 known and putative SDHs from mono- and dicotyledonous species (Hartman et al., 2014), all except that from Triticum urartu possess an N-terminal extension compared to these non-plant templates. Therefore, in order to determine whether this region could be structured in plant SDHs, homology modeling and ab initio loop refinement of the N-terminal 18 amino acids of AtSDH were performed. As a first step, evaluation of the resulting model via ProsaII gave a Z-score within the range expected for a protein of 364 amino acid residues (−9.03), and Verify3D analysis gave a score of 93.7%, again indicating a good local correspondence between sequence and structure. In the second stage, the structure of these 18 amino acids was refined. A secondary structure prediction derived from the AtSDH sequence indicated a high probability for the formation of an α-helix at the N-terminus (Figure 3A). Five ab initio models were obtained, of which the best possessed a Z-score of −8.45, with 96.16% of the residues having a good Verify3D score. However, the residues at the N-terminus have an average ProsaII value of −0.21, compared to −1.17 for the rest of AtSDH, indicating that the structure formed by the 18 amino acids at the N-terminus is less favorable. Likewise, the Verify3D score of these residues is lower than that of the entire protein (0.28 vs. 0.49, respectively), indicating that the correspondence between sequence and structure is not good in this region. As shown in Figure 3B, the final structure predicted from the primary sequence is an α-helix.

FIGURE 3 | Modeling the structure of NAD+-bound AtSDH. (A) Secondary structure prediction of the N-terminus of AtSDH. The prediction was performed using Jpred3, together with the secondary structure observed after ab initio modeling by loop refinement (GalaxyWeb). (B) 3D representation of a monomer of NAD+-bound AtSDH (blue ribbons) obtained by homology modeling followed by ab initio modeling of the 18 amino acid residues at the N-terminus (blue ribbon enlarged within the dashed circle). The results of the ProsaII energy and Verify3D scores for the N-terminus and the rest of the protein are shown. The structural and catalytic zinc atoms are colored orange, NAD+ is colored brown and a water molecule coordinated by the catalytic zinc atom is shown in red. (C) RMSD graph of the N-terminus (blue line) and of NAD+-bound AtSDH without the N-terminus (black line) during a 10 ns simulation. Snapshots of modeled structures of the N-terminus are shown at different time points.
The stability of the generated model was analyzed by a 10 ns Molecular Dynamics simulation. The full 364-residue model of AtSDH is highly stable during the trajectory, with an RMSD of 2.5 ± 0.3 Å. However, the ab initio-modeled N-terminus is less stable; the RMSD of the first 18 amino acids is 5.6 ± 1.6 Å, whereas that of the remainder of the protein (346 amino acids) is just 2.21 ± 0.19 Å (Figure 3C). The N-terminus tends to unfold during the simulations; after 1 ns, the α-helix presents one turn less, and after 4 ns, the helix is completely unwound and does not re-form during the remainder of the 10 ns trajectory (Figure 3C). However, the immediately downstream secondary structure element, a β-strand, maintains its structure and position throughout the simulation.
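Per-region RMSD traces of this kind can be computed, for example, with MDAnalysis; the file names below are hypothetical stand-ins for the actual topology and trajectory, and the residue ranges follow the 18-residue N-terminus described above.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file names; any topology/trajectory pair readable by
# MDAnalysis will do.
u = mda.Universe("atsdh_model.pdb", "atsdh_10ns.dcd")

# RMSD of the whole backbone, plus extra selections for the N-terminal
# 18 residues and for the remainder of the chain.
r = rms.RMSD(u, select="backbone",
             groupselections=["backbone and resid 1-18",
                              "backbone and resid 19-364"])
r.run()

# Columns: frame, time, whole-backbone RMSD, then one column per extra
# selection (all in Angstrom, after least-squares fitting on 'select').
print(r.results.rmsd[-1])
```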
Given the predicted unpacking and loosening of the N-terminus of AtSDH, this zone of the polypeptide chain may indeed be less structured, or alternatively the model may not effectively capture the native configuration of the N-terminus. In addition, the N-terminal region is located approximately 28 Å from the active site, meaning it is unlikely to participate directly in substrate binding and/or catalysis. Considering that other SDHs function as tetramers (HsSDH, Pauly et al., 2003; sheep SDH, Yennawar et al., 2011; pear SDH, Oura et al., 2000; plum SDH, Guo et al., 2012; peach SDH, Hartman et al., 2014), the predicted AtSDH structure was superimposed onto the tetrameric structure of HsSDH (Pauly et al., 2003). The overlay shows that the N-terminus is unlikely to participate in interface interactions, nor is it close to the active sites of the neighboring subunits.
Considering the observed effects of the His-tag on enzymatic activity (see Expression and Purification of Recombinant His-AtSDH and AtSDH), it appears that this tag has unfavorable consequences for the enzyme. For example, it has previously been reported that the proximity of the His-tag to cysteine residues in a recombinant corticotropin-releasing factor receptor affected the formation of disulfide bridges (Klose et al., 2004). However, around 90% of the crystallized proteins whose structures are deposited in the PDB correspond to recombinant proteins (including both HsSDH and BaSDH), and of these recombinant proteins, 60% were purified by means of a His-tag (Carson et al., 2007). We therefore consider that removal of this tag produces a more faithful model of the native protein, and for these reasons, the first 18 amino acids of AtSDH were not included in the Molecular Dynamics simulations in the presence of the polyol substrates.
MODELING OF INTERACTIONS OF AtSDH TERNARY COMPLEXES WITH DIFFERENT POLYOLS
Since we adhere to the proposal of Pauly et al. (2003) regarding the role of sorbitol hydroxyls 1 and 2 in the coordination of the catalytic zinc, and given that recombinant AtSDH has kinetic properties similar to HsSDH (see Polyol Specificity of Recombinant AtSDH), we chose 1PL6, a structure of HsSDH, as the scaffold to prepare the coordinates of the template polyol complexes. Unlike in 1PL8, in 1PL6 the inhibitor molecule CP-166,572 is observed participating in the trigonal bipyramidal coordination of the catalytic zinc (Supplementary Table S1). The inhibition exerted by CP-166,572 is competitive and uncompetitive with respect to fructose and sorbitol, respectively (Pauly et al., 2003). Nevertheless, the catalytic mechanism proposed for HsSDH indicates that all three molecules occupy the same physical space, and notably the same two hydroxyl groups (1 and 2) of fructose and sorbitol coordinate the catalytic zinc (Pauly et al., 2003). Additionally, the NAD+ molecule in this structure was kept as the template for modeling this cofactor in AtSDH. We obtained the structures of HsSDH in complex with sorbitol, ribitol, or xylitol by molecular docking, using the site occupied by the inhibitor in 1PL6 as the docking space. Interestingly, most of the resulting conformations, including those with the lowest binding energies, present hydroxyls 1 and 2 of the polyols oriented toward the catalytic zinc. Figure 4 shows the best docking solution (see Materials and Methods) that was used as one of the templates for modeling the AtSDH complexes.
The evaluation of the resulting AtSDH models showed acceptable Z-scores within the values expected for the length of AtSDH (−9.11, −9.15, and −9.12 for the sorbitol, ribitol, and xylitol complexes, respectively; Sippl, 1993; https://prosa.services.came.sbg.ac.at/prosa_help.html). Verify3D analysis showed that more than 85, 91, and 88% of the residues in the sorbitol, ribitol, and xylitol complexes, respectively, present scores indicating good local correspondence between sequence and tertiary structure (Supplementary Figure S2).

FIGURE 5 | Hydrogen bond interactions between the AtSDH models and the polyols. The hydrogen bonds formed between AtSDH and sorbitol, ribitol, or xylitol were quantified every 25 ps during the trajectories (left-hand panels). The protein residues of the binding pocket identified in each case are represented in the right-hand panels, in which dashed blue lines mark those hydrogen bonds that were maintained for at least 1 ns. The interactions with the arginine guanidinium group (R292) exhibit bidentation, hence contributing two effective hydrogen bonds. The colors of the atoms are the same as described in Figure 3.
A protocol of energy minimization was applied to all the models to improve the packing of side chains. The coordination geometry of the catalytic zinc involved five ligand atoms: the sulfur atom of Cys36, the hydroxyls of C1 and C2 of the polyol, the Nε nitrogen of His61, and a structural water (Figure 4). In the subsequent molecular simulation analysis, an important consideration was the protonation state of Glu62. Preliminary simulations showed that the negatively charged carboxylate tends to interfere with the penta-coordinated ligands due to coulombic attraction to the positively charged zinc. The systems behaved with greater stability when Glu62 was in a protonated state. Additionally, to maintain consistency with the reaction mechanism proposed for HsSDH, in which water functions as a general base (Pauly et al., 2003), we deprotonated the structural water in the coordination sphere. We maintained these criteria during all subsequent minimization and molecular simulation steps.

FIGURE 6 | Water molecules present in the binding pocket of sorbitol, ribitol, and xylitol. The radial pair distribution function was calculated for the water molecules at different distances from the polyols in the active site of AtSDH (upper left panel). The remaining panels show the localization of the visually tracked structural water molecules which mediate the interaction between the residues of AtSDH and the respective polyols. The colors of the atoms are the same as those described in Figure 3.
We first analyzed the time course of direct hydrogen bonds between the polyols and the protein residues during 10 ns simulation trajectories. The greatest numbers of interactions were observed for xylitol, with an average of 5.4 ± 1.3 bonds, and sorbitol, with an average of 4.9 ± 0.9 bonds. In contrast, ribitol showed an average of 2.4 ± 0.9 hydrogen bonds with AtSDH. Figure 5 shows the hydrogen bonds formed with the polyols: in the case of sorbitol, hydroxyls 3, 5, and 6 interact with residues Phe111, Ser38, and Asp39, respectively; in the case of xylitol, hydroxyls 4 and 5 interact with Arg292 and Glu147, respectively; and in the case of ribitol, hydrogen bonds form rather transiently between hydroxyl groups 4 and 5 and residues Thr113, Tyr42, Asp39, and Ser38. Considering that the Km of recombinant AtSDH is substantially lower than that of other plant SDHs (see Expression and Purification of Recombinant His-AtSDH and AtSDH), evaluating in silico the conservation and orientation of these residues in characterized SDHs of plant origin, followed by site-directed mutagenesis studies, will be informative in determining their true relevance to the substrate specificity and performance of these enzymes.
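Frame-by-frame hydrogen-bond counts of this kind can be obtained, for instance, with the MDAnalysis hydrogen-bond analysis. The input files, the polyol residue name "SOR", and the geometric cutoffs below are assumptions for illustration, not the study's exact settings; depending on the topology, donor and acceptor selections may need to be given explicitly.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds.hbond_analysis import (
    HydrogenBondAnalysis,
)

# Hypothetical AMBER inputs; 'SOR' stands in for the bound polyol.
u = mda.Universe("atsdh_sorbitol.prmtop", "atsdh_sorbitol.nc")

hb = HydrogenBondAnalysis(
    universe=u,
    between=["protein", "resname SOR"],  # protein-polyol bonds only
    d_a_cutoff=3.5,          # donor-acceptor distance cutoff (Angstrom)
    d_h_a_angle_cutoff=150,  # donor-H-acceptor angle cutoff (degrees)
)
hb.run()

# One count per analyzed frame; the mean/std are comparable to the
# "4.9 +/- 0.9 bonds" style values quoted above.
counts = hb.count_by_time()
print(counts.mean(), counts.std())
```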
As shown in Figure 5, sorbitol and ribitol interact with residues of helix α1, whilst only xylitol forms its interactions with residues from the loop connecting helix α10 to strand β13. These divergent orientations originate from the different torsions adopted by the C2-C3 bond of the polyol structure. Upon measuring the dihedral angle along the C1-C2-C3-C4 bonds of the polyols (Figure 6), a value close to 180° is observed in the case of sorbitol, whilst in the case of ribitol it varies between 45° and 180° (with an average of 115°), and it remains around 300° in the case of xylitol.
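A per-frame C1-C2-C3-C4 torsion can be extracted along the trajectory as sketched below; the residue and atom names are assumptions about the topology's naming scheme, and the atom ordering returned by the selection should be checked against the actual files.

```python
import MDAnalysis as mda
import numpy as np

u = mda.Universe("atsdh_sorbitol.prmtop", "atsdh_sorbitol.nc")  # hypothetical

# Four polyol carbons; verify that the selection returns them in C1..C4
# order for the topology at hand.
torsion_atoms = u.select_atoms("resname SOR and name C1 C2 C3 C4")

angles = []
for ts in u.trajectory:
    angles.append(torsion_atoms.dihedral.value())  # degrees in (-180, 180]

# Map to [0, 360) so values cluster the way they are reported in the text
# (e.g., xylitol around 300 degrees).
angles = np.mod(np.array(angles), 360.0)
print(angles.mean(), angles.min(), angles.max())
```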
We also examined the interactions mediated by water molecules. The radial pair distribution function was calculated as the probability of encountering a stable water molecule at different radii from the substrate along the complete trajectory (Figure 6). In the case of sorbitol, this analysis identified one water molecule mediating an interaction between hydroxyl 4 and the NAD+ carboxamide. As such an interaction is not formed with a protein residue, it strongly restrains the orientation of hydroxyls 5 and 6, for which direct stable interactions with AtSDH are formed. For xylitol and ribitol, the distribution functions indicate the establishment of almost 3 and 4 water-mediated interactions, respectively. Residues Asp39 and Ser38 in helix α1 appear important for positioning these structural waters. In the case of ribitol, Glu147 and hydroxyl 1 are bridged by a structural water.
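A radial pair distribution function of this sort can be computed with, for example, MDAnalysis' InterRDF; the selections and file names below are illustrative assumptions rather than the study's exact inputs.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.rdf import InterRDF

u = mda.Universe("atsdh_sorbitol.prmtop", "atsdh_sorbitol.nc")  # hypothetical

polyol_O = u.select_atoms("resname SOR and name O*")  # polyol hydroxyl oxygens
water_O = u.select_atoms("resname WAT and name O")    # TIP3P water oxygens

# g(r) between polyol oxygens and water oxygens out to 8 Angstrom.
rdf = InterRDF(polyol_O, water_O, nbins=80, range=(0.0, 8.0))
rdf.run()

# A first peak near ~2.7-2.8 Angstrom is the usual signature of a
# hydrogen-bonded, i.e. "structural", water shell around the substrate.
print(rdf.results.bins[rdf.results.rdf.argmax()])
```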
It is interesting that ribitol presents the lowest number of hydrogen bonds and the greatest quantity of structural water molecules, correlating well with the poorer kinetic performance of this substrate. On the other hand, sorbitol maintains the most stable interaction with the protein, and no water molecules bridge the substrate and the enzyme via hydrogen bonds. Xylitol, in addition to maintaining a number of hydrogen bonds not significantly different from that observed for sorbitol, also interacts through a high number of water-mediated contacts with residues present in helix α1 of AtSDH. When considering the differences between their specificity constants, it appears that xylitol is the preferred substrate of the enzyme, given its higher number of direct and water-mediated hydrogen bond interactions. Sorbitol performs better in kcat than ribitol, probably due to greater stability in positioning the C2 carbon for hydride transfer to the nicotinamide moiety. These simulation results are also consistent with the preference of AtSDH for these polyol substrates with regard to the chirality of C2 and C4 (S and R, respectively). The opposite configuration at C2 would reorient the torsion angle along C1-C2-C3-C4 in order to maintain the bidentate zinc coordination, resulting in a complete loss of the observed interactions. The opposite configuration at C4 would disturb the pattern of hydrogen bonds with hydroxyls 4 and 5 in both xylitol and ribitol, and ablate the water-mediated interaction between sorbitol and the NAD+ nicotinamide. Indeed, experimental findings demonstrate that L-arabitol [C-2 (S), C-4 (S)] and D-mannitol [C-2 (R), C-4 (R)] are oxidized at 59 and 32%, respectively, of the rate of sorbitol by recombinant His-AtSDH (Aguayo et al., 2013).
Given these findings, and whilst these polyols have been detected in different organs of Arabidopsis (Fiehn et al., 2000; Kaplan et al., 2004; Rizhsky et al., 2004; Bais et al., 2010; Ebert et al., 2010), it would be of particular interest to determine the effective intracellular concentrations of sorbitol, ribitol, and xylitol in plants grown under standard and drought-stress conditions, in order to discern which substrates are oxidized by AtSDH in planta.
"Biology",
"Chemistry",
"Environmental Science"
] |
Fast and flexible design of novel proteins using graph neural networks
Protein structure and function are determined by the arrangement of the linear sequence of amino acids in 3D space. Despite substantial advances, precisely designing sequences that fold into a predetermined shape (the "protein design" problem) remains difficult. We show that a deep graph neural network, ProteinSolver, can solve protein design by phrasing it as a constraint satisfaction problem (CSP). To sidestep the considerable issue of optimizing the network architecture, we first develop a network that is able to accurately solve the related and straightforward problem of Sudoku puzzles. Recognizing that each protein design CSP has many solutions, we train this network on millions of real protein sequences corresponding to thousands of protein structures. We show that our method rapidly designs novel protein sequences, and we perform a variety of in silico and in vitro validations suggesting that our designed proteins adopt the predetermined structures. One Sentence Summary: A neural network optimized using Sudoku puzzles designs protein sequences that adopt predetermined structures.
Introduction
Protein structure and function emerge from the specific geometric arrangement of the linear sequence of amino acids, commonly referred to as a fold. Engineering novel protein sequences has a broad variety of uses, including academic research, industrial process engineering (1), and most notably, protein-based therapeutics, which are now a very important class of drugs (2,3).
However, despite extraordinary advances, designing a sequence from scratch to adopt a desired structure, referred to as the "inverse folding" or "protein design" problem, remains a challenging task.
Conventionally, a sampling technique such as Markov-chain Monte Carlo is used to generate sequences optimized with respect to a force field or statistical potential (3-5). Limitations of those methods include the relatively low accuracy of existing force fields (6,7) and the inability to sample more than a minuscule portion of the vast search space (the sequence space has size 20^N, N being the number of residues). While there have been successful approaches that screen many thousands of individual designs using in vitro selection techniques (8,9), those approaches remain reliant on labor-intensive experiments.
Here we overcome those limitations by combining a classic idea with a novel methodology. Filling a specific target structure with a new sequence can be formulated as a constraint satisfaction problem (CSP) where the goal is to assign amino acid labels to residues in a polymer chain such that the forces between interacting amino acids are favorable and compatible with the fold. To overcome previous difficulties with phrasing protein design as a CSP (10,11), we elucidate the rules governing the constraints using deep learning. Such methods have been applied to a vast diversity of fields with impressive results (12-14), partly because they can infer hitherto hidden patterns from sufficiently large training sets. For proteins, the set of different protein folds is only modestly large, with a few thousand superfamilies in CATH (15). Indeed, previous attempts at using deep learning approaches for protein design used structural features and thus only trained on relatively small datasets, achieving moderate success as yet without any experimental validation (16-20). However, the number of sequences that share these structural templates is many orders of magnitude larger (about 70,000,000 sequences map to the CATH superfamilies), reflecting the fact that the protein design problem is inherently underdetermined, with a relatively large solution space. Thus, a suitable deep neural network trained on these sequence-structure relationships could potentially outperform previous models in solving the protein design problem.
The distance matrix is commonly used to represent protein folds (21). It is an N×N matrix consisting of the distances between residues, optionally restricted to only interacting pairs of residues that are within a certain distance of one another. The distance matrix can be thought of as placing constraints on pairs of residues, such that the forces governing the interaction between those residues are not violated (e.g., interactions between residues with the same charge or divergent hydropathicities are usually not well tolerated). A given protein structure, corresponding to a single distance matrix, can be formed by many different homologous sequences, and those sequences all satisfy the constraints imposed by the distance matrix. Such solutions to this constraint satisfaction problem (CSP) are given to us by evolution and are available in sequence repositories such as Pfam (22) or Gene3D (23). While the rules of this CSP for a specific protein fold can be found by comparing sequences from one such repository and can be captured as Hidden Markov Models (HMMs) or position weight matrices (PWMs), often represented as sequence logos, it has thus far not been possible to deduce general rules: those that would connect any given protein fold or distance matrix with a set of sequences.
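As a minimal sketch of this representation, the snippet below builds a distance matrix and the list of contact pairs under a 12 Å cutoff (the cutoff given later in the Methods) from one coordinate per residue. Note this is a simplification: the paper's pipeline uses the shortest inter-residue distance over backbone and side-chain atoms, whereas toy single-point coordinates are used here for brevity.

```python
import numpy as np

def contact_map(coords: np.ndarray, cutoff: float = 12.0):
    """Pairwise Euclidean distance matrix from an (N, 3) coordinate array,
    plus the unique residue pairs closer than `cutoff` (Angstrom)."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))              # (N, N) distances
    i, j = np.where((dist < cutoff) & (dist > 0))
    pairs = [(a, b) for a, b in zip(i, j) if a < b]  # keep i < j only
    return dist, pairs

# Toy coordinates for a 5-residue chain; real inputs would come from a
# PDB parser (one representative point per residue).
coords = np.array([[0, 0, 0], [3.8, 0, 0], [7.6, 0, 0],
                   [11.4, 0, 0], [15.2, 0, 0]], dtype=float)
dist, pairs = contact_map(coords)
print(pairs)
```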
Here, we use a graph neural network, denoted ProteinSolver, to elucidate those rules. The graph in this case is made up of nodes, corresponding to amino acids, and edges between those nodes, corresponding to the spatial interactions between amino acids, as represented in the distance matrix.
The edges thus represent the constraints that are imposed on the node properties (amino acid types).
We show that a ProteinSolver network trained to elucidate the rules governing the CSP of protein folding by reconstructing masked sequences shows remarkable success in generating novel protein sequences for a predetermined fold. Previous approaches to protein design were hampered by the enormous computational complexity, in particular when taking into account backbone flexibility. Our approach sidesteps this problem and delivers plausible designs for a wide range of folds. We expect that it would also be able to generate sequences for completely novel imagined protein folds.
Furthermore, as a neural network approach, its evaluation is many orders of magnitude faster than classical approaches and should enable the exploration of vastly more potential backbones.
Finally, we present a web server which allows users to run a trained ProteinSolver model to generate sequences matching the geometries of their own reference proteins. We hope that this web server will lower the barrier to entry for protein design and will facilitate the generation of many novel proteins. The web server is freely available at: http://design.proteinsolver.org.

Fig. 1. Graph convolutional neural network used by ProteinSolver to assign node labels that satisfy the provided node and edge constraints. (A) ProteinSolver network architecture. (B) Training a ProteinSolver network to solve Sudoku puzzles. Node attributes encode the numbers provided in the starting Sudoku grid. Edge attributes encode the presence of constraints between pairs of nodes (i.e. that a given pair of nodes cannot be assigned the same number). (C) Training a ProteinSolver network to reconstruct protein sequences. Node attributes encode the identities of individual amino acids. Edge attributes encode Euclidean distances between amino acids and the relative positions of those amino acids along the amino acid chain.
Network architecture
As there had been little previous work on using neural networks to solve CSPs (24,25), we first had to devise a network architecture well-suited to this problem. To facilitate this search, we focused on designing a neural network capable of solving Sudoku puzzles, which is a well-defined CSP (25) for which predictions made by the network can easily be verified. We treat Sudoku puzzles as graphs having 81 nodes, corresponding to squares on the Sudoku grid, and 1701 edges, corresponding to pairs of nodes that cannot be assigned the same number (Fig. 1B). The node attributes correspond to the numbers entered into the squares, with an additional attribute to indicate that no number has been entered; the edge indices correspond to the 1701 pairs of nodes that are constrained such that they cannot have the same number; and the edge attributes are all the same value because all edges in the graph impose identical constraints. We generated 30 million solved Sudoku puzzles using the sugen program (26), which first generates a solved Sudoku grid using a backtracking grid-filler algorithm, and then randomly removes numbers from that grid until it generates a Sudoku puzzle with a unique solution at the requested difficulty level. Neural networks with different architectures were trained to reconstruct the missing numbers in the Sudoku grid by minimizing the cross-entropy loss between predicted and actual values. Throughout training, we tracked the accuracy that those networks achieve on the training dataset (Fig. 2A, blue line) and on the validation dataset (Fig. 2A, orange line), which contains 1000 puzzles that were excluded from the training dataset. After a broad scan over different neural network architectures that we conceived for this problem, we converged on the "ProteinSolver" graph neural network architecture presented in Fig. 1A. The inputs to the network are a set of node attributes and a set of edge attributes describing interactions between pairs of nodes. The node and edge attributes are embedded in a fixed-dimensional space using linear transformations or a multi-layer perceptron. The resulting node and edge attribute embeddings are passed through N residual edge convolution and aggregation (ECA) blocks. In the convolution step, we update the edge attributes using a modified version of the edge convolution layer (27), which takes as input a concatenation of node and edge attributes and returns an update to the edge attributes; in our work, this update function is a multi-layer perceptron, although other neural network architectures are possible. In the aggregation step, we update node attributes using an aggregation over transformed edge attributes incident on every node; in our work, this transformation is a learned linear transformation, although other neural network architectures, including attention layers (28), are possible.
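The Sudoku constraint graph can be enumerated directly; the sketch below builds the unordered "cannot share a digit" pairs. It yields 810 unordered pairs, so the 1701 figure quoted above is consistent with counting directed edges plus one self-loop per node, which is one plausible convention (an inference on our part, not stated in the text).

```python
import itertools

def sudoku_constraint_pairs():
    """Unordered pairs of cells (0..80) that may not share a digit:
    same row, same column, or same 3x3 box."""
    pairs = set()
    for a, b in itertools.combinations(range(81), 2):
        ra, ca = divmod(a, 9)
        rb, cb = divmod(b, 9)
        same_box = (ra // 3 == rb // 3) and (ca // 3 == cb // 3)
        if ra == rb or ca == cb or same_box:
            pairs.add((a, b))
    return pairs

pairs = sudoku_constraint_pairs()
print(len(pairs))      # 810 unordered constraint pairs
print(2 * len(pairs))  # 1620 directed edges; +81 self-loops would give 1701
```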
In the case of Sudoku, the fully-trained network with optimized hyperparameters correctly predicts 72% of the missing numbers in a single pass through the network, and close to 90% of the missing numbers if we pass the input through the network multiple times, each time adding as a known value the single prediction from the previous iteration in which the network is most confident (Fig. 2B). Similar accuracy is achieved on an independent test set containing puzzles from an online Sudoku puzzle provider (Fig. 2C).
Reconstructing and evaluating protein sequences
After optimizing the general network architecture for the well-defined problem of solving Sudoku puzzles, we applied a similar network to protein design, which is a less well-defined problem than Sudoku and for which the accuracy of predictions is more difficult to ascertain (Fig. 1C). We treat proteins as graphs, where nodes correspond to the individual amino acids and edges correspond to shortest distances between pairs of amino acids, considering only those pairs of amino acids that are within 12 Å of one another. The node attributes specify the amino acid, with an additional flag to indicate that the amino acid is not known, while the edge attributes include the shortest distance between each pair of amino acids in Cartesian space and the number of residues separating the pair of amino acids along the protein chain.
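A minimal sketch of how such a protein graph might be packed into a torch_geometric Data object is shown below. The attribute layout (amino acid index plus a mask token for nodes; [distance, sequence separation] for edges) follows the description above, but it is an illustration, not necessarily ProteinSolver's internal encoding.

```python
import torch
from torch_geometric.data import Data

NUM_AA = 20
MASK = NUM_AA  # extra index flagging an unknown residue

# Toy 4-residue protein with two residues masked.
node_attr = torch.tensor([3, MASK, 17, MASK])  # amino acid indices

# Edges as a (2, E) index tensor (both directions for an undirected graph);
# edge attributes = [distance in Angstrom, separation along the chain].
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 0, 3],
                           [1, 0, 2, 1, 3, 2, 3, 0]])
edge_attr = torch.tensor([[3.8, 1.0], [3.8, 1.0],
                          [3.8, 1.0], [3.8, 1.0],
                          [3.8, 1.0], [3.8, 1.0],
                          [9.5, 3.0], [9.5, 3.0]])

data = Data(x=node_attr, edge_index=edge_index, edge_attr=edge_attr)
print(data)
```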
We compiled a dataset of 72 million unique Gene3D domain sequences from UniParc (29) for which a structural template could be found in the PDB (30), and we trained the network by providing as input a partially-masked amino acid sequence together with the adjacency matrix adapted from the structural template, and minimizing the cross-entropy loss between network predictions and the identities of the masked amino acid residues (Fig. 1C). The training and validation accuracies achieved by the network reach a plateau after around 100 million training examples, with a training accuracy of ~22% and a validation accuracy of ~32% when half of the residues are masked in the starting sequence (Fig. 2D-E). The training accuracy is lower than the validation accuracy because, while the training dataset has no restriction on the similarity between sequences and structural templates, for the validation dataset we included only those sequences for which a structural template with at least 80% sequence identity to the query could be found. Reconstruction accuracy is considerably lower than for Sudoku (Fig. 2A-C), as expected: the Sudoku CSP has a single well-defined solution, so an accuracy approaching 1 is possible. By contrast, each protein adjacency matrix can be adopted by many different sequences, and the achieved accuracy of 30%-40% roughly corresponds to the common level of sequence identity within a protein fold (15). Evaluating our network by having it reconstruct sequences from our validation dataset, we observe a bimodal distribution of sequence identities between generated and reference sequences, with a smaller peak around 7% sequence identity and a larger peak around 37% sequence identity, corresponding to generated sequences about as similar to the reference sequences as other members of their family (see Fig. 2F). Note that a reconstruction accuracy substantially higher than what we achieve would likely be an artifact, as it is already common for sequences with the achieved level of sequence identity to adopt the same fold and thus fulfill the same CSP.
We next asked whether the score assigned by a trained network to single mutations can be predictive of whether those mutations are stabilizing or destabilizing. We speculated that a destabilizing mutation would also disrupt some of the constraints in the CSP and would thus be scored unfavourably by the graph neural network. We find that predictions made using ProteinSolver, which is trained solely to reconstruct protein sequences, show a significant correlation with experimentally measured changes in protein stability reported in Protherm (31) (Spearman ρ: 0.44; p < 0.001) and, in fact, show a stronger correlation than predictions made using Rosetta's fixbb and ddg_monomer protocols (Fig. 2G).
While predictions made using ProteinSolver show a weaker correlation than predictions made using Rosetta's cartesian_ddg protocol, both the cartesian_ddg protocol and the beta_nov16 energy function used by that protocol have been optimized with the Protherm dataset in view and may have inadvertently overfit to this data (32). Furthermore, Rosetta's cartesian_ddg protocol performs extensive structural relaxation and sampling around the site of the mutation and takes on the order of minutes to hours to evaluate a single mutation, while ProteinSolver can typically evaluate a mutation in under a second.
While the ProteinSolver network was not trained using any mutation data, we did not explicitly exclude the proteins in the Protherm dataset from our training dataset. To ascertain that the correlation between predictions made using ProteinSolver and the experimentally measured ΔΔG values is not biased by the presence of the wild-type sequences in our training dataset, we calculated the correlation between predictions made using ProteinSolver and the effect of mutations on the stability of a number of proteins designed de novo to have a contrived shape (9) (Fig. 2H). While none of those de novo designs appear in the ProteinSolver training dataset, ProteinSolver nevertheless achieves correlations similar to Rosetta's fixbb protocol for mutations in those proteins. Note that we did not make predictions for this dataset using Rosetta's ddg_monomer and cartesian_ddg protocols because of the heavy computational resources that would be required and because evaluating the effect of multi-residue mutations is not explicitly supported by those protocols.
Finally, in order to evaluate how well ProteinSolver can score entire protein sequences and prioritize them for subsequent experimental evaluation, we calculated the correlation between the scores assigned by ProteinSolver to complete novel proteins that have been designed de novo using Rosetta, and the stability of those proteins, as measured using a high-throughput sequencing approach (9) (Fig. 2I). We observe that, for most protein geometries and rounds of selection, the correlation between scores assigned by ProteinSolver and the stability of the proteins is similar to the correlation observed for scores produced by the Rosetta protocol used to generate the protein library. One exception is the round 4 library of the EEHEE designs, where Rosetta achieves a Spearman correlation of ~0.4 while ProteinSolver achieves a Spearman correlation of ~0.14, although it should be noted that ProteinSolver achieves a significantly higher correlation for the round 2 library of designs with the same geometry.
Fig. 2. (A) Training and validation accuracy of the ProteinSolver network being trained to solve Sudoku puzzles. (B-C) Accuracy achieved by the ProteinSolver network, trained to solve Sudoku puzzles, on the validation dataset (B), comprised of 1000 Sudoku puzzles generated in the same way as the training dataset, and on the test dataset (C), comprised of 30 Sudoku puzzles extracted from an online Sudoku puzzle provider. Predictions were made using either a single pass through the network (blue bars) or by running the network repeatedly, each time taking the single prediction in which the network is most confident (red bars). (D) Training and validation accuracy of the ProteinSolver network being trained to recover the identity of masked amino acid residues. During training, 50% of the amino acid residues in each input sequence were randomly masked as missing. (E) Accuracy achieved by the ProteinSolver network on the test dataset, comprised of 10,000 sequences and adjacency matrices of proteins possessing a different shape (Gene3D domain) than proteins in the training and validation datasets. In the case of blue bars, predictions were made using a single pass through the network, while in the case of red bars, predictions were made by running the network repeatedly, each time taking the single prediction in which the network is most confident. (F) Sequence identity between generated and reference sequences in cases where 0%, 50%, or 80% of the reference sequences are made available to the network. (G) Spearman correlation coefficients between experimentally measured changes in protein stability associated with mutation and predictions made using ProteinSolver (blue), Rosetta's fixbb protocol (orange), Rosetta's ddg_monomer protocol (green), and Rosetta's cartesian_ddg protocol (red). (H) Spearman correlation coefficients between changes in protein stability associated with mutation, measured using an enzyme digestion assay, and predictions made using ProteinSolver (blue) and Rosetta's fixbb protocol (orange). (I) Spearman correlation coefficient between the stability of proteins designed de novo using Rosetta to match a specific architecture, and predictions made using ProteinSolver (blue) and Rosetta's fixbb protocol (orange). HHH, HEEH, EHEE, and EEHEE denote proteins designed de novo using Rosetta to have a helix-helix-helix, helix-sheet-sheet-helix, sheet-helix-sheet-sheet, or sheet-sheet-helix-sheet-sheet architecture, respectively.
Generating new protein sequences with predefined geometries
Motivated by the observation that the ProteinSolver network is able to reconstruct protein sequences with a reasonable level of accuracy, and to assign probabilities to individual residues, as well as to entire proteins, that correlate well with experimental measurements of protein stability, we sought to use the network to generate entirely novel protein sequences for specific protein folds. To that end, we chose four protein folds that had been left out of the training set and that cover the breadth of the CATH hierarchy, and for each of those folds, we extracted a distance matrix from a protein structure representative of that fold. We designed new protein sequences matching those distance matrices by starting with an entirely empty or "masked" protein sequence and, to each of the positions in that sequence, iteratively assigning residues by sampling from the residue probability distributions defined by the network (see Methods). This corresponds to de novo protein design: designing novel sequences for a given fold.
In total, we generated over 600,000 sequences for each of the four selected folds, including serum albumin (mainly α; Fig. 3). Sequence profiles of the generated sequences show that the network has a clear preference for specific residues at some positions, and in cases where several residues are equally preferred, those residues tend to have similar chemical properties (Fig. 3D; Supp. Fig. S1D, S2D, S3D). As no information about the sequences was provided to the network, this suggests that the network is able to learn, from the training dataset, the features pertinent to mapping structural information to sequences.
For each of the four selected folds, we selected the top 20,000 (10%) sequences, as scored by our network, and performed further computational validation. First, we used PSIPRED (33) to predict the secondary structure of each of the generated sequences (Fig. 3E; Supp. Fig. S1E, S2E, S3E), and we found that the predicted secondary structures of our designs match the secondary structures of the reference proteins almost exactly. Next, we created homology models of our designs, and we evaluated those homology models using a number of metrics, including the Modeller molpdf score (34) and the Rosetta REU score (35) (Fig. 3C; Supp. Fig. S1C, S2C, S3C). In all cases, the scores obtained for the designs are in the same range as, or better than, the scores obtained for the reference structures, suggesting that the sequences our network generated de novo indeed fold into the shape corresponding to the provided distance matrix. We also used QUARK (36), a de novo (not template-based) structure prediction algorithm, to obtain structures for our sequences (Fig. 3F; Supp. Fig. S1F, S2F, S3F). For all four folds, the obtained structures match the reference structure almost exactly.
Finally, we performed 100 ns molecular dynamics (MD) simulations of the reference structures and the homology models of our de novo designs (Fig. 3G; Supp. Fig. S1G, S2G, S3G). In all four cases, the designs show fluctuations in molecular dynamics comparable to the reference proteins, indicating that they are of comparable stability to the reference proteins and are thus stably folded.
After obtaining encouraging results from our computational validation experiments, for two of the four selected folds we chose the sequences with the highest combined network and Modeller NormDOPE score, and we attempted to express and evaluate those sequences experimentally. We had each of the sequences synthesized as oligonucleotides, and we expressed them as His-tagged constructs for Ni-NTA affinity purification. After purification, we evaluated the secondary structure of each protein using circular dichroism (CD) spectroscopy. Each secondary structure element has a distinctive absorbance spectrum in the far-UV region, and thus similar folds should present similar absorbance spectra. For the two folds, serum albumin and alanine racemase, the selected sequences show a CD spectrum that is similar to the spectrum obtained for the native protein (Fig. 3E; Supp. Fig. S1E). This is particularly striking in the case of the serum albumin template, where the spectra are indistinguishable (Fig. 3E).
The sequence generated for the alanine racemase template displayed a considerable loss of solubility compared to the sequence from the target structure. Although this made its characterization challenging, we were able to obtain clear spectra by combining a low-ionic-strength buffer (10 mM Na-phosphate, pH 8) with a 10 mm cuvette. While the resulting CD spectrum is somewhat different from that of the target (Supp. Fig. S1E), this may be due to technical issues resulting from low solubility or to a more dynamic nature of the designed protein (consistent with the molecular dynamics in Supp. Fig. S1D).
The spectrum clearly corresponds to a folded helical structure consistent with the predetermined fold.
Taken together with the rest of the evidence from molecular dynamics and Modeller and Rosetta assessment scores, this strongly suggests that those generated sequences adopt the fold specified to the neural network.
Conclusion
In this article, we present ProteinSolver, a graph neural network-based method for solving the protein design problem, formulated as a CSP, and generating protein sequences which satisfy the specified geometric and amino acid constraints. We show that a trained ProteinSolver network can reconstruct protein sequences with a high degree of accuracy, and that it assigns probabilities to individual residues, and to entire protein sequences, that correlate well with the stability of the resulting proteins.
Finally, a trained ProteinSolver network can generate novel protein sequences which, according to extensive computational and experimental validation, fold into the same shapes as the reference proteins from which the geometric constraints are extracted.
There has been growing interest in using neural network-based approaches for protein representation learning and design (19, 37-40), with several new methods reported during the preparation of this manuscript (41,42). While most methods are accompanied by a variety of metrics which attempt to illustrate the accuracy of the predictions, it is inherently difficult to evaluate the quality of generative models, as their ultimate goal is to generate entirely novel sequences with no existing counterparts. To address this concern, we have synthesised and experimentally validated several of our designs, and have shown that the designed proteins fold into stable structures with circular dichroism spectra that are consistent with their target shapes. As far as we are aware, ProteinSolver is currently the only machine learning-based model whose predictions have undergone this level of experimental validation.
One limitation of existing methods for protein design is the steep learning curve and the high degree of domain expertise necessary to make reasonable predictions. We circumvent those limitations with a web server that users can use to generate, in near-real time, hundreds to thousands of sequences matching a given protein topology and amino acid constraints (Supp. Fig. S4). It can be freely accessed at: http://design.proteinsolver.org. We believe that this web server will make our graph neural network-based approach to protein design accessible to the widest possible audience and will facilitate the generation of many novel proteins.
Data preparation
We downloaded from UniParc (37) a dataset of all protein sequences and corresponding domain definitions, and we extracted from this dataset a list of all unique Gene3D domains. We also processed the PDB database and extracted the amino acid sequence and the distance matrix of every chain in every structure. The distance matrix consists of distances between all pairs of residues that are within 12 Å of one another, considering both the backbone and the side-chain atoms in this calculation. Finally, we attempted to find a structural template for every Gene3D domain sequence, and we transferred the distance matrices from the structural templates to each of those sequences. The end result of this process is a dataset of 72,464,122 sequences and adjacency matrices, clustered into 1,373 different Gene3D superfamilies. We split this dataset into a training subset, containing sequences of 1,029 Gene3D superfamilies, a validation subset, containing sequences of 172 Gene3D superfamilies, and a test subset, containing sequences of another 172 Gene3D superfamilies.
Instructions on how to download the training and validation datasets are provided on the ProteinSolver documentation page (https://ostrokach.gitlab.io/proteinsolver). A list of all resources used to construct those datasets is provided in Supp. Table S1.
In the case of Sudoku, the training and validation datasets were generated using the sugen program (26) with the target difficulty of the puzzles set to 500. The training dataset was composed of 30 million generated puzzles, while the validation dataset was composed of 1000 puzzles that do not appear in the training dataset. The test dataset was comprised of 30 Sudoku puzzles collected from http://1sudoku.com (43,44).
Network implementation
The source code for ProteinSolver is freely available at https://gitlab.com/ostrokach/proteinsolver. The network was implemented in the Python programming language using PyTorch (45) and PyTorch Geometric (46) libraries. The repository also includes Jupyter notebooks that can be used to reproduce all the figures presented in this manuscript.
Network architecture
We used Sudoku as a toy problem while optimizing the general design of the ProteinSolver network, including selecting the objective function that is optimized (masking a fraction of node labels and minimizing the cross-entropy loss between predicted and actual labels), tuning the specific implementation of edge convolutions and aggregations (using a 2-layer feed-forward network to update edge attributes, summing over linearly-transformed edge attributes of the incident edges to update node attributes, etc.), and selecting the types of non-linearities and normalizations that are applied (ReLU and LayerNorm, respectively).
Once we had a network that showed promising results in its ability to solve Sudoku puzzles, we tuned the specific hyperparameters of that network and selected variants that achieved the highest accuracies on the validation datasets. For the task of solving Sudoku puzzles ( Fig. 2A-C), the model that achieved the highest accuracy on the validation dataset had 16 residual edge convolution and aggregation (ECA) blocks and a node and edge embedding space of 162. For the task of reconstructing protein sequences (Fig. 2D-F), the model that achieved the highest accuracy on the validation dataset had 4 residual blocks and a node and edge embedding space of 128.
Scoring existing protein sequences
We evaluated the plausibility of existing protein sequences by using a trained ProteinSolver network to calculate the log-probability of every residue in those sequences, given all other residues, and taking the average of those log-probabilities. In effect, for every residue in a given sequence, we replaced the node label corresponding to that residue with a "mask" token, and we used a trained network to obtain the log-probability of the correct residue at the masked position. The score for the protein (e.g. Fig. 2I) was calculated as the average of the log-probabilities assigned to all residues. The score for a mutation (e.g. Fig. 2G,H) was calculated as the difference between log-probabilities of the mutant and the wildtype protein.
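This scoring scheme can be sketched as follows, assuming a hypothetical model(x, edge_index, edge_attr) -> (N, 21) logits interface standing in for a trained network; the exact call signature is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def score_sequence(model, data, mask_token):
    """Average per-residue log-probability, masking one position at a time.
    `model` is a hypothetical stand-in for a trained ProteinSolver network
    returning one logit vector per node."""
    log_probs = []
    for i in range(data.x.size(0)):
        x = data.x.clone()
        true_label = int(x[i])
        x[i] = mask_token  # hide the residue being scored
        with torch.no_grad():
            logits = model(x, data.edge_index, data.edge_attr)
        log_probs.append(F.log_softmax(logits[i], dim=-1)[true_label].item())
    return sum(log_probs) / len(log_probs)

# A mutation score is then the difference between the mutant and wild-type
# sequence scores computed this way.
```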
Generating novel protein sequences
In order to calculate the most probable protein sequence given a specific distance matrix, we evaluated two different approaches: one-shot generation and incremental generation. In one-shot generation, we passed the inputs with the missing node labels through the network only once, accepting for every node the label to which the network assigns the highest probability. In incremental generation, we passed the inputs through the network once for every missing label. At each iteration, we accepted the label for which the network made the most confident prediction, and we treated that label as given in all subsequent iterations. The one-shot generation method is substantially faster, requiring O(1), rather than O(N), passes through the network, while the incremental generation method appears to produce more accurate results, especially in the case of Sudoku (see Fig. 2C,E).
In order to generate a library of protein sequences given a specific distance matrix, we used an approach similar to the incremental generation method described above. However, at each iteration, instead of deterministically accepting the residue to which the network assigned the highest probability, we selected the residue by randomly sampling from the probability distribution assigned to that position by the network.
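Both the incremental and the sampling-based variants of this loop can be sketched with the same hypothetical model interface as above; this is an illustration of the described procedure, not the project's actual implementation.

```python
import torch
import torch.nn.functional as F

def generate_sequence(model, data, mask_token, sample=False):
    """Fill masked positions one per pass. With sample=False, commit the
    single most confident prediction each iteration (incremental
    generation); with sample=True, draw that position's residue from the
    predicted distribution instead, to generate diverse libraries."""
    x = data.x.clone()
    while (x == mask_token).any():
        with torch.no_grad():
            probs = F.softmax(model(x, data.edge_index, data.edge_attr), -1)
        masked = (x == mask_token).nonzero(as_tuple=True)[0]
        conf, best = probs[masked].max(dim=-1)   # per-masked-node confidence
        pick = masked[conf.argmax()]             # most confident position
        if sample:
            x[pick] = int(torch.multinomial(probs[pick], 1))
        else:
            x[pick] = int(best[conf.argmax()])
    return x
```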
Molecular dynamics
All water and ion atoms were removed from the structures with PDB codes 1N5U, 4Z8J, 4UNU, and 1OC7, corresponding to an all-α protein, a mainly-β protein, an all-β protein, and a mixed α-β protein, respectively. The structural models for the generated sequences were produced using Modeller, with the PDB files described above serving as templates. Using TLEAP in AMBER16 (47) and the AMBER ff14SB force field (48), the structures were solvated by adding a 12 nm³ box of explicit water molecules (TIP3P). Next, Na+ and Cl− counter-ions were added to neutralize the overall system net charge, and periodic boundary conditions were applied. Following this, the structures were minimized, equilibrated, and heated over 800 ps to 300 K, and the positional restraints were gradually removed. Bonds to hydrogen were constrained using SHAKE (49), and a 2 fs time step was used. The particle mesh Ewald (50) algorithm was used to treat long-range interactions.
Web server implementation
In order to make our method accessible to the widest possible audience, we developed a web server which allows the user to run a trained ProteinSolver model to generate new protein sequences matching geometric constraints extracted from a reference structure and, optionally, specific amino acid constraints imposed directly by the user (Supp. Fig. S4).
Circular dichroism
All CD measurements were made on a Jasco J-810 CD spectrometer in a 1 mm quartz glass cuvette for high performance (QS) (Hellma Analytics), with the exception of 4beu_Design, for which a 10 mm quartz glass cuvette (QS) (Hellma Analytics) was preferred. 1n5u and 1n5u_Design were analysed in PBS, whereas 4beu and 4beu_Design were analysed in 10 mM Na-phosphate, pH 8. The CD spectra were collected in the 198 nm to 260 nm wavelength range using a 1 nm bandwidth and 1 nm intervals at 50 nm/min; each reading was repeated ten times. All measurements were taken at 20 °C.

Supp. Fig. S4. Screenshot of the ProteinSolver design web server (http://design.proteinsolver.org). (A) The user can upload the structure of a reference protein whose geometry will be used to restrain the space of generated amino acid sequences. Alternatively, the user can select one of four example proteins. (B) The user is given the option to explicitly fix one or more amino acids in the generated sequences to specific residues. (C) When the user clicks the "Run ProteinSolver" button, a background ProteinSolver process starts generating sequences matching the specified geometric and amino acid constraints. By default, 100 sequences are generated, although this number can be adjusted. The progress of the ProteinSolver process can be monitored by looking at the progress bar and the sequence logo displayed to the right of the "Run ProteinSolver" button, while GPU utilization can be monitored by looking at the status bars displayed below the "Run ProteinSolver" button. (D) Once a sufficient number of sequences have been generated, the user can click the "Generate download link" button, at which point a download link will appear, allowing the user to download the generated designs.

The natural protein, with an unpaired cysteine, forms high-MW species not present in reducing conditions. Design 1 is represented as only a single band in both environments. Interestingly, the chosen sequence from the albumin template contains 4 pairs of potential disulfide bonds, whereas it is known that the albumin template used has only 3 of these bonds (PDB 1n5u). The designed 1n5u runs as a single band in an SDS-PAGE under oxidising conditions. Furthermore, mass spectrometry analysis by electrospray ionization (ESI) of the molecular weight (MW) of the designed 1n5u showed a loss of 8 Da relative to the theoretical MW, consistent with the loss of 8 protons. Altogether, this indicates that a new disulfide bond not present in the albumin structural template was efficiently inserted into the designed sequence.
"Computer Science",
"Biology"
] |
A Novel High-Efficiency Natural Biosorbent Material Obtained from Sour Cherry (Prunus cerasus) Leaf Biomass for Cationic Dyes Adsorption
The present study aimed to investigate the potential of a new lignocellulosic biosorbent material derived from mature leaves of sour cherry (Prunus cerasus L.) for removing methylene blue and crystal violet dyes from aqueous solutions. The material was first characterized using several specific techniques (SEM, FTIR, color analysis). Then, the adsorption process mechanism was investigated through studies related to adsorption equilibrium, kinetics, and thermodynamics. A desorption study was also performed. Results showed that the Sips isotherm provided the best fit for the adsorption process of both dyes, with a maximum adsorption capacity of 168.6 (mg g−1) for methylene blue and 524.1 (mg g−1) for crystal violet, outperforming the capacity of other similar adsorbents. The contact time needed to reach equilibrium was 40 min for both studied dyes. The Elovich equation is the most suitable model for describing the adsorption of methylene blue, while the general order model is better suited for the adsorption of crystal violet dye. Thermodynamic analyses revealed the adsorption process to be spontaneous, favorable, and exothermic, with physical adsorption involved as the primary mechanism. The obtained results suggest that sour cherry leaves powder can be a highly efficient, eco-friendly, and cost-effective adsorbent for removing methylene blue and crystal violet dyes from aqueous solutions.
Introduction
Water is an essential resource for sustaining life on Earth. Industrial development, urbanization, and population growth have led to an increase in water demand. Pollution of both underground and surface water sources has become a global problem that requires special attention [1-4].
Among the compounds playing a major role in water pollution are organic substances. Of these, dyes generate significant water pollution [1,3,5,6]. Industries that release considerable amounts of colored wastewater into the environment include textiles, pulp and paper, plastics, leather, cosmetics, pharmaceuticals, rubber, and food processing. Dyes have complex aromatic structures, are stable to light, heat, and oxidizing agents, and present toxic, mutagenic, teratogenic, and carcinogenic effects on living organisms. Therefore, the elimination of these compounds from wastewater is a necessity [1,5,7-11].
Cationic dyes are more toxic than anionic and non-ionic ones due to their ability to interact with negatively charged cell membranes, and they present a higher risk to human health [1,5]. Nowadays, methylene blue (MB) and crystal violet (CV) are used in numerous industrial activities, and they also have important applications in human and veterinary medicine. However, their presence in natural waters has a negative impact on aquatic life. They can cause various adverse effects on people, such as irritation of the skin and
Materials and Methods
The Prunus cerasus L. mature leaves were collected from a sour cherry tree located in a private garden in Cerneteaz village, Timis County, Romania. The leaves were washed with distilled water, dried at room temperature for 5 days, and then placed in an air oven at 90 °C for 24 h. The dried leaves were then ground into a fine powder material with an electric mill, passed through a 2 mm sieve, and washed again with distilled water to remove any turbidity and color. The washed powder material was then dried in an air oven at 105 °C for 8 h.
A Shimadzu Prestige-21 FTIR spectrophotometer (Shimadzu, Kyoto, Japan), a Quanta FEG 250 microscope (FEI, Eindhoven, The Netherlands), and a Cary-Varian 300 Bio UV-VIS colorimeter (Varian Inc., Mulgrave, Australia) were used to carry out FTIR (Fourier-transform infrared spectroscopy), SEM (Scanning Electron Microscopy), and color analysis, respectively. For FTIR analysis, the adsorbent sample was mixed with KBr and pressed into a pellet, while the SEM micrograph was taken at 3000× magnification. The color analysis was conducted under D65 (natural light) illumination and with a 10° standard observer angle. The point of zero charge (pH PZC ) was identified using the solid addition method [37].
To investigate the adsorption process of each dye, an individual batch system was used. The experiments were carried out in three independent replicates at a constant stirring speed. The pH of the solutions was adjusted with dilute solutions of hydrochloric acid (HCl) and sodium hydroxide (NaOH), both at a concentration of 0.1 (mol dm −3 ), while the effect of ionic strength was tested by adding sodium chloride (NaCl). Finally, the methylene blue and crystal violet concentrations were measured with a UV-VIS spectrophotometer (Specord 200 PLUS UV-VIS spectrophotometer, Analytik Jena, Jena, Germany), at wavelengths of 664 nm and 590 nm, respectively. The Limit of Detection (LOD) and Limit of Quantitation (LOQ) for methylene blue concentration determination were 0.21 (mg L −1 ) and 0.61 (mg L −1 ), respectively. For the crystal violet concentration determination, the values of these parameters were LOD = 0.16 (mg L −1 ) and LOQ = 0.49 (mg L −1 ).
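The text reports LOD and LOQ values without detailing their derivation; a common convention (assumed here, not stated by the authors) derives them from the calibration curve as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the calibration slope. A minimal sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: absorbance vs. methylene blue concentration (mg/L)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
absorbance = np.array([0.095, 0.19, 0.38, 0.77, 1.52])  # read at 664 nm

# Linear least-squares fit: A = S*c + b
S, b = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (S * conc + b)
sigma = residuals.std(ddof=2)  # standard deviation of the fit residuals

lod = 3.3 * sigma / S   # limit of detection
loq = 10.0 * sigma / S  # limit of quantitation
print(f"LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")
```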
Five different isotherm models and five kinetic models were used to analyze the equilibrium and kinetics of adsorption. These models and their equations [38,39] are detailed in the Supplementary Materials, Table S1. The suitability of the tested models was evaluated by determining the value of the determination coefficient (R 2 ) and the sum of square error (SSE), chi-square (χ 2 ), and average relative error (ARE) [39]. The equations for these error parameters are described in the Supplementary Materials, Table S2. The experimentally obtained results at temperatures of 283, 297, and 317 K were used to calculate the thermodynamic parameters, whose equations [38] are listed in Table S3 of the Supplementary Material.
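As a rough illustration of the fitting workflow described above, the sketch below fits the Sips isotherm to hypothetical equilibrium data and computes the R², SSE, χ², and ARE error metrics; the data values, initial guesses, and the exact functional form used by the authors are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(ce, qm, ks, n):
    """Sips isotherm: q_e = q_m * (K_s*C_e)**n / (1 + (K_s*C_e)**n)."""
    return qm * (ks * ce) ** n / (1.0 + (ks * ce) ** n)

# Hypothetical equilibrium data (C_e in mg/L, q_e in mg/g)
ce = np.array([10, 25, 50, 100, 200, 400], float)
qe = np.array([30, 62, 95, 125, 150, 163], float)

popt, _ = curve_fit(sips, ce, qe, p0=[170, 0.01, 1.0], maxfev=10000)
pred = sips(ce, *popt)

sse = np.sum((qe - pred) ** 2)                       # sum of square error
r2 = 1 - sse / np.sum((qe - qe.mean()) ** 2)         # determination coefficient
chi2 = np.sum((qe - pred) ** 2 / pred)               # chi-square
are = 100 / len(qe) * np.sum(np.abs((qe - pred) / qe))  # average relative error, %
print(f"q_max={popt[0]:.1f} mg/g  R2={r2:.4f}  SSE={sse:.2f}  chi2={chi2:.3f}  ARE={are:.2f}%")
```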
The desorption process was conducted using three different agents, distilled water, 0.1 (mol dm −3 ) HCl, and 0.1 (mol dm −3 ) NaOH, in a batch system with constant stirring for a period of two hours. The desorption efficiency was then calculated using the equation presented in Table S4 in the Supplementary Material. Figure 1 presents the FTIR spectrum of the sour cherry leaves powder (before adsorption). This spectrum shows specific peaks corresponding to different functional groups (Table 1). Analysis of the spectrum indicates that the primary constituents of the adsorbent are cellulose, hemicellulose, and lignin, which highlights its affinity to bind dye molecules [27]. The band assignments from Table 1 are:
aromatic ring C=C bond [42]
1605 cm −1 : aromatic skeletal and C=O stretch vibrations characteristic of lignin [43]
1422 cm −1 : C-H deformation in lignin [44,45]
1255 cm −1 : C-O stretching and CH or OH bending of hemicellulose structures [46,47]
1057 cm −1 : C-O-C stretching of cellulose [23,48]
625 cm −1 : bending modes of aromatic compounds [49]
After dye adsorption, only two peaks were shifted: 3282 cm −1 shifted to 3120 cm −1 (methylene blue adsorption) and 3227 cm −1 (crystal violet adsorption), respectively, while 1422 cm −1 shifted to 1370 cm −1 for both dyes. These observations suggest that O-H and C-H bonds may be involved in dye retention. The rest of the peaks kept their initial positions and no new ones appeared, indicating no breaking or formation of bonds after adsorption; therefore, physical adsorption is the main mechanism involved in the process [50][51][52].
Adsorbent Material Characterization
The SEM images of the adsorbent material are displayed in Figure 2. Before adsorption, the adsorbent surface appears irregular and complex, with pores, crevices, and empty spaces of various sizes and shapes, suggesting it is suitable for capturing dyes. After the adsorption process, the adsorbent surface became more uniform, smoother, and consistent, indicating that the dye molecules filled the pores and covered the surface irregularities (Figure 2B,C).
The adsorption process can be characterized by analyzing the initial and final color of the adsorbent using the CIELab* color parameters. During the adsorption process, the color of the dye in the solution is transferred to the sour cherry leaves powder ( Figure 3). This causes the luminosity of the adsorbent to decrease and the color parameters a* and b* to change. Point (1), which describes the initial color of the sour cherry leaves, becomes point (4) after adsorption and shifts into the color area of methylene blue, which was initially represented by point (2). The same observation can be made for the absorption of crystal violet dyes: point (1) becomes point (5) after adsorption and shifts into the color area of crystal violet, which was initially represented by point (3).
The point of zero charge (pH PZC ) is a measure of the adsorbent surface charge. When the pH is below the pH PZC , the surface of the adsorbent becomes positively charged, and when the pH is above the pH PZC , the surface becomes negatively charged. The surface charge affects the adsorption of cationic dyes, as a negatively charged surface is more favorable for adsorption [14,23]. According to Figure 4, the pH PZC of the sour cherry leaves powder was determined to be 5.5, meaning that a pH above this value is suitable for the adsorption of methylene blue and crystal violet dyes.
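The solid addition method locates pH PZC where the pH change after contact with the adsorbent crosses zero. A minimal sketch with hypothetical data (the measured values below are invented for illustration):

```python
import numpy as np

# Hypothetical solid-addition data: initial pH vs. final pH after contact with adsorbent
ph_initial = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10], float)
ph_final   = np.array([2.4, 3.6, 4.9, 5.4, 5.5, 5.6, 6.1, 7.2, 8.8])

delta = ph_final - ph_initial
# pH_PZC is where delta-pH crosses zero; interpolate linearly between bracketing points
idx = np.where(np.diff(np.sign(delta)))[0][0]
x0, x1, y0, y1 = ph_initial[idx], ph_initial[idx + 1], delta[idx], delta[idx + 1]
ph_pzc = x0 - y0 * (x1 - x0) / (y1 - y0)
print(f"pH_PZC ~ {ph_pzc:.1f}")
```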
Effect of pH, Ionic Strength, and Adsorbent Dose on Cationic Dyes Adsorption
The pH, ionic strength, and adsorbent dose are parameters that significantly influence the dye's adsorption process. Figure 5 illustrates the effect of these parameters on methylene blue and crystal violet adsorption on sour cherry leaves powder.
As expected, the adsorption capacity was positively influenced when the solution pH was higher than pH PZC , the electrostatic attraction between the cationic dye molecules and the negatively charged adsorbent surface favoring the adsorption process. Similar results were recorded for methylene blue adsorption on pineapple leaf powder [46], citrus limetta peel [13], and lotus leaf powder [53], and for crystal violet dye adsorption on Ananas comosus leaves [20], Ocotea puberula bark [54], and Terminalia arjuna sawdust [14].
The presence of other ions in the dyeing wastewater can have a negative effect on the adsorption process. As illustrated in Figure 5, when the ionic strength is increased, due to the addition of NaCl, the adsorption capacity decreases because the sodium ions are competing with the dye cations for the available adsorption sites on the material surface. A similar effect of ionic strength on the methylene blue and crystal violet adsorption was observed in other studies in which similar adsorbents were used, such as: Daucus carota leaves [37], phoenix tree's leaves [55], potato leaves [56], Ananas comosus leaves [46], lotus leaves [53], Arundo donax L. [57], and Artocarpus odoratissimus leaf-based cellulose [48].
The data in Figure 5 show that higher adsorbent dosages lead to an increase in the removal efficiency, owing to a larger adsorption surface area and a higher number of active adsorption sites. At the same time, the adsorption capacity per unit mass of adsorbent decreases, probably because many of these sites remain unsaturated and because the adsorbent particles agglomerate [13,55,58,59]. Other researchers previously observed the same effect of the amount of adsorbent on the adsorption capacity and removal efficiency of methylene blue and crystal violet [13,14,23,[53][54][55].
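The opposing trends can be made concrete with the standard batch mass-balance relations q e = (C0 − Ce)V/m and removal (%) = 100(C0 − Ce)/C0; the sketch below uses hypothetical numbers to show removal rising while the per-gram capacity falls as the dose grows:

```python
import numpy as np

# Hypothetical batch data: fixed C0 = 100 mg/L, V = 0.025 L, varying adsorbent mass
c0, volume = 100.0, 0.025                       # mg/L, L
mass = np.array([0.01, 0.025, 0.05, 0.1])       # adsorbent mass, g
ce = np.array([55.0, 30.0, 14.0, 6.0])          # residual dye concentration, mg/L

q = (c0 - ce) * volume / mass                   # adsorption capacity, mg/g
removal = (c0 - ce) / c0 * 100                  # removal efficiency, %
for m, qi, ri in zip(mass, q, removal):
    print(f"dose {m*1000:5.0f} mg: q_e = {qi:6.1f} mg/g, removal = {ri:5.1f}%")
```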
Equilibrium Study
The equilibrium adsorption process was evaluated using the non-linear isotherms Langmuir, Freundlich, Temkin, Sips, and Redlich-Peterson. After analyzing the fitted isotherm curves (Figures 6 and 7) and the corresponding error parameters (Table 2), it was found that the applicability of the five isotherms for the obtained experimental data follows the order: Sips > Redlich-Peterson > Langmuir > Freundlich > Temkin for the methylene blue adsorption. For crystal violet adsorption, the order of applicability is slightly modified: Sips > Freundlich > Redlich-Peterson > Langmuir > Temkin. Table 2. Parameters of the adsorption isotherms used to assess the dyes adsorption behavior on sour cherry (Prunus cerasus) leaves powder.
(Table 2 reports the fitted isotherm parameters for MB and CV adsorption; among the recoverable values, the Langmuir constant K L was 0.0026 ± 0.0005 L mg −1 for MB and 0.0041 ± 0.0008 L mg −1 for CV.) q m and Q sat are the maximum adsorption capacities; K L , K F , K T , K S , and K RP are the Langmuir, Freundlich, Temkin, Sips, and Redlich-Peterson isotherm constants; 1/n F is an empirical constant indicating the intensity of adsorption; b is the Temkin constant related to the heat of adsorption; n is the Sips isotherm exponent; a RP is the Redlich-Peterson isotherm constant and β RP the Redlich-Peterson exponent; R 2 is the determination coefficient; SSE is the sum of square error; χ 2 is chi-square; and ARE is the average relative error.
Previous studies showed that the Sips isotherm best characterized the adsorption process of methylene blue on Maclura pomifera biomass [60], bilberry leaves [61], raspberry leaves [62], and dicarboxymethyl cellulose [63], and the adsorption process of crystal violet dye on Artocarpus altilis skin [64], Eragrostis plana Nees [65], and motherwort biomass [42]. Table 3 presents a comparison of the maximum adsorption capacities of various similar adsorbents used for the adsorption of methylene blue and crystal violet dyes from aqueous solutions. Analyzing the presented data, it can be seen that the sour cherry leaves powder has a superior adsorption capacity compared to many other similar adsorbents, indicating the practical utility of the new adsorbent proposed in this study.
Kinetic Study
The effect of contact time on adsorption capacity for methylene blue and crystal violet retention using sour cherry powder as adsorbent material is shown in Figures 8 and 9. During the first 5-10 min of the adsorption process, the capacity of the adsorbent to retain the dyes increased at a rapid rate. As the contact time increased, active adsorption sites gradually filled up, resulting in a slower increase in adsorption capacity. Finally, after 40 min, an equilibrium was reached in which the amount of dye adsorbed had stabilized. This suggests that dye diffusion occurred in the pores of the adsorbent and that a monolayer of dye was formed on its surface, resulting in a decrease in the adsorption rate [23,53,86]; the value of the adsorption capacity therefore remained constant.
Table 4 shows comparatively the time taken to reach equilibrium during the adsorption of methylene blue and crystal violet on various similar adsorbents obtained from plant biomass. The kinetic data for both dyes were modeled using five different nonlinear kinetic models. Analyzing these model plots (Figures 8 and 9), the constants, and their corresponding error functions (Table 5), it is concluded that the Elovich model is the most appropriate to describe the methylene blue adsorption, while for the adsorption of the crystal violet dye, the general order model is more suitable. The coefficient of determination (R 2 ) values for some tested kinetic models were very similar; however, the lower values of χ 2 , SSE, and ARE are the main arguments that ultimately led to the final conclusion. Table 5. Parameters of the kinetic models used to assess the dyes adsorption behavior on sour cherry (Prunus cerasus) leaves powder.
(Table 5 reports the fitted kinetic parameters for MB and CV adsorption; among the recoverable values, the pseudo-first-order rate constant k 1 was 1.33 ± 0.07 min −1 for MB and 0.41 ± 0.05 min −1 for CV.) q t is the dye amount adsorbed at time t; k 1 , k 2 , k n , and k AV are the rate constants of the pseudo-first-order, pseudo-second-order, general order, and Avrami kinetic models; q e , q n , and q AV are the theoretical values for the adsorption capacity; a is the desorption constant of the Elovich model; b is the initial velocity; n is the general order exponent and n AV is a fractional exponent; R 2 is the determination coefficient; SSE is the sum of square error; χ 2 is chi-square; and ARE is the average relative error.
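As a sketch of the kinetic modeling step, the code below fits the Elovich and general order models (in their commonly used forms; the authors' exact parameterization may differ) to hypothetical contact-time data:

```python
import numpy as np
from scipy.optimize import curve_fit

def elovich(t, alpha, beta):
    """Elovich model: q_t = (1/beta) * ln(1 + alpha*beta*t)."""
    return np.log(1.0 + alpha * beta * t) / beta

def general_order(t, qe, kn, n):
    """General-order model: q_t = qe - qe / (1 + (n-1)*kn*qe**(n-1)*t)**(1/(n-1))."""
    return qe - qe / (1.0 + (n - 1.0) * kn * qe ** (n - 1.0) * t) ** (1.0 / (n - 1.0))

# Hypothetical contact-time data: time (min) and adsorbed amount (mg/g)
t = np.array([2, 5, 10, 20, 30, 40, 60], float)
qt = np.array([8.0, 11.0, 13.0, 14.3, 14.8, 15.0, 15.1])

p_el, _ = curve_fit(elovich, t, qt, p0=[10.0, 0.5], maxfev=10000)
p_go, _ = curve_fit(general_order, t, qt, p0=[15.0, 0.05, 1.5], maxfev=10000)
for name, model, p in [("Elovich", elovich, p_el), ("General order", general_order, p_go)]:
    sse = np.sum((qt - model(t, *p)) ** 2)
    print(f"{name}: params={np.round(p, 3)}, SSE={sse:.3f}")
```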
Thermodynamic Study
The thermodynamic parameters, calculated from the experimental results obtained at temperatures of 283, 297, and 317 K, are depicted in Table 6. These parameters indicate that the process is spontaneous, favorable, and exothermic, as evidenced by the negative values of the standard Gibbs energy change (∆G 0 ) and the standard enthalpy change (∆H 0 ). Similar results were obtained by other researchers who studied the adsorption of methylene blue on Salix babylonica leaves [23], Daucus carota leaves [37], potato leaves [56], Maclura Pomifera biomass [60], and Typha angustifolia (L.) leaves [68] and, respectively, the adsorption of crystal violet on pineapple leaf [21], Ocotea puberula bark powder [54], Arundo donax L. [57], Moringa oleifera pod husk [78], and jackfruit leaf powder [80]. Table 6. The thermodynamic parameters used to assess the dyes adsorption process.
(Table 6 lists ∆G 0 (kJ mol −1 ), ∆H 0 (kJ mol −1 ), and ∆S 0 (J mol −1 K −1 ) for each dye.) The positive value of the standard entropy change (∆S 0 ) suggests that there is increased randomness at the solid-liquid interface [13,53,69]. The values of ∆G 0 , both for the adsorption of methylene blue and crystal violet, fall within the range −20 to 0 (kJ mol −1 ). In addition, the ∆H 0 value is less than 40 (kJ mol −1 ). These two observations indicate that the primary mechanism involved in the adsorption is physisorption [23,31,87]. A ∆H 0 value lower than 20 (kJ mol −1 ) further indicates that van der Waals forces are implied and have an important role in the physical adsorption process [52,88,89].
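The thermodynamic quantities follow from ∆G0 = −RT ln K and the van't Hoff relation ln K = −∆H0/(RT) + ∆S0/R. The sketch below applies these to hypothetical equilibrium constants at the three study temperatures; the K values are invented so as to reproduce the qualitative signs reported (negative ∆G0 and ∆H0, positive ∆S0):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1
T = np.array([283.0, 297.0, 317.0])        # study temperatures, K
Kc = np.array([200.0, 175.0, 150.0])       # hypothetical equilibrium constants

dG = -R * T * np.log(Kc) / 1000.0          # kJ/mol; negative => spontaneous
# van't Hoff: ln K = -dH/(R*T) + dS/R, so a linear fit of ln K vs 1/T gives dH and dS
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -slope * R / 1000.0                   # kJ/mol; negative => exothermic
dS = intercept * R                         # J mol^-1 K^-1; positive => increased randomness
print("dG0 (kJ/mol):", np.round(dG, 2))
print(f"dH0 = {dH:.1f} kJ/mol, dS0 = {dS:.1f} J/(mol K)")
```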
Desorption Study
The data obtained in this study are illustrated in Figure 10. The highest methylene blue desorption efficiency was obtained when HCl was used as the desorption agent (Figure 10A). The regenerated adsorbent was reused for methylene blue adsorption, but the obtained adsorption capacity was approximately 50% lower. In conclusion, it can be stated that the regeneration of the adsorbent material is not justified from either a technical or an economic point of view.
The desorption efficiency of the crystal violet dye was less than 20% regardless of the desorption agent used (Figure 10B). In this case, the regeneration of the exhausted adsorbent cannot be considered feasible.
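A minimal sketch of the desorption efficiency computation, assuming the usual batch definition (the paper's exact Table S4 equation is not reproduced here):

```python
# Desorption efficiency, assuming the usual batch definition:
# D% = (amount desorbed into the eluent) / (amount previously adsorbed) * 100
def desorption_efficiency(c_des, v_des, q_ads, mass):
    """c_des: eluate concentration (mg/L), v_des: eluent volume (L),
    q_ads: previously adsorbed amount (mg/g), mass: adsorbent mass (g)."""
    return 100.0 * (c_des * v_des) / (q_ads * mass)

# Hypothetical example: 25 mL of eluent recovers 42 mg/L from 0.05 g loaded at 60 mg/g
print(f"D = {desorption_efficiency(42.0, 0.025, 60.0, 0.05):.1f}%")
```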
The fact that sour cherry leaves are a low-cost material, readily available in large quantities in nature, compensates for this disadvantage. Furthermore, due to the combustion properties of plant leaves, the incineration of the exhausted adsorbent can be a simple and efficient reuse solution. Another possible use is as a foaming agent to produce ceramic or glass foams: during the combustion process, a large amount of gas results, which makes it an ideal sporogenous precursor for this type of material [61,62].
Conclusions
This study proposes a new natural adsorbent material, obtained from mature sour cherry (Prunus cerasus L.) leaves, suitable for removing methylene blue and crystal violet dyes from aqueous solutions. This material was characterized and then subjected to adsorption experiments to evaluate its effectiveness in dye removal. The FTIR analysis shows that the adsorbent contains different functional groups specific to cellulose, hemicellulose, and lignin, able to bind dyes. The structure of the adsorbent surface was studied using SEM images, both before and after adsorption, highlighting the importance of the adsorbent's porous structure. Dye retention was confirmed by color analysis, the dye color being transferred from the initial solution to the sour cherry leaves powder. pH, ionic strength, and adsorbent dose were identified as key factors influencing the effectiveness of the adsorbent. The Sips isotherm best describes the adsorption processes for both studied dyes, with a maximum adsorption capacity of 168.6 (mg g −1 ) for methylene blue adsorption and 524.1 (mg g −1 ) for crystal violet adsorption, superior to other similar adsorbents. The contact time needed to reach equilibrium was 40 min for both studied dyes. The Elovich model is the most appropriate to describe the methylene blue adsorption, while the general order model is more suitable for the adsorption of the crystal violet dye. Thermodynamic analyses reveal a spontaneous, favorable, and exothermic process, the calculated values for ∆G 0 and ∆H 0 suggesting physisorption as the primary mechanism involved in the adsorption process for both dyes.
Regenerating the adsorbent is not a viable option, but this fact is compensated by its very low price.
All results indicate sour cherry leaves powder as an affordable, readily available, environmentally friendly, and efficient adsorbent to remove cationic dyes from aqueous solutions.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma16124252/s1, Table S1: The non-linear equations of the adsorption isotherms and kinetic models used to assess the adsorption process, Table S2: The calculation equations for error parameters R 2 , χ 2 , SSE, and ARE, Table S3: The calculation equations for the thermodynamic parameters, Table S4: The equation for desorption efficiency.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,889.8 | 2023-06-01T00:00:00.000 | [
"Engineering"
] |
Global linear gyrokinetic simulation of energetic particle-driven instabilities in the LHD stellarator
Energetic particles are inherent to toroidal fusion systems and can drive instabilities in the Alfvén frequency range, leading to decreased heating efficiency, high heat fluxes on plasma-facing components, and decreased ignition margin. The applicability of global gyrokinetic simulation methods to macroscopic instabilities has now been demonstrated and it is natural to extend these methods to 3D configurations such as stellarators, tokamaks with 3D coils and reversed field pinch helical states. This has been achieved by coupling the GTC global gyrokinetic PIC model to the VMEC equilibrium model, including 3D effects in the field solvers and particle push. This paper demonstrates the application of this new capability to the linearized analysis of Alfvénic instabilities in the LHD stellarator. For normal shear iota profiles, toroidal Alfvén instabilities in the n = 1 and 2 toroidal mode families are unstable with frequencies in the 75 to 110 kHz range. Also, an LHD case with non-monotonic shear is considered, indicating reductions in growth rate for the same energetic particle drive. Since 3D magnetic fields will be present to some extent in all fusion devices, the extension of gyrokinetic models to 3D configurations is an important step for the simulation of future fusion systems.
Introduction
Instabilities driven by energetic particle (EP) components are of interest for magnetic fusion concepts since they can lead to decreased heating efficiency, high heat fluxes on plasma-facing components, and decreased ignition margins for reactor systems. Since 3D magnetic field perturbations will be present to some extent in all toroidal configurations, the analysis of EP instabilities in 3D systems is an important goal for fusion simulations. To address this, the GTC global gyrokinetic particle-in-cell (PIC) model [1] has been adapted to the VMEC 3D equilibrium model [2], and 3D effects included in the field solvers and particle push. Initial applications of this model have been made [3] to EP instabilities in several stellarators (LHD, W7-X) and pedestal turbulence in tokamaks with 3D fields [4]. Other gyrokinetic models that have been developed for both stellarators and tokamaks include the EUTERPE [5] and GENE [6] models. EUTERPE is a particle-based approach, while GENE is a continuum model that solves the 5D kinetic equations of all species. Additionally, the MEGA model [7] is a hybrid MHD-kinetic approach that couples a particle description for the fast ions with a full MHD model for the thermal plasma component. MEGA is applicable to EP instabilities in both tokamaks and stellarators. Another hybrid model is FAR3D [8], which couples reduced MHD equations for the thermal plasma with a Landau closure model for the fast ions and is designed for 3D systems. These models are all based to varying degrees on the gyrokinetic approach [9][10][11][12], which incorporates both the guiding center dynamics of particle trajectories and the effects arising from the finite helical Larmor orbits that center upon the guiding center trajectory. GTC solves the gyrokinetic equation using particle-based methods; the feasibility of the PIC method for gyrokinetics was initially demonstrated by W. W. Lee [13]. The specific gyrokinetic approach with adiabatic electrons, as described below, was formulated [14] to avoid high frequency modes and the time step limitation related to the electron Courant condition. The gyrokinetic orderings (ω/Ω ≪ 1, k ∥ /k ⊥ ≪ 1, ρ/L ≪ 1, where Ω is the cyclotron frequency, ρ the gyroradius, and L the equilibrium scale length) are applicable to most plasma components and regimes of interest for magnetic confinement systems. Gyrokinetics constitutes the most advanced first principles model that is also feasible to apply to global energetic particle instabilities in magnetically confined plasmas. The gyrokinetic PIC method used by GTC couples particle stepping in fluctuating fields with self-consistent electromagnetic field solutions based on Poisson's and Ampère's laws (based on retaining the potential, φ, and parallel vector potential, A ∥ ). For particle based gyrokinetic models, the small electron mass presents a numerical difficulty for simultaneously treating the dynamics of ions and electrons in simulations. A fluid-kinetic electron model currently implemented in GTC overcomes this difficulty by expanding the electron drift kinetic equation using the electron-ion mass ratio as a small parameter. The model accurately recovers low frequency plasma dielectric responses and faithfully preserves linear and nonlinear wave-particle resonances. Maximum numerical efficiency is achieved by overcoming the electron Courant condition and suppressing tearing modes and high frequency modes, thus effectively suppressing electron noise.
In GTC the parallel vector potential is separated into adiabatic and nonadiabatic components, similar to the mixed variables (symplectic/Hamiltonian) pullback transformation [15] used in EUTERPE to avoid the so-called 'cancellation' problem. GTC can address kinetic issues specific to 3D configurations, such as multiple trapping regions, particles that transition back and forth between trapped and passing, and orbit trajectories that are more non-local than in similar axisymmetric tokamak systems.
In this paper the application of GTC for the linear analysis of energetic particle instabilities that have been observed in the LHD stellarator is demonstrated. A parameter survey indicates that Alfvén modes similar to those observed in LHD resonate with injected beam ions and are predicted to be unstable. Predicting the onset and effects of these instabilities is of significant importance due to their impact on heating efficiency, plasma-facing component heat loads, and possible diagnostic use. The importance of non-axisymmetric effects in all toroidal devices motivates the development of comprehensive new gyrokinetic simulation methods that can apply across the full range of symmetry-breaking perturbation levels. The paper is organized as follows. In section 2, we describe the gyrokinetic model, the LHD profiles and equilibria that are used, and discuss the resonance conditions that allow the neutral beam ions to destabilize the shear Alfvén eigenmodes. Next in section 3, the gyrokinetic results are presented for both normal and non-monotonic shear discharges in LHD; these include variations of growth rates and real frequencies with beam parameters, the relation of the frequencies obtained with the shear Alfvén continuum gap structure, and the Alfvén mode structures. Finally, in section 4 the conclusions are presented.
Description of the GTC gyrokinetic model
GTC is a global gyrokinetic, full torus, electromagnetic particle-in-cell model [16] based on Boozer magnetic coordinates [17]. Computational efficiency is gained by modifying these coordinates to approximately follow field lines. The applicability of GTC to fast ion destabilized global Alfvénic instabilities in tokamaks has been extensively demonstrated [18][19][20]. The implementation of GTC primarily used in this paper can be classified as a gyrokinetic model with adiabatic electrons. As will be described below, the energetic particle and thermal ion species are treated using gyrokinetics, while an adiabatic fluid description is used for the electrons. This includes most of the physics expected to be of importance for Alfvén instabilities. Several tests have been made including the non-adiabatic gyrokinetic electron terms for cases given in this paper; this leads to about a 10% reduction of growth rate due to electron Landau damping effects, but no significant change in real frequency or mode structure. However, including such effects increases the computational requirements and will be left for future research. The gyrokinetic equation for the thermal and energetic ions (σ is the species index) is given below:

d f σ /dt = [∂/∂t + Ẋ·∇ + v̇ ∥ ∂/∂v ∥ ] f σ = 0 (1)

This is supplemented by the equations of motion for the position X of the gyrocenters and the parallel velocity v ∥ :

Ẋ = v ∥ b 0 + v E + v c + v g (2)

v̇ ∥ = −(1/m σ ) b 0 ·(μ∇B 0 + Z σ ∇φ) − (Z σ /m σ c) ∂A ∥ /∂t (3)

where m σ , Z σ , Ω σ are the mass, charge number, and cyclotron frequency of species σ, μ is the magnetic moment, b 0 is the unit vector b 0 = B 0 /|B 0 |, and the E × B and magnetic drift velocities are given as:

v E = (c b 0 × ∇φ)/B 0 (4)

with the curvature and grad-B drifts given by

v c = (v ∥ 2 /Ω σ ) ∇ × b 0 , v g = (μ/(m σ Ω σ )) b 0 × ∇B 0 (5, 6)

In our model we exclude the compressional component of the magnetic field perturbation by assuming δB ∥ = 0 and representing the perturbed magnetic field as:

δB = ∇ × (A ∥ b 0 ) (7)

In equations (1)-(7) the perturbed electrostatic potential φ, magnetic field δB, and vector potential A ∥ are gyroaveraged quantities, evaluated at the gyrocenter's position.
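For intuition, the drift velocities above can be evaluated directly; the toy Cartesian sketch below computes E × B and grad-B drifts for a proton at an LHD-like field strength, and is not GTC's Boozer-coordinate implementation:

```python
import numpy as np

def exb_drift(E, B):
    """E x B drift velocity: v_E = (E x B) / |B|^2 (SI units)."""
    return np.cross(E, B) / np.dot(B, B)

def gradb_drift(mu, q, B, gradB):
    """Grad-B drift: v_g = (mu / (q |B|)) * (b x gradB), with b = B/|B|."""
    Bmag = np.linalg.norm(B)
    return (mu / (q * Bmag)) * np.cross(B / Bmag, gradB)

# Toy numbers: 1 keV proton in a 0.62 T field (LHD-like on-axis field strength)
B = np.array([0.0, 0.0, 0.62])            # T
E = np.array([100.0, 0.0, 0.0])           # V/m
m, q = 1.67e-27, 1.6e-19                  # proton mass (kg), charge (C)
v_perp = np.sqrt(2 * 0.5e3 * q / m)       # half the energy in perpendicular motion
mu = 0.5 * m * v_perp**2 / np.linalg.norm(B)  # magnetic moment, J/T
print("v_ExB   =", exb_drift(E, B), "m/s")
print("v_gradB =", gradb_drift(mu, q, B, np.array([0.0, 0.2, 0.0])), "m/s")
```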
Equation (1) is solved by evolving the particle weights w = δf/f, synchronized with the particle trajectories through a corresponding weight evolution equation.
In our electromagnetic simulations we use the fluid-kinetic adiabatic electron model [16] based on the separation between adiabatic and non-adiabatic electron response. In this model the non-zonal component of the parallel electric field is written in terms of an effective potential δφ eff , which in the lowest order captures the adiabatic electron response. Here we have used the Clebsch representation of the magnetic field, B 0 = ∇ψ × ∇α, where ψ is the poloidal flux function, α = q(ψ)θ − ζ is a field line label, and θ and ζ are the poloidal and toroidal angles of Boozer magnetic coordinates [17]. In equation (10) δn e is obtained by solving the electron continuity equation. The electron current is calculated from Ampère's law, and the perturbed electrostatic potential is calculated from the gyrokinetic Poisson's equation [21], where δn is the perturbed gyrocenter density and δφ ∼ is the second-gyroaveraged potential. The perturbed magnetic field potential functions are obtained from Faraday's law, where δφ ind = δφ − δφ eff . These equations (1)-(19) form a closed nonlinear system to lowest order in the electron-ion mass expansion. The electron non-adiabatic terms and kinetic equation have been presented in [16] and will not be given here since, as mentioned above, the calculations of this paper are based on the adiabatic fluid model for the electrons.
In order to test the use of this model for stellarators, parameters are chosen to approximately match an LHD regime where Alfvénic activity was observed [22], although some simplifications have been made in the profiles. Specifically, the thermal ion density, ion temperature, and electron temperature profiles are taken as constant in order to null out the drives for other instabilities caused by core density and temperature gradients. The electron density profile is determined from the quasi-neutrality condition n e = n ion + n fast-ion for the three species (electrons, ions, fast ions) included in the calculation. The fast ions and thermal ions are treated using gyrokinetics, while the electrons are incorporated using an adiabatic fluid model [16]. The LHD major radius is 3.7 m; the magnetic field on axis is 0.62 T; ion and electron temperatures are 1 keV; the central electron density is 0.884 × 10 13 cm −3 ; the plasma and beam species were hydrogen. The fast ion component is modelled as a Maxwellian distribution with a constant temperature versus flux surface. GTC also includes options for slowing-down models of beam distributions; these will be considered in future research on EP instabilities in stellarators. The computational parameters for these calculations were: 60 radial grid points, 128 toroidal grid points, 200 poloidal grid points, 40 particles per cell for ions and fast ions, 20 particles per cell for electrons, uniform marker temperatures, and 4-point gyro-orbit stencils for ions and fast ions. The time step for LHD is limited to about 1/10 of that for a similar axisymmetric system. These resolution and time step limits for 3D systems lead to significant computational requirements and currently limit the extent of parameter/profile surveys. Another modelling issue is that stellarators generally tend to have higher fast ion losses through the last closed flux surface than tokamaks; several methods have been tested in the simulations for taking these escaping fast ions into account. For the results given in this paper, as ions escape through the last surface, their δf weights are set to zero. For the LHD cases given in this paper, about 40% of the initial fast ion markers and 7% of the thermal ion markers are lost through either the outer or inner radial boundaries. These leave the simulation domain at early times and do not present an obvious limitation to the simulation time. They do, however, reduce the marker resolution near the boundary regions, and reduce numerical efficiency by the retention of markers that do not contribute. Resolution has been tested in a few cases by increasing the initial particles per cell up to 100 without significant changes in the results, indicating particle counts are adequate for the linear analysis presented in this paper. Techniques that reinsert escaping ions back at another location at the same magnetic field value and such that they drift back into the simulation domain are under development. The calculations reported here are based on version 0706 of GTC. The primary changes from versions used in the earlier applications of this model to stellarators are the use of a Gaussian drop-off in the fields for the edge and magnetic axis boundary conditions instead of a linear extrapolation, and the zero weight/no-reinsertion method described above for treating escaping fast ions.
In order to reduce noise levels and target specific instabilities, a Fourier mode filter is used. The filter takes effect between the field solve and particle steps and involves a fast Fourier transform of the field data, followed by a nulling out of components not included in the filter, and then an inverse fast Fourier transform of the fields before they are passed to the particle trajectory step. For simplicity, the calculations given in this paper are based on one toroidal mode with 8 poloidal modes for the filter. Specifically, for n = 1, m = 1-8 are used; for n = 2, m = 1-8 are used, etc. Previous calculations [3] have also included the toroidal field period coupled modes, e.g. n = 1, n = −9, n = 11, but have not indicated for LHD that significant changes in stability properties result from including the higher order modes, due to its relatively high aspect ratio ( R 0 / a ~ 6) and number of field periods (N fp = 10).
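A toy analogue of the mode filter on a 2D (poloidal × toroidal) field: transform, null every harmonic outside the kept (m, n) set, and transform back. The grid sizes and the 2D simplification are illustrative only; the production code works on 3D field data.

```python
import numpy as np

def mode_filter(phi, n_keep=1, m_keep=range(1, 9)):
    """FFT the field phi(theta, zeta), null all harmonics except n = +/-n_keep
    with m in m_keep, then inverse FFT back to real space."""
    spec = np.fft.fft2(phi)
    mask = np.zeros_like(spec, dtype=bool)
    M, N = phi.shape
    for m in m_keep:
        for n in (n_keep, -n_keep):
            mask[m % M, n % N] = True
            mask[-m % M, -n % N] = True  # keep the conjugate so the field stays real
    spec[~mask] = 0.0
    return np.fft.ifft2(spec).real

# Example: a 200 x 128 poloidal x toroidal grid of noise, filtered to n = 1, m = 1-8
theta_zeta_field = np.random.rand(200, 128)
filtered = mode_filter(theta_zeta_field)
print(filtered.shape)
```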
LHD test case
Two LHD cases are considered here. The first case is motivated by an LHD discharge [22] where Alfvén activity was observed with toroidal modes numbers n = 1 and 2. This case had β~3% , magnetic axis at R 0 = 3.7 m, and an iota profile with normal shear (increasing with radius). The shape of the LHD outer flux surface is shown in figure 1(a) with the colors indicating the magnetic field strength (red = higher, blue = lower). The second case has a non-monotonic (reversed) shear region in the iota profile near the center; such profiles have been produced in LHD [23] by using neutral beam current drive and appropriate plasma start-up programming. The iota profiles for these cases are given in figure 1(b), with the reversed shear region indicated for the second profile.
In figure 1(a) and in subsequent figures, the variable denoted as r/a is the flux surface label equal to (ψ/ψ edge ) 1/2 , where ψ is the toroidal magnetic flux. The variation in flux surface shape as the toroidal angle is incremented within a field period is shown in figure 2. Here the direction of increasing toroidal angle is counter-clockwise (VMEC convention).
As there are no direct measurements of the fast ion density profile, a set of model profiles (normalized to the central electron density) as given in figure 3 are used. For the normal shear case, the fast ion profile model consists of a centrally flattened region with an exponentially decaying region on the outside (solid lines). This profile shape has been chosen specifically to select out the toroidal Alfvén eigenmodes that reside in the outer gaps, which are expected to be the ones that were observed [22]. For the non-monotonic shear case, a centrally peaked sequence of profiles have been used (dotted lines) that match onto the profiles used in the normal shear case on the outside. These are used to provide instability drive in the central reversed shear region of the plasma.
Alfvén resonance conditions
The resonance conditions for wave-particle interactions in stellarators depend on the frequency, mode number, and rotational transform profiles of the device through the relation [24]:

ω = (m + jμ) ω θ − (n + jνN fp ) ω φ

where m, n are the mode numbers of the instability, μ, ν are the equilibrium field mode numbers, N fp is the number of field periods (= 10 for LHD), and j = 0, ±1 is a coupling parameter. ω θ and ω φ are the poloidal and toroidal drift frequencies, which may be calculated by following orbit trajectories in the 3D equilibrium fields.
This has been evaluated for the normal shear LHD equilibrium and parameters of the observed [22] Alfvén instabilities, leading to the results shown in figure 4. The resonance lines cross over most of the regions of phase space encountered by passing particles, confirming that tangentially injected beams should excite such instabilities.
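The resonance relation can be scanned numerically; the sketch below uses the reconstructed form of the relation above with invented drift frequencies, so the numbers are purely illustrative and the sign conventions may differ by source:

```python
import numpy as np

def resonance_frequency(m, n, omega_theta, omega_phi, mu=0, nu=0, j=0, nfp=10):
    """Resonance frequency from omega = (m + j*mu)*omega_theta - (n + j*nu*nfp)*omega_phi,
    with coupling parameter j = 0, +/-1 and N_fp field periods."""
    return (m + j * mu) * omega_theta - (n + j * nu * nfp) * omega_phi

# Hypothetical passing-particle drift frequencies (rad/s) for an n = 1, m = 2 mode in LHD
omega_theta, omega_phi = 2 * np.pi * 40e3, 2 * np.pi * 25e3
for j in (-1, 0, 1):
    w = resonance_frequency(2, 1, omega_theta, omega_phi, mu=2, nu=1, j=j)
    print(f"j = {j:+d}: f_res = {w / (2 * np.pi) / 1e3:.1f} kHz")
```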
Normal shear discharge
The GTC gyrokinetic model starts with an initial field perturbation and integrates particle trajectories and electromagnetic fields (φ and A ∥ ) in time to follow the growth of EP driven Alfvén instabilities. The characteristic behaviour of an unstable Alfvén frequency mode is shown in figure 5, where perturbations oscillating in the Alfvén frequency range grow exponentially, with all modes growing at close to the same rate. This example is for the normal shear case with n = 1, T fast = 120 keV, n fast (0)/n e (0) = 0.0185. The growth rate can be inferred from the slope of the curves in figure 5(b) and the frequency from a Fourier transform of the data in figure 5(a).
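A sketch of this diagnostic procedure: fit the slope of the log envelope for the growth rate and take the FFT peak for the frequency, demonstrated here on a synthetic growing 80 kHz signal (not GTC output):

```python
import numpy as np
from scipy.signal import hilbert

def growth_and_frequency(t, signal):
    """Estimate the linear growth rate from the slope of ln(envelope) and the
    mode frequency from the FFT peak of the growth-detrended signal."""
    envelope = np.abs(hilbert(signal))
    gamma = np.polyfit(t, np.log(envelope), 1)[0]        # growth rate, s^-1
    detrended = signal * np.exp(-gamma * t)              # remove exponential growth
    spec = np.fft.rfft(detrended * np.hanning(len(t)))
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    return gamma, freqs[np.argmax(np.abs(spec))]

# Synthetic test signal: an 80 kHz oscillation growing at gamma = 2.0e4 s^-1
t = np.linspace(0.0, 2e-4, 4000)
sig = np.exp(2.0e4 * t) * np.cos(2 * np.pi * 80e3 * t)
gamma, f = growth_and_frequency(t, sig)
print(f"gamma = {gamma:.3g} s^-1, f = {f / 1e3:.1f} kHz")
```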
In some cases, especially for stellarators, several eigenmodes can be present, each with unique frequencies and growth rates. This leads initially to a modulational waveform; however, if followed long enough, one mode dominates. For simplicity, this paper will restrict its analysis to the time intervals where a single mode dominates.
Another useful diagnostic for displaying the instability growth and radial extent is shown in figure 6; here the RMS averaged evolution of the potential is shown as a function of radius and time for the normal shear case with n fast (0)/n e (0) = 0.0185. Figure 6(a) is for n = 1, T fast = 120 keV while figure 6(b) is for n = 2, T fast = 60 keV. As can be seen, the n = 2 has a narrower radial extent than the n = 1 case.
As the fast ion density is increased, the Alfvén instability drive increases, leading to larger growth rates. An example of the variation of the growth rate and frequency with fast ion density for an n = 1 TAE instability with T fast = 100 keV is given in figure 7. Due to the increased simulation time needed to resolve smaller growth rates, it has not been feasible to determine the marginal stability threshold (growth rate = 0) with this model.
In figure 8 the effects of varying the fast ion temperature are given for n = 1 and n = 2 at n fast (0)/n e (0) = 0.022. In the LHD experiment, beams are injected at 180 keV. The energy moment of a slowing-down distribution with 180 keV birth energy is equal to that of the Maxwellian distribution used here at about 100 keV. While fast ion destabilized Alfvén modes involve wave-particle resonances, the gyrokinetic results show very broad peaks in the variation with beam energy. This is due to the fact that sideband couplings induce secondary resonances at other velocities, which can encompass a wider range of energies. 3D stellarator equilibria offer significantly more mode coupling combinations (i.e. sideband couplings) than tokamaks. Also, since the gyrokinetic model includes all of the various trapped particle populations that are present in 3D systems, there are many other resonant frequencies beyond the usual passing particle transit resonance that can be involved. The n = 2 results show more structure than the n = 1, with several maxima present, possibly due to interactions with different particle resonances. Calculations were also carried out for n = 3 and 4, but these mode families did not yield unstable modes. The frequency ranges shown in figure 8(b) can be related to shear Alfvén continua obtained from the STELLGAP code [25] with acoustic coupling effects [26] included. Continuum plots for n = 1 and n = 2 are shown in figure 12. Here the slow-sound approximation [27] has been used to simplify the plots. The dashed black lines indicate the frequency ranges of the data in figure 8(b) and indicate that the unstable modes reside in the upper part of the m = 1,2 gap for n = 1 and the upper part of the m = 2,3 gap for n = 2. The frequencies obtained from these gyrokinetic calculations (f ~ 75-82 kHz for n = 1) are somewhat higher than seen experimentally (f ~ 60-70 kHz for n = 1). This is likely due to the use of a flat ion density profile in the gyrokinetic model calculations.
Recently reported [7] reconstructions of the experimental profiles have indicated a hollow ion density profile with higher ion densities near the edge (leading to a lower Alfvén velocity) than assumed here.
Non-monotonic shear discharge
Non-monotonic (reversed) shear rotational transform profiles have been of significant interest in tokamaks. Such profiles lead to new branches of Alfvén modes (the RSAE or Reversed Shear Alfvén Eigenmode) that are typically dominated by a single poloidal mode number. The frequencies of the RSAE modes are more sensitive to the rotational transform profile and show more dynamic behaviour (upward/downward frequency sweeps) in experiments than the TAE modes [28]. RSAEs have also been associated with higher levels of fast ion transport. Non-monotonic shear profiles were formed in LHD [23] using strong neutral beam current drive at low plasma densities. In this case the non-monotonic region refers to a region with negative shear in rotational transform, since the typical stellarator transform profile increases toward the plasma edge (positive shear). The tokamak non-monotonic shear profile has the opposite direction of shear; i.e. a positive shear region superimposed on a dominantly negative shear iota profile. When LHD was operated in this mode, n = 1 and n = 0 GAM (geodesic acoustic mode) activity was observed. The n = 1 signal was characterized by frequency sweeping both upward and downward in frequency, followed by more steady-state frequency lines covering a range from 50 kHz up to 150 kHz. These modes have been analysed using the STELLGAP and AE3D models [29], resulting in stable eigenfrequencies in the ranges seen experimentally. The GTC model has been applied to an LHD reversed shear profile case similar to those realized experimentally. In order to compare with the earlier normal shear results, the plasma profiles and parameters have been kept the same, except for the fast ion density profile. Both a peaked profile case (shown in figure 3) and an equivalent flat profile case have been used. The peaked profile was tested to determine if placing instability drive in the reversed shear region would produce an RSAE mode; the flat profile was used to allow direct comparison with the normal shear result. For the equivalent flattened profile, the n = 1 growth rate is reduced from that of the normal shear case by about 28% (from 24.9 to 17.9 × 10 3 s −1 ) and the frequency is increased from 79.7 kHz to 115 kHz. The difference in the mode amplitude growth between the normal and reversed shear cases is shown in figure 13(a). In the case of the peaked fast ion profile, the dominant mode remains radially located outside the reversed shear region, as shown in the rms amplitude growth versus time and radius in figure 13(b).
A clearly defined RSAE localized around the minimum in the iota profile did not emerge in the simulation. However, there is a secondary component present around r/a ~ 0.5, which can be seen in figures 13(b) and 14(a) and (b) and may be related to the reversed shear region. In the reversed shear case the primary mode is dominated by m = 2, 3, 4 components for both the peaked and flattened profile cases.
The Alfvén continuum gap for this case is displayed in figure 15 with the frequency and approximate radial extent of the mode indicated by the dashed black line. The mode is predominantly related to the m = 2, 3 gap near r/a = 0.7. There should also be reversed shear Alfvén modes present near r/a ~ 0.4 above the m = 3 and under the m = 4 continuum lines. Finding appropriate fast ion profiles and conditions to excite these modes will be the topic of future research.
Summary
The gyrokinetic GTC model has been adapted to general 3D configurations that can include stellarators, tokamaks with 3D effects, and reversed field pinch helical states. To demonstrate this capability, it is applied here to the LHD stellarator, looking specifically at a low-field, low density regime where Alfvénic activity was observed [22]. Unstable modes that reside near the upper frequencies of the Alfvén gaps are found for n = 1 and 2, but not for n = 3 or 4. Phase space resonance analysis also indicates that tangentially injected beam ions should readily couple to Alfvén modes in the observed frequency ranges. The n = 1 and 2 mode structures have a global characteristic and may be expected to impact fast ion confinement and heating efficiency. The evolution of these modes in some cases shows modulational effects related to multiple competing Alfvén instabilities at separate frequencies. A second LHD application described here is to regimes with non-monotonic shear rotational transform profiles. When compared for similar parameters and plasma profiles, the non-monotonic profile results in about a 28% reduction in the n = 1 growth rate. The dominant mode remains a TAE, but a subdominant coupling to the reversed shear region is apparent in the eigenfunctions. The GTC model is a comprehensive first principles electromagnetic gyrokinetic PIC method, and can include most of the relevant growth and damping effects expected to be of importance for these instabilities. This model can also provide a calibration reference for reduced models of these instabilities. The calculations presented here demonstrate its increasing usefulness for the analysis of Alfvénic instabilities in 3D systems. Future work with this model will add more realism and investigate the nonlinear consequences of these instabilities. For example, experimentally measured profiles for the thermal ion/electron temperature and density will be included; slowing-down beam ions distributions can be used instead of Maxwellian; fast ion density profiles derived from beam deposition models can be factored in; the next order (non-adiabatic) electron kinetic terms will be included; larger mode filter sets can be used (must be accompanied with a higher poloidal grid resolution); and particle reinsertion methods will be developed for 3D systems. | 6,175.4 | 2017-06-23T00:00:00.000 | [
"Physics"
] |
dbMPIKT: a database of kinetic and thermodynamic mutant protein interactions
Background Protein-protein interactions (PPIs) play important roles in biological functions. Studies of the effects of mutants on protein interactions can provide further understanding of PPIs. Currently, many databases collect experimental mutants to assess protein interactions, but most of these databases are old and have not been updated for several years. Results To address this issue, we manually curated a kinetic and thermodynamic database of mutant protein interactions (dbMPIKT) that is freely accessible at our website. This database contains 5291 mutants in protein interactions collected from previous databases and the literature published within the last three years. Furthermore, some data analysis, such as mutation number, mutation type, protein pair source and network map construction, can be performed online. Conclusion Our work can promote the study of PPIs, and novel information can be mined from the new database. Our database is available at http://DeepLearner.ahu.edu.cn/web/dbMPIKT/ for use by all, including both academics and non-academics. Electronic supplementary material The online version of this article (10.1186/s12859-018-2493-7) contains supplementary material, which is available to authorized users.
Background
Protein-protein interactions (PPIs) play crucial roles in organisms, particularly by mediating the majority of biological functions [1]. Mutations in PPIs are associated with some human diseases, for instance, cancer and Alzheimer's disease [2]. In some studies, the mechanism of PPIs has been investigated and used for therapeutic intervention and drug design [3,4]. PPI interfaces contain many amino acid residues, but only a few of these amino acids contribute greatly to binding free energy; these are defined as hot spots [5]. Hot spots can be determined by the calculation of mutant data on protein interactions. The knowledge of hot spots is extremely important in designing PPI inhibitors [6]. Many researchers have developed different methods to obtain mutant information on protein-protein interactions and have built public databases for users to investigate hot spots [7].
Traditionally, hot spots can be determined using biological experiments, such as alanine scanning mutagenesis and alanine shaving [8]. In general, residues whose alanine mutations produce changes in binding free energy (ΔΔG) of at least 2.0 kcal/mol are defined as hot spots (HS), whereas others are defined as non-hot spots (NS) [9]. Several studies have attempted to build mutation databases associated with hot spots. The first database of alanine mutations in protein interactions, named ASEdb, was built by Thorn and Bogan [10], and experimentally determined binding affinity data were collected. Then, BID was developed by Fischer et al.; this database extracted hot spots in protein interfaces from the scientific literature [11]. Kumar and Gromiha built the PINT database, which mainly stored thermodynamic data on PPIs, such as binding free energy change, dissociation constant, and heat capacity change [12]. SKEMPI is a manually curated database containing 3046 binding free energy changes upon mutation from the literature [13].
However, experimental methods for hot spot identification are time-consuming and labor-intensive. In addition, it is also difficult to measure all potential binding hot spots in a large number of proteins [14,15]. Therefore, many researchers have developed computational tools to identify hot spots. Machine learning methods, such as SVM, random projection, and random forest, have been most widely used in hot spot identification [16][17][18][19][20][21]. These groups used existing databases to build training models and further applied them to predict potential hot spots among uncharacterized amino acid residues [22]. In addition, these hot spot residues can be used to assess changes in protein-protein affinity when missense mutations occur. Some researchers have combined sequence- and structure-based methods to judge the effect of point mutations on protein-protein affinity using the change in free energy [23]. Furthermore, some studies have attempted to study the effects of single or multiple missense mutations on protein-protein affinity. Li et al. improved predictive performance by changing energy functions or adjusting parameters [24]. However, in recent years, these databases were not maintained and updated in a timely manner. To address this issue, we built a state-of-the-art database by mining mutants of protein interactions from related databases and literature.
This work presents a kinetic and thermodynamic database of mutant protein interactions called dbMPIKT. The database consists of data from previous databases about mutant protein interactions, including BID, SKEMPI and AB-Bind, and data extracted from scientific literature published in recent years. The dbMPIKT contains 5291 nonredundant mutants of experimental kinetic and thermodynamics data upon mutation. Our database will facilitate research on hot spot prediction, drug discovery, and other topics.
Data collection
This database consists of two types of data sources. One data source involves existing databases, i.e., SKEMPI, BID, and AB-Bind; the other data source is curated literature. Our curated literature database collected the mutation data of protein interactions from scientific literature within the past three years (the detailed literature can be found in Additional file 1: Figure S1). To build the curated database, first, a comprehensive literature search was performed to identify related literature in PubMed using two sets of keywords. One set contains the terms PPIs, ΔG, and thermodynamic data, and the other set contains the terms PPIs, amino acid mutations, and kinetic data. The kinetic and thermodynamic data of mutants were curated from the PubMed literature. Although some studies were missed, 425 credible studies were obtained. Figure 1 shows the detailed information of data collection.
Then, the structures of protein complexes were obtained through advanced searches of the PDB using several query criteria: macromolecule type (protein only), protein stoichiometry (heterodimer complexes), release date (from 1 January 2013 to 31 December 2016) and X-ray resolution (less than 3 Å). As a result, 1017 protein structures from 682 citations were obtained from the PDB; these were mapped to the PDBbind database to extract the corresponding thermodynamic data. In total, 99 complex structures from 85 citations containing dissociation constant (Kd) information were obtained. All of this literature was manually assessed, and all Kd values of the structures were recorded [25]. Details of the collected protein complexes and their sources are given in Additional file 2: Table S2.
After redundancy removal based on the above procedure, our database contains 5291 mutations drawn from the manually curated data and the three existing databases.
Database construction
The dbMPIKT database is available online and comprises several functional modules, including query, statistics and analysis. For example, a quick search box is located at the top right of the homepage: users can search for a target protein by PDB ID and obtain the relevant mutant information. Users can also find statistical information about the database and links to related websites on the dbMPIKT homepage.
The webserver includes the following pages: home, browse, document, upload, download and contact. Figure 2 presents the overall database structure. The browse page presents all data in the database, where users can see the details of mutants from the four sources; all data can be freely downloaded. To keep the database continuously updated, an upload link allows users to submit their own data through a user-friendly interface; submissions are assessed and then stored in the database, and newly uploaded data are also presented on the browse page. dbMPIKT was constructed using MySQL and PHP, and more information can be obtained by browsing the six webpages.
Analysis of protein-protein interaction pairs and interaction network construction
In addition to the mutation data, the related protein-protein interaction pairs were also recorded in our database. All protein-protein complexes were classified into different categories based on their atomic structures. In addition, to illustrate whether each pair of PPIs is linked, a network analysis tool (Cytoscape version 3.5.1) [26] was embedded into dbMPIKT to construct interaction networks.
From the network map, features of the PPI network, such as regularities among PPIs, can be obtained by analyzing the associations between PPIs and the network structure.
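As a purely illustrative sketch of this kind of network analysis (the paper embeds Cytoscape for the actual tool; networkx is used here only as a self-contained stand-in, and the edge list is invented):

```python
# Illustrative PPI network construction and hub detection. The paper embeds
# Cytoscape 3.5.1 for this purpose; networkx is used here only as a
# self-contained stand-in, and the edge list below is invented.
import networkx as nx

edges = [
    ("BPTI", "alpha-chymotrypsin"),
    ("BPTI", "trypsin"),
    ("BPTI", "kallikrein"),
    ("barnase", "barstar"),
]

g = nx.Graph()
g.add_edges_from(edges)

# Node degree identifies hub proteins around which small sub-networks form,
# analogous to the BPTI-centered cluster discussed later in the text.
for protein, degree in sorted(g.degree, key=lambda kv: -kv[1]):
    print(f"{protein}: {degree} interaction(s)")
```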
Important features in the database
Although the data entries in dbMPIKT were obtained from different sources, the database presents them with a common set of attributes. The first attribute is the PDB ID, which denotes the ID of the protein-protein complex in the PDB; this ID links to the corresponding PDB page, so users can obtain more information on the complex. The second attribute is the mutation information, which consists of the original residue, the chain identifier, the position of the mutated residue in the sequence and the name of the mutant residue. The third attribute comprises the names of the two interacting proteins, namely protein 1 and protein 2. Additional attributes include the kinetic and thermodynamic data. The kinetic data comprise the dissociation constant (Kd), the association rate (Kon) and the dissociation rate (Koff); most values are given in units of nM, M⁻¹ s⁻¹ and s⁻¹, respectively, and other units are converted into these. The thermodynamic data comprise the change in binding free energy (ΔG) and the difference in binding free energy changes between the mutant and wild-type complex (ΔΔG), both reported in kcal/mol. The PubMed ID attribute records the source of the kinetic and thermodynamic data; users can click each PubMed ID in the table for details and download the literature from NCBI. The Method attribute records the experimental technique used to measure the affinity of the PPIs, mainly SPR (surface plasmon resonance) or ITC (isothermal titration calorimetry) [27]. Temperature is also included as an attribute. The other three databases contain data attributes similar to our curated set, and users can refer to the corresponding literature.
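These quantities are related by standard equilibrium relations (Kd = Koff/Kon; binding free energy ΔG = RT ln Kd for Kd in molar units), so some attributes can be cross-checked against one another. A minimal sketch of these conversions follows, with invented example values; it illustrates the standard relations, not dbMPIKT's internal processing:

```python
# Standard kinetic/thermodynamic relations for binding data.
# Example values are invented, not taken from dbMPIKT.
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def kd_from_rates(kon_per_M_s: float, koff_per_s: float) -> float:
    """Dissociation constant Kd (M) from kinetic rates: Kd = koff / kon."""
    return koff_per_s / kon_per_M_s

def dg_from_kd(kd_molar: float, temp_k: float = 298.15) -> float:
    """Binding free energy (kcal/mol): dG = RT ln Kd (negative when Kd < 1 M)."""
    return R * temp_k * math.log(kd_molar)

# Invented example: wild-type vs. mutant complex
kd_wt = kd_from_rates(kon_per_M_s=1.0e6, koff_per_s=1.0e-3)  # 1 nM
kd_mt = kd_from_rates(kon_per_M_s=1.0e6, koff_per_s=3.0e-2)  # 30 nM

ddg = dg_from_kd(kd_mt) - dg_from_kd(kd_wt)  # ddG = dG_mut - dG_wt
print(f"Kd(WT) = {kd_wt*1e9:.0f} nM, Kd(MT) = {kd_mt*1e9:.0f} nM, "
      f"ddG = {ddg:+.2f} kcal/mol")  # ~+2.0 kcal/mol: crosses the HS cutoff
```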
Database statistics
The dbMPIKT database collects 5291 mutants with kinetic or thermodynamic data. The data come from four sources: SKEMPI, AB-Bind, BID and the curated literature, contributing 3046, 815, 256 and 1174 mutants, respectively. The mutants derive from 245 protein-protein complexes, 233 of which have PDB structures; only 12 complexes lack PDB IDs. Statistical summaries are available on our website, including a comparison of the four sources by mutation type. The mutations in each source fall into three types: single, double and multiple mutants. The data distribution across sources is presented in Fig. 3 (more details can be found in Additional file 3: Table S3). The SKEMPI database contributes the greatest number of single mutants, followed by the curated set. Overall, single mutations account for 75.88% of all mutations, double mutations for 13.28% and multiple mutations for 10.84%; the database thus covers almost all experimental mutants to date. For single mutations, we counted the number of mutations by amino acid type; Table 1 presents the distribution over the 20 amino acids (more details can be found in Additional file 4: Table S4). Alanine mutations account for 56% of the single-mutant data, while threonine has the lowest mutation rate; alanine mutations are even more prevalent in the curated set, where they account for 66.7% of all mutations. In terms of amino acid properties, the 20 amino acids are divided into five categories: polar (S, T, N and Q), hydrophobic (A, I, L, M, V, W, Y and F), positive (R, K and H), negative (D and E) and other (G, P and C) [28].
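A minimal sketch of this five-category grouping (the categories follow the text above; the tallying logic and example residues are ours, not from dbMPIKT):

```python
# Five-category amino-acid grouping from the text (one-letter codes) [28].
# The tallying logic and example residues are illustrative, not from dbMPIKT.
from collections import Counter

CATEGORIES = {
    "polar":       set("STNQ"),
    "hydrophobic": set("AILMVWYF"),
    "positive":    set("RKH"),
    "negative":    set("DE"),
    "other":       set("GPC"),
}

def categorize(residue: str) -> str:
    """Map a one-letter residue code to its property category."""
    for name, members in CATEGORIES.items():
        if residue.upper() in members:
            return name
    raise ValueError(f"unknown residue code: {residue!r}")

# Invented example: wild-type residues of a set of single mutants
mutated_residues = ["A", "A", "Y", "R", "D", "G", "S"]
print(Counter(categorize(r) for r in mutated_residues))
```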
Analysis of protein-protein pairs in dbMPIKT
In our database, 5291 mutants were obtained from 245 protein-protein complexes, including heterodimer, antigen-antibody and enzyme-inhibitor complexes [29].
In addition, human, Mus musculus, and Bos taurus proteins are included in the database, with human proteins representing the largest group. A protein interaction network was constructed from the protein interaction pairs, which can be used to identify protein functions for specific interactions [30]. Figure 4 illustrates part of the protein interaction network; the entire network is presented in Additional file 5: Table S5 of the supplementary material. In Fig. 4, most of the protein interactions are independent, but interestingly a small portion of proteins interact with each other to form a connected network. Figure 4 shows a small network centered on basic pancreatic trypsin inhibitor (BPTI) and bovine alpha-chymotrypsin, both Bos taurus proteins. BPTI plays an important role in biomedical science: it can be used to study the conformations and PPIs of globular proteins and to reduce hemorrhagic complications in clinical practice [31]. Furthermore, the protein interaction network is an important tool for analyzing the biological functions of proteins [32].
Data source analysis
dbMPIKT consists of four data sources, all of which include kinetic or thermodynamic data on mutant protein interactions; however, the sources differ in several respects. The SKEMPI database contributes the largest number of mutants, and the manually curated set is the second largest source. BID is the least represented source because the BID database is no longer operational and its data cannot be downloaded directly; some BID data were instead extracted from the supplementary materials of related studies [33]. In addition, our curated set contains the largest number of alanine mutations among the mutant types, making our database particularly useful for hot spot prediction. Moreover, in terms of protein types, previous databases almost exclusively targeted specific complexes; for example, AB-Bind is an antibody-binding mutational database built from antigen-antibody complexes. Our work integrates these databases so that researchers can easily obtain the data they require. Although SKEMPI has recently been updated as SKEMPI 2.0 [34], the description of mutation data in our database is more closely aligned with common research practice. To describe the mutation data in our curated set clearly, the features are classified into two simple categories: wild-type data (WT data) and mutated data (MT data), each of which contains thermodynamic or kinetic data.
Biological significance of database data
Protein-protein interactions have been extensively studied, and many researchers have proposed computational methods for PPI prediction; among these, disease-related PPIs deserve in-depth study [35]. Our database provides mutant data and PPI pairs as well as links to related websites that indirectly capture structure and sequence information for each protein complex. This information can be used as features for PPI prediction; for example, evolutionary features can be derived from protein sequences and fed into ensemble learning models to predict hot spots [16,19]. Protein pairs are another important part of our database: self-interacting proteins (SIPs) are a type of PPI, and SIP detection is a current hot topic in the field [36]. In general, our database can provide valid datasets and relevant feature information for PPI prediction.
Conclusions
This paper integrates three previous databases with data manually curated from the literature of the last three years. We built a web server to store kinetic and thermodynamic data on mutant protein interactions; more detailed information about mutants and protein-protein interactions can be found on the web server. In our database, kinetic and thermodynamic data of mutants, including Kd, ΔΔG, ΔG, Koff and Kon, are recorded. In addition, some quantities can be calculated from others. For example, ΔΔG, the parameter used to distinguish hot spots from non-hot spots, can be obtained indirectly as ΔΔG = ΔG(mutant) − ΔG(wild-type). The database provides a large hot spot data set that can help improve hot spot applications and predictions.
Webserver
Our free website is available at http://DeepLearner.ahu.edu.cn/web/dbMPIKT/. Users can perform advanced searches on the home page to obtain interesting data and browse all data on the browse page.
Additional files
Additional file 1: Table S1. Literature list for the collected data. (XLSX 60 kb)
Additional file 2: Table S2. The collection of protein complexes and their sources. (XLSX 38 kb)
Additional file 3: Table S3. Distribution of mutation types. (XLSX 10 kb)
Fig. 4 (caption): Part of the interactions on the network map. Each node represents a protein, and a connection between nodes represents an interaction.
"Biology"
] |
Direct Evidence that Myocardial Insulin Resistance following Myocardial Ischemia Contributes to Post-Ischemic Heart Failure
A close link between heart failure (HF) and systemic insulin resistance has been well documented, whereas myocardial insulin resistance and its association with HF are inadequately investigated. This study aims to determine the role of myocardial insulin resistance in ischemic HF and its underlying mechanisms. Male Sprague-Dawley rats subjected to myocardial infarction (MI) developed progressive left ventricular dilation with dysfunction and HF at 4 wk post-MI. Of note, myocardial insulin sensitivity was decreased as early as 1 wk after MI, which was accompanied by increased production of myocardial TNF-α. Overexpression of TNF-α in heart mimicked impaired insulin signaling and cardiac dysfunction leading to HF observed after MI. Treatment of rats with a specific TNF-α inhibitor improved myocardial insulin signaling post-MI. Insulin treatment given immediately following MI suppressed myocardial TNF-α production and improved cardiac insulin sensitivity and opposed cardiac dysfunction/remodeling. Moreover, tamoxifen-induced cardiomyocyte-specific insulin receptor knockout mice exhibited aggravated post-ischemic ventricular remodeling and dysfunction compared with controls. In conclusion, MI induces myocardial insulin resistance (without systemic insulin resistance) mediated partly by ischemia-induced myocardial TNF-α overproduction and promotes the development of HF. Our findings underscore the direct and essential role of myocardial insulin signaling in protection against post-ischemic HF.
Tumor necrosis factor-α (TNF-α) is a pro-inflammatory cytokine that promotes ischemic myocardial injury and cardiac dysfunction 9 . After myocardial infarction (MI), TNF-α is locally released from ischemic cardiomyocytes and remains markedly elevated in advanced HF 10 . TNF-α impairs insulin signaling and action in part by increasing serine phosphorylation of insulin receptor substrate-1 (IRS-1), which impairs insulin-stimulated tyrosine phosphorylation of IRS-1 and thereby reduces binding of phosphatidylinositol 3-kinase (PI3K) to IRS-1. Consequently, activation of PI3K and the downstream signaling molecules essential for regulation of glucose metabolism is impaired 11 . In peripheral insulin targets including skeletal muscle and liver (in vitro and in vivo), TNF-α promotes insulin resistance by similar mechanisms 12,13 . Thus, elevated local and/or circulating levels of TNF-α may contribute directly to myocardial insulin resistance.
Our previous studies have demonstrated that insulin exerts cardioprotective effects via PI3K/Akt-dependent survival signaling pathways that promote metabolic, anti-apoptotic and anti-inflammatory actions in various animals, including a canine model of myocardial ischemia/reperfusion [14][15][16] . Insulin-triggered survival signaling and actions in the heart are blunted in both systemic and myocardial insulin resistance 17 . Systemic insulin resistance (which likely includes myocardial insulin resistance) increases infarct size in ischemic hearts, leading to reduced functional recovery.
We therefore hypothesize that myocardial insulin resistance per se contributes to progression of post-ischemic HF. To test our hypothesis, we developed a rodent model of MI to study the effects of local cardiac TNF-α overexpression (adenoviral) and blockade (etanercept), insulin treatment, and cardiac insulin receptor signaling with respect to post-MI heart structure/function and subsequent HF.
Results
Cardiac dysfunction and remodeling after MI. We performed serial echocardiography in rats before MI and 1, 2, 4, and 8 wk post-MI. Progressive LV dilation and dysfunction were observed over time (Fig. 1A-C). When compared with sham-operated rats, a substantial reduction in EF (to 53 ± 3%) and an increased LV diameter (LVESD 0.45 ± 0.03 cm) were observed 1 wk post-MI. These abnormalities worsened over time, with maximal LV dysfunction (EF 40 ± 3%) and dilation (LVESD 0.62 ± 0.03 cm) reached and maintained 4 to 8 wk post-MI.
Myocardial insulin resistance occurred early in ischemic HF. Basal and insulin-stimulated FDG uptake in rat hearts was assessed 30 min, 1 d, 1 wk, and 2 wk after MI (Fig. 1D). At baseline, FDG uptake in hearts of rats assigned to MI or sham groups was comparable. Insulin injection (10 U/kg) produced robust increases in cardiac FDG uptake in sham rats 30 min after surgery (Fig. 1E, left bars). By contrast, insulin-stimulated myocardial FDG uptake was even higher at the 30 min time point in rats post-MI (Fig. 1E). However, beyond 30 min, a progressive decrease in insulin-stimulated myocardial FDG uptake, consistent with progressively increasing myocardial insulin resistance, was observed in hearts of rats 1 d, 1 wk, and 2 wk post-MI (Fig. 1D,E). (Fig. 1 legend: "Sham" denotes non-MI + saline at 30 min post-operation; significance markers are defined in the figure.)
Typically, myocardial insulin resistance parallels changes in systemic, whole-body insulin resistance 18 . In our MI model, we did not observe any significant differences in fasting plasma glucose (FPG), fasting plasma insulin (FIN), quantitative insulin sensitivity check index (QUICKI), oral glucose tolerance test (OGTT), or insulin tolerance test (ITT) results when sham-operated and post-MI rats were compared at 1 or 2 wk post-MI (Supplement Figure S1). Thus, MI caused local myocardial insulin resistance without systemic insulin resistance or glucose intolerance.
Characterization of myocardial insulin resistance after surgically-induced MI. Elevated basal Akt phosphorylation (without insulin stimulation) was induced by myocardial ischemia, as has been demonstrated both in our laboratory and by others 19,20 . Insulin stimulation of either sham or post-MI rats 1 d after operation caused robust increases in cardiac Akt phosphorylation; however, 1 wk post-MI, insulin-stimulated Akt phosphorylation was barely detectable (Fig. 2A). With respect to ERK1/2 phosphorylation, both basal and insulin-stimulated phosphorylation were slightly higher in post-MI rats at 1 d when compared with sham animals; by 1 wk post-MI, basal ERK1/2 phosphorylation was markedly elevated with no detectable insulin-stimulated increase (Fig. 2B). For p38 MAPK phosphorylation, there was no detectable insulin-stimulated increase in hearts of sham animals at baseline. Basal p38 MAPK phosphorylation was greatly increased in hearts 1 d post-MI, while insulin stimulation reduced phosphorylation; at 1 wk post-MI, basal phosphorylation levels were intermediate between those of sham and 1 d post-MI hearts, while insulin now stimulated a robust increase in p38 MAPK phosphorylation (Fig. 2C). We next evaluated cell-surface GLUT4 in heart as a downstream metabolic action (Fig. 2D). In sham animals, as expected, insulin stimulation increased GLUT4 translocation to the myocardial cell surface, which predicts insulin-stimulated glucose uptake (Fig. 2D). In post-MI animals, insulin also stimulated GLUT4 translocation; however, the magnitude of this effect was smaller 1 d post-MI and almost absent 1 wk post-MI. Thus, in hearts of rats subjected to MI, we observed local myocardial insulin resistance at the signaling level with respect to insulin-stimulated Akt phosphorylation, and at the functional level with respect to GLUT4 translocation (and presumably glucose uptake and metabolism).
(Fig. 2 legend: the insulin-stimulated increase in cardiac plasma-membrane GLUT4 was impaired 1 d after MI and abolished 1 wk after MI compared with sham; bar graphs quantify multiple independent experiments from 3 animals, mean ± SEM; "Sham" denotes non-MI + saline at 30 min post-operation.)
Cardiac function was further impaired in adTNF-α-treated animals when compared with adGFP-treated animals 1 wk post-MI (Fig. 3I,J). TNF-α levels were still significantly higher in adTNF-α-treated rats at 1 wk after MI when compared with adGFP-treated animals (Fig. 3K). Taken together, these results demonstrate that local cardiac overexpression of TNF-α mimics myocardial insulin resistance and exacerbates the poor heart function observed with surgically-induced MI.
TNF-α antagonism improved local myocardial insulin resistance caused by MI.
We next treated our animal model with etanercept to block endogenous TNF-α signaling and action. Our preliminary experiments showed that etanercept at this dose (300 μg/250 g body weight) had no significant effects on myocardial TNF-α production or cardiac function in non-MI (sham) rats. Pretreatment with etanercept during the first week after MI repressed local myocardial TNF-α production (circulating serum TNF-α levels were undetectable) (Fig. 4A). This substantially restored insulin-stimulated IRS-1 phosphorylation (Fig. 4D), Akt phosphorylation (Fig. 4E), and GLUT4 translocation (Fig. 4F), as well as myocardial FDG uptake (Fig. 4B,C), 1 wk post-MI. Moreover, etanercept treatment significantly suppressed p38 MAPK phosphorylation both without and with insulin stimulation 1 wk post-MI (Fig. 4G). Thus, opposing TNF-α action preserved normal insulin signaling and action in the myocardium post-MI. Etanercept treatment improved cardiac function and inhibited LV dilation at 1 wk after MI, although there were no significant differences in cardiac function or LV dilation between saline- and etanercept-treated animals at 4 wk post-MI (Fig. 4H-J). Our data therefore suggest that TNF-α plays an important role in the myocardial insulin resistance resulting from MI.
Early insulin treatment suppressed cardiac TNF-α production and improved myocardial insulin sensitivity and cardiac function post-MI. In a healthy context with intact PI3K signaling (e.g., normal insulin sensitivity with euglycemia), insulin treatment opposes pro-inflammatory signaling and actions while promoting anti-inflammatory signaling and actions in the vasculature and macrophages, ameliorating cardiovascular pathophysiology 22,23 . Thus, insulin treatment may be helpful post-MI (assuming intact PI3K signaling). Consistent with our previous studies 23 , early insulin treatment given immediately after MI suppressed TNF-α production when compared with saline treatment (Fig. 5B). Our preliminary experiments showed that insulin treatment had no significant effects on myocardial TNF-α production in non-MI (sham) rats, and there were no significant changes in blood pressure or heart rate after insulin injection in MI rats. Early insulin treatment also improved subsequent insulin-stimulated myocardial FDG uptake and Akt phosphorylation 1 wk post-MI (Fig. 5A,C,D), and alleviated cardiac dysfunction and dilation 4 wk post-MI, as evidenced by increased EF (Fig. 5E) and decreased LVESD (Fig. 5F). However, late insulin treatment initiated 1 wk after MI, when myocardial insulin resistance had already developed, exerted no beneficial effects on cardiac function and remodeling at 4 wk after MI.
Aggravated post-ischemic LV dysfunction and remodeling in tamoxifen-induced cardiomyocyte-specific insulin receptor knockout (TCIRKO) mice. To substantiate the role of myocardial insulin resistance in the progression of HF, we determined the specific effects of cardiac insulin signaling independent of systemic insulin resistance. Tamoxifen injections (50 mg/kg/d) for 3 d in MHC-MerCreMer/IR fl/fl mice induced Cre-mediated recombination to produce TCIRKO mice. Immunoblotting confirmed reduction of insulin receptor protein by ~78% in cardiac muscle (P < 0.01) but not in skeletal muscle or liver (Fig. 6A). Insulin-stimulated myocardial Akt phosphorylation was almost entirely blocked in TCIRKO mice (Fig. 6B). No significant differences in heart weight/body weight were observed when control and TCIRKO mice were compared (Fig. 6C). Overnight-fasted (16 h) mice were injected i.p. with glucose (2 g/kg) for glucose tolerance tests (GTT) or insulin (0.75 U/kg) for insulin tolerance tests (ITT); no significant differences in GTT or ITT were observed between control and TCIRKO mice (Supplement Figure S2). Thus, systemic insulin sensitivity and glucose tolerance were not altered in TCIRKO mice. No significant differences in cardiac function or LV dimensions were observed when TCIRKO sham mice were compared with control mice (MHC-MerCreMer/IR +/+ ). By contrast with littermate controls, TCIRKO mice had exaggerated LV dysfunction and dilation 4 wk post-MI (EF 31.5 ± 2.4 vs. 45.3 ± 2.7%).
(Fig. 5 legend: insulin-stimulated increases in myocardial FDG uptake (A,C) and Akt phosphorylation (D) were restored in early insulin-treated rats 1 wk after MI; (G) representative echocardiography images; increased EF (E) and reduced LVESD (F) were observed in early insulin-treated rats 4 wk after MI compared with saline-treated or late insulin-treated animals; values are mean ± SEM, n = 8 per group.)
Discussion
We have made three major findings in the present study. First, we observed that myocardial insulin resistance occurred as early as 1 wk following surgically-induced MI in rats, while systemic insulin sensitivity and glucose tolerance remained normal. Second, this myocardial insulin resistance was mediated partly by increased TNF-α production from the ischemic heart. Third, insulin treatment itself opposed the myocardial insulin resistance caused by MI, which ameliorated post-ischemic cardiac dysfunction and subsequent HF. Moreover, cardiac-specific insulin receptor knockout in adult mice exacerbated cardiac dysfunction and the development of HF post-MI; indeed, the effects of cardiac-specific insulin receptor deletion occurred without changes in systemic insulin sensitivity. This establishes an essential role of myocardial insulin signaling in protection against ischemic HF.
A close link between HF and systemic insulin resistance in humans has been recognized for decades; most large clinical trials in HF report diabetes incidence rates of 15-35% 24 . Systemic insulin resistance often precedes the development of HF, suggesting that an altered metabolic environment produces myocardial dysfunction and HF. In 1997, Botker et al. were the first to report myocardial insulin resistance in patients with metabolic syndrome 25 . Both systemic and cardiac insulin resistance are observed in non-diabetic patients and animals with moderate to advanced HF in most contexts 26,27 , and cardiac insulin resistance may occur as a result of systemic insulin resistance 18,28 . Amorim et al. found that myocardial insulin resistance occurred 2 wk after MI 29 , when the rats had developed heart failure (ejection fraction < 50%). In the present study, we found that myocardial insulin resistance occurred as early as 1 wk after MI (reduced insulin-stimulated GLUT4 translocation and FDG uptake), before the occurrence of heart failure (HF) and without any detectable change in systemic insulin sensitivity, when ejection fraction was still higher than 50%. Thus, myocardial insulin resistance alone may be an early event in the development of ischemic HF, independent of systemic insulin resistance and glucose intolerance, in certain contexts.
One key characteristic of the molecular signaling mechanisms underlying metabolic insulin resistance in the clinical setting is selective impairment of PI3K/Akt signaling pathways while other major branches of insulin signaling, including the Ras/MAPK (ERK1/2, p38 MAPK and JNK) pathways, remain intact or are even enhanced [30][31][32][33] . In the presence of myocardial insulin resistance 1 wk after MI, we observed blunted insulin stimulation of Akt and ERK1/2 phosphorylation with concomitant enhanced p38 MAPK phosphorylation, characteristic of pathway-selective insulin resistance. Local TNF-α overexpression caused cardiac insulin resistance while exacerbating the functional and structural sequelae of MI. As TNF-α can induce its own expression in some pathological conditions 34,35 , etanercept treatment may block TNF-α-induced self-expression after MI and thereby repress local myocardial TNF-α production. Moreover, etanercept treatment partially restored myocardial insulin sensitivity and improved cardiac function at 1 wk after MI but did not improve cardiac function and remodeling at 4 wk after MI. A possible reason is that the beneficial cardiac effects of etanercept early after MI are offset by adverse cardiovascular effects, such as potentiating platelet-monocyte aggregation and causing TNF-α imbalance, in the long-term development of ischemic heart failure 36,37 . It should be noted that the mechanisms of myocardial insulin resistance post-MI are multi-factorial and complex; TNF-α overproduction is only one important mechanism, and further study is needed to reveal the full picture.
Our study showed that insulin treatment activated Akt and ERK1/2 while reducing phosphorylation of p38 MAPK 1 d after MI. Thus, insulin therapy at the appropriate time can reverse the pathophysiology underlying pathway-selective insulin resistance in the heart that contributes to ischemic damage, myocardial dysfunction and HF. It seems likely that early insulin therapy improves insulin signaling and action in part by rebalancing insulin signaling between the PI3K/Akt and MAPK branches to mediate anti-inflammatory actions opposing detrimental TNF-α actions in the heart. Insulin treatment also exerts anti-apoptotic and anti-oxidative/nitrative stress effects that complement its anti-inflammatory actions 16 . This is supported by the recent IMMEDIATE trial, in which immediate glucose-insulin-potassium (GIK) administration was associated with lower rates of the composite outcome of cardiac arrest or in-hospital mortality in patients with ST-segment elevation 38 . Our study further showed that late insulin administration (1 wk after MI) failed to exert beneficial effects, which may be due to blunted insulin-stimulated Akt phosphorylation.
To further confirm the role of myocardial insulin resistance in the development of ischemic HF, we used TCIRKO mice to achieve temporal control of insulin receptor disruption in the heart. This is an important advantage over the previous CIRKO model, in which the absence of cardiac insulin receptors during development resulted in cardiac pathology, as shown by decreased myocyte size and modestly reduced cardiac function even at baseline 39 . In this study, no significant differences in heart weight/body weight ratio or cardiac function were observed when control and TCIRKO mice were compared at baseline. Importantly, loss of cardiac insulin receptors led to aggravated LV dysfunction and dilation after MI (without altering systemic insulin sensitivity), suggesting that myocardial insulin resistance contributes to the progression of ischemic HF. Taken together, our data show that myocardial insulin resistance is not only a result of MI but also a contributing factor to cardiac dysfunction after MI.
In the clinical setting, antidiabetic agents aim to improve systemic insulin sensitivity, yet some exert no beneficial effects in HF patients 40 . Our findings reveal that myocardial insulin resistance occurs early in the development of HF, is independent of systemic insulin resistance, and contributes to the development of HF. Therefore, interventions specifically targeting myocardial insulin resistance may represent a potential therapeutic strategy for HF.
In summary, myocardial insulin resistance, independent of systemic insulin resistance, is an early event in the development of ischemic HF following MI. Impaired myocardial insulin action is at least partly mediated by overproduction of TNF-α . Our data suggest that potential therapeutic strategies targeted at reversing post-ischemic myocardial insulin resistance specifically may prevent or delay the progression of HF. Our findings are potentially translatable to the prevention and treatment of ischemic heart disease and HF.
Materials and Methods
Myocardial infarction protocol. All animal experiments were performed in accordance with the National Institutes of Health Guidelines on the Use of Laboratory Animals and were approved by The Fourth Military Medical University Committee on Animal Research. Male Sprague-Dawley rats (200-230 g) were anesthetized with 3% pentobarbital sodium. As described 41 , myocardial infarction was initiated after exposing the heart through a left thoracotomy at the fourth rib by using 4-0 silk to ligate the left anterior descending coronary artery (LAD) permanently near its origin from the left coronary artery. In sham rats, the silk suture was passed underneath the left anterior descending artery without ligation. Then the sham rats received saline administration.
Echocardiography measurements. Serial Doppler echocardiography was performed with an ACUSON Sequoia 512 ultrasound machine (Siemens) before the operation and 1, 2, 4 and 8 wk after the operation. A 14-MHz probe was used to obtain two-dimensional and M-mode imaging from the parasternal short-axis view at the level of the papillary muscles and the apical four-chamber view. LV end-systolic and end-diastolic diameters (LVESD and LVEDD, respectively) were measured. LV fractional shortening (FS) was calculated as (LVEDD − LVESD)/LVEDD × 100%. Left ventricular end-diastolic volume (EDV) and end-systolic volume (ESV) were calculated according to the formula of Teichholz et al. 42 . Ejection fraction (EF) was calculated as (EDV − ESV)/EDV × 100%.
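For illustration, here is a minimal sketch of these calculations, assuming the commonly cited Teichholz volume formula V = 7D³/(2.4 + D) (D in cm, V in mL) for ref. 42; the example dimensions are invented, not study data:

```python
# Echocardiographic indices as defined above. The Teichholz volume formula
# V = 7 * D**3 / (2.4 + D) (D in cm, V in mL) is assumed for ref. 42;
# the example dimensions below are invented, not study data.

def teichholz_volume(d_cm: float) -> float:
    """LV volume (mL) from an M-mode internal dimension (cm)."""
    return 7.0 * d_cm ** 3 / (2.4 + d_cm)

def fractional_shortening(lvedd_cm: float, lvesd_cm: float) -> float:
    """FS (%) = (LVEDD - LVESD) / LVEDD x 100."""
    return (lvedd_cm - lvesd_cm) / lvedd_cm * 100.0

def ejection_fraction(lvedd_cm: float, lvesd_cm: float) -> float:
    """EF (%) = (EDV - ESV) / EDV x 100, with Teichholz volumes."""
    edv = teichholz_volume(lvedd_cm)
    esv = teichholz_volume(lvesd_cm)
    return (edv - esv) / edv * 100.0

lvedd, lvesd = 0.72, 0.45  # cm, invented example values
print(f"FS = {fractional_shortening(lvedd, lvesd):.1f}%")
print(f"EF = {ejection_fraction(lvedd, lvesd):.1f}%")
```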
Assessment of systemic insulin sensitivity.
Fasting plasma glucose (FPG) was analyzed using standard glucose oxidase methods in rats fasted for 12 h. Fasting plasma insulin (FIN) was measured using an enzyme-linked immunosorbent assay. The quantitative insulin sensitivity check index (QUICKI) was calculated to estimate systemic insulin sensitivity: QUICKI = 1/[log(FIN) + log(FPG)] 43 . Rats were also given an oral glucose challenge (2 g/kg) or an i.p. injection of insulin (0.5 U/kg) (Novo Nordisk, China) to assess whole-body glucose tolerance and insulin sensitivity. Whole-blood glucose levels were determined at 0, 30, 60, 90 and 120 min after the glucose challenge for the oral glucose tolerance test (OGTT), or at 0, 30, 60, 90, 120 and 240 min for the insulin tolerance test (ITT) (tail clipping was used to obtain blood samples).
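A minimal sketch of the QUICKI calculation as given above (base-10 logarithms, with FIN in μU/mL and FPG in mg/dL as conventionally used; the example values are invented):

```python
# QUICKI as given above: 1 / [log10(FIN) + log10(FPG)]. Conventional units
# (FIN in uU/mL, FPG in mg/dL) are assumed; the example values are invented.
import math

def quicki(fin_uU_per_mL: float, fpg_mg_per_dL: float) -> float:
    """Quantitative insulin sensitivity check index."""
    return 1.0 / (math.log10(fin_uU_per_mL) + math.log10(fpg_mg_per_dL))

print(f"QUICKI = {quicki(fin_uU_per_mL=12.0, fpg_mg_per_dL=95.0):.3f}")
```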
Assessment of insulin-stimulated myocardial FDG uptake. Insulin-stimulated myocardial [18F]-fluorodeoxyglucose (FDG) uptake was measured as reported 44 . Briefly, animals were intravenously injected with ~1 mCi of FDG (tail vein) and PET/CT images were subsequently acquired (microPET/CT scanner; Mediso, Budapest, Hungary). Insulin stimulation was initiated in overnight-fasted rats by i.p. injection of 10 U/kg insulin 30 min before image acquisition, and insulin-stimulated myocardial FDG uptake was assessed from the acquired images.
Whole tissue extract and plasma membrane fractionation. Insulin stimulation was induced in fasted rats by i.p. injection of 10 U/kg insulin. Thirty minutes after insulin or saline injection, animals were euthanized and non-infarcted LV myocardial tissue was harvested. Whole tissue extract was prepared by homogenizing myocardial tissue in ice-cold lysis buffer; the lysates were centrifuged and the supernatants retained. Heart plasma membrane (PM) fractionation was performed as described previously 45 . Ventricular tissue was homogenized in buffer A (in mmol/L, pH 7.0: 10 NaHCO3, 5 NaN3) and centrifuged at 7000 × g for 20 min. The pellet was resuspended in buffer B (10 mmol/L Tris-HCl, pH 7.4) and centrifuged at 200 × g for 20 min. The supernatant was gently layered on top of a 20% (vol/vol) Percoll gradient in buffer C (in mmol/L: 255 sucrose, 10 Tris-HCl (pH 7.4), 2 EDTA) and centrifuged at 55,000 × g for 1 h. The band with a density of 1.030 was aspirated, pelleted by centrifugation at 170,000 × g for 1 h and resuspended in buffer C as the PM solution. Protein concentrations of the whole tissue extract and PM solution were determined using a BCA protein assay. Glucose transporter 4 (GLUT4) content in the PM was determined by immunoblotting using standard methods.
Overexpression of TNF-α by adenovirus infection. Serotype 5 adenoviral vectors encoding TNF-α were provided by GeneChem, Shanghai, China. In brief, the cDNA for TNF-α was cloned into the pMD19-T simple vector and then transferred into pAdTrack-CMV, resulting in pAdTrack-TNF-α. The shuttle vectors were used to generate recombinant adenoviral vectors encoding TNF-α (Ad-TNF-α); adenoviral vectors encoding green fluorescent protein (Ad-GFP) were used as controls. Adenoviruses were purified by double cesium chloride gradient ultracentrifugation, and viral titer was determined by plaque assay and expressed as plaque-forming units (pfu).
Adenovirus-mediated gene transfer to cardiac myocytes was performed as previously described 47 . A 24-gauge catheter containing 0.1 ml of viral solution (7.5 × 10⁹ pfu) was advanced from the apex of the left ventricle to the aortic root. The aorta and pulmonary artery were then clamped distal to the site of the catheter and the solution was injected. The clamp was maintained for 45 s while the heart pumped isovolumically against the closed system, allowing the adenovirus-containing solution to circulate down the coronary arteries and perfuse the heart without direct manipulation of the coronaries. The clamp on the aorta and pulmonary artery was then released. Insulin-stimulated myocardial FDG uptake, Akt phosphorylation and GLUT4 membrane translocation were measured as described above in non-MI rats 1 wk after adenovirus infection.
Determination of myocardial and serum TNF-α levels.
Myocardial and serum TNF-α levels were measured as described previously 48 . Heart tissue samples for determination of myocardial TNF-α were obtained from the non-infarcted area of the LV. The protein content of the samples was measured using a protein assay kit (with bovine serum albumin as the standard). TNF-α was quantified using a rat ELISA kit (Neobioscience Technology Co., Ltd); the sensitivity range of this assay is 25-20,000 pg/mL.
Antagonism of TNF-α action with etanercept. Rats with surgically induced MI (or control rats) were treated with the TNF-α inhibitor etanercept (Immunex, Seattle, WA) by i.p. injection (300 μg/250 g body weight) 2 d prior to coronary artery ligation and every 2 d thereafter during the first week after MI. The dose and timing of etanercept administration were based on earlier studies 49,50 and validated in our preliminary experiments. Insulin-stimulated myocardial FDG uptake, Akt phosphorylation and GLUT4 membrane translocation were measured as described above for etanercept-treated rats at 1 wk after MI.
Systemic insulin treatment. Daily subcutaneous injection of insulin (0.5 U/ml, 1 ml/kg/d) was administered during the first week after MI to investigate the effect of insulin on myocardial insulin sensitivity and the development of ischemic HF. Insulin-stimulated myocardial FDG uptake was measured as described above in insulin-treated rats 1 wk after MI, and echocardiographic and hemodynamic analyses were performed in insulin-treated animals at 4 wk after MI.
Inducible cardiomyocyte-specific insulin receptor knockout mice. From the Model Animal Research Centre of Nanjing University, we obtained mice homozygous for a loxP-flanked insulin receptor exon and positive for tamoxifen-inducible Cre recombinase driven by the cardiomyocyte-specific α-myosin heavy chain (MHC) promoter (MHC-MerCreMer/IR fl/fl ). Male mice aged 8-10 wk were injected intraperitoneally with tamoxifen (50 mg/kg) for 3 d to induce insulin receptor gene excision in cardiomyocytes. Age-matched male littermates (MHC-MerCreMer/IR +/+ ), also treated with tamoxifen in an identical manner, were used as controls. The HW/BW ratio was then measured and echocardiography was performed before MI in knockout mice and controls.
Myocardial infarction protocol in mice.
Both knockout mice and controls were subjected to LAD ligation after the tamoxifen treatment. The mice were anesthetized with pentobarbital sodium (50 mg/kg i.p.) and connected to a rodent ventilator (a volume-cycled ventilator supplying supplemental oxygen at a tidal volume of 2.5 ml and a respiratory rate of 120 breaths/min) via a 20 G i.v. catheter. Via thoracotomy, the pericardial sac was opened and a 7-0 silk suture was passed beneath the root of the LAD and tied to induce ischemia of the left ventricle. Finally, the chest cavity and the skin were closed. All mice had free access to water and chow throughout the study. Echocardiography was repeated at 4 wk after MI in TCIRKO mice and controls.
Statistical analyses.
All values are presented as mean ± SEM. Differences among groups were evaluated with one-way ANOVA or two-way repeated-measures ANOVA followed by Bonferroni post hoc tests where appropriate. Probabilities of p < 0.05 were considered statistically significant. Statistical tests were performed using GraphPad Prism version 5.0 (GraphPad Software, Inc., San Diego, CA).
"Biology",
"Medicine"
] |