Population level differences in overwintering survivorship of blue crabs (Callinectes sapidus): A caution on extrapolating climate sensitivities along latitudinal gradients
Winter mortality can strongly affect the population dynamics of blue crabs (Callinectes sapidus) near poleward range limits. We simulated winter in the lab to test the effects of temperature, salinity, and estuary of origin on blue crab winter mortality over three years using a broad range of crab sizes from both Great South Bay and Chesapeake Bay. We fit accelerated failure time models to our data and to data from prior blue crab winter mortality experiments, illustrating that, in a widely distributed, commercially valuable marine decapod, temperature, salinity, size, estuary of origin, and winter duration were important predictors of winter mortality. Furthermore, extrapolating a Chesapeake Bay based survivorship model to crabs from New York estuaries yielded poor fits. As such, the severity and duration of winter can impact northern blue crab populations differently along latitudinal gradients. In the context of climate change, future warming could possibly benefit crab populations near the range edge that are currently limited by temperature-induced winter mortality, shifting their range edge poleward. However, care must be taken in generalizing from models developed for populations in one part of the range to populations near the edges, especially for species that occupy large geographical areas.
Introduction
For ectotherms, the importance of temperature as a master abiotic factor that affects organismal level processes, such as metabolic rate, survival, and growth, is widely accepted [1][2][3][4]. Temperature affects both the population dynamics and spatial distributions of species [5][6][7]. In temperate ecosystems especially, winter temperatures can explain temporal variation in the distribution, abundance, and biomass of entire assemblages [8][9][10]. Variation in winter temperature can strongly impact population dynamics by affecting overwinter mortality rates, causing episodic decreases in population size, regulating recruitment strength, or altering the size structure of populations [11]. In some regions, such as the Mediterranean [12] and the benthic but not surface waters of the northeast US shelf [13], winter temperatures are warming more quickly than other seasons. Furthermore, rapidly warming ecosystems, such as the North Atlantic, where sea-surface temperature is increasing by 0.1˚C/decade [14], have already experienced strong shifts towards warm-water species dominance, and it appears that these assemblage shifts can be predicted based on thermal affinities [15,16].

Many efforts have been made to project future species occurrences with habitat models that are largely based on temperature [17]. Implicit in these models is the assumption that thermal affinity and other temperature-related performance metrics, such as winter mortality, thermal breadth, and cold tolerance, are similar among sub-populations of the same species. However, this necessary assumption is not always explicitly tested or taken into account for marine organisms. It can indeed be problematic to apply thermal tolerances to an entire species across its entire range based on the pattern of response in one population [18]. According to the climatic variability hypothesis [19], variation in intraspecific and interspecific temperature tolerance is correlated with latitude, such that poleward populations have a broader thermal range because they experience more climatic variability and are therefore less vulnerable to climate change. However, there is controversy about how ectotherm thermal ranges vary with latitude. Generally, thermal breadth increases with latitude primarily through declines in cold tolerance limits [20,21]. In some crustaceans, thermal tolerance has been shown to increase with latitude [22,23]. However, for some other crustaceans, including crabs, thermal sensitivity varied inversely with latitude [24], and tropical species appeared to have wider thermal windows than temperate species [25], suggesting that tropical crustaceans, rather than temperate ones, may be more resilient to climate change.

Together, these contrasting findings on the relationship between latitude, temperature tolerance, and resilience to climate change indicate that the climate variability hypothesis might be an oversimplification for crustaceans and that the responses of marine organisms to warming are likely less coherent and predictable than some previous studies have implied [22,26]. Perhaps these discrepancies are related to evolutionary differences in thermal adaptation, but it is also possible that some of these findings, which are based on single performance measures, are overstated. Consequently, it may be necessary to consider multiple metrics of thermal sensitivity to understand the mechanisms of range shifts and accurately predict them [27].
In order to model climate-induced range shifts of species with latitudinal variation in thermal performance, it is important to investigate thermal dependence at poleward range edges to quantify underlying variation, which can alter predictions of population dynamics at range edges [3].
While average temperatures are warming, changes at the extremes may influence abundance and distribution of species more strongly than changes in average temperatures [28]. For populations near their poleward range edges, variation in winter temperature could be particularly important because organisms are closer to their biological tolerance limits. If winter temperature limits a species' poleward range edges, then climate change can facilitate range expansions for those species. In fact, climate-induced range shifts at poleward edges more closely track changes in climate than at the warm edges of a species' range [29], and for some species, winter temperature clearly explains climate induced range expansions [30]. Since winter temperature can strongly limit populations near poleward range edges, warming winters will influence winter survivorship. In order to forecast and understand range expansions for economically and ecologically important temperate species, it is important to understand the mechanistic causes of winter mortality.
Temperature and thermal stress have been well studied as potential causes of winter mortality, but the patterns of winter mortality are determined by interactions between many factors [11]. Other factors, such as salinity, size, and their interactions with temperature, are also important but have not been as extensively studied. Salinity is particularly important for estuarine species, acting as the primary environmental factor that defines many of their structural and functional characteristics [31]. The active ion pumping systems used to cope with changes in salinity are often temperature dependent, which can impede proper osmoregulation at low temperature. The hypothesis that osmoregulatory failure is related to cold death is well supported because blood ion concentrations become increasingly isotonic with the environment near lethal lower temperature limits in fish [11,[32][33][34], although the effects of salinity on temperature tolerance have not been as well studied in crustaceans. Blue crabs are less tolerant of temperature extremes at low salinity [35], and in two species of grapsid crabs, salinity dramatically changed the measured temperature tolerance [36]. The importance of size is supported by ample field and lab-based evidence of positive size-dependent winter mortality in various fish species. However, there is also evidence of no size dependence, and even negative size selection in some subtropical fish, which is further complicated by studies showing both latitudinal and interannual variation in the occurrence and direction of size-dependent mortality [37,38]. Therefore, the factors that determine winter mortality are multifaceted and likely vary by species and across temporal and spatial scales.
The Atlantic blue crab (Callinectes sapidus) supports a highly valuable fishery across its range. Blue crabs are widely distributed along the western Atlantic from as far south as Brazil to as far north as Maine [39,40]. They are primarily tropical in origin, and Cape Cod has been historically demarcated as their poleward range limit. Recent evidence suggests that they can tolerate temperatures further north in the Gulf of Maine because they have been occasionally spotted there during the warmer months but do not have established year-round populations [40]. It has been hypothesized that as global temperatures rise, blue crabs are likely to experience a poleward range expansion [40], and although the underlying causes of this potential range shift have yet to be thoroughly explored, it has been suggested that warming reduces overwintering mortality. Therefore, northern blue crab populations provide an opportunity to understand winter mortality mechanisms in the context of climate induced range shifts.
Blue crabs inhabit different habitat types and experience a wide range of environmental conditions at distinct stages of their development. After mating in the warmer months, females undertake a spawning migration where they travel large distances along deep channels towards spawning areas near the inlets of bays and estuaries [41][42][43]. Embryonic and larval blue crabs develop in high-salinity coastal waters until they are transported back into estuarine systems to settle into nursery habitats [44,45]. For temperate crabs, recruitment is strongly influenced by post-settlement processes, such as winter mortality, predation, and storms [46]. In northern estuaries, once temperatures fall below 10˚C in late fall or early winter, blue crabs enter a reduced metabolic state of torpor, burrowing into the sediment to overwinter [47,48]. In Chesapeake Bay (CB), where blue crab populations are well-monitored and studied, a distinct spatial segregation in winter habitat choice between the sexes and life stages has been documented [49]. Males and immature females are dominant in tributaries, but migrate to nearby channels to overwinter, while mature females are concentrated near high salinity inlets or on the shelf in the coastal ocean [50]. The differences in environmental conditions experienced in these different winter habitats are likely to pose unique levels of risk for crabs of different sexes and life stages.
Previous work on blue crabs in Chesapeake Bay has demonstrated that temperature, salinity, size and sex are important predictive variables for blue crab winter mortality but has shown somewhat conflicting results [51,52]. Rome et al. [51] observed significantly higher experimental mortality rates than mortality estimates based on winter dredge survey results, emphasizing the importance of acclimation procedures [52]. For blue crabs, both acclimation temperature and salinity affected measured temperature tolerances [35,36]. Bauer and Miller [52] acclimated crabs by more closely mimicking typical seasonal cooling, which produced results that were more congruent with field estimates of winter mortality but still differed enough that they recommended testing additional temperatures and salinities to improve the precision and reliability of their model. While the effect of temperature and salinity on blue crab mortality in the lab is well-documented for CB populations [51,52], less is known about overwintering in other northern estuaries. Since blue crabs exist over a broad latitudinal range, populations along this latitudinal gradient may experience different environmental conditions and may even be locally acclimated or adapted to those conditions. Therefore, it is yet unknown whether blue crabs from other estuaries will have functionally similar responses to temperature and salinity and if the quantitative relationships developed in previous studies can be applied to more northern populations [37,53].
The purpose of this study was to quantify the environmental dependence of blue crab winter mortality using a range edge population and to compare winter mortality of two temperate populations. We used methodologies similar to those of Bauer and Miller [52] to experimentally determine blue crab winter mortality rates over a broader range of experimental temperatures and salinities using crabs from a more northerly estuary, Great South Bay (GSB), a coastal lagoon spanning the south shore of Long Island, New York. We compared the mortality rates of GSB and CB crabs under identical conditions and cross-validated winter survival models based on these two populations. We expected that GSB crabs would have the same overwintering survivorship as Chesapeake Bay crabs, or better. To test the effects of temperature and salinity on blue crab winter survival, we ran three independent winter mortality experiments in subsequent years that mimicked fall temperature declines and winter conditions in the laboratory. For all three years of experiments, blue crabs from GSB ranging from 10 mm to 120 mm were obtained throughout the late fall during regularly scheduled biweekly trawl surveys (S1 Table). Supplemental collections also took place to obtain crabs in the smaller size ranges using beach seines near the mouth of Swan River in Patchogue, NY (40˚44'55.0"N, 72˚59'48.0"W), a small creek in the central, northern region of the bay.
Ethical statement
In all three years, collection and acclimation were the same. During the collection period, crabs were held together in large recirculating sea tables or tanks at room temperature and ambient salinity, which ranged from 26-30 psu at the Flax Pond Marine Laboratory. We provided structure for shelter and refuge from cannibalism and fed crabs pellet food ad libitum daily. Once specimen collection was complete, the acclimation period began. To mimic the seasonal temperature decline, we used Delta Star in-line chillers to slowly lower temperature at an expected rate of no more than one degree Celsius per day. Chillers were also used during the experiment itself in some years to maintain experimental temperatures. Salinity was adjusted by conducting water changes of the appropriate volume and concentration to reduce or increase the salinity of each tank by no more than one psu per day. Prior to the start of acclimation, biological data including carapace width, weight, sex, and leg counts were recorded for each individual as they were randomly assigned to an acclimation treatment tank. The first day of the experiment started once the experimental conditions were reached, which fell on a different date for each treatment. On the first day of the experiment, an individual was removed from its acclimation tank, biological data were recorded again, and then the individual was randomly assigned to an experimental tank. Crabs were not fed once temperatures fell below 10˚C because they do not grow below the Tmin threshold of 10.8˚C [48]. Experimental temperatures used are shown in Table 1.
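As a rough illustration of this ramping protocol, the following sketch (in R, the statistical software used later in the paper) computes an acclimation schedule under the stated limits of 1˚C and 1 psu per day; the ambient and target values are hypothetical placeholders, not the study's actual settings.

```r
# A minimal sketch of the acclimation ramp implied above (<= 1 degC/day and
# <= 1 psu/day). Ambient and target values are illustrative, not the study's.
ambient <- c(temp = 20, sal = 28)
target  <- c(temp = 4,  sal = 5)    # e.g. the coldest, freshest treatment

days <- max(abs(ambient - target))  # the slower of the two ramps sets the length
temp_ramp <- seq(ambient["temp"], target["temp"], length.out = days + 1)
sal_ramp  <- seq(ambient["sal"],  target["sal"],  length.out = days + 1)
# Both ramps change by <= 1 unit/day because `days` is the larger difference.
```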
In addition to the temperature and salinity treatments, we compared survivorship between blue crabs from GSB and Chesapeake Bay in 2017. We collected wild crabs from the York River (37˚16'03.3"N, 76˚33'12.9"W & 37˚14'43.4"N, 76˚30'14.2"W) using a seine net. To supplement the sample size of CB crabs and the size range of crabs from this estuary, we also obtained Chesapeake Bay blue crabs from the Institute of Marine and Environmental Technology (IMET) hatchery in November 2016. The crabs from the hatchery are spawned from a wild broodstock of inseminated females that are collected every fall [54]. Wild crabs from CB were kept in an aerated cooler overnight, and then individually wrapped in paper towel and/or burlap in coolers for the drive back to the laboratory, a practice we also used for the GSB collections. Hatchery crabs were acquired the following day and similarly transported. Once at the lab, all of the CB crabs were kept at the same ambient conditions as the wild Great South Bay crabs, and both were held at ambient levels for a week before the start of the acclimation period. The mortality rates during the transportation process were similar to the mortality rates we observed during our regular collections. We kept each type of crab (GSB wild, CB wild, CB hatchery) in separate recirculating tanks until halfway through the acclimation period, when individuals from each location were randomly assigned to one of three salinity treatments to complete the acclimation.
The experimental temperatures and salinities were chosen based on gaps in previous experiments to improve model fits and to more adequately sample within the range of environmental conditions that crabs might experience in the field. In the 2015 experiment we used four temperature and salinity treatment combinations. In the 2017 experiment, we used three salinity treatments at a constant temperature, and in 2018 we used the same three salinity treatments at a colder constant temperature because we acquired a new cold room that could maintain 2˚C (Table 1). The set-up of both acclimation and experimental tanks varied between years to accommodate the factorial design and because of logistical constraints in the lab. In the 2015 experiments, each treatment was contained in one of four large sea tables, whose temperatures were maintained by chillers. Crabs in sea tables were each placed in an individual bucket with holes drilled in the sides and a sealed lid with a large hole drilled in the top so that a tube with flowing water could enter the bucket. In 2017, each salinity treatment was contained in both sea tables with a chiller and in 2 aquaria in chest freezers to accommodate the larger sample size. Chest freezers were outfitted with thermostats to maintain the experimental temperature. In 2018, the aquaria were placed in a cold room at 2˚C. Crabs in the aquaria were each held in an individual acrylic glass cubicle with mesh openings between cubicles for flow; 16 cubicles were suspended in each aquarium. All crabs were supplied with an inch of clean sand as a substrate for burrowing. Every day of the experiment, crabs were gently prodded with a plastic pipette to check for mortality. Daily environmental parameters were recorded for each tank, including temperature, salinity, dissolved oxygen, and other water quality parameters. If a crab appeared dead, it was removed from its cubicle or bucket and observed for about 5 minutes at ambient air temperature for movement or evidence of breathing. Often crabs would begin making gentle movements, at which point they were returned to their bucket or cubicle. If no movement was observed, the crab was weighed and the time to death recorded. At the conclusion of the experiment, surviving individuals were recorded as censored, meaning they were marked as alive on the last day of the experiment. All experiments ran for about 120 days or until all crabs died, to simulate the full duration of winter.

Table 1. Summary of all past and current blue crab winter mortality laboratory experiments (columns: Rome et al.; Bauer & Miller; This Study; This Study; This Study). Units for temperature are in ˚C, salinity is in psu, duration is in days, and carapace width is in mm. The design row indicates what covariates were used in each experiment (Sal = salinity, Temp = temperature, Sed = sediment).
Statistical analysis
One of the main purposes of the experiments was to develop a survivorship model for blue crabs over a broad range of sizes, temperatures, and salinities. We conducted the survival analysis in R (Version 3.6.3) to quantify the effects of categorical and continuous variables on the observed patterns in survivorship. Briefly, survival analysis uses measurements of t, the elapsed time until the occurrence of an event, in this case an observed mortality event. Kaplan-Meier estimates of the survival function were derived from the event data using Surv() to create a survival object and survfit() to produce the estimated survival function, using the R libraries survival and flexsurv [55][56][57][58]. We used log rank tests to examine the effects of the following categorical variables: sex, size, estuary of origin, and hatchery vs. wild origin, to determine whether the survival curves, and thus the hazard rates (i.e., the probability that an individual alive at time t experiences an event in the next time step), of two or more groups were statistically different. Even though previous studies have documented no statistical difference between wild and hatchery-reared blue crabs [59,60], we used Chesapeake crabs from the 2017 experiments for the wild vs. hatchery log rank tests.
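A minimal sketch of this nonparametric step, using the survival package named above; the data frame and column names (crabs, time_days, status, estuary) are hypothetical stand-ins for the experimental data.

```r
library(survival)

# Kaplan-Meier survival curves by estuary of origin
km <- survfit(Surv(time_days, status) ~ estuary, data = crabs)
plot(km, col = c("steelblue", "firebrick"),
     xlab = "Days of simulated winter", ylab = "Survival probability")

# Log rank test: do the survival curves (hazard rates) differ between groups?
survdiff(Surv(time_days, status) ~ estuary, data = crabs)
```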
Kaplan-Meier objects were also fit to the generalized gamma distribution and several of its specialized cases with flexsurvreg() utilizing standard maximum likelihood methods [61]. The generalized gamma density function used in the flexsurv package [62] can be written as (modified from [63]):

$$f(t \mid \mu, \sigma, Q) = \frac{|Q|}{\sigma t\, \Gamma(Q^{-2})}\, \left(Q^{-2}\right)^{Q^{-2}} \exp\!\left[Q^{-2}\left(Q w - e^{Q w}\right)\right], \qquad w = \frac{\log t - \mu}{\sigma}.$$

It has three parameters: location (μ), scale (σ), and shape (Q). To generate its specialized cases, the exponential has Q = σ = 1, the Weibull has Q = 1, and the lognormal has Q = 0. Covariates used in this parametric analysis included temperature, salinity, and crab size. They were incorporated into the location parameter as a linear function to produce an accelerated failure time model, where they act as a multiplier (i.e., $e^{-\mu} t$) to "speed" or "slow" the passage of time [56]. Selection of the best model given the data was determined by using Akaike's information criterion (AIC) following the approach described by [64]:

$$\mathrm{AIC}_i = -2 \ln L_i + 2k,$$

where $L_i$ is the maximum likelihood for model $i$, and $k$ is the number of parameters. Model selection was aided by calculating ΔAIC, which is the difference between $\mathrm{AIC}_i$ and the model with the lowest AIC. Akaike weights ($w\mathrm{AIC}_i$) give the probability that each individual model is best given the data and set of models being considered and were used for model selection:

$$w\mathrm{AIC}_i = \frac{\exp(-\Delta \mathrm{AIC}_i / 2)}{\sum_{r=1}^{R} \exp(-\Delta \mathrm{AIC}_r / 2)}.$$

Lastly, we repeated the model selection process using just the data from this study and then again using a combined dataset, which merged our data (CB & GSB) with data from Bauer and Miller [52] (CB crabs only).
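The fitting and AIC-based selection described above could be sketched as follows with flexsurvreg(); the formula shown (with the T × CW and T × S interactions) mirrors covariates reported in the Results, but the data frame and column names are again hypothetical.

```r
library(flexsurv)

# Fit the generalized gamma and its specialized cases as AFT models
form <- Surv(time_days, status) ~ temp + sal + cw + temp:cw + temp:sal
fits <- list(
  gengamma = flexsurvreg(form, data = crabs, dist = "gengamma"),
  weibull  = flexsurvreg(form, data = crabs, dist = "weibull"),
  exp      = flexsurvreg(form, data = crabs, dist = "exp"),
  lnorm    = flexsurvreg(form, data = crabs, dist = "lnorm")
)

aic  <- sapply(fits, function(f) f$AIC)
dAIC <- aic - min(aic)                         # delta-AIC
wAIC <- exp(-dAIC / 2) / sum(exp(-dAIC / 2))   # Akaike weights
round(cbind(AIC = aic, dAIC = dAIC, wAIC = wAIC), 3)
```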
Results
Males and females had similar survival curves (Fig 1; log rank test, χ²(1) = 1.2, p = 0.27), and there was no difference in mean size between sexes (Welch's two-sample t-test, df = 259.6, t = -0.115, p = 0.908, S1 Fig). Based on the shape and modes of the size distributions, we grouped experimental crabs into three size classes: large (> 60 mm), medium (30-60 mm), and small (≤ 30 mm). Mortality rates varied among size classes (log rank test, χ²(2) = 8.1, p = 0.02). The median survival time for small crabs was just 60 days, whereas the median survival times for large and medium crabs were closer to 90 days (Fig 2). Percent mortality by size in 10 mm bins indicated a non-linear size-specific pattern (Fig 3). Although the sample sizes were small, all crabs > 99 mm died in the experiments, and intermediate-sized crabs experienced lower mortality than both the smallest and largest individuals.
Without considering size, mortality rates differed significantly between the two estuaries of origin (log rank test, χ²(1) = 21.2, p < 0.001, Fig 5A); however, there were notable differences in the size distributions of crabs obtained from each estuary due to the nature of the sampling (Fig 4). Crabs obtained from the IMET hatchery were larger on average than most of the crabs obtained locally in Long Island (Welch's two-sample t-test, df = 81.3, t = 3.47, p < 0.001, Fig 4). CB crabs were on average 11 mm larger than crabs from GSB. But even when accounting for differences in the experimental size distributions, the Kaplan-Meier survival curves of CB and GSB crabs were still different (Fig 5B-5D), and a log rank test stratified by size confirmed that the difference between estuaries of origin was significant (χ²(1) = 12.1, p < 0.001). There was no difference in survival rate between CB hatchery-reared and CB wild crabs using a non-stratified log rank test (χ²(1) = 0.3, p = 0.6). In summary, of the categorical variables tested, sex and hatchery origin were not significant, but size and estuary of origin were significant.
Kaplan-Meier curves for all the experimental treatments suggested that survival varied with salinity and temperature (Fig 6). Warmer temperatures and higher salinity resulted in higher survival rates. Survival probability increased from as low as 10% at 5 psu to as high as 85% at 35 psu across temperatures. At salinities greater than 30 psu, the chance of survival was always over 50%, even at 2˚C. Notably, in the most extreme cold and fresh treatment (2˚C and 5 psu), only 10% of the crabs survived after 40 days (Fig 6A), while at the same salinity but 2˚C warmer (Fig 6F), 10% survival probability occurred at about 80 days. We only observed 100% mortality in the lowest salinity treatment.
Parametric accelerated failure time models for the GSB and CB experimental data from the three years of this study alone showed strongest support for the generalized gamma distribution with temperature (T), salinity (S), carapace width (CW), the T × CW interaction, and possibly the T × S interaction as covariates (S2 Table). Based on wAIC, the probability was 69% that the first two candidate models, utilizing the gamma distribution, with T, S, CW, and T × CW as covariates, and differing only by inclusion of T × S, were the best models given the data. The covariates T, S, CW, and T × CW were all present in each of the ten highest supported candidate models (S2 Table), differing only by small variations in distributional form and the inclusion of T × S and S × CW interactions. There was some support for the Weibull and exponential distributions over the generalized gamma because these specialized cases of the generalized gamma distribution had fewer parameters. There was essentially no support for the lognormal, the other specialized case considered.
Parametric accelerated failure time models for the combined data set, which includes all of our data in addition to the data for CB crabs from Bauer and Miller [52], resulted in three top models that included the same covariates of temperature (T), salinity (S), carapace width (CW), the T × CW interaction, and the T × S interaction, differing only in their distributional form (Table 2, S3 Table). Mainly because it used fewer parameters without sacrificing much in its fit, the exponential distribution had stronger support than the Weibull, with one additional parameter, and the generalized gamma, with two additional parameters. Based on wAIC, the probability was ~50% that these three similar models were the best choices given the data, emphasizing the importance of the T × CW and T × S interactions in representing the dataset. The remainder of the 10 highest supported models differed only by whether T × S was removed, whether the S × CW interaction was included, and by variations in distributional form between exponential, Weibull, and generalized gamma. There was also essentially no support for the lognormal distribution in this case. Parameter estimates for the best exponential model are shown in Table 3. In order to compare the relative importance of temperature and salinity and to visualize the interaction between the two covariates, we used the exponential model fit from the combined datasets to calculate expected survival probabilities at 100 days of winter for an average-sized crab. The relationship between survival, temperature, and salinity is nonlinear at low salinity but linear at higher salinities (Fig 7). At low temperature, increases in salinity can drastically improve the probability of survival, while at higher temperatures, even large increases in salinity do not confer a major advantage. At low salinity, warming from 0-2˚C does not substantially change the probability of survival, but an additional 2˚C of warming to 4˚C does improve the survival rate from near zero to closer to 30%, suggesting a threshold effect at low temperature and low salinity. Conversely, at higher salinity, the impact of warming on survival is linear, so the benefits of increasing temperature diminish as salinity increases. Overall, the increase in a crab's chance of survival with warming is modulated by salinity.
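For an exponential model, survival at time t has the closed form S(t) = e^{-λt}, with the rate λ determined by the fitted covariate effects, so surfaces like those in Figs 7 and 8 can be generated directly from the fit. A minimal sketch follows, reusing the hypothetical fits object from the earlier sketch; the covariate grid and the 50 mm "average" crab are illustrative choices.

```r
# Predicted survival at 100 d of winter across a temperature-salinity grid
grid <- expand.grid(temp = seq(0, 6, by = 0.5),
                    sal  = seq(5, 35, by = 5),
                    cw   = 50)                      # an average-sized crab, mm

pred <- summary(fits$exp, newdata = grid, t = 100, type = "survival")
grid$surv100 <- sapply(pred, function(d) d$est)     # one estimate per grid row
```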
To understand the outcome of the interactions between temperature and size, and between temperature and salinity, which were both included in several of the top models, we used the exponential model to calculate expected survival probability at 100 days of winter across a range of temperatures and salinities for a range of crab sizes. Survivorship declined as temperature and salinity decreased for all size classes. However, survivorship for small crabs at low salinity increases with temperature much more quickly than it does for larger crabs (Fig 8). Medium crabs are similar to small crabs at low salinity, although their sensitivity to both low temperature and salinity is more pronounced; unlike the small crabs, at higher salinity, their survivorship also strongly increases with temperature. Finally, larger crabs are the most sensitive to low temperature, displaying low survivorship across salinity at low temperature and strong increases in survivorship with temperature across all salinity levels (Fig 8). To compare models that were built from different data, we overlaid model fits from the exponential model based on the combined dataset (the best model in Table 2), the exponential model based on this study's data alone, and the Bauer and Miller [52] Weibull model (Fig 6). As might be expected, models fit best to the data with which they were created, and the exponential model based on all the data fell in between the other two models (Fig 6).
Discussion
Here we illustrate that, in a widely distributed, commercially valuable marine decapod, rates of overwintering mortality of blue crabs collected from two different estuaries were significantly different. We detected higher winter mortality rates at the same winter temperatures and salinities for crabs collected from a higher latitude estuary (Great South Bay) than for those from a lower latitude estuary (Chesapeake Bay), even after controlling for size. Previous studies have shown that high latitude populations are often more tolerant of cold temperatures than their low latitude counterparts [37,65,66], but we found the opposite. This may be related to genetic divergence between these two populations, but genetic studies generally do not support the existence of distinct subpopulations in the US; instead, most describe a well-connected, panmictic population with high diversity and gene flow [67,68]. However, conflicting results among studies have generated debate in the literature about the degree of connectivity within the US range, and some have even claimed that for blue crabs in particular, it is difficult to detect these genetic differences [69,70]. When significant genetic differences have been found along the US coast, it is typically between northern populations near the range edge and Gulf of Mexico crabs. McMillen-Jackson and Bert [71] observed that northern (New York and New Jersey) blue crabs had lower haplotype diversity than southern (Gulf of Mexico) crabs, and Plough [72] detected significant genetic differentiation between southern (Gulf of Mexico) and northern (Massachusetts) crabs using a genotyping approach. If blue crab diversity does vary with latitude, then perhaps the higher genetic diversity in CB crabs allows more functional diversity to respond to a wider range of environmental conditions. In fact, for several North Atlantic species, leading edge populations are less diverse than those at the rear edge [73], and reduced resilience to abiotic stress has been observed in edge populations of other species, particularly when genetic diversity is low at the edge relative to the core [74]. Regardless of the underlying cause, it appears that crabs from New York have not adapted to cooler winter temperatures relative to conspecifics from CB. Therefore, blue crabs from higher latitudes along the US East Coast are likely to be more strongly limited by winter mortality than those in other temperate estuaries further south.
An effect of the observed differences between populations is that the winter survival model developed by Bauer and Miller [52] for CB crabs did not fit well to our experimental data just as the model based on our results alone did not fit well to their data. It is generally understood that modeled relationships may not hold when they are extrapolated outside the data range [75]. In this case, using a winter mortality model that was developed in one region to make predictions in another, higher-latitude region was problematic because a CB-crab based model always overpredicted survival for GSB crabs. However, this result is not entirely surprising because thermal performance metrics and life history traits vary with latitude both between and within species [3,20]. For crustaceans, however, predicting the response to warming is not as straightforward as it might seem [22]. Our results do not support the climate variability hypothesis because northern crabs experienced lower survival at similar temperatures than their southern conspecifics, exemplifying the challenge of predicting climate responses for crustaceans, particularly for species with large geographic ranges. It is evident that latitudinal variation in mortality patterns inhibits the application of these parametric overwintering models to other estuaries. Therefore, caution should be taken in extrapolating models across populations within a species, especially for crustaceans near their range edges.
The observed latitudinal differences in overwintering mortality may reflect underlying life history trade-offs along latitudinal gradients. For ectotherms, the temperature-size rule, where individuals from colder temperatures reach maturity at a larger size, is broadly supported despite there being exceptions to the rule [76,77]. In several crustaceans, this rule varies with ontogeny, such that earlier-staged individuals are actually smaller when reared at lower temperatures [78]. However, this rule appears to apply to all stages of blue crabs; juvenile blue crabs are smaller at each instar at higher temperatures because they molt more frequently and grow less per molt [79], and adult blue crab size at maturity seems to correspond inversely with temperature [80]. The "bigger is better" hypothesis, that larger individuals would have lower overwintering mortality, was also not universally supported in our study. Instead, we observed that intermediate size may be ideal. In some cases, the relationship between survival and growth is dependent on resource availability, such that the nature of the growth-mortality trade-off is regulated by food limitation [81,82]. Regardless of whether the reduced survivorship in northern crabs is related to a trade-off between survival and growth, we believe that our results confirm the need to replicate experiments at different geographic locations and validate the utility of common garden experiments.
While the importance of temperature for marine ectotherms is well-established, our results emphasize the importance of salinity and the interactive effects of temperature and salinity at environmental extremes. The two factors interact non-linearly; at low temperature and salinity, survival is especially low. The finding that salinity might be just as important as temperature in driving winter mortality patterns is especially interesting because winter salinity is highly variable spatially and temporally in estuaries. It may also signify that high salinity can provide a spatial refuge in winter. The importance of salinity for estuarine organisms in the context of climate change is often overlooked, but climate-driven changes in precipitation and storm events may drastically change salinity patterns that affect marine populations [83]. This is particularly relevant in New York estuaries because Superstorm Sandy created several breaches along the barrier islands, increasing mean salinity in the central and eastern parts of GSB and influencing circulation patterns and flushing times [84,85]. Since blue crabs are particularly susceptible to low salinity at low temperature, the effect of increased salinity in the bay may have promoted higher winter survival and potentially benefited this northern population.
Higher risk of winter mortality in low salinity conditions has important implications for blue crab population dynamics, due to the different winter migrations, habitats, and environmental conditions experienced between the sexes and the age groups [49]. It has even been suggested that the diverse environments that crabs occupy throughout their life reflect differences in osmoregulatory abilities [50,86]. Mature females tend to overwinter in more saline environments while males and immature females primarily overwinter in more upstream habitats [87,88]. The energetically expensive migration that females undertake to the mouths of bays may be compensated by enhanced survivorship in higher salinity overwintering habitats. The nuances of juvenile habitat choice in winter are not as well understood; juveniles tend to move toward deeper channels, likely because water temperature there is higher and more stable, but it is not known whether smaller juvenile crabs also seek out nearby channels to overwinter or whether they continue to utilize vegetated shoal habitats throughout the winter [89]. Juveniles and adult male crabs are primarily found in the upper bay and tributaries in winter surveys in the Chesapeake [49,88]. These upstream habitats are typically fresher and subject to more severe winter temperatures than the deeper channels of the main bay [52]. It is also generally true that crabs in low salinity waters are more susceptible to extreme temperatures [35]. Therefore, the preferred overwintering habitats of males and immature crabs are less ideal than the preferred habitats of mature females, although males and immature crabs are potentially better suited for these conditions because they have better osmoregulatory abilities in dilute waters [86]. While we do not have as much detailed data about the winter distributions of GSB crabs, if we assume these patterns are similar to those observed in CB, then juveniles and adult males may be particularly at risk during harsh winters relative to mature females, who primarily overwinter in more moderate habitats.
The observed lack of a statistical difference between sexes is consistent with the findings of Bauer and Miller [52]. While adult males and females may appear to select different habitats in the field, we were unable to detect a physiological difference in winter survival across a broad range of experimental salinities and temperatures in the lab. Perhaps this is because most of the crabs used in this study were immature juveniles whose preferred winter habitats are not well known, while the literature documenting prominent differences in winter habitat choice in CB focuses primarily on mature males and females. Despite an even sex ratio among the few mature adults in this study, had we acquired more mature adults in the experiments, we might have detected a sex effect. Although sex did not affect winter survivorship in the lab, it may still be important in the field if females preferentially overwinter in dense aggregations in areas that are highly exploited by the winter dredge fishery, which targets female spawning aggregations near the mouth of GSB, while males' preferred overwintering habitats are less heavily fished in the winter months. Further field studies are needed to quantify sex-specific overwintering habitats for GSB blue crabs.
The negative relationship between size and survivorship initially appears to contrast with the positive relationship described by Bauer and Miller [52], but it is actually consistent with the findings of Rome et al. [51] that both the smallest recruits and large females are more susceptible to harsh winter conditions. While Bauer and Miller [52] observed that size and survivorship had a positive relationship, they only used crabs up to 68 mm, which are smaller than the subadult crabs used here that experienced elevated mortality risk. Therefore, while it may be true that within the juvenile size range, survival and size are positively related, it seems that mature females and larger crabs are also vulnerable. That the smallest juveniles in our experiments are also at higher risk is consistent with the work of Bauer and Miller [52]. The finding that both size extremes face a higher risk in adverse environmental conditions suggests a trade-off, whereby different mechanisms of mortality operate at the tail ends of the size distribution. That the primary cause of winter mortality may vary across sizes is further supported by the finding that the shape of the environmentally dependent survivorship surface was different for each size class. While the mechanism that underlies the cause of mortality in cold, harsh winters for blue crabs is unknown, we hypothesize that, for juveniles, osmoregulatory failure might be the proximate mechanism of mortality near their lower thermal limit [11]. In contrast, large crabs, especially mature females, have larger total energetic requirements and a lower scope for growth, which may make them more susceptible to mortality via starvation. Mature females that mated in the spring or fall may begin winter with depleted energy stores relative to their male conspecifics due to the energetic burden of egg production and the long-distance spawning migration.
The lack of a statistical difference between wild and hatchery-reared crabs is interesting. In their winter mortality study, Bauer and Miller [52] found a difference between wild and hatchery crabs, although they suggested this be interpreted with caution because the difference could be related to size or the condition of one or two broods of crabs rather than to hatchery-raised crabs in general. Our finding that wild and hatchery crabs from CB are not different once adjusted for size supports the latter hypothesis. Survival and growth of hatchery and wild crabs released in the Chesapeake Bay were similar [54,59,60]. Hatchery crabs readily fed on natural prey and moved in the field similarly to wild crabs, but some morphological and behavioral differences have been observed [90]. However, few of these studies have examined differences in field winter survival between wild and hatchery-reared crabs. It is possible that differences in the diversity of the broods used in our study compared to those used in Bauer and Miller [52] affected the different outcomes of the statistical tests. In summary, we do not feel that the inclusion of hatchery crabs in this study skewed our results, and the observed lack of difference in winter survival supports the idea that hatchery-reared crabs are suitable for mass stocking programs [54,90].
Since the accelerated failure time models fitted in this study can be generalized across sex, they can be used to predict winter mortality rates in the field for a crab of any size using in situ environmental data and an estimate of winter duration, although which model is used should depend on the latitude of the estuary for which mortality is being estimated. We did not account for other biotic factors that may affect winter mortality in the field, such as fishing, predation, starvation, and sediment type. Further study of the potential impact of these factors could help explain the patchy distribution of overwintering crabs. The sensitivity to salinity and temperature might influence overwintering habitat selection as crabs begin to settle into the sediment in late fall and early winter. Therefore, the environmental dependence of natural mortality in winter can be used to understand temporal patterns in distributions and abundance. Finally, the impacts of severe winters and even climate change on blue crab populations can be projected through careful application of the models.
If blue crab populations are indeed constrained or limited by winter temperature, then warming and the subsequent reduction in winter mortality rates could provide a potential mechanism for further poleward expansion of their range or increased population growth rates in range edge populations. Furthermore, in the northeast US, the warming trend is the most pronounced in winter [91], which could provide an opportunity for leading edge expansion. It has even been suggested that, in some temperate estuaries in the future, overwintering hibernation periods will be significantly shorter or eliminated entirely, which would significantly increase the length of growing seasons and would certainly affect population dynamics and the subsequent management of those fisheries [92]. However, the potential benefit of warming on northern blue crab populations will also depend on extreme events and the variability of winter conditions. Temperature aberrations, such as cold snaps, have often been overlooked in studies on the impacts of climate change on species distributions, but these episodic events can influence or limit range expansions of warm water species [93]. Although warming winters may be beneficial for blue crab population growth rates, the net impact of climate change will be determined by other environmental changes and factors that we did not consider, most importantly fishing [80] and species interactions. It is therefore imperative to continue to monitor and study blue crabs in range edge habitats that they currently occupy and to expand blue crab research into even further poleward estuaries to assess and manage the hypothesized distribution shift of this important and valuable species.
"year": 2021,
"sha1": "801c9c67a2e132007e3b4d0de822e0529a63d633",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0257569&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "801c9c67a2e132007e3b4d0de822e0529a63d633",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Right-of-Way Stormwater Low Impact Development Practice
Road rights-of-way can provide a good retrofit opportunity to implement stormwater low impact development (LID) technologies. In Canada and the northern United States, stormwater management should focus not only on summer and fall but also on winter and spring. A stormwater exfiltration system was designed to manage stormwater over four seasons by retrofitting roads with two 200 mm perforated pipes, with ends capped, below a storm sewer system. The design concept is to direct road runoff of up to 15 mm rainfall to these two perforated pipes and fill the void space of the sewer trench for exfiltration to the surrounding soil at all times (i.e. including snowmelts and rainfalls during the winter and early spring seasons). In 1993, 2.5 km of this exfiltration system was constructed in Etobicoke, Toronto, and subsequent monitoring results indicated that a long-duration rainfall, ≤28 mm over 22.5 h, was almost completely captured without overflowing to the storm sewer above. This paper presents the planning and design criteria, construction and maintenance, performance evaluation, and costs of this LID based on the Etobicoke application.
Introduction
Road reconstruction or sewer replacement, continuously being done in municipalities, can provide an opportunity to implement LID in existing urbanized areas. An innovative LID technology, termed the Etobicoke Exfiltration System (EES), was developed and implemented in Etobicoke, Toronto > 20 y ago. Conceived to reproduce the infiltration and groundwater recharge of rainfall prior to urbanization, the EES consisted of two 200 mm perforated PVC pipes which were installed below the storm sewer as the road was reconstructed (Figure 1).
Figure 1 Concept of Etobicoke exfiltration system.
Both the perforated pipes and the storm sewer were encased in a granular stone trench. The perforated pipes are connected to both the upstream and downstream manholes below the storm sewers. At the downstream manhole, mechanical plugs were installed at the exfiltration pipes. Thus the exfiltration pipes were storage systems instead of conveyance systems. During a storm event, storm runoff from the upstream manhole enters the two perforated pipes and then exfiltrates firstly to the stone trench and subsequently to the surrounding soil. In order to prevent the perforated pipes and stone trench from clogging, sediment intrusion, or loss of granular materials, both the perforated pipes and the stone trench were wrapped with a filter fabric.
For large storm events, the runoff may exceed the exfiltration volume and the storage capacity of the perforated pipes and stone trench, resulting in an overflow to the storm sewer at the upstream manhole. Since the EES is situated below the storm sewer (usually below the frost line in Canada), it does not freeze and continues to work during snowmelts throughout winter and early spring.
A demonstration project of 2.5 km of EES was implemented in 1993 in two existing residential areas and found to be effective in reducing storm runoff for rainfall events of up to 63 mm (18 h). This paper presents the planning and design criteria, construction and maintenance, performance evaluation, and costs of EES based on the Etobicoke application. Although EES was constructed and monitored ~20 y ago (the only such system in the world), the planning and design procedure was never summarized until the recent development of a planning and design manual by Li and Tran (2015).
Recent research (Liu 2015) at Ryerson University, Toronto, indicates that EES could potentially eliminate the requirement of stormwater quality ponds for new residential developments in Ontario. As a result, this paper contributes to the first knowledge transfer of EES for future stormwater management across Canada.
Stormwater Management Objectives
In Ontario, the Ministry of the Environment and Climate Change's (MOECC) Stormwater Management Planning and Design Manual (MoE 2003) describes the following five stormwater management criteria:

1. Preserve groundwater and baseflow characteristics;
2. Prevent undesirable and costly geomorphic changes in watercourses;
3. Prevent any increase in flood potential;
4. Protect water quality; and ultimately
5. Maintain an appropriate diversity of aquatic life and opportunities for human use.

Ternier (2013) developed a set of comprehensive stormwater management objectives and identified those that can potentially be addressed by EES. Tables 1 and 2 summarize the environmental and human habitat objectives that determine the suitability of EES. (A more detailed explanation of those objectives can be found in Ternier 2013.)
Suitability Screening
A two-step screening procedure has been developed for the EES (Li and Tran 2015). The first step comprises the following most critical screening questions:

1. Is a water supply aquifer absent at the site of interest (reduced ground water contamination)?
2. Is the site of interest a low density residential area (less polluted runoff)?
3. Is the site of interest served by local roads (reduced spill potential from industrial trucks and structural consideration of the perforated pipes)?
4. Is the ground water table below the invert of the exfiltration pipes?

All of the first-step questions must be answered affirmatively, without exception, in order to continue to the second step. Regarding question 4, the EES can still be considered suitable even if the ground water table is <1 m below the invert (a typical minimum clearance for infiltration or exfiltration devices), because runoff will be exfiltrated horizontally along the storm sewer trench in addition to vertically downward.
The second step comprises the following secondary screening questions:

1. Are the roads or sewers in poor condition (increased retrofit potential and saved cost)?
2. Is the tree root problem absent at the site of interest (trees with deep roots need relocation to prevent roots from damaging the filter cloth and perforated pipes)?
3. Is the required maintenance equipment available at the municipality (reduced long term maintenance cost)?

All of the second-step questions should be answered affirmatively, either with or without implementation of engineering measures designed to remedy the associated environmental impacts. If there are additional environmental impacts associated with the engineering measures, then the EES is not suitable for the site of interest. A simple checklist encoding of this screening is sketched below.
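As an illustration only, the two-step screening could be encoded as a checklist function; the question names are paraphrased from the lists above and the inputs are named logical vectors.

```r
# A minimal sketch of the two-step suitability screening described above.
ees_suitable <- function(step1, step2) {
  # Step 1: every critical question must be answered yes, without exception
  if (!all(step1)) return("not suitable")
  # Step 2: all should be yes, with or without remedial engineering measures
  if (all(step2)) "suitable" else "suitable only with remedial measures"
}

ees_suitable(
  step1 = c(no_supply_aquifer = TRUE, low_density_residential = TRUE,
            local_roads_only = TRUE, water_table_below_invert = TRUE),
  step2 = c(roads_or_sewers_poor = TRUE, no_tree_root_problem = TRUE,
            maintenance_equipment_available = TRUE)
)
```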
Design Criteria
In order to satisfy the municipal sewer design requirements, the design criteria of the EES were selected as follows: storm sewers were sized to convey a typical 2 y or 5 y design storm; and the pipe-trench system (the two pipe volumes plus the void space in the trench up to the storm sewer invert level) was designed to store the runoff volume of a 15 mm design storm (corresponding to the 90th percentile of storm event volume in Toronto; the application of EES at other locations should consider their corresponding 90th percentile of storm event volume).
Design Specifications
At the demonstration site, storm sewers were designed using the conventional Rational Method. Using a 1 h Canadian Atmospheric Environment Service (AES) design storm (15 mm rainfall event volume), the corresponding runoff hydrograph from the catchment was simulated using the MIDUSS program (Alan A. Smith Inc. 2009). The geotechnical investigation of the soil at the sites indicated a saturated hydraulic conductivity between 10⁻⁹ m/s and 10⁻⁸ m/s (Candaras 1997). Thus, it was assumed that the exfiltration volume during the storm event was negligible.
The combined volume of the perforated pipes and the void space of the trench was then sized to accommodate the runoff volume. As a result, each section of EES might be different depending upon the drainage areas. At the demonstration sites (low density residential areas), the average runoff volume of a 15 mm AES storm was estimated to be ~5 mm. Figure 2 shows the orientation of the 12 mm orifices (see also Figure 3). A back-of-envelope sizing sketch follows.
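The sizing logic above can be illustrated with a rough calculation; the trench width, stone depth, porosity, and drainage area below are hypothetical assumptions, since only the 200 mm pipe diameter and the ~5 mm runoff depth are stated in the text.

```r
# Back-of-envelope EES storage sizing. Trench width, stone depth, porosity,
# and drainage area are assumed values, not those of the demonstration site.
runoff_m  <- 5 / 1000                 # ~5 mm runoff from the 15 mm design storm
area_m2   <- 1.0 * 10000              # hypothetical 1 ha contributing area
vol_req   <- runoff_m * area_m2       # m^3 of runoff to be stored

pipe_area <- 2 * pi * (0.200 / 2)^2   # cross-section of two 200 mm pipes, m^2
trench_w  <- 1.5                      # assumed trench width, m
stone_h   <- 0.6                      # assumed stone depth up to sewer invert, m
porosity  <- 0.4                      # typical void ratio of clear stone

# Storage per metre of trench: pipe volume plus voids in the surrounding stone
store_per_m <- pipe_area + (trench_w * stone_h - pipe_area) * porosity
length_req  <- vol_req / store_per_m  # ~126 m of EES under these assumptions
```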
Maintenance
Post-construction maintenance includes the following.
A Regular Maintenance and Observation Program
A regular observation and maintenance program was conducted at the two demonstration sites. The working conditions of the EES were assessed periodically and after major storm events during the first 2 y. General observations included visual evidence of overflows at the sewer, water marks along the sewer and near manholes, and the integrity of the mechanical plug at the downstream end of the perforated pipes. If a small storm event had caused an overflow or a high water level at the upstream manhole, the EES might be plugged and need cleaning. If the downstream mechanical plug had been pushed out, a short circuit of flow at that length of perforated pipes might have occurred. Additionally, minor deficiencies such as debris accumulation at catch basins were identified and repaired. Figure 11 shows the upstream and downstream water marks of the 1.3 km storm sewer system. Most of the upstream sections of the sewer had no overflow, indicating the EES had intercepted the runoff completely. In order to assess the sediment accumulation inside the perforated pipes, a video inspection was conducted in both the first and the second year after construction. It was observed (Figure 12) that a small amount of sediment accumulated at the downstream end of the perforated pipes, and some organic materials such as leaves clung to the obvert of the perforated pipes. The concentration of sediments at the downstream end of each section is attributed to the sloping exfiltration pipes (~0.5%), which pushed the sediments downstream. Thus periodic cleaning of the perforated pipes is required. Even if the accumulated sediment is not removed, the upstream section of EES will still exfiltrate runoff to the granular trench. In fact, the EES at the demonstration site has not been cleaned after >20 y and there have been no reports of significant overflows at the site.
Power Flushing the Perforated Pipes to Remove Accumulated Sediments
Although the sediment accumulated inside the perforated pipes was small, a demonstration of the potential cleaning techniques was conducted 1 y after the construction. The downstream mechanical plugs of an upper section of the EES were first removed and a highly pressurized water flusher was inserted at the downstream end. The flusher discharged pressurized jets of water that scoured the walls of the perforated pipes as it travelled upstream. The accumulated sediments were flushed to the downstream manhole of the section and were then pumped out using a vacuum truck. The sediments were then removed from the water using a treatment truck equipped with a shear drum separator and disposed of offsite.
Since the demonstration, there has been no report of maintenance of the EES at the site. However, this demonstration has served the purpose of showing potential cleanup technologies that can be employed by municipalities. Depending on the sediment accumulation observed, it is recommended that flushing be conducted once the exfiltration pipes are found to be half filled with sediment.
Performance
Figure 13 shows the EES performed effectively for the 1994-05-26 event, with a rainfall volume of 28.3 mm over 22.5 h. There was hardly any flow at downstream manhole 3 (green hydrograph) of an EES section, indicating that the majority of the runoff event (red hydrograph) was captured by the perforated pipes and stone trench. Additionally, the stone filter head at manhole 3 was high compared to that at manhole 2, indicating runoff was trapped downstream in the stone trench. A summary of the monitoring results is shown in Tables 3 and 4. No monitored events caused an overflow to the storm sewer except the largest storm and the last two short duration, high intensity storms. It is noted that the large event on 1995-10-05 (63 mm over 18 h) did not cause an overflow until 8 h had elapsed and 50% of the runoff volume had occurred, while the smaller events on 1998-06-30 (26.8 mm over 2.5 h) and 1998-09-06 (15.3 mm over 0.75 h) caused overflows. Given enough time for exfiltration (e.g. long duration storms), even very large events (e.g. 63 mm) can be substantially captured by EES. On the other hand, short duration storms can trigger overflow due to high runoff rates compared to the exfiltration rate along the EES. No winter or spring melt events were monitored at the demonstration site. Since the EES is below the sewer and the frost line, it should be able to intercept any snowmelt during winter and early spring. Compared to LID on the surface, which may be frozen during winter, EES will offer stormwater management over winter and early spring.
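The contrast between the long 63 mm event and the short high-intensity events follows from a simple mass balance: overflow occurs only when cumulative runoff exceeds cumulative exfiltration plus the pipe-and-trench storage. A minimal sketch of that balance is given below (Python); the exfiltration rate, storage and hydrographs are illustrative assumptions, not the monitored values.

```python
def ees_overflow(runoff_m3_per_h, exfil_m3_per_h, storage_m3, dt_h=0.25):
    """Route a runoff hydrograph through the EES storage; return the
    overflow volume (m^3) passed to the storm sewer above."""
    stored, overflow = 0.0, 0.0
    for q_in in runoff_m3_per_h:
        stored = max(stored + (q_in - exfil_m3_per_h) * dt_h, 0.0)
        if stored > storage_m3:
            overflow += stored - storage_m3
            stored = storage_m3
    return overflow

# Two storms with the same total volume (36 m^3): long and gentle vs. short and intense
gentle = [2.0] * 72   # 18 h of 2 m^3/h inflow
intense = [48.0] * 3  # a 0.75 h burst at 48 m^3/h
for name, hyd in (("gentle", gentle), ("intense", intense)):
    print(name, round(ees_overflow(hyd, exfil_m3_per_h=1.5, storage_m3=20.0), 1))
```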
Cost
Table 5 shows the cost breakdown of the EES at Princess Margaret Street in 1993 dollars. The costs of road, drainage, and EES are 78%, 18%, and 4% respectively. If a stormwater quality pond were used to control the runoff from the site (30.5 hectares), the construction cost would be about $130 000 (about $30/m³ in 1993 dollars, excluding land cost). Compared to the cost of EES ($25 000), a saving of about 80% (1 − 25 000/130 000 ≈ 81%) could be realized. It is noted that the reconstruction of the sewer at Princess Margaret Street involved open cut trenches. If trench boxes were used, the excavation cost might be reduced. Thus the cost saving of EES as a percentage of the total road reconstruction cost may be greater.
Conclusions
EES is a road right-of-way LID that can capture substantial runoff from small and medium events over four seasons. If it is located in well draining soil, the captured runoff can be dissipated to the surrounding soil laterally along the storm sewer. It is particularly appropriate for roads or sewers that are scheduled to be rehabilitated, because the additional cost (e.g. 4% at the Toronto demonstration site) is relatively small compared to the road or sewer construction costs. While the design of the storm sewer above the EES follows a typical municipal standard, EES can actually reduce the flow to the storm sewer. As a result, the storm sewer can be reduced in size if agreed by a municipality. Ultimately, EES can reduce or even replace a downstream stormwater quality pond. Future work should be done to optimize the design of EES in terms of runoff capture, monitor its performance over the winter and early spring seasons, and conduct a full life cycle cost reduction analysis.
Acknowledgment
This paper is dedicated to the original designer of EES, the late John Tran, whose revolutionary ideas have inspired many stormwater professionals in Ontario and elsewhere in Canada. We would like to acknowledge the support and contribution from the following organizations and stormwater professionals:
Figure 3 Cross sectional details of EES.
Figure 4 Exfiltration pipes wrapped with nonwoven filter pipe socks.
Figure 5 Laying the exfiltration pipes.
Figure 6 Backfilling the perforated pipes.
Figure 7 Laying the concrete storm sewers.
Figure 8 Backfilling the storm sewers.
Figure 9 Wrapping the granular trench with nonwoven filter cloths.
Figure 11 Water marks from the upstream to downstream sections of the storm sewer system. | 2019-01-08T21:18:01.461Z | 2015-08-21T00:00:00.000 | {
"year": 2015,
"sha1": "50e0bbb8f110b28a828b251d8ff91d13e6e349b1",
"oa_license": "CCBY",
"oa_url": "https://www.chijournal.org/Content/Files/C390.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "50e0bbb8f110b28a828b251d8ff91d13e6e349b1",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
259070461 | pes2o/s2orc | v3-fos-license | Determinants of woodcraft family business success
The woodcraft industry has been developing in Bali for more than half a century in the form of family businesses (SMEs) that are now managed by the third generation, or are in transition from the second to the third generation, the most critical phase of a family business. Apart from contributing to tourism, this craft business also carries cultural values. Moreover, the tourism situation and macroeconomic shocks have had an impact on business conditions. This research aims to analyze the performance of a woodcraft family business based on a family and financial approach, through a two by two matrix analysis, as well as to analyze the determining factors of successors' willingness to continue the woodcraft family business in Bali, using MICMAC analysis. The results show that the family business in this case exhibits high emotional capital but low financial capital. There are 18 identified factors related to the willingness to continue the woodcraft family business, and the most influential factor (existing and forecasting) is a participative leadership style, while the most dependent is personal interest, i.e. the involvement of the successor from an early age in family business activities.
Introduction
The Province of Bali is a popular tourist destination in the world, known for its natural beauty and the culture and customs of its people (Mudana et al., 2018; Wiweka & Chevalier, 2022; Wijaya et al., 2020). International tourist trips to Bali started in the early 20th century, around 1920 to be precise (Antara & Sumarniasih, 2017). The development of Bali tourism has so far been supported by the existence of the creative industry (Darma et al., 2019), and the creative industry is in turn driven by tourism activities (Hidayat & Asmara, 2017). The creative industry can be said to be a commercialized cultural industry (The Canadian Policy Research Group, 2013), so it is only natural that the creative industry is growing rapidly in Bali. The Indonesia Agency for Creative Economy (a.k.a. Bekraf) has included 16 sectors in the creative industry category, i.e.: (1) games application and development, (2) architecture, (3) interior design, (4) visual communication design, (5) product design, (6) fashion, (7) film-animation-video, (8) photography, (9) crafts, (10) culinary, (11) music, (12) publishing, (13) advertising, (14) performing arts, (15) art, and (16) television and radio content (Setiawan, 2018). Burns and Holden (1995) argue that tourism can lead to the industrialization of local culture. This has happened in Bali, where Balinese culture, especially in the form of handicrafts, has been made into a commodity, or has undergone an industrialization process, for consumption by tourists (Sukarini et al., 2019). Handicraft is a type of creative industry that has been developing for a long time in Bali, which is culturally rich with Indonesian art and culture entwined with age-old traditions (Rachel, 2016). There are various handicraft industries in Bali, such as woven crafts, gold and silver crafts, woodcarving crafts, and others. Gianyar Regency is famous for its many villages, each with its own unique arts and crafts: for example, Tegallalang Village with sculpture, Ubud Village with painting and wood carving, and Celuk Village with its gold and silver handicrafts. There is also Mas Village, which is famous for its wooden carvings and statues; most of its people have jobs related to this industry, in household, small, medium and large enterprises (Sukarini et al., 2019). Balinese wooden handicrafts, especially from Gianyar Regency, have even been exported to the United States, Germany, Sweden, Australia, France, Canada, England and several other countries (Gayatri & Setiawina, 2016). Mas Village is the center of the woodcraft industry in Gianyar; within Ubud District, Mas Village has the most business units, around 50 (41% of woodcraft business units) (Widyastiti & Karmini, 2021). The woodcraft industry in Mas Village is generally a cultural industry run as family businesses and has developed for more than five decades, so that the woodcraft business units are currently generally led by the second or third generation.
The woodcraft business in Bali has had its ups and downs. Observations along the Mas Village main road reveal rows of workshops and woodcraft artshops. However, there is currently a shift in the business lines at the woodcraft industry center in Mas Village. Initial interviews with several business people indicated that this happened because the family business was not managed seriously by the next generation, or because the successor prefers other business lines that are considered more promising.
Craft businesses in various other countries are also used to support tourism, and are generally family businesses (Tatiyanantakul & Kovathanakul, 2014). A family business in general is a business that is managed by people with family ties and passed on to the next generation. Indeed, there is still no concise, measurable, and agreed definition of family business, so experts often differentiate such businesses based on the percentage of ownership, strategic control, involvement of multiple generations, and the intention for the business to remain in the family (Poutziouris et al., 2006).
Family relationships in business are like a double-edged sword: on the one hand they can be a strength in interpersonal relationships and facilitate communication, but they can also trigger conflict, which is almost always present in family businesses (Memili et al., 2015; Muskat & Zehrer, 2017). Apart from conflicts in management, family businesses also often experience conflicts over leadership succession (Wibowo, 2018). Most family businesses do not survive to the third generation, and the mortality rate is much higher during the transition from the owner to the second generation (Poutziouris et al., 2006).
Likewise, the woodcraft family business in Bali, especially in Mas Village, is currently in its third generation or in the second-to-third generation transition, and is therefore at this critical stage. Succession in the family business is a crucial issue (Wibowo, 2018). It is therefore important to study the factors related to successors' willingness to continue the woodcraft family business. The purpose of this research is to analyze the current performance of the woodcraft family business in Bali in terms of the family and business dimensions, and then to map and analyze the factors related to successors' willingness to continue the woodcraft family business in Bali.
Concept of family business
The field of family business studies aims to develop a 'theory of family firms' that takes into account the interrelationships between family and business systems (Poutziouris et al., 2006). Precisely, this is when organizational theories and family system theories are operated together.
(Diagram: organizational theories and family system theories, operating as one through a 'family firm filter', yield theories of family firms.)

In addition, the complexity of family and business grows over time, so performance and succession measurement is important. Some researchers (Sharma et al., 2001) associate the succession of a family firm with two things, namely the company's results after succession and family satisfaction with the succession process as a whole (Gimeno et al., 1997). So there are always non-economic factors that must also be considered in a family business.
Research design
Research on family business is often constrained in obtaining accurate information (Poutziouris et al., 2006). Therefore, this study uses focus group discussions (FGDs) involving experts, associations of wood craftsmen, and business actors in Mas Village to obtain valid information. The FGDs were conducted to collect information about the condition of the existing woodcraft family business, identify factors that influence the succession stage, and provide input values for data analysis through sustainability analysis.
Concepts and measurement of the performance of family business
A family business has a measure of success (performance) not only in the business/financial dimension, but also in the family dimension, so a family business may be successful in one or both of these dimensions. To answer the first research question related to family business performance, a position analysis is carried out in the two by two matrix from Poutziouris et al. (2006), which presents family business performance based on the business dimension and the family dimension. A proposition can thus be arranged where the performance of the family business is as follows:

Family business performance = f(financial, familiness)

Each quadrant in Fig. 2 can be explained as follows (Poutziouris et al., 2006):
• Warm hearts-deep pockets. Firms in Quadrant I are the successful family firms; they experience profitable business as well as family harmony. In other words, they enjoy high cumulative stocks of both financial and emotional capital that may help sustain the family and business through turbulent economic and emotional times.
• Pained hearts-deep pockets. Quadrant II firms are characterized by business success but are tension prone or exhibit failed family relationships.
• Warm hearts-empty pockets. Quadrant III firms enjoy strong relationships among family members, though their businesses are low performers. In other words, they are endowed with high levels of emotional capital but low financial capital.
• Pained hearts-empty pockets. Quadrant IV firms are failed firms that perform poorly on both the family and business ends, although failure on the business dimension can be used as a learning experience that may even enable these family members to launch another venture in the future.
The business dimensions and family dimensions assessed according to the family business performance approach are measured through several indicators in Table 2. To determine performance, stakeholders provide an assessment of these indicators, with a choice of scores: -2 (very bad), -1 (poor), 0 (neutral/constant), 1 (good), 2 (very good). After the scores are tabulated, the position in the two by two matrix is determined by the mean indicator scores:

X = (sum of family-dimension scores) / N, Y = (sum of business-dimension scores) / N

where X is the score for the family dimension, Y is the score for the business dimension, and N is the number of indicators in each dimension.
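A minimal sketch of this scoring procedure is given below (Python). The indicator scores are hypothetical examples, chosen here so that the coordinates reproduce the (0.5, −0.125) position reported later in the results; the quadrant labels follow Poutziouris et al. (2006).

```python
def dimension_score(scores):
    """Mean stakeholder score for one dimension (each score in -2..2)."""
    return sum(scores) / len(scores)

def quadrant(x, y):
    """Map (family, business) coordinates onto the two by two matrix."""
    if x >= 0 and y >= 0:
        return "Quadrant I: warm hearts - deep pockets"
    if x < 0 and y >= 0:
        return "Quadrant II: pained hearts - deep pockets"
    if x >= 0 and y < 0:
        return "Quadrant III: warm hearts - empty pockets"
    return "Quadrant IV: pained hearts - empty pockets"

family = [1, 2, 0, 1, -1, 0, 1, 0]      # hypothetical family-dimension scores
business = [0, -1, 0, -1, 1, 0, -1, 1]  # hypothetical business-dimension scores
x, y = dimension_score(family), dimension_score(business)
print((x, y), quadrant(x, y))           # (0.5, -0.125) -> Quadrant III
```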
MICMAC analysis
This research also uses MICMAC structural analysis, a tool for determining the essential variables in a system (Fauzi, 2019; Wijaya et al., 2020). The variables referred to in this study relate to successors' willingness to continue the woodcraft family business in Bali. MICMAC maps each variable into the following quadrants.
• Influence variables (key drivers) in Quadrant I are the most crucial variables and act as key factors.
• Relay variables in Quadrant II are influential but very dependent, so they are often categorized as factors that describe the instability of a system.
• Excluded variables or autonomous variables in Quadrant IV are variables with little influence and dependency, so they are said to be excluded because they will not stop a system from working.
Apart from that, there is also a middle area for regulating variables, which are adjustable and controllable and usually do not need to be discussed or prioritized (Hsiu et al., 2009).
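The direct classification above reduces to row and column sums of a direct-influence matrix, compared against their means. The sketch below (Python with NumPy) illustrates this on a hypothetical four-variable matrix; the variable names and influence weights are made up for illustration and are not the study's data.

```python
import numpy as np

def micmac_direct(M, labels):
    """Classify variables from a direct-influence matrix M, where
    M[i, j] is the strength with which variable i influences variable j."""
    influence = M.sum(axis=1)    # row sums: how strongly i drives the system
    dependence = M.sum(axis=0)   # column sums: how strongly i is driven
    mean_inf, mean_dep = influence.mean(), dependence.mean()
    result = {}
    for i, name in enumerate(labels):
        if influence[i] >= mean_inf and dependence[i] < mean_dep:
            result[name] = "Quadrant I: influence (key driver)"
        elif influence[i] >= mean_inf:
            result[name] = "Quadrant II: relay (influential but dependent)"
        elif dependence[i] >= mean_dep:
            result[name] = "Quadrant III: depending"
        else:
            result[name] = "Quadrant IV: autonomous/excluded"
    return result

M = np.array([[0, 3, 2, 1],
              [0, 0, 3, 2],
              [0, 1, 0, 3],
              [0, 0, 1, 0]])
labels = ["leadership style", "industry skills", "work experience", "personal interest"]
for name, cls in micmac_direct(M, labels).items():
    print(f"{name}: {cls}")
```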
Factors identification
Based on the literature review and FGDs, the factors related to successors' willingness to continue the woodcraft family business in Bali can be identified, as presented in Table 3.
Performance of woodcraft family business
Measuring family business performance does not only consider business aspects; it cannot be separated from traditional economics and kinship. Based on the results of the data analysis related to performance measurement, the trend of the existing condition of each indicator can be seen. In accordance with the calculation of the indicator scores in each dimension, (X, Y) = (0.5, -0.125) is obtained, so the performance position of the existing woodcraft family business is in Quadrant III. The position in Quadrant III means that the family business in the woodcraft industry currently has high emotional capital (warm hearts) but low financial capital (empty pockets).
Mapping of factors related to successor willingness
Based on the results of the MICMAC analysis, the position of each factor related to successors' willingness to continue the woodcraft family business can be mapped, as shown in Fig. 4 (Direct Influence/Dependence Map), which presents the position of each factor/variable on the direct influence/dependence graph. The classification of the position of each variable is presented in Table 5. MICMAC also provides information about the strength of each effect between variables. Figure 4 presents the influence between variables, namely: 1) a red line indicates that the influence is very strong; 2) a thick blue line indicates that the influence is relatively strong; 3) a thin blue line indicates that the influence is moderate; 4) a black line indicates that the influence is weak; and 5) a dashed line indicates that the influence is very weak. The influence between variables is very important as a basis for stakeholders' decisions or policies. In addition, it is very important to obtain information about the stability of the system, namely the influential variables, so it is necessary to re-identify them based on indirect influences. Fig. 6 presents the indirect influence/dependence map. If Fig. 6 is compared with Fig. 4, it can be seen that there is no change in the variable positions of the direct-effect classification. This means that the variables classified by direct effects are stable (Ariyani et al., 2018).
MICMAC also provides an analysis of potential direct influences, which illustrates how the classification of variables may change if actions are taken on the system. Fig. 7 presents the potential direct influence/dependence map.
Fig. 7. Potential Direct Influence/Dependence Map
Looking at Fig. 7, we can see that several variables have changed position, namely the relationship among family members and business formality, which were previously among the influence variables (Quadrant I) and the relay variables (Quadrant II) and have shifted to Quadrant III (depending variables). This shift shows that if there is a change in the family business, it will have an impact on the relationships among family members, because each member has their own thoughts; in addition, the business formality chosen is strongly influenced by developments in business conditions. This is consistent with the characteristics of depending variables (Quadrant III), which have a high dependency. The direct effects in MICMAC refer to existing conditions, while the indirect effects refer to forecasting (future conditions). Forecasting itself is produced from iterations, where the fewer iterations needed (to obtain 100% results), the better. In this case, the iteration was only carried out twice, and the iteration results are presented in Table 6. Changes between existing conditions and forecast future conditions, i.e. changes in priority levels (in terms of both influence and dependence), can be seen by comparing the lists of variables sorted by influence and by dependence, which present position changes in direct and indirect influence.
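Indirect influence in MICMAC is obtained by raising the direct-influence matrix to successive powers until the influence and dependence rankings stabilise; the number of multiplications needed is the iteration count reported above. A minimal sketch of that iteration is given below (Python with NumPy; the matrix is again a made-up example, not the study's data).

```python
import numpy as np

def micmac_indirect(M, max_iter=10):
    """Multiply the influence matrix by itself until the rankings of row
    sums (influence) and column sums (dependence) stop changing."""
    def ranking(P):
        return (tuple(np.argsort(-P.sum(axis=1))),
                tuple(np.argsort(-P.sum(axis=0))))
    P, previous = M.copy(), ranking(M)
    for n in range(1, max_iter + 1):
        P = P @ M                # accumulate influence paths of length n + 1
        current = ranking(P)
        if current == previous:  # rankings are stable: done
            return P, n
        previous = current
    return P, max_iter

M = np.array([[0, 3, 2, 1],
              [0, 0, 3, 2],
              [0, 1, 0, 3],
              [0, 0, 1, 0]])
P, n_iter = micmac_indirect(M)
print(f"rankings stable after {n_iter} iteration(s)")
```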
Fig. 8. List of variables sorted by influence and dependence
As part of a policy review, it is very important to know the priority level of each variable. The proportion matrix in Table 7 presents the ranking proportions, i.e. which variables are most important for the system being evaluated, under existing conditions, under forecasting, and if there are actions in the system.
• Existing conditions are shown through direct influence and direct dependence. The most influential variable is participative leadership style, followed by industry specific skills and industry specific knowledge, while the most dependent variable is personal interest, followed by the successor's working experience in the family business and industry specific skills.
• Forecasting is shown by indirect influence and indirect dependence, where the most influential variables are participative leadership style, followed by industry specific skills and trust in the successor. Meanwhile, the most dependent variables are the same as under the existing conditions, namely personal interest, followed by the successor's working experience in the family business and industry specific skills.
• The ranking of variables if there are actions can be seen from the potential direct influence and potential direct dependence (for existing conditions), as well as the potential indirect influence and potential indirect dependence (for forecasting), where the results for direct and indirect influence, and for direct and indirect dependence, are the same. The most influential variables are owner/predecessor characteristics, followed by the relationship between predecessors and successors, and participative leadership style, while the most dependent variables are personal interest, working experience in the family business, and commitment to the company.
Conclusions
The woodcraft family business in Bali is currently experiencing a mature and critical period, because management is in the transition from the second to the third generation, or already in the third generation. The analysis of family business performance using the family and financial approaches shows that the performance of the woodcraft family business is currently high on the emotional side (warm hearts) but tends to be low on the financial side (empty pockets). This also triggers shifts or changes in business lines.
The most influential factor, both under existing conditions (direct influence) and in forecasting (indirect influence), is participative leadership style, while the most dependent variable is personal interest, i.e. the involvement of successors in family business activities from an early age, which makes it easier for successors to understand the values of the family business. If actions or policies are implemented to influence factors related to willingness to succession, there will be a shift in potential direct influence, where the most influential factor becomes the owner/predecessor characteristics, while the most dependent remains the successor's early involvement in family business activities.
The woodcraft industry in Bali has cultural value, so considering its sustainability, several recommendations can be made based on the results of this research. The current owner needs to adopt a participative leadership pattern by involving successors in the family business, both as employees and directly in management. Generally, in the transgenerational process there are more potential successors than in the previous generation, so the owner can assess the interest and commitment of potential successors earlier. The goal is for the successor to acquire skills and knowledge related to this industry.
Following up on the results of the analysis of the performance of the woodcraft family business, where family relations are still well established, these relations can be used as social capital to jointly improve the business/financial performance of the business. Meanwhile, with regard to the transgenerational process, where the industry is in the transition from the second to the third generation, or in the third generation, and the risk of conflict will be higher, further research can be directed to examining the objectives of and interrelationships between actors/stakeholders in this industry. Scientifically, to understand the alignment of definitions of success in family business performance, it is also necessary to know the stakeholders of the family business (Poutziouris et al., 2006). Conversely, inconsistency in defining success in a family business indicates a source of conflict (Astrachan & McMillan, 2003). Prospective analysis through the MACTOR technique should therefore be considered in future research.
"year": 2023,
"sha1": "f7cbf4fa621adee9f86dabb9da811337d575216f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.dsl.2023.4.002",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "42d86252370b42781107e7d9505c7189a643d77f",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
135409746 | pes2o/s2orc | v3-fos-license | Competition alters predicted forest carbon cycle responses to nitrogen availability and elevated CO2: simulations using an explicitly competitive, game-theoretic vegetation demographic model
Competition is a major driver of carbon allocation to different plant tissues (e.g., wood, leaves, fine roots), and allocation, in turn, shapes vegetation structure. To improve their modeling of the terrestrial carbon cycle, many Earth system models now incorporate vegetation demographic models (VDMs) that explicitly simulate the processes of individual-based competition for light and soil resources. Here, in order to understand how these competition processes affect predictions of the terrestrial carbon cycle, we simulate forest responses to elevated atmospheric CO2 concentration [CO2] along a nitrogen availability gradient, using a VDM that allows us to compare fixed allocation strategies vs. competitively optimal allocation strategies. Our results show that competitive and fixed strategies predict opposite fractional allocation to fine roots and wood, though they predict similar changes in total net primary production (NPP) along the nitrogen gradient. The competitively optimal allocation strategy predicts decreasing fine root and increasing wood allocation with increasing nitrogen, whereas the fixed strategy predicts the opposite. Although simulated plant biomass at equilibrium increases with nitrogen due to increases in photosynthesis for both allocation strategies, the increase in biomass with nitrogen is much steeper for competitively optimal allocation due to its increased allocation to wood. The qualitatively opposite fractional allocation to fine roots and wood of the two strategies also impacts the effects of elevated [CO2] on plant biomass. Whereas the fixed allocation strategy predicts an increase in plant biomass under elevated [CO2] that is approximately independent of nitrogen availability, competition leads to higher plant biomass response to elevated [CO2] with increasing nitrogen availability. Our results indicate that the VDMs that explicitly include the effects of competition for light and soil resources on allocation may generate significantly different ecosystem-level predictions of carbon storage than those that use fixed strategies.
Introduction
Allocation of assimilated carbon to different plant tissues is a fundamental aspect of plant growth and profoundly affects terrestrial ecosystem biogeochemical cycles (Cannell and Dewar, 1994; Lacointe, 2000). Ecologically, allocation represents an evolutionarily honed "strategy" of plants that use limited resources and compete with other individuals, and consequently drives successional dynamics and vegetation structure (De Kauwe et al., 2014; DeAngelis et al., 2012; Haverd et al., 2016; Tilman, 1988). Biogeochemically, allocation links plant physiological processes, such as photosynthesis and respiration, to biogeochemical cycles and carbon storage of ecosystems (Bloom et al., 2016; De Kauwe et al., 2014). Thus, correctly modeling allocation patterns is critical for correctly predicting terrestrial carbon cycles and Earth system dynamics.
In current Earth system models (ESMs), the terrestrial carbon cycle is usually simulated by pool-based compartment models that simulate ecosystem biogeochemical cycles as lumped pools and fluxes of plant tissues and soil organic matter (Fig. 1a) (Emanuel and Killough, 1984; Eriksson, 1971; Parton et al., 1987; Randerson et al., 1997; Sitch et al., 2003). In these models, the dynamics of carbon can be described by a linear system of equations (Koven et al., 2015; Luo et al., 2001; Luo and Weng, 2011; Sierra and Mueller, 2015; Xia et al., 2013):

dX(t)/dt = BU(t) + AX(t), (1)

where X is a vector of ecosystem carbon pools, U is carbon input (i.e., gross primary production, GPP), B is the vector of allocation parameters to autotrophic respiration and plant carbon pools (e.g., leaves, stems and fine roots), and A is a matrix of carbon transfer and turnover. In this system, carbon dynamics are defined by carbon input (U), allocation (B), and residence time and transfer coefficients (A). The allocation schemes (B) are thus embedded in a linear system, or a quasi-linear system if the allocation parameters in B are a function of carbon input (U) or plant carbon pools (X).
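Under fixed parameters, Eq. (1) is a linear system whose equilibrium pools follow directly from X_eq = −A⁻¹BU. The sketch below (Python with NumPy) illustrates this for a three-pool (leaf, wood, soil) example; the allocation fractions and turnover rates are illustrative values, not calibrated parameters from any of the cited models.

```python
import numpy as np

U = 1.0                            # carbon input, kg C m^-2 yr^-1
B = np.array([0.4, 0.4, 0.2])      # allocation fractions to leaf, wood, soil
A = np.array([
    [-1.00,  0.000,  0.00],        # leaf: 1 yr residence time
    [ 0.00, -0.020,  0.00],        # wood: 50 yr residence time
    [ 0.90,  0.018, -0.05],        # soil: gains litter, loses to decomposition
])

# At equilibrium dX/dt = BU + AX = 0, so X_eq = -A^-1 (B U)
X_eq = -np.linalg.solve(A, B * U)
for pool, x in zip(["leaf", "wood", "soil"], X_eq):
    print(f"{pool}: {x:.1f} kg C m^-2")
```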
The modeling of allocation in this system (i.e., the parameters in vector B) is usually based on plant allometry, biomass partitioning and resource limitation (De Kauwe et al., 2014; Montané et al., 2017). The allocation parameters are either fixed ratios to leaves, stems and roots, which may vary among plant functional types (e.g., CENTURY, Parton et al., 1987; TEM, Raich et al., 1991; CASA, Randerson et al., 1997), or are responsive to climate and soil conditions as a way to phenomenologically mimic the shifts in allocation that are empirically observed or hypothesized (e.g., CTEM, Arora and Boer, 2005; ORCHIDEE, Krinner et al., 2005; LPJ, Sitch et al., 2003). These modeling approaches either assume that vegetation is equilibrated (fixed ratios) or average the responses of plant types to changes in environmental conditions as a collective behavior. Thus, the carbon dynamics in these models can be constrained by selecting appropriate parameters of allocation, turnover rates and transfer coefficients to fit the observations (Friend et al., 2007; Hoffman et al., 2017; Keenan et al., 2013).
To predict transient changes in vegetation structure and composition in response to climate change, vegetation demographic models (VDMs) that are able to simulate transient population dynamics are being incorporated into ESMs (Fisher et al., 2018; Scheiter and Higgins, 2009). Generally, VDMs explicitly simulate demographic processes, such as plant reproduction, growth and mortality, to generate the dynamics of populations (Fig. 1b). To speed computations and minimize complexity, groups of individuals are usually modeled as cohorts. With multiple cohorts and plant functional types (PFTs), VDMs can bring plant functional diversity and adaptive dynamics into the system when explicitly simulating individual-based competition for different resources and vegetation succession, and can thus predict dominant plant trait changes with environmental conditions and ecosystem development (Scheiter et al., 2013; Scheiter and Higgins, 2009; Weng et al., 2015).
The combinations of plant traits represent the competition strategies at different stages of ecosystem development. Evolutionarily, a strategy that can outcompete all other strategies in the environment created by itself will be dominant. This strategy is called an evolutionarily stable strategy or a competitively optimal strategy (McGill and Brown, 2007). In VDMs, competitively optimal strategies can therefore be reasonably predicted based on the costs and benefits of different strategies (i.e., combinations of plant traits) through their effects on demographic processes (i.e., fitness) and ecosystem biogeochemical cycles (Fig. 1c) (e.g., Farrior et al., 2015; Weng et al., 2015).
The dynamics of plant traits can substantially change predictions of ecosystem biogeochemical dynamics, since they change the key parameters of vegetation physiological processes and soil organic matter decomposition (e.g., Dybzinski et al., 2015; Farrior et al., 2015; Weng et al., 2017). Therefore, the key parameters that are used to estimate carbon dynamics in the linear system model (Eq. 1), such as allocation (B) and residence times in different carbon pools (matrix A, which includes coefficients of carbon transfer and turnover time), become functions of competition strategies that vary with environment and carbon input. In addition, the turnover of vegetation carbon pools becomes a function of allocation, leaf longevity, fine root turnover and tree mortality rates, which change with vegetation succession and the most competitive plant traits. These changes make the system nonlinear and can lead to large biases within the framework of the compartmental pool-based models as represented by Eq. (1) (Sierra et al., 2017; Sierra and Mueller, 2015). Because of the high complexity associated with demographic and competition processes, the model predictions are usually sensitive to the parameters in these processes and are of high uncertainty (e.g., Pappas et al., 2016).
In contrast to their implementation in the more complicated VDMs discussed above, models of competitively dominant plant strategies using much simpler model structures and assumptions can sometimes be solved analytically (Dybzinski et al., 2011, 2015; Farrior et al., 2013, 2015). Although simplified, such models can pinpoint the key processes that improve the predictive power of simulation models (Dybzinski et al., 2011; Farrior et al., 2013, 2015), allowing them to help researchers formulate model processes and understand the simulated ecosystem dynamics in ESMs. For example, the analytical model derived by Farrior et al. (2013) that links interactions between ecosystem carbon storage, allocation and water stress at elevated atmospheric CO2 concentration [CO2] sheds light on the otherwise inscrutable processes leading to varied soil water dynamics in a land model coupled with a VDM (Weng et al., 2015). Recognizing the benefit, Weng et al. (2017) included both a simplified analytical model and a more complicated VDM to understand competitively optimal leaf mass per area, competition between evergreen and deciduous plant functional types, and the resulting successional patterns.
In this study, we use a stand-alone simulator derived from the LM3-PPA model (Weng et al., 2015, 2017) to show how forests respond to elevated [CO2] and nitrogen availability via different competitively optimal allocation strategies. The demographic processes of this model have been coupled into the land model of the Geophysical Fluid Dynamics Laboratory's Earth System Model (Shevliakova et al., 2009; Weng et al., 2015) and are being added to the NASA Goddard Institute for Space Studies' Earth system model, ModelE (Schmidt et al., 2014). Using this model, we simulate the shifts in competitively optimal allocation strategies in response to elevated [CO2] at different nitrogen levels, based on insights from the analytical model derived by Dybzinski et al. (2015). That model predicts that increases in carbon storage at elevated [CO2] relative to storage at ambient [CO2] are largely independent of total nitrogen because of an increasing shift in carbon allocation from long-lived, low-nitrogen wood to short-lived, high-nitrogen fine roots under elevated [CO2] with increasing nitrogen availability. Here, we analyze the simulated ecosystem carbon cycle variables (gross and net primary production, allocation, and biomass) of separate monoculture and polyculture model runs. In the monoculture runs, ecosystem properties are the result of the prescribed allocation strategies of a given PFT. In the polyculture runs, competition between the different allocation strategies results in succession and the eventual dominance of the most competitive allocation strategy for a given nitrogen availability and [CO2] level. Since everything else in the model is identical, we are able to compare the predictions of single fixed strategies with competitively optimal allocation strategies by comparing the ecosystem properties of these two types of runs.
2 Methods and materials
BiomeE model overview
We used a stand-alone ecosystem simulator (Biome Ecological strategy simulator, BiomeE) to conduct simulation experiments. BiomeE is derived from the version of LM3-PPA used in Weng et al. (2017), and its code is available at GitHub (https://github.com/wengensheng/BiomeESS, last access: 27 November 2019). In this version, we simplified the processes of energy transfer and soil water dynamics of LM3-PPA (Weng et al., 2015) but retained the key features of plant physiology and individual-based competition for light, soil water and, via the decomposition of soil organic matter, nitrogen (Fig. 2 and Supplement I for details). In this model, individual trees are represented as sets of cohorts of similarly sized trees and are arranged in different vertical canopy layers according to their height and crown area, following the rules of the perfect plasticity approximation (PPA) model (Strigul et al., 2008). Sunlight is partitioned into these canopy layers according to Beer's law. Thus, a key parameter for light competition, the critical height, is defined: all the trees above this context-dependent height get full sunlight, and all trees below it are shaded by the upper-layer trees.
Each tree consists of seven pools: leaves, fine roots, sapwood, heartwood, fecundity (seeds), and nonstructural carbohydrates and nitrogen (NSC and NSN, respectively) (Fig. 2b). The carbon and nitrogen in plant pools enter the soil pools with the mortality of individual trees and the turnover of leaves and fine roots. There are three soil organic matter (SOM) pools for carbon and nitrogen: fast turnover, slow turnover and microbial pools, along with a mineral nitrogen pool for mineralized nitrogen in soil. The simulation of SOM decomposition and nitrogen mineralization is based on the models of Gerber et al. (2010) and Manzoni et al. (2010) and is described in detail in Weng et al. (2017). The decomposition rate of a SOM pool is determined by the basal turnover rate together with soil temperature and moisture. The nitrogen mineralization rate is a function of the decomposition rate and the C:N ratio of the SOM. Microbes must consume more carbon in the high C:N ratio SOM pools to get enough nitrogen, and must release excess nitrogen in the low C:N ratio SOM pools to get enough carbon for energy (Weng et al., 2017).
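The coupling between decomposition and nitrogen mineralization described above can be sketched as follows (Python). The functional forms are a generic first-order sketch of the behaviour described in the text, not the exact Gerber et al. (2010) or Manzoni et al. (2010) formulations, and all parameter values are illustrative.

```python
def decomposition(c_pool, k_base, f_temp, f_moist):
    """First-order SOM decomposition flux (kg C m^-2 per step), scaled by
    temperature and moisture modifiers in [0, 1]."""
    return k_base * f_temp * f_moist * c_pool

def net_mineralization(d_c, cn_som, cn_microbe=10.0, cue=0.4):
    """Net N released (+) or immobilized (-) when microbes with C:N ratio
    cn_microbe and carbon-use efficiency cue consume decomposed carbon d_c."""
    n_in_substrate = d_c / cn_som          # N contained in the decomposed SOM
    n_microbial_demand = cue * d_c / cn_microbe
    return n_in_substrate - n_microbial_demand

d_c = decomposition(c_pool=4.0, k_base=0.01, f_temp=0.8, f_moist=0.9)
print(net_mineralization(d_c, cn_som=40.0))   # high C:N pool -> immobilization
print(net_mineralization(d_c, cn_som=12.0))   # low C:N pool -> net release
```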
Plant growth and reproduction are driven by the carbon assimilation of leaves via photosynthesis, which is in turn dependent on water and nitrogen uptake by fine roots. The photosynthesis model is identical to that of LM3-PPA (Weng et al., 2015), which is a simplified version of the Leuning model (Leuning et al., 1995). This model first calculates the photosynthesis rate, stomatal conductance and water demand of the leaves of each tree (cohort) in the absence of soil water limitation. It then calculates the available water supply as a function of fine root surface area and soil water content. The demand-based assimilation rate and stomatal conductance are adjusted if the soil water supply is less than the plant water demand. Soil water content is calculated based on the fluxes of precipitation, soil surface evaporation and plant water uptake (transpiration) in three layers of soil to a depth of 2 m (see Supplement I for details).
Assimilated carbon enters the NSC pool and is subsequently used for respiration, growth and reproduction. Empirical allometric equations relate woody biomass (including coarse roots, bole and branches), crown area and stem diameter. The individual-level dimensions of a tree, i.e., height (Z), biomass (S) and crown area (A_CR), are given by empirical allometries (Dybzinski et al., 2011; Farrior et al., 2013):

Z(D) = α_Z D^θ_Z, A_CR(D) = α_C D^θ_C, S(D) = (π/4) Λ ρ_W D² Z(D), (2)

where Z is tree height, D is tree diameter, S is total woody biomass carbon (including bole, coarse roots and branches) of a tree, α_C and α_Z are PFT-specific constants, θ_C = 1.5 and θ_Z = 0.5 (Farrior et al., 2013) (although they could be made PFT-specific if necessary), π is the circular constant, Λ is a PFT-specific taper constant, and ρ_W is PFT-specific wood density (kg C m⁻³) (Table 1). We set targets for leaf (L*), fine root (FR*) and sapwood cross-sectional area (A*_SW) that govern plant allocation of nonstructural carbon and nitrogen during growth. These targets are related by the following equations, based on the assumption of the pipe model (Shinozaki et al., 1964):

L*(D, p) = σ l* A_CR(D) p(t), FR*(D) = ϕ_RL l* A_CR(D) / γ, A*_SW(D) = α_CSA l* A_CR(D), (3)

where L*(D, p), FR*(D) and A*_SW(D) are the targets of leaf mass (kg C per tree), fine root biomass (kg C per tree) and sapwood cross-sectional area (m² per tree), respectively, at tree diameter D; l* is the target leaf area per unit crown area of a given PFT; A_CR(D) is the crown area of a tree with diameter D; σ is PFT-specific leaf mass per unit area (LMA); p(t) is a PFT-specific function ranging from zero to one that governs leaf phenology (Weng et al., 2015); ϕ_RL is the target ratio of total root surface area to total leaf area; γ is specific root area; and α_CSA is an empirical constant (the ratio of sapwood cross-sectional area to target leaf area). The phenology function p(t) takes values of 0 (nongrowing season) or 1 (growing season), following the phenology model of LM3-PPA (Weng et al., 2015). The onset of a growing season is controlled by two variables, growing degree days (GDDs) and a weighted mean daily temperature (T_pheno), while the end of a growing season is controlled by T_pheno (see Supplement I for details of the phenology model).
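The allometries and pipe-model targets of Eqs. (2)-(3) translate directly into code; the sketch below (Python) uses illustrative parameter values (not the calibrated Table 1 values) to compute a tree's dimensions and its leaf, fine-root and sapwood targets from its diameter.

```python
import math

# Illustrative PFT constants (not the calibrated values in Table 1)
ALPHA_Z, THETA_Z = 36.0, 0.5      # height allometry
ALPHA_C, THETA_C = 120.0, 1.5     # crown-area allometry
TAPER, RHO_W = 0.75, 300.0        # taper constant; wood density (kg C m^-3)
L_STAR, SIGMA = 3.5, 0.035        # crown leaf area index target; LMA (kg C m^-2)
PHI_RL, GAMMA = 4.0, 30.0         # root:leaf area ratio; specific root area (m^2 kg C^-1)
ALPHA_CSA = 2.0e-4                # sapwood cross-section per unit target leaf area

def height(d):      return ALPHA_Z * d ** THETA_Z                  # Eq. (2)
def crown_area(d):  return ALPHA_C * d ** THETA_C                  # Eq. (2)
def stem_biomass(d):
    return 0.25 * math.pi * TAPER * RHO_W * d ** 2 * height(d)     # Eq. (2)

def targets(d, p=1.0):
    """Pipe-model targets (Eq. 3) for a tree of diameter d, phenology state p."""
    leaf_area = L_STAR * crown_area(d)
    l_target = SIGMA * leaf_area * p         # leaf mass target (kg C)
    fr_target = PHI_RL * leaf_area / GAMMA   # fine-root mass target (kg C)
    asw_target = ALPHA_CSA * leaf_area       # sapwood cross-section target (m^2)
    return l_target, fr_target, asw_target

d = 0.25  # a 25 cm diameter tree
print(f"Z = {height(d):.1f} m, A_CR = {crown_area(d):.1f} m^2, S = {stem_biomass(d):.0f} kg C")
print("targets (L*, FR*, A*_SW):", targets(d))
```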
Nitrogen uptake
The rate of nitrogen uptake (U, g N m⁻² h⁻¹) from the soil mineral nitrogen pool is an asymptotically increasing function of fine root biomass density (C_FR,total, kg C m⁻²), following McMurtrie et al. (2012):

U = f_U,max N_mineral C_FR,total / (C_FR,total + K_FR), (4)

where N_mineral is the mineral nitrogen in soil (g N m⁻²), f_U,max is the maximum rate of nitrogen absorption per hour when C_FR,total approaches infinity, and K_FR is a shape parameter (kg C m⁻²) at which the nitrogen uptake rate is half of f_U,max. The nitrogen uptake rate of an individual tree (U_tree, kg N h⁻¹ tree⁻¹) is calculated as:

U_tree = U C_FR,tree / C_FR,total, (5)

where C_FR,tree is the fine root biomass of a tree (kg C tree⁻¹). The nitrogen absorbed by roots enters the NSN pool and is then allocated to plant tissues through plant growth.
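A direct transcription of these two uptake relations is given below (Python); the values of f_U,max and K_FR are illustrative assumptions, not calibrated parameters.

```python
def n_uptake_stand(n_mineral, c_fr_total, f_u_max=0.01, k_fr=0.3):
    """Stand-level N uptake (g N m^-2 h^-1), saturating in fine-root
    biomass density c_fr_total (kg C m^-2), as in Eq. (4)."""
    return f_u_max * n_mineral * c_fr_total / (c_fr_total + k_fr)

def n_uptake_tree(n_mineral, c_fr_total, c_fr_tree, **params):
    """An individual's share of stand uptake, proportional to its
    fine-root biomass, as in Eq. (5)."""
    u = n_uptake_stand(n_mineral, c_fr_total, **params)
    return u * c_fr_tree / c_fr_total

print(n_uptake_stand(n_mineral=2.0, c_fr_total=0.5))
print(n_uptake_tree(n_mineral=2.0, c_fr_total=0.5, c_fr_tree=0.05))
```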
Allocation and plant growth
The partitioning of carbon and nitrogen into the plant pools (i.e., leaves, fine roots and sapwood) is limited by the allometric equations, the targets of leaves, fine roots and sapwood cross-sectional area, and the stoichiometry (i.e., C:N ratios) of these plant tissues. At a daily time step, the model calculates the amount of carbon and nitrogen available for growth according to the total NSC and NSN and the current leaf and fine root biomass. Basically, the available NSC (G_C) is the summation of a small fraction (f_1) of the total NSC in an individual plant and the differences between the targets of leaves and fine roots and their current biomass, capped by a larger fraction (f_2) of NSC (Eq. 6a). The available NSN (G_N) is analogous to that of the NSC and approximately meets the stoichiometric requirement of the plant tissues (Eq. 6b):

G_C = min(f_2 NSC, f_1 NSC + (L* − L) + (FR* − FR)), (6a)
G_N = min(f_2 NSN, f_1 NSN + (N*_L − N_L) + (N*_FR − N_FR)), (6b)

where L* and FR* are the targets of leaves and fine roots, respectively (see Eq. 3); L and FR are the current leaf and fine root biomass, respectively; and N*_L and N*_FR are the nitrogen contents of leaves and fine roots at their targets according to their target C:N ratios (with N_L and N_FR the current contents). The parameter f_1 is the fraction of NSC (or NSN) for normal growth after leaves and fine roots approach their targets, and f_2 caps the maximum daily availability of NSC (or NSN) during the period of leaf flush at the beginning of a growing season. The parameter f_1 is much smaller than f_2. We let f_1 = 1/(365 × 3) and f_2 = 0.02 in this study.
The allocation of the available NSC (i.e., G_C) to wood (G_W), leaves (G_L), fine roots (G_FR) and seeds (G_F) follows Eq. (7). These equations describe the mass growth of plant tissues, with nitrogen effects on the carbon allocation between high-nitrogen tissues and low-nitrogen tissues (wood): maximizing leaf and fine root growth (G_L and G_FR, respectively), optimizing carbon usage at a given nitrogen supply (G_N), and keeping the tissues at their target C:N ratios.
where CN_L,0, CN_FR,0, CN_F,0 and CN_W,0 are the target C:N ratios of leaves, fine roots, seeds and sapwood, respectively; γ is specific root area (m² kg C⁻¹); σ is leaf mass per unit area (kg C m⁻²); f_LFR,max is the maximum fraction of G_C for leaves and fine roots (0.85 in this study); v is the fraction of the remaining carbon for seeds (0.1 in this study); and r_S/D is a nitrogen-limiting factor ranging from 0 (no nitrogen for leaves, fine roots and seeds) to 1 (nitrogen available for full growth of leaves, fine roots and seeds). The parameter r_S/D controls the allocation of G_C and G_N to the four plant pools (Eq. 7a). It can be solved analytically (Eqs. 8 and 9),
where N̄ is defined as the potential nitrogen demand for plant growth at r_S/D = 1 (i.e., no nitrogen limitation). When G_N ≥ N̄ (r_S/D = 1), there is no nitrogen limitation: all of G_C is used for plant growth, the allocation follows the rules of the carbon-only model (Eq. 7d-f with r_S/D = 1), and the excess nitrogen (G_N − N̄) is returned to the NSN pool (as if it were never taken out). When G_C/CN_W,0 < G_N < N̄ (i.e., 0 < r_S/D < 1), all of G_C and G_N are used in new tissue growth; however, the leaves and fine roots cannot reach their targets at this step (i.e., they are down-regulated). When G_N ≤ G_C/CN_W,0 (r_S/D = 0), all of G_N is allocated to sapwood and the excess carbon (G_C − G_N CN_W,0) is returned to the NSC pool. This is a very rare case, since a low G_N leads to low leaf growth, reducing G_C before the case G_N < G_C/CN_W,0 occurs. Therefore, in most cases, Eq. (7a) determines the allocation.

Overall, this strategy down-regulates leaf production under low nitrogen conditions while making use of assimilated carbon in height-structured competition for light. Allocation to wood tissues (G_W) drives the growth of tree diameter, height and crown area and thus increases the targets of leaves and fine roots (Eq. 3). By differentiating the stem biomass allometry in Eq. (2) with respect to time, and using the fact that dS/dt equals the carbon allocated to wood growth (G_W), we obtain the diameter growth:

dD/dt = G_W / [(π/4) Λ ρ_W α_Z (2 + θ_Z) D^(1+θ_Z)], (10)

which transforms mass growth into structural changes in tree architecture. With an updated tree diameter, we can calculate the new tree height and crown area using the allometry equations (Eq. 2) and the targets of leaf and fine root biomass (Eq. 3) for the next growth step.
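A sketch of the nitrogen-limitation factor and the diameter update is given below (Python). The linear interpolation used for r_S/D is one consistent reading of the three cases described above, not the exact solution of Eqs. (8)-(9), and the allometric constants are the illustrative values used in the earlier sketch.

```python
import math

TAPER, RHO_W = 0.75, 300.0
ALPHA_Z, THETA_Z = 36.0, 0.5

def r_sd(g_c, g_n, n_bar, cn_w0=350.0):
    """Nitrogen-limitation factor: 1 when G_N covers the full demand N_bar,
    0 when G_N only covers all-wood growth, linear in between (a sketch of
    the behaviour of Eqs. 8-9, not their exact solution)."""
    n_wood_only = g_c / cn_w0
    if g_n >= n_bar:
        return 1.0
    if g_n <= n_wood_only:
        return 0.0
    return (g_n - n_wood_only) / (n_bar - n_wood_only)

def diameter_increment(g_w, d):
    """dD per step from wood carbon g_w, inverting dS/dD of
    S(D) = (pi/4) * Lambda * rho_W * alpha_Z * D^(2 + theta_Z) (Eq. 10)."""
    ds_dd = 0.25 * math.pi * TAPER * RHO_W * ALPHA_Z * (2 + THETA_Z) * d ** (1 + THETA_Z)
    return g_w / ds_dd

print(r_sd(g_c=1.0, g_n=0.02, n_bar=0.03))   # partial nitrogen limitation
print(diameter_increment(g_w=5.0, d=0.25))   # diameter growth of a 25 cm tree
```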
Overall, this is a flexible allocation scheme that still follows the major assumptions of the previous version of LM3-PPA (Weng et al., 2015, 2017). This allocation scheme prioritizes the allocation to leaves and fine roots, maintains a minimum growth rate of stems, and keeps a constant area ratio of fine roots to leaves. Based on these allocation rules, the average allocation of carbon and nitrogen to leaves, fine roots and wood over a growing season is governed by the targets for the leaf area per unit crown area (i.e., crown leaf area index, l*) and fine root area per unit leaf area (ϕ_RL). Since the crown leaf area index, l*, is fixed in this study, ϕ_RL is the key parameter determining the relative allocation of carbon to fine roots and stems. A high ϕ_RL means a high relative allocation to fine roots and therefore a low relative allocation to stems, and vice versa. Note that here ϕ_RL is fixed for each PFT and remains so for all the model runs.
The process of choosing a context-dependent competitively dominant ϕ_RL takes place after finding the fitness of each ϕ_RL in monoculture and in competition with other PFTs (i.e., different values of ϕ_RL). The competitively optimal strategy is the one that can successfully exclude all others in the processes of competition and succession, but it is not necessarily the one that maximizes production in monoculture. For example, each ϕ_RL creates an environment of light profile and soil nitrogen in its monoculture. Other ϕ_RL PFTs may have higher fitness in this environment than the one that creates it. Only the competitively dominant strategy has the highest fitness in the environment it creates (Fig. 1c).
Site and data
Data pertaining to vegetation, climate and soil at Harvard Forest (Aber et al., 1993; Hibbs, 1983; Urbanski et al., 2007) were used to design the plant functional types (PFTs) and ecosystem nitrogen levels used in the simulation experiments, to drive the model and to calibrate model parameters. Harvard Forest is located in Massachusetts, USA (42.54°, −72.17°). The climate of Harvard Forest is cool temperate, with an annual precipitation of 1050 mm distributed fairly evenly throughout the year. The annual mean temperature is 8.5 °C, with a high monthly mean temperature of 20 °C in July and a low of −7 °C in January. The soils are mainly sandy loam with an average depth of around 1 m and are moderately well drained in most areas. In forest sites, soil carbon is around 8 kg C m⁻² and nitrogen 300 g N m⁻² (Compton and Boone, 2000). The vegetation is deciduous broadleaf (mixed) forest, with the major species being red oak (Quercus rubra), red maple (Acer rubrum), black birch (Betula lenta), white pine (Pinus strobus) and hemlock (Tsuga canadensis) (Compton and Boone, 2000; Savage et al., 2013). The data used to drive our model runs are gap-filled hourly meteorological data at Harvard Forest from 1991 to 2006, obtained from the North American Carbon Program (NACP) site-level synthesis datasets (Barr et al., 2013).
Simulation experiments
We set two atmospheric CO2 concentration ([CO2]) levels, 380 and 580 ppm, and eight ecosystem total nitrogen levels (ranging from 114.5 to 552 g N m⁻² at intervals of 62.5 g N m⁻²) by assigning the initial content of the slow SOM pool for our simulation experiments (Table 2). This range covers the soil nitrogen contents across the plots at Harvard Forest with different species compositions and land-use history (200-300 g N m⁻²) (Compton and Boone, 2000; Melillo et al., 2011) and represents the range from infertile to fertile soils in temperate forests (Post et al., 1985; Yang et al., 2011). The nitrogen cycles through the plant and soil pools and is redistributed among them via plant demographic processes, soil carbon transfers and plant uptake. In all the simulation experiments, we assume the ecosystem has no nitrogen inputs and no outputs for convenience, since we already have eight total nitrogen levels to represent the consequences of different nitrogen input and output processes at an equilibrium state. The PFTs were based on an evergreen needle-leaved tree PFT with different fine root to leaf area ratios, ϕ_RL, in the range from 1 to 8 (Table 2). Simply stated, the PFTs we investigate differ only in the parameter ϕ_RL.
We define the model runs started with only one fixed-ϕ_RL PFT as "monoculture runs", although the actual allocation of carbon to different plant tissues varies with [CO2] and ecosystem nitrogen availability. The model runs started with multiple PFTs are called "polyculture runs" (eight PFTs with different ϕ_RL at the beginning, although many are driven to extinction during a given model run). We conducted one set of monoculture runs and two sets of polyculture runs (Table 2).

(Table 2, excerpt: the monoculture runs used all the PFTs (ϕ_RL = 1-8); polyculture run II used one model run per combination of nitrogen level and CO2 concentration, with eight PFTs with ϕ_RL ranging from 5.0 − 0.5i to 8.5 − 0.5i at intervals of 0.5, where i denotes the eight nitrogen levels from 114.5 to 552 g N m⁻².)
In the monoculture runs, we run the full combinations of eight PFTs with root/leaf area ratios (ϕ_RL) from 1 to 8, eight ecosystem total nitrogen levels and two CO2 concentrations (380 and 580 ppm) (Table 2). Of the eight PFTs, only those with ϕ_RL ≤ 6 survived at ambient [CO2] (380 ppm), because the carbon assimilated by leaves could not meet the demand of plant tissues at ϕ_RL > 6. The monoculture runs are for exploring the model predictions of gross primary production (GPP), net primary production (NPP), allocation and biomass at equilibrium with fixed ϕ_RL at different total nitrogen levels.
In polyculture run I, we used the same PFTs as in the monoculture runs, with ϕ_RL varying from 1 to 8 at intervals of 1.0, and the ecosystem total nitrogen levels were the same as those used in the monoculture runs (Table 2). This set of polyculture runs was used to explore successional patterns at both ambient and elevated [CO2] (380 and 580 ppm, respectively). However, this set of model runs could not show the details of equilibrium plant biomass and allocation patterns along the nitrogen gradient because of the large intervals between the ϕ_RL values.
To achieve greater resolution in our competition predictions, we designed polyculture run II using a dynamic PFT combination scheme, according to the ranges of ϕ_RL obtained from polyculture run I that could survive at a particular nitrogen level at both CO2 concentrations. For each nitrogen level, we set eight PFTs with ϕ_RL that varied over a range of 3.5 (i.e., x to x + 3.5) at intervals of 0.5, starting with the highest ϕ_RL of 8.0 at the lowest N level (114.5 g N m⁻²) and decreasing by 0.5 per level of increase in ecosystem total N. We used i = 1, 2, ..., 8 to denote the eight N levels from 114.5 to 552 g N m⁻². The ϕ_RL of the eight PFTs at each level were 5.0 − 0.5i, 5.5 − 0.5i, ..., 8.5 − 0.5i (Table 2). For example, at a nitrogen level of 114.5 g N m⁻² (i = 1), the ϕ_RL of the eight PFTs were 4.5, 5.0, ..., 8.0, and at 177 g N m⁻² (i = 2) they were 4.0, 4.5, ..., 7.5.
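The sliding ϕ_RL grid of polyculture run II can be generated mechanically; the short sketch below (Python) reproduces the design stated above.

```python
def phi_rl_grid(n_levels=8, n_pfts=8, step=0.5):
    """phi_RL values of the eight PFTs at nitrogen level i (1-based):
    from 5.0 - 0.5*i up to 8.5 - 0.5*i at 0.5 intervals."""
    return {i: [round(5.0 - 0.5 * i + step * j, 1) for j in range(n_pfts)]
            for i in range(1, n_levels + 1)}

grid = phi_rl_grid()
print(grid[1])  # [4.5, 5.0, ..., 8.0] at N = 114.5 g N m^-2
print(grid[2])  # [4.0, 4.5, ..., 7.5] at N = 177 g N m^-2
```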
For both monoculture and polyculture runs, visual inspection indicated that stands had reached equilibrium after ~1200 years. To be conservative, we present equilibrium data by averaging model properties between years 1400 and 1800. We compared the simulated equilibrium GPP, NPP, allocation (both the absolute amount of carbon and fractions of total NPP) and plant biomass of polyculture run II with those from the monoculture runs. We used the results from one PFT (ϕ_RL = 4) to highlight the differences from the plant responses with competitively optimal allocation strategies obtained from polyculture run II.
Results
In the monoculture runs, GPP and NPP increase by a factor of 3 along the nitrogen gradient used in this study (114.5-552 g N m−2) at both ambient (Fig. 3) and elevated [CO2] (Fig. S1 in the Supplement). The magnitude of differences in GPP and NPP due to differences in fixed allocation within a given nitrogen level is comparable to the magnitude of differences in GPP and NPP due to nitrogen level within a given fixed allocation strategy (Fig. 3a and b) when ϕRL is in the range that allows plants to grow normally (1-5 in the case of ambient [CO2]). As prescribed by the definition of ϕRL, allocation of NPP to fine roots increases with ϕRL in monoculture runs (Fig. 3c). As a consequence, allocation of NPP to wood decreases as ϕRL increases (Fig. 3d). Allocation to leaves does not change much with ϕRL (Fig. 3e; note the differences in scale). Correspondingly, plant biomass at equilibrium decreases with ϕRL (Fig. 3f). The effects of nitrogen on the allocation of carbon to fine roots and wood follow our allocation model assumptions, because more carbon is allocated to low-nitrogen woody tissues in our model when nitrogen is limited. However, the amplitude of changes in GPP and NPP induced by nitrogen availability is lower than the amplitude of changes resulting from different values of ϕRL in the monoculture runs.
We used two sets of polyculture runs to look for the ϕRL that is closest to competitively optimal. In polyculture run I, where ϕRL ranges from 1 to 8 at all nitrogen levels, the winning strategy (ϕRL) decreases from 5 to 2 as the total nitrogen increases from 114.5 to 489.5 g N m−2 at ambient [CO2] (380 ppm) (Fig. 4a, c, e, g). Elevated [CO2] (580 ppm) shifts the winning strategy to a higher ϕRL at all the total nitrogen levels. As shown in Fig. 4, the winning strategy shifts from ϕRL = 5 to ϕRL = 8 at 114.5 g N m−2 and from ϕRL = 2 to ϕRL = 4 at 489.5 g N m−2. In some situations (e.g., Fig. 4g and Figs. S2 and S3), it takes a long time for the most competitive PFTs to out-compete the previously dominant PFTs because of the sequential replacement of dominant PFTs during the course of succession and the slow growth rate of trees in the understory.
Based on the shifts of the winning ϕRL from ambient to elevated [CO2] at the eight nitrogen levels, we designed polyculture run II with a high resolution of ϕRL and calculated GPP, NPP, allocation and plant biomass at the equilibrium state. The ϕRL of the winning PFTs decreases from 5.5 to 2 at ambient [CO2] and from 8.0 to 3.0 at elevated [CO2] as total nitrogen increases from 114.5 to 552.0 g N m−2 (Fig. 5a). The equilibrium GPP and NPP increase with total nitrogen, with values similar to those of the monoculture runs (Fig. 5b and c). However, the CO2 stimulation of NPP increases with total nitrogen more in the polyculture runs than in the monoculture runs. Elevated [CO2] increases carbon use efficiency (defined in this study as the ratio of NPP to GPP, NPP/GPP) in both the monoculture and polyculture runs (Fig. 5d). Also, the dependence of the NPP : GPP ratio on nitrogen is higher in the polyculture runs than in the monoculture runs (Fig. 5d).
Allocation of NPP to leaves increases with nitrogen in all conditions, i.e. both polyculture and monoculture runs at both ambient and elevated [CO2] (Fig. 6a). Foliage NPP is similar in these four model runs when nitrogen is low. At high nitrogen (> 400 g N m−2), polyculture runs generally have higher foliage NPP than the monoculture runs. The fraction of NPP allocated to leaves is relatively stable across the nitrogen gradient at both [CO2] levels (Fig. 6b) and is universally higher at ambient [CO2] than at elevated [CO2].
Fine root NPP does not significantly change with ecosystem total nitrogen in polyculture runs, whereas it increases monotonically with increasing nitrogen in monoculture runs (Fig. 6c). Elevated [CO2] increases fine root allocation at low nitrogen in polyculture runs but decreases root allocation irrespective of nitrogen in monoculture runs (Fig. 6c). The fraction of NPP allocated to fine roots decreases with nitrogen at both CO2 concentrations in polyculture runs, but it increases slightly in monoculture runs (Fig. 6d). In monoculture runs, elevated [CO2] reduces the fraction of NPP allocated to fine roots at all nitrogen levels. In polyculture runs, fractional allocation to fine roots increases at elevated [CO2] when nitrogen is low (e.g., 114.5-302 g N m−2) and decreases at elevated [CO2] when nitrogen is high (e.g., 364-552 g N m−2).
In contrast to the fine root response, NPP allocation to woody tissues increases with total nitrogen in both polyculture and monoculture runs (Fig. 6e). In polyculture runs, the fraction of allocation to woody tissues decreases at elevated [CO2] when ecosystem total nitrogen is low (e.g., 114-245 g N m−2) and increases at elevated [CO2] when ecosystem total nitrogen is high (e.g., 302-552 g N m−2) (Fig. 6f).
As a result of the changes in competitively optimal ϕRL, plant biomass increases much more dramatically with ecosystem total nitrogen in polyculture runs than in monoculture runs (Fig. 7a). The effects of elevated [CO2] on plant biomass increase with nitrogen in polyculture runs but are roughly constant in monoculture runs (Fig. 7b). Compared with the full spread of monoculture runs with ϕRL ranging from 1 to 6, polyculture runs have high root allocation at low nitrogen and low root allocation at high nitrogen due to changes in the dominant competitive allocation strategy, which amplifies plant biomass responses to elevated [CO2] with increasing nitrogen (Fig. 7c and d).
Discussion
Our simulations show that the predicted responses of individual plants to elevated [CO2] can be significantly changed by the explicit inclusion of competition processes. Here, the major tradeoff for light- and N-limited trees is the relative allocation between stems and fine roots (Dybzinski et al., 2011). Although the wood allocation (and thus carbon sequestration potential) of every PFT used in this study increases under elevated [CO2] at all nitrogen levels (e.g., Fig. 6e, dashed lines), only those PFTs that allocate more to fine roots (with lower carbon sequestration potential) can survive competition under elevated [CO2] (Fig. 6c, solid lines). Taken together, explicit inclusion of competition processes reduces the expected increase in biomass (and thus carbon sequestration potential) under elevated [CO2] compared with simulations that do not include competition processes (Fig. 7b).
Since there is a lack of direct observations or experiments to quantitatively validate the long-term patterns predicted by our model, we did not calibrate it to fit observations at Harvard Forest. In the following section, we analyze the model processes in detail and validate our modeling approach by comparing the general patterns from observations and experiments with model predictions. These comparisons also shed light on the modeling of allocation and vegetation responses to elevated [CO2].
Mechanisms of game-theoretic allocation modeling and validation of simulation results
In our model, the allocation of carbon and nitrogen within an individual tree is based on allometric scaling (Eq. 2), functional relationships (Eq. 3) and optimization of resource usage (Eq. 7). Generally, the allometric scaling relationships define the maximum leaf and fine root surface area at a given tree size, and the functional relationships define the ratios of leaf area to sapwood cross-sectional area and to fine root surface area. These rules are commonly used in ecosystem models (Franklin et al., 2012) and have been shown to generate reasonable predictions (De Kauwe et al., 2014; Valentine and Mäkelä, 2012). They implicitly define the priority of allocation to leaves and fine roots but allow for structurally unlimited stem growth when resources (carbon and nitrogen in this study) are available (i.e., the remainder goes to stems after leaf and fine root growth), and nonstructural carbohydrates (NSC) do not accumulate excessively when ecosystem nitrogen is limited (Fig. S6).
We used a tuning parameter, the maximum leaf and fine root allocation f LFR,max, to constrain the maximum allocation to leaves and fine roots in order to maintain a minimum growth rate of wood in years of low productivity. This is consistent with wood growth patterns in temperate trees, where new wood tissues must be continuously produced (especially early in the growing season) to maintain the functions of tree trunks and branches (Cuny et al., 2012; Michelot et al., 2012; Plomion et al., 2001). This parameter does not change the fact that leaves and fine roots have priority in allocation, since allocation ratios to stems are around 0.4-0.7 in temperate forests (Curtis et al., 2002; Litton et al., 2007). With a value of 0.85, the parameter f LFR,max seldom affects the overall carbon allocation ratios of leaves, fine roots and stems. If f LFR,max = 1 (i.e., the highest priority for leaf and fine root growth), simulated trunk radial growth would have unreasonably high interannual variation, because leaf and fine root growth would use all available carbon to approach their targets, leaving nothing for stems in some years of low productivity.
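The priority scheme described in the two paragraphs above can be summarized in pseudocode. The following is a minimal sketch of our reading of the allocation rules, with illustrative names; it is not BiomeE's actual allocation routine:

```python
F_LFR_MAX = 0.85  # maximum fraction of available carbon for leaves + fine roots

def allocate(c_available, leaf_deficit, root_deficit):
    """Split available carbon among leaves, fine roots and wood.

    leaf_deficit / root_deficit: carbon needed to reach the allometric
    targets of leaf and fine root area at the current tree size.
    """
    cap = F_LFR_MAX * c_available
    demand = leaf_deficit + root_deficit
    scale = min(1.0, cap / demand) if demand > 0 else 0.0
    c_leaf = leaf_deficit * scale
    c_root = root_deficit * scale
    c_wood = c_available - c_leaf - c_root  # the remainder goes to stems
    return c_leaf, c_root, c_wood
```

With the cap at 0.85, at least 15% of the available carbon reaches wood even in low-productivity years, which is the behavior the tuning parameter is meant to guarantee.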
The simulation of competition for light and soil resources is based on two fundamental mechanisms: (1) competition for light is based on the height of trees according to the PPA model, which assumes trees have perfectly plastic crowns to capture light via stem (trunk) and branch phototropism (Strigul et al., 2008), and (2) individual soil N uptake is linearly dependent on the fine root surface area of an individual tree relative to that of its neighbors (Dybzinski et al., 2019; McMurtrie et al., 2012; Weng et al., 2017). These two mechanisms define an allocation tradeoff between wood and fine roots for carbon and nitrogen investment under different CO2 concentrations and nitrogen environments. Including explicit competition for these resources to determine the dominant strategies results in very different predicted allocation patterns, and thus ecosystem-level responses, than those of strategies in the absence of competition. For example, fractional wood allocation increases with increasing nitrogen availability under competitive allocation but decreases, the opposite qualitative response, under a fixed strategy (Fig. 6f). Consequently, equilibrium plant biomass is predicted to increase much more with increasing nitrogen availability under a competitive strategy (Fig. 7a). In nature, the effects of competition on dominant plant traits may occur through species replacement or community assembly (akin to the mechanism in our model) (e.g., Douma et al., 2012), but they may also occur through adaptive plastic responses or in-place subpopulation evolution of ecotypes (Grams and Andersen, 2007; McNickle and Dybzinski, 2013; Smith et al., 2013).
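Mechanism (2) amounts to proportional partitioning of mineral nitrogen by relative fine root surface area. A hedged sketch (illustrative names, not the model's routine):

```python
def partition_n_uptake(root_areas, available_n):
    """Each individual's uptake is proportional to its share of the total
    fine root surface area among neighbors."""
    total_area = sum(root_areas)
    if total_area == 0:
        return [0.0] * len(root_areas)
    return [available_n * area / total_area for area in root_areas]

# Example: a tree with twice its neighbors' root area takes twice the N.
print(partition_n_uptake([2.0, 1.0, 1.0], available_n=4.0))  # [2.0, 1.0, 1.0]
```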
Generally, the predictions from the competitively optimal allocation strategies produced by our model agree with large-scale forest censuses and site-level experiments, namely that (1) high-nitrogen environments (i.e., productive environments) favor high wood allocation and low root allocation (Litton et al., 2003; Poorter et al., 2012), (2) elevated [CO2] increases root allocation (Drake et al., 2011; Iversen, 2010; Jackson et al., 2009; Nie et al., 2013; Smith et al., 2013), (3) low nitrogen availability limits vegetation biomass responses to elevated [CO2] as a result of high root allocation or root exudation (Jiang et al., 2019a; Norby and Zak, 2011), and (4) increases in vegetation biomass at elevated [CO2] are largely due to high wood allocation (Norby and Zak, 2011; Walker et al., 2019). These predictions emerge from the fundamental assumptions of our model without tuning parameters to fit the data, providing some confidence in the robustness of our approach.
The literature on experimental responses of plant communities to elevated [CO2] shows that the responses vary with site characteristics, forest composition, stand age, plant physiological responses and soil microbial feedbacks (Norby and Zak, 2011; Terrer et al., 2016, 2018). For example, in the Duke Free-Air CO2 Enrichment (FACE) experiment, where the major trees are loblolly pine (Pinus taeda), increases in root production at elevated [CO2] stimulated increased nitrogen supply that allowed the forest to sustain higher productivity (Drake et al., 2011). However, at the Oak Ridge FACE, where the major trees are sweetgum (Liquidambar styraciflua), increased fine-root production under elevated [CO2] did not result in increased net nitrogen mineralization, and increases in root production declined after 8 years of CO2 enrichment (Iversen, 2010; Norby and Zak, 2011). In EucFACE (Jiang et al., 2019a), where the major trees are Eucalyptus tereticornis and the soil is infertile, trees significantly increased their root exudation under limited nutrient supplies but had no significant increase in biomass in response to elevated [CO2]. The BangorFACE experiment (Smith et al., 2013) found that interspecific competition (Alnus glutinosa, Betula pendula and Fagus sylvatica) resulted in greater increases in root biomass at elevated [CO2]. Leaf area index (LAI) responses to elevated [CO2] are also highly varied. As summarized by Norby and Zak (2011), low-LAI (in this case, open-canopy) sites showed significant increases in LAI, whereas high-LAI (in this case, closed-canopy) sites showed small increases or even decreases in LAI. They concluded that LAI in closed-canopy forests is not responsive to elevated [CO2] (Norby et al., 2003; Norby and Zak, 2011).
The nature of developing a model with generic assumptions and balanced processes reduces its capability to predict all of these responses. For example, plants have a variety of physiological mechanisms to deal with excessive carbon supply when plant demand (i.e., the "sink") is relatively low (Fatichi et al., 2019; Körner, 2006), such as down-regulating the leaf photosynthesis rate via accumulated assimilates (Goldschmidt and Huber, 1992) or respiring excessive carbohydrates to regenerate substrates for photosynthesis (Atkin and Macherel, 2009). But these mechanisms are short-term physiological responses (minutes to hours, sometimes days) for plants in situations of temporary nitrogen shortage, high irradiation or drought stress. It is not "economically" sustainable in an infertile environment to maintain highly productive leaves but frequently suppress their photosynthesis or respire a large portion of their assimilated carbon.
Root exudation is a critical process for plants. It can stimulate soil organic matter decomposition and nitrogen mineralization to facilitate soil nitrogen supply at the expense of carbon (Cheng, 2009; Cheng et al., 2014; Drake et al., 2011; Phillips et al., 2011). The process of root exudation has been adopted by many models and coupled with microbial processes in the determination of soil organic matter decomposition (Sulman et al., 2014; Wieder et al., 2014, 2015). Some carbon-only models, e.g., LM3 (Shevliakova et al., 2009), the parent model of this one, and TECO (Luo et al., 2001), incorporate root exudation to put extra carbon into the soil in order to avoid down-regulating canopy photosynthesis or overestimating vegetation biomass, both of which had been tuned against data. However, in a demographic competition model like this one, individual plants cannot reap a reward from root exudation as they do in nature, because the microbial activities are not fully coupled and the nitrogen in soil is assumed to be fully accessible to the roots of all individuals. Therefore, root exudation is not a competitive strategy in the system defined by the assumptions of this model.
Since the purpose of this study is to explore long-term ecological strategies in different but relatively stable environments, we did not include these processes, especially since they present additional challenges in balancing the complexity of the tradeoffs between modeled demographic processes and plant traits. However, the lack of these processes does limit the predictions of instantaneous responses to variation in environmental conditions or resource supply, and possibly of some long-term vegetation characteristics as well. For example, our model predicts reduced LAI under nitrogen limitation (Fig. S7) based on first principles, but this is incidentally the only mechanism that reduces the whole-canopy photosynthesis rate in our model. There are mechanisms that increase nitrogen use efficiency at the expense of carbon by increasing LMA and therefore leaf longevity to maintain high LAI and high canopy-level photosynthesis rates (Aerts, 1995, 1999; Aerts and Chapin, 1999; Givnish, 2002). We did not include these mechanisms in our simulations, although they are well developed in this model (Weng et al., 2017), because we wished to focus on the strategy of allocation. The clear descriptions of our model's assumptions, its traceable processes, and its inclusion of the tradeoffs involved in aboveground and belowground competition provide a useful benchmark from which to incorporate additional mechanisms and tradeoffs.
Root over-proliferation vs. wood allocation
The allocation strategy that maximizes site vegetation biomass allocates very little to fine roots (Figs. 3 and S1). In contrast, the competitively optimal strategy allocates more carbon to fine roots, termed "fine-root over-proliferation" in the literature (Gersani et al., 2001; McNickle and Dybzinski, 2013; O'Brien et al., 2005). It is the result of a competitive "arms race": while increasing fine root area under elevated [CO2] does not result in more nitrogen for an individual, failing to do so would cede some of that individual's nitrogen to its neighbors. Because most nitrogen uptake is via mass flow and diffusion (Oyewole et al., 2017), and because both of these mechanisms depend on sink strength, individuals with relatively greater fine root mass than their neighbors take a greater share of nitrogen, as was recently demonstrated empirically (Dybzinski et al., 2019; Kulmatiski et al., 2017). Thus, fine roots may over-proliferate for competitive reasons relative to the lower optimal fine root mass in the hypothetical absence of an evolutionary history of competition (Craine, 2006; McNickle and Dybzinski, 2013). This may also explain why root C : N ratio is highly variable (Dybzinski et al., 2015; Luo et al., 2006; Nie et al., 2013): a high density of fine roots in soil may be more important than the high absorption ability of a single root in competing for soil nitrogen in the usually low-mineral-nitrogen soils.
Experimental evidence for root over-proliferation remains controversial. For example, Gersani et al. (2001) and O'Brien et al. (2005) found that competing plants generated more roots than those growing in isolation, whereas McNickle and Brown (2014) found that competing plants generated roots comparable to those growing in isolation. Compared to modeled roots, real roots are far more adaptive and complex at modifying their growth patterns in response to soil nutrient and water dynamics (Hodge, 2009). Root growth strategies in response to competition also vary with species (Belter and Cahill, 2015). Mechanisms of self-recognition between and within root systems can also lead to varied root growth behavior (Chen et al., 2012). However, all of the aforementioned studies considered only plastic root over-proliferation, where individuals produce more roots in the presence of other individuals than they do in isolation, analogous to the stem elongation of crowded seedlings (Dudley and Schmitt, 1996). A portion of root over-proliferation may also be fixed, analogous to trees that still grow tall even when grown in isolation. Dybzinski et al. (2019) showed that plant community nitrogen uptake rate was independent of fine root mass in seedlings of numerous species, suggesting a high degree of fixed fine root over-proliferation. To improve root competition models, more detailed experiments that control root growth should be conducted to quantify the marginal benefits of roots in isolated, monoculture and polyculture environments.
At high soil nitrogen, height-structured competition for light (also a game-theoretic response; Falster and Westoby, 2003; Givnish, 1982) dominates, and trees with greater relative allocation to trunks prevail. The balance between these two competitive priorities (fine roots vs. stems) can be observed in our model predictions as a shift from fine root allocation to wood allocation as soil nitrogen increases. The increase in the critical height (i.e., the context-dependent height of the shortest tree in the canopy layer in the PPA) from low nitrogen to high nitrogen indicates a shift from the importance of competition for soil nitrogen to the importance of competition for light as ecosystem nitrogen increases (Fig. S8). Because the most competitive type shifts from high fine root allocation to low fine root allocation as ecosystem total nitrogen increases, increases in NPP and plant biomass across the nitrogen gradient are greater than the increases in NPP and plant biomass assuming allocation strategies in the absence of competition (Fig. 3). This greatly reduces the carbon cost of belowground competition as ecosystem total nitrogen increases. The decrease in the fraction of NPP allocated to leaves at elevated [CO2] (Fig. 6b) occurs because of increases in total NPP and nearly constant absolute NPP allocation to foliage (Fig. 6a).
Model complexity and uncertainty
Compared with conventional pool-based vegetation models that use pools and fluxes to represent plant demographic processes at a land simulation unit (e.g., grid or patch), VDMs add two more layers of complexity. The first is the inclusion of stochastic birth and mortality processes of individuals (i.e., demographic processes). These processes allow the models to predict population dynamics and transient vegetation structure, such as size-structured distributions and crown organization (e.g., Moorcroft et al., 2001; Strigul et al., 2008). With changes in vegetation structure, allocation and mortality rates can change, generating a different carbon storage accumulation curve compared with those predicted by pool-based models where vegetation structure is not explicitly represented (e.g., Weng et al., 2015). The second is the simulated shift in dominant plant traits during succession due to the shifting of competitive outcomes among different PFTs, which changes the allocation between fast- and slow-turnover pools and thus the parameters of allocation and the residence time of carbon in the ecosystem.
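As a toy illustration of the first added layer, cohorts of individuals can be tracked with stochastic mortality. This is purely illustrative and far simpler than BiomeE's actual demographic processes:

```python
import random

def step_cohorts(cohorts, annual_mortality=0.02):
    """cohorts: list of (n_trees, biomass_per_tree) tuples; one annual step."""
    survivors = []
    for n, biomass in cohorts:
        n_next = sum(1 for _ in range(n) if random.random() > annual_mortality)
        if n_next > 0:
            survivors.append((n_next, biomass))
    return survivors
```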
Together, these mechanisms may alter long-term predictions of the terrestrial carbon cycle due to changes in PFT-based parameters (Dybzinski et al., 2011; Farrior et al., 2013; Weng et al., 2015). As described in the Introduction, current pool-based models can be described by a linear system of equations characterized by the key parameters of allocation, residence time and transfer coefficients (Eq. 1), with the rigid assumption of unchangeable plant types (Luo et al., 2012; Xia et al., 2013). In VDMs, however, allocation, residence time, leaf traits, phenology, mortality, plant forms and their responses to climate change are all strategies of competition whose success varies with the environmental conditions and the traits of the individuals they are competing against.
Many tradeoffs between plant traits can shift in response to environmental and biotic changes, limiting the applicability of varying a single trait, as we have done in this study. For example, allocation, leaf traits, mycorrhizal types and nitrogen fixation can all change with ecosystem nitrogen availability (Menge et al., 2017; Ordoñez et al., 2009; Phillips et al., 2013; Vitousek et al., 2013). The unrealistic effects of model simplification can be corrected by adding important tradeoffs that are missing. For example, the feedback between root allocation and SOM decomposition plays a role in mitigating the tragedy of the commons of root over-proliferation (e.g., Gersani et al., 2001; Zea-Cabrera et al., 2006). High root allocation increases the decomposition rate of SOM and the supply of mineral nitrogen because of the high turnover rate of root litter, which favors a strategy of high wood allocation and reduces the competitively optimal fine root allocation. This negative feedback indicates that the model structure is flexible and that we can incorporate the correct mechanisms step by step to improve the model's predictive skill. Testing single strategies is still a necessary step toward improving our understanding of the system and the predictive skill of the models, although it can sometimes lead to unrealistic responses.
We found that model predictions can differ significantly in response to seemingly small variations in basic assumptions or quantitative relationships. For example, our model predicts that the ratio of plant biomass under elevated [CO2] relative to plant biomass under ambient [CO2] should increase with increasing nitrogen due to the shift of carbon allocation from fine roots to woody tissues. In contrast, the analytic model of Dybzinski et al. (2015) predicts that this ratio should be largely independent of total nitrogen because of an increasing shift in carbon allocation from long-lived, low-nitrogen wood to short-lived, high-nitrogen fine roots under elevated [CO2] and with increasing nitrogen. This significant difference between the two predictions traces back to differences in how fine root stoichiometry is handled in the two models. In the model of Dybzinski et al. (2015), the fine root C : N ratio is flexible, and the marginal nitrogen uptake capacity per unit of carbon allocated to fine roots depends on its nitrogen concentration. Like the model presented here, the model of Dybzinski et al. (2015) predicts decreasing fine root mass with increasing nitrogen availability. Unlike the model presented here (which has a constant fine root nitrogen concentration), the model of Dybzinski et al. (2015) predicts increasing fine root nitrogen concentration with increasing nitrogen availability. As a result, there is less nitrogen to allocate to wood as nitrogen increases in the model of Dybzinski et al. (2015) than there is in the model presented here. These countervailing factors even out the ratio of plant biomass under elevated [CO2] relative to plant biomass under ambient [CO2] across the nitrogen gradient in Dybzinski et al. (2015), whereas their absence amplifies this ratio with increasing nitrogen in the model presented here. Our ability to diagnose and understand this discrepancy highlights the utility of deploying closely related analytical and simulation models (Weng et al., 2017). We conducted simulations at only one site for the purpose of exploring the general patterns of competitively optimal allocation strategies and their responses to elevated [CO2] at different nitrogen availabilities. We can speculate about shifts in the competitively optimal allocation strategy in different forest biomes by considering the effects of temperature on soil nitrogen supply via the SOM decomposition rate and its positive effect on net nitrogen mineralization. For example, the SOM decomposition rate is usually high in warm regions and low in cold regions (Davidson and Janssens, 2006), assuming there are no water limitations and SOM is equilibrated with carbon input. According to our model, allocation to roots is high under low nitrogen supply (cold regions) and low under high nitrogen supply (warm regions). This pattern can be found from temperate to boreal forest zones (Cairns et al., 1997; Gower et al., 2001; Reich et al., 2014; Zadworny et al., 2016). Temperature also alters NPP, i.e., carbon supply: as temperature goes down, NPP decreases and nitrogen demand decreases, alleviating nitrogen limitation and leading to shifts of allocation to stems. Therefore, the differences in temperature effects on photosynthesis and SOM decomposition will determine the competitive allocation strategy. Since SOM decomposition is more sensitive to temperature than gross primary production is at long temporal and large spatial scales (Beer et al., 2010; Carey et al., 2016; Crowther et al., 2016), our model suggests that allocation will shift to wood in a warming world. Whether the carbon stored in that wood is enough to offset the carbon released from increasing soil respiration is a critical question.
Water is also a critical factor affecting allocation and its responses to elevated [CO2]. Low soil moisture usually leads to high allocation to roots (Poorter et al., 2012). Elevated CO2 can reduce transpiration (as found in our study as well; Figs. S9-S11) and therefore increase soil moisture, resulting in increases in allocation to stems and aboveground biomass (Walker et al., 2019). A game-theoretic modeling study using the PPA framework showed that the competitively optimal allocation strategy shifts to high wood allocation at elevated [CO2] in environments with water limitation (Farrior et al., 2015). This is the opposite of the elevated [CO2] effects on allocation in nitrogen-limited environments as simulated in this study. According to field experiments, fine root allocation is more responsive to changes in nitrogen than to changes in soil moisture (Canham et al., 1996; Poorter et al., 2012). Poorter et al. (2012) attribute this to optimal strategies in response to the relatively stable nitrogen supply and stochastic water input in soil. The vertical distribution of roots and the contributions of roots in different layers to water and nitrogen uptake also suggest that the uptake of soil nutrients is dominant in shaping root system architecture (Chapman et al., 2012; Morris et al., 2017), though root growth and turnover are flexible and sensitive to nitrogen and water supply (Deak and Malamy, 2005; Linkohr et al., 2002; Pregitzer et al., 1993).
Common principles for allocation modeling and implications
As shown in model intercomparison studies, the mechanisms used to model allocation differ considerably, leading to high variation in model predictions (e.g., De Kauwe et al., 2014). Calibrating model parameters to fit data may not increase a model's predictive skill because the data are often also highly variable. Franklin et al. (2012) suggest that in order to build realistic and predictive allocation models, we should correctly identify and implement fundamental principles. Our model predicts patterns similar to those predicted by the model of Valentine and Mäkelä (2012), which has very different processes of plant growth and allocation. However, these two models share fundamental principles, including (1) evolutionary or competitive optimization, (2) capped leaf and fine root areas at given tree sizes, (3) structurally unlimited stem allocation (i.e., optimizing carbon use), because woody tissues can serve as an unlimited sink for surplus carbon, and (4) height-structured competition for light and root-mass-based competition for soil resources. Principles 2 and 3 are commonly used in models (De Kauwe et al., 2014; Jiang et al., 2019b). However, the different rules used to implement them (e.g., allometric equations, functional relationships, etc.) lead to highly varied predictions (as shown in De Kauwe et al., 2014), even though model formulations may be very similar.
In competitively optimal models, such as ours and that of Valentine and Mäkelä (2012), the competition processes generate similar emergent patterns by selecting the strategies that can survive in competition, regardless of the details of the models' differences. The competition processes also make the details of the allocation settings for a single PFT, and their direct responses to elevated [CO2], less important, because competition will select the most competitive strategy from diverse strategies in response to changes in [CO2] and nitrogen. Our study and that of Valentine and Mäkelä (2012) posit a fundamental tradeoff between light competition and nitrogen competition via allocation, based on insights gained from simpler models (e.g., Dybzinski et al., 2015; Mäkelä et al., 2008), for predicting allocation as an emergent property of competition. One advantage of building a model in this way is that the vegetation dynamics are predicted from first principles rather than from correlations between vegetation properties and environmental conditions. With these first principles, the models can produce reasonable predictions, even though the details of physiological and demographic processes vary among models.
For vegetation models designed to predict the effects of climate change, the important operational distinction is that the fundamental rules cannot or will not change as climate changes. Nor, presumably, will the underlying ecological and evolutionary processes change as climate changes. The emergent properties can change as climate changes, however, and models built on "scale-appropriate" unbreakable constraints and ecological and evolutionary processes will be able to accurately predict changes in emergent ecosystem properties (Weng et al., 2017). In our opinion, the scientific effort to build better models is better served by understanding unrealistic predictions than by "fixing" them with unreliable mechanisms when there is a lack of data or theory to make them consistent with observations. Validating assumptions and initial responses is critical, and the long-term responses can be validated via spatial patterns.
This modeling approach also demands improvement in model validation and benchmarking systems (Collier et al., 2018; Hoffman et al., 2017). As shown in this study, allocation responses to elevated [CO2] at different nitrogen levels in monoculture runs are opposite to those in competitive allocation runs. For example, in monoculture runs, elevated [CO2] increases wood allocation and decreases fine root allocation at low nitrogen, whereas in competitive allocation runs elevated [CO2] leads to low wood allocation and high fine root allocation. Simply calibrating our model against short-term observational data may improve the agreement with observations but would not change the model's predictions, because the model's predictions emerge from its fundamental assumptions.
Conclusions
Our study illustrates that including the competition processes for light and soil resources in a game-theoretic vegetation demographic model can substantially change the prediction of the contribution of ecosystems to the global carbon cycle. Allowing the model to explicitly track competitive allocation strategies can generate significantly different ecosystem-level predictions (e.g., biomass and ecosystem carbon storage) than those of strategies in the absence of explicit competition. Building such a model requires differentiating the unbreakable tradeoffs of plant traits and ecological processes from the emergent properties of ecosystems. Drawing on insights from closely related analytical models to develop and understand more complicated simulation models seems, to us, indispensable. Evaluating these models also requires an updated model benchmarking system that includes metrics of competitive plant traits during the development of ecosystems and their responses to global change factors.
Figure 2. Structure of BiomeE. (a) Vegetation structure: trees organize their crowns into canopy layers according to both their height and their crown area, following the rules of the PPA model, which mechanistically models light competition. (b) Biogeochemical structure and compartmental pools. The green, brown and black lines are the flows of carbon, nitrogen, and coupled carbon and nitrogen, respectively. The green box is for carbon only. The brown boxes are nitrogen pools. The black boxes are for both carbon and nitrogen pools, where X can be C (carbon) or N (nitrogen). The C : N ratios of leaves, fine roots, seeds and microbes are fixed. The C : N ratios of woody tissues, fast soil organic matter (SOM) and slow SOM are flexible. Only one tree's C and N pools are shown in this figure. The blue box and arrows are for water storage in soil and the fluxes of rainfall, evaporation and transpiration. The model can have multiple cohorts of trees, which share the same pool structure. The dashed line separates the aboveground and belowground processes.
[Fragment of the parameter table: CN leaf,0, target C : N ratio of leaves, 76.5 kg C (kg N)−1, a function of LMA (Wright et al., 2004); CN FR,0, target C : N ratio of fine roots, kg C (kg N)−1 (value not recovered).]
Figure 3. GPP, NPP, allocation and plant biomass at the equilibrium state simulated by monoculture runs. GPP: gross primary production; NPP: net primary production; f NPP,x: the fraction of NPP allocated to x, where x is root (fine roots), leaf (leaves in crown) or wood (including tree trunk, stems and coarse roots). The data are the averages of the model run years from 1400 to 1800. Each model run is initiated with one PFT with a fixed ratio of fine root area to leaf area (ϕRL).
Figure 4. Successional patterns of polyculture run I at ambient and elevated [CO2]. ϕRL is the fixed ratio of fine root area to leaf area of a particular strategy.
Figure 5. Winning PFTs (ϕRL, a) in polyculture run II and equilibrium gross primary production (GPP, b), net primary production (NPP, c) and carbon use efficiency (NPP/GPP, d) at two CO2 concentrations (aCO2: 380 ppm; eCO2: 580 ppm). The closed symbols with solid lines represent polyculture runs. The open symbols with dashed lines represent monoculture runs (only ϕRL = 4 is shown in this figure). ϕRL is the fixed ratio of fine root area to leaf area of a particular strategy.
Figure 6. Allocation to leaves, fine roots and wood tissues in the polyculture and monoculture runs at the eight total nitrogen levels and two CO2 concentrations (aCO2: 380 ppm; eCO2: 580 ppm). Panels (a), (c) and (e) show the NPP allocated to the tissues, and panels (b), (d) and (f) show the fractions of the allocation in total NPP. The closed symbols with solid lines represent polyculture runs (Poly). The open symbols with dashed lines represent monoculture runs (only ϕRL = 4 is shown in this figure; Mono). ϕRL is the fixed ratio of fine root area to leaf area of a particular strategy.
Figure 7. Plant biomass responses to elevated [CO2] and nitrogen. Panel (a) shows the equilibrium plant biomass (means of simulated plant biomass from model run years 1400 to 1800) in polyculture runs and monoculture runs (only ϕRL = 4 is shown as an example). Panel (b) shows the ratio of simulated plant biomass at elevated [CO2] to that at ambient [CO2] for both polyculture and monoculture runs. Panels (c) and (d) show the comparisons with monoculture runs with ϕRL increasing from 1 to 6 at ambient (c) and elevated [CO2] (d). The closed symbols with solid lines represent polyculture runs. The open symbols with dashed lines represent monoculture runs (ϕRL ranges from 1 to 6). ϕRL is the fixed ratio of fine root area to leaf area of a particular strategy. aCO2: 380 ppm; eCO2: 580 ppm.
Testing the efficacy of different molecular tools for parasite conservation genetics: a case study using horsehair worms (Phylum: Nematomorpha)
Abstract. In recent years, parasite conservation has become a globally significant issue. Because of this, there is a need for standardized methods for inferring population status and possible cryptic diversity. However, given the lack of molecular data for some groups, it is challenging to establish procedures for genetic diversity estimation. Therefore, universal tools, such as double-digest restriction-site-associated DNA sequencing (ddRADseq), could be useful when conducting conservation genetic studies on rarely studied parasites. Here, we generated a ddRADseq dataset that includes all 3 described Taiwanese horsehair worms (Phylum: Nematomorpha), possibly one of the most understudied animal groups. Additionally, we produced data for a fragment of the cytochrome c oxidase subunit I (COXI) for the said species. We used the COXI dataset in combination with previously published sequences of the same locus for inferring effective population size (Ne) trends and possible population genetic structure. We found that a larger and geographically broader sample size combined with more sequenced loci resulted in a better estimation of changes in Ne. We were able to detect demographic changes associated with Pleistocene events in all the species. Furthermore, the ddRADseq dataset for Chordodes formosanus did not reveal a genetic structure based on geography, implying a great dispersal ability, possibly due to its hosts. We showed that different molecular tools can be used to reveal genetic structure and demographic history at different historical times and geographical scales, which can help with conservation genetic studies in rarely studied parasites.
Introduction
In recent years, there has been growing concern regarding the conservation of parasites that do not harm humans or do not impede conservation attempts (e.g. Dougherty et al., 2016; Carlson et al., 2020). This is because parasites are integral parts of ecosystems from several points of view, e.g. species diversity, biomass, the evolution of host immunity and importance in food webs (Dougherty et al., 2016). Moreover, they are affected by anthropogenic environmental changes and can go extinct together with their hosts (Carlson et al., 2020). However, tools for parasite conservation have rarely been tested (Carlson et al., 2020). In particular, molecular tools (e.g. DNA barcoding and the reconstruction of demographic histories through molecular data) have not been used extensively in non-medically relevant parasites, and they can be useful for estimating general population parameters [e.g. genetic diversity and trends of effective population size (Ne); Criscione et al., 2005; Selbach et al., 2019; Strobel et al., 2019; Carlson et al., 2020]. Genetic diversity may be linked to local adaptation or life history, and its estimation may help to understand ecological traits or even infer potential population bottlenecks (van Schaik et al., 2015; Tobias et al., 2017; Radačovská et al., 2022). Ne is known to greatly influence the genetic diversity and viability of populations (Criscione et al., 2005; Pérez-Pereira et al., 2022), but estimating it in parasites may come with some caveats depending on the studied system (Criscione et al., 2005; Strobel et al., 2019). Nevertheless, estimates of Ne can be helpful for considering a taxon's conservation status (Pérez-Pereira et al., 2022). Unfortunately, the current lack of genetic resources makes it difficult for researchers to enact timely conservation plans, putting some species at risk of extinction (Carlson et al., 2020).
Methods for estimating Ne trends from genetic data of non-model organisms are being developed (e.g. Liu and Fu, 2020). This has been possible thanks to protocols that allow researchers to generate genome-wide data without reference sequences (Peterson et al., 2012; Nunziata and Weisrock, 2018). For example, restriction-site-associated DNA sequencing (RADseq) approaches have been shown to work well with non-model organisms and have become popular in population and conservation genetic studies due to their relatively low cost, versatility and ability to generate data for up to tens of thousands of loci (Peterson et al., 2012; Nunziata and Weisrock, 2018). Furthermore, the produced data can be used with bioinformatic tools for estimating population trends at least 20 generations before a decline (Nunziata and Weisrock, 2018) and genetic structure (e.g. Dalapicolla et al., 2021). This is interesting from a molecular conservation parasitology perspective, given that non-medically relevant parasites often lack reference data (Selbach et al., 2019). Alternatively, even single-locus methods for Ne estimation are available, allowing researchers to use previously released data (e.g. sequences from GenBank) for demographic inferences, although having more loci leads to better parameter estimation (Ho and Shapiro, 2011).
In this study, we used multiple protocols and methods for generating molecular data to study population genetic structure (i.e. potential genetic differentiation based on host or geography; e.g. van Schaik et al., 2015 and Dalapicolla et al., 2021) and the changes of Ne over time in freshwater horsehair worms, also called "gordiids" (phylum Nematomorpha, class Gordioida; Sup. Info), in Taiwan. Such worms are regarded as one of the least studied animal groups, especially from a molecular perspective (Bolek et al., 2015; Tobias et al., 2017), and they have interesting ecological and conservation characteristics (Sato et al., 2014; Bolek et al., 2015; Sup. Info). The methods used in this study can be applied to other parasitic groups for conservation and biodiversity-oriented studies. Specifically, a double-digest restriction-site-associated DNA sequencing (ddRADseq) dataset including all 3 known horsehair worm species in Taiwan (Sup. Info) was generated. Additionally, DNA sequence data from the mitochondrial cytochrome c oxidase subunit I (COXI) region were generated and used for evaluating population and genetic status in combination with previously released data (Chiu et al., 2011, 2016, 2020). This mitochondrial marker was used due to the lack of any other previously released molecular data on Taiwanese horsehair worms. Although the use of mitochondrial data for genetic diversity studies can be controversial, COXI is regarded as a good proxy for evaluating the conservation status of species populations for the first time (Petit-Marty et al., 2021 and references therein). Additionally, this marker was also utilized in a previous study of hairworms' population genetic structure (Tobias et al., 2017), and its availability in public repositories is expected to increase, given its use in barcoding (Selbach et al., 2019). With both datasets, population genetic structure and demographic history were estimated and compared. More focus was given to Chordodes formosanus, since they are relatively easier to sample and more common (Chiu, 2017). A country-wide ddRADseq dataset was generated for the said species, allowing us to estimate Ne and population genetic structure at the level of Taiwan's main island. Additionally, the Taiwanese and Japanese populations' trends were compared using the COXI dataset.
Sampling
Gordiids were sampled over a span of 14 years, from 2007 to 2021 (Fig. 1; Sup. Table 1). Chordodes formosanus, the most common species in Taiwan (Chiu, 2017), was sampled country-wide, while specimens of Acutogordius taiwanensis and Gordius chiashanus were collected from fewer localities (Fig. 1; Sup. Table 1). Specimens were collected either by hand in bodies of water (i.e. streams and man-made ponds), by sampling the hosts and then immersing them until the adult worm came out, or by collecting dead gordiids on the ground, which is the most common method for finding them in Taiwan (Chiu, 2017). The samples were fixed in 95% alcohol and stored at −20°C. Species recognition was based on DNA barcoding, given that the majority of the samples were too degraded for morphological identification. A total of 38 C. formosanus specimens, 10 A. taiwanensis and 3 G. chiashanus were collected (see SRA BioProject PRJNA914055); however, DNA extraction and sequencing were not successful for all of them (Sup. Table 1), probably due to the low quality of some of the specimens.
DNA extraction, COXI amplification and ddRADseq protocol
For DNA extraction, we used a modified version of the Qiagen DNeasy Animal Tissue protocol (Qiagen, Hilden, Germany). We modified the original protocol by using double-distilled water (ddH2O) instead of elution buffer for dilution, and we eluted 60 μL of DNA, given its low concentration. Additionally, we left the tissue to incubate overnight at 60°C.
Amplification of the COXI sequences through polymerase chain reaction (PCR) was done using the universal primer set LCO1490 and HCO2198 (Folmer et al., 1994) and was performed in a total volume of 25 μL using PuReTaq Ready-To-Go PCR Beads (Cytiva, formerly GE Healthcare, Marlborough, United States). The PCR was initiated at 95°C for 5 min, followed by 40 cycles of 95°C for 1 min, 50°C for 1 min and 72°C for 1 min, with a final extension at 72°C for 10 min. The PCR products were purified following the NautiaZ Gel/PCR DNA Purification Mini Kit protocol (Nautia Gene, Taipei, Taiwan). Sanger sequencing was done by the DNA Sequencing Core Facility of the Institute of Biomedical Sciences, Academia Sinica, Taipei, Taiwan.
An edited version of the Peterson et al. (2012) protocol was used for generating the ddRADseq library. We modified the original method by using between 100 and 140 ng of genomic DNA per sample, 4 μL of ddH2O for bead clean-up and 60 μL of ddH2O for elution before measuring DNA concentration with Qubit. Single-end sequencing with 100-base-pair (bp) read lengths was carried out on an Illumina HiSeq 2500 system.
COXI analyses
The COXI forward and reverse sequences from each individual were visualized with MEGA X (ver. 10.1.8; Kumar et al., 2018). The alignments were done with MAFFT 7.471 (Katoh and Standley, 2013) using the L-INS-i algorithm. The sequences were trimmed manually, and consensus sequences were saved from the alignments.
COXI sequences at least 400 bp long for each taxon from previous studies (Chiu et al., 2011, 2016, 2020; Sup. Table 2) were downloaded from GenBank (Clark et al., 2016; last accessed 4 November 2022), aligned with the newly generated sequences using MAFFT 7.471 with the L-INS-i algorithm and manually trimmed while being visualized with MEGA X. In the case of A. taiwanensis, an additional sequence of a hairworm from Myanmar (MF983649) was included for population genetic structure analyses; this sequence has been shown to fall inside the known COXI variability of A. taiwanensis in previous studies (Chiu et al., 2020) and therefore should represent a sequence from the said species. In total, 433 bp were recovered for 85 specimens of C. formosanus (36 newly generated), 414 bp for 31 A. taiwanensis (3 newly generated) and 382 bp for 26 individuals of G. chiashanus (1 newly generated). The newly generated sequences are available in GenBank (OQ121045-OQ121047 for A. taiwanensis, OQ121048-OQ121083 for C. formosanus, OQ121084 for G. chiashanus).
Possible population genetic structure, hypothetical haplotypes (i.e. unsampled sequences) and genetic diversity indices [nucleotide diversity (pi) and Tajima's D: Nei and Li, 1979; Tajima, 1989] were inferred from a haplotype network in PopArt (Leigh and Bryant, 2015), generated with the TCS 1.21 algorithm (Clement et al., 2000). Estimation of Ne trends was carried out by running Coalescent Bayesian Skyline plot analyses in BEAST2 (version 2.6.3; Bouckaert et al., 2019). Such analyses allow the estimation of trends from mitochondrial data, even if only one locus is available, and recover Pleistocene trends well (Ho and Shapiro, 2011). Furthermore, previous studies were able to recover demographic trends with less than 300 bp of mitochondrial sequence (Ho and Shapiro, 2011 and references therein); therefore, the alignments should be long enough to allow trend estimation. The most likely nucleotide substitution model for each species was inferred using ModelTest, implemented in raxmlGUI (Edler et al., 2021), according to the Akaike information criterion (AIC; Akaike, 1998). A relaxed lognormal clock model, with a clock rate of roughly 0.0013 substitutions per site per million years, was set. We estimated this clock rate by aligning full COXI sequences from the only available Chordodes and Gordius mitogenomes in GenBank (MG257764 and MG257767), dividing the number of variable sites by the total number of sites and then dividing again by 2 before dividing by 110, which represents the estimated age in million years of the oldest known gordiid fossil (Poinar and Buckley, 2006). Note that estimates for the appearance of crown-group Nematomorpha range from around the mid-Cambrian to the mid-Cretaceous (Howard et al., 2022), but these estimates include the possible divergence time of the Nectonema lineage from gordiids too. Therefore, we prefer to refer to the fossil datum, since it is the earliest possible divergence point for the Chordodidae/Gordiidae lineage. Additionally, the C. formosanus Japanese and Taiwanese samples were split for the estimations, while the Myanmar sequence was removed for A. taiwanensis.
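As a worked version of this arithmetic (the fraction of variable sites p below is an assumed placeholder; we did not recompute it from MG257764 and MG257767):

```python
p = 0.286              # assumed proportion of variable COXI sites between mitogenomes
rate = p / 2 / 110     # per-lineage rate per site per million years
print(round(rate, 4))  # 0.0013, the clock rate used in the BEAST2 runs
```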
ddRADseq analyses
Two different pipelines for processing the ddRAD data were used for C. formosanus: ipyrad (version 0.9.84; Eaton and Overcast, 2020) and Stacks (version 2.62; Rochette et al., 2019). For the ipyrad pipeline, we performed de novo assembly with a Phred Q-score offset of 43, a 90% clustering threshold, a maximum of 0.5 heterozygous sites in consensus and a minimum number of samples per locus at 80% of all the samples, while keeping other settings at their default values. Samples with fewer than 300 loci were discarded from further analyses. For Stacks, the protocol listed in Rivera-Colón and Catchen (2022) was followed, and all the sites (variant and invariant) were output in the Variant Call Format (VCF) file. In total, 27 individuals were retained for C. formosanus from both pipelines. In the case of A. taiwanensis and G. chiashanus, only 2 individuals per species had at least 200,000 reads after trimming, and therefore we only used Stacks for their assemblies, with the 'r' flag set to 1 instead of 0.8.
To check the population genetic structure of C. formosanus, tools from ipyrad and R were used with both the ipyrad and Stacks outputs. The Stacks VCF file was filtered to remove the loci without SNPs and the SNPs with more than 20% missing data, using an edited version of previously released R scripts (Dalapicolla et al., 2021). Additionally, the filtered VCF file was converted to an HDF5 database using the converter inside the ipyrad-analysis toolkit (Eaton and Overcast, 2020) to make it compatible with the tools implemented inside ipyrad. After this, k-means principal component analyses (PCA) were run 100 times in the ipyrad-analysis tool suite. Moreover, the snapclust clustering method implemented in the 'adegenet' R package (Beugin et al., 2018) was used. The best number of clusters (k) was calculated by computing the AIC and the Bayesian information criterion (BIC; Schwarz, 1978), while picking the value for which both criteria were lowest. The snapclust analyses were run with both the ipyrad full-output STRUCTURE file and a STRUCTURE file generated from the filtered Stacks VCF thanks to PGDSpider (Lischer and Excoffier, 2012). Furthermore, for analysing possible admixture between geographically separated C. formosanus individuals, RADpainter and fineRADstructure (Malinsky et al., 2018) were used, and the results were plotted with R. The RADpainter file from the Stacks output was used for this software, while the 'hapsFromVCF' command generated an input file from the ipyrad-output VCF.
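The missing-data filtering step is straightforward to reproduce; the following is a minimal stand-in that drops SNPs with more than 20% missing genotypes from a VCF (a sketch under our own assumptions, not the published scripts of Dalapicolla et al., 2021):

```python
def filter_vcf(in_path, out_path, max_missing=0.2):
    """Keep VCF records whose fraction of missing genotypes is <= max_missing."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            if line.startswith("#"):  # keep all header lines
                fout.write(line)
                continue
            genotypes = line.rstrip("\n").split("\t")[9:]  # sample columns
            missing = sum(g.split(":")[0] in ("./.", ".|.", ".") for g in genotypes)
            if missing / len(genotypes) <= max_missing:
                fout.write(line)
```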
For the population demographic analyses of all the taxa, a site-frequency spectrum (SFS) file was generated using the easySFS script (https://github.com/isaacovercast/easySFS) by converting the Stacks VCF file with the invariant sites, choosing the number of individuals that maximized the number of segregating sites. Note that easySFS takes into consideration the fact that missing data are present, and therefore we used the full Stacks VCF output for SFS conversion. After that, Stairway Plot 2 (version 2.1.1; Liu and Fu, 2020) was used per species following the software's manual. Given the current knowledge on Taiwanese horsehair worms (Chiu, 2017; Chiu et al., 2020), the generation time was set to 1 per year. As for the mutation rate, given the lack of whole-genome data for horsehair worms, a spontaneous mutation rate of 2 × 10−9 per site per generation, that of the nematode Pristionchus pacificus (Weller et al., 2014), was set. We argue that, even if nematodes and hairworms have been separated for hundreds of millions of years (Howard et al., 2022), mutation rates in invertebrates tend to be of the same order of magnitude (10−9; e.g. Konrad et al., 2019) and therefore this choice should not influence the results in a significant way.
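For intuition, a folded SFS is tallied from the minor-allele counts of biallelic SNPs; the sketch below omits the down-projection that easySFS uses to handle missing data:

```python
def folded_sfs(alt_allele_counts, n_haplotypes):
    """Tally a folded site-frequency spectrum from per-SNP alternate-allele counts."""
    sfs = [0] * (n_haplotypes // 2 + 1)
    for c in alt_allele_counts:
        sfs[min(c, n_haplotypes - c)] += 1  # fold: use the minor-allele count
    return sfs

# Example: 10 haplotypes; SNPs with alternate-allele counts 1, 9 and 5.
print(folded_sfs([1, 9, 5], 10))  # [0, 2, 0, 0, 0, 1]
```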
COXI analyses
In total, there were 54 different mitochondrial haplotypes in C. formosanus, 9 of which were hypothetical and 28 of which were represented by a single individual (Fig. 2A). Acutogordius taiwanensis presented 9 haplotypes, none of which were hypothetical and 5 of which were represented by a single individual; however, 5 haplotypes were separated from the most common one by just 1 mutation (Fig. 2B). For G. chiashanus, 24 haplotypes were counted, 5 of which were hypothetical and 15 of which were represented by a single individual (Fig. 2C).
For the Coalescent Bayesian Skyline plot analyses, the number of variable sites was 49 for the Taiwanese samples of C. formosanus and 12 for the Japanese ones. After removing the Myanmar sequence, 7 variable sites were retained for A. taiwanensis. We did not change the alignment file for G. chiashanus, and therefore 30 variable sites remained (Sup. Data). The best substitution model for the Taiwanese samples of C. formosanus was TrN + G4, while TIM1 was the best for the Japanese ones. HKY + I and TPM3uf + I were the best models for A. taiwanensis and G. chiashanus, respectively (Sup. Table 4). The Bayesian Skyline Plot results show that both populations of C. formosanus are undergoing a small decline, while the population of A. taiwanensis appears stable (Fig. 3). The population size of G. chiashanus seemed to be increasing until recently (Fig. 3).
ddRADseq analyses
The ipyrad trimming step yielded an average of 2,195,578.39 reads per C. formosanus sample (including samples that were later discarded because of a low number of reads or a low number of retained loci). The final ipyrad output for C. formosanus had a SNP matrix size of 2,248, with 17.82% missing sites, while the total sequence matrix size was 52,233, with 17.59% missing sites. The number of loci per individual in the assembly ranged from 331 to 548, with an average of about 474 loci per sample (Sup. Data).
For the Stacks output, an average of 1,672,668.84 reads per sample was retained (including samples that were discarded because of a low number of reads). The optimal settings for C. formosanus were obtained with both the number of allowed mismatches (M) and the number of allowed mismatches between individual loci and the catalogue of loci (n) set to 6. For A. taiwanensis and G. chiashanus, the best results were obtained with M and n set to 12 and 9, respectively. The number of R80 loci was 1,876 for C. formosanus, 1,388 for A. taiwanensis and 2,484 for G. chiashanus. After removing SNPs with more than 20% missing data, 4,243 SNPs were retained for the PCA and snapclust analyses of C. formosanus (Sup. Data).
Both the PCA and snapclust analyses, conducted using both output VCFs (ipyrad and filtered Stacks), failed to detect any population structure in C. formosanus, with both the AIC and the BIC computed by snapclust agreeing that k = 1 was the optimal number of clusters for both datasets (Fig. 4; Sup. Figs 1 and 2). The same pattern was observed using fineRADstructure with both input files, given that no clusters formed based on shared co-ancestry (Fig. 5; Sup. Fig. 3).
The Stairway Plot output for C. formosanus shows several population drops during the Pleistocene and an estimated Ne of around 1,000 individuals in more recent times (Fig. 6; Sup. Fig. 4). However, only 2 drops were detected for each of A. taiwanensis and G. chiashanus, and their estimated Ne values in recent times were larger (around 300,000 for the former and 150,000 for the latter; Sup. Figs 5 and 6).
Discussion
COXI and ddRADseq data were used to evaluate population genetic structure and trends in Ne in the gordiids considered here. For the most intensively sampled species, C. formosanus, the Pleistocene trends were similar according to both datasets, although the Stairway Plot based on ddRAD data was able to detect more changes in the last 100,000 years. For the other 2 species, the temporal scale of the events recorded with COXI differed both between them and from C. formosanus, while the ddRAD datasets were too small to detect genetic structure and recent trends. In all 3 species, no geographically based differentiation was found according to COXI. The results for each species and dataset are discussed below.
COXI diversity analyses
Both C. formosanus and G. chiashanus exhibit high levels of intraspecific diversity and show no geographical genetic structure based on mitochondrial data. High mitochondrial diversity without population genetic structure in Nematomorpha has previously been shown in taxa from New Zealand (Tobias et al., 2017), hinting at a high dispersal ability or a multigenerational dispersal process. In that study, the authors were surprised to discover a lack of genetic structure in Euchordodes nigromaculatus, because the species was known to parasitize wētās (family: Anostostomatidae), definitive hosts that do not disperse well and exhibit a strong population genetic structure. Additionally, the known paratenic hosts (i.e. hosts used for transport before the final infection and not for development: Criscione et al., 2005; Bolek et al., 2015) should not be able to disperse widely in the mountain range inhabited by E. nigromaculatus (Tobias et al., 2017). However, newer studies have revealed that this New Zealand species can also infect the cockroach Celatoblatta quinquemaculata, a common insect in the mountains of central Otago (Doherty et al., 2022), the area considered by Tobias et al. (2017). This suggests that the cockroach has greater dispersal ability than the wētās, and it could therefore be the case that dispersal is facilitated by definitive hosts. Note that dispersal by definitive hosts does not exclude the possibility of a multigenerational process, as highlighted by Tobias et al. (2017). Given the scarce knowledge of the paratenic hosts used by Taiwanese horsehair worms, we should not exclude them as possible vectors either.
In the case of G. chiashanus, only one definitive host (a millipede species of the genus Spirobolus) and only one potential paratenic host [the mayfly Ephemera orientalis (Chiu et al., 2020), which is known to occur in several East Asian countries (Lee et al., 2008)] are known. The mayfly does not seem to be commonly infected (Chiu et al., 2020), which raises questions about both the paratenic and the definitive host ranges of G. chiashanus: given the high diversity shown by the haplotype network, these ranges may include other species that are able to disperse across middle-elevation areas. Another hairworm species with a terrestrial adult ecology, Gordius terrestris, prefers earthworms as natural paratenic hosts (Anaya et al., 2021), and G. chiashanus might do the same. The possible explanations for dispersal listed here (by paratenic hosts, by definitive hosts, by a multigenerational process) are not mutually exclusive, and hairworm dispersal is likely a complex process that may involve all of them.
While A. taiwanensis showed less intraspecific genetic diversity than the other 2 taxa considered, it must be noted that almost all the samples for this species came from a single region of Taiwan, Yilan County (this study). Meanwhile, the COXI sequences of C. formosanus were generated from individuals from multiple localities in Taiwan and Japan (Chiu et al., 2011, 2016; this study), and those for G. chiashanus came from multiple middle-elevation mountain localities in central-south Taiwan (Chiu et al., 2020; this study). These differences in sampling areas can be explained by the varying difficulties encountered when sampling adult horsehair worms (Sup. Info). Sampling is usually regarded as a significant issue when studying Nematomorpha, due to the lack of standardized collecting methods and the short lifespan of the adults (Bolek et al., 2015). Furthermore, biological differences among species may make some taxa easier to collect than others (Sup. Info). As a result of these sampling difficulties, the genetic diversity of A. taiwanensis is likely severely underestimated, as already suggested by Chiu (2017) and Chiu et al. (2020), and the species may have gone unnoticed in many areas. Alternatively, A. taiwanensis may be the Taiwanese hairworm species most vulnerable to human actions (Chiu, 2017; Sato et al., 2014), and may therefore have gone extinct in some areas of Taiwan due to human disturbance.
ddRADseq diversity analyses for Chordodes formosanus in Taiwan
Even with genome-wide data, C. formosanus did not show any sign of population genetic structure according to geographical origin (Fig. 4). Together with its known distribution and the COXI haplotype network, these results suggest a panmictic population with great dispersal ability. However, gordiids disperse poorly on their own at the larval stage (Bolek et al., 2015; Chiu et al., 2016; Chiu, 2017), and the fragmented river network of Taiwan prevents non-flying freshwater-associated organisms from dispersing (Shih et al., 2006), which would block potential dispersal by river flow at a country-wide level.
Furthermore, the presence of this species on Lyudao (Chiu et al., 2011), a volcanic island that has never been connected to the main island of Taiwan (Yang et al., 1996), implies sea crossing. Previous modelling efforts found a very high similarity between the range of the mantis hosts and the range of C. formosanus, hinting at dispersal by definitive hosts, although dispersal by paratenic hosts cannot be excluded (De Vivo and Huang, 2022; Sup. Info). Therefore, dispersal by actively flying insect hosts (paratenic and/or definitive), with or without a multigenerational process, is a possible explanation for the lack of geographic structure shown in our results.
Demographic history inferences
Figure 5. Co-ancestry matrix for Chordodes formosanus based on Stacks ddRADseq data. The matrix based on the ipyrad data is available as Supplementary Fig. 3.

The demographic analyses using the COXI datasets were able to trace historical demography through 2 million years for C. formosanus (with the Taiwanese population going slightly deeper in time). The same approach revealed Ne trends up to roughly 6 million years ago for G. chiashanus and 250,000 years of demographic history for A. taiwanensis. This difference in scale may have been caused by the lower variability of the A. taiwanensis sequences (7 variable sites in total), which in turn was caused by limited geographic sampling. In general, mitochondrial sequence data are known to recover demographic histories within the Pleistocene (Ho and Shapiro, 2011); however, such data are less informative about recent history than RADseq data (Nunziata and Weisrock, 2018). The COXI-based Bayesian Skyline Plots revealed recent drops in Ne for both the Taiwanese and the Japanese populations of C. formosanus over the last 250,000 years. Several drops in the same period were found by Stairway Plot using the ddRADseq data. Additionally, both approaches found a demographic increase shortly before 2 million years ago for the Taiwanese population. That said, the Stairway Plot analyses with ddRADseq data were able to detect more events in the last 100,000 years for the Taiwanese population of C. formosanus than the Coalescent Bayesian Skyline Plot based on COXI sequences, probably due to the difference in the number of loci between the 2 datasets (e.g. Cristofari et al., 2018) and the different mutation rate of mitochondrial genes (Ho and Shapiro, 2011). The demographic histories reconstructed for A. taiwanensis and G. chiashanus using genome-wide SNP data and Stairway Plot extended very far back in time, but the software failed to recover any trends in recent times. This was caused by the very small sample size, which was 2 diploid individuals per species. Specifically, the number of historical events that can be estimated using Stairway Plot, or any program that takes a site frequency spectrum as input and calculates composite likelihoods, is constrained by the number of site frequency categories. As a result, for a folded site frequency spectrum, as used here, 2 individuals yield only 2 site frequency categories, and thus only 2 historical demographic episodes can be estimated (Liu and Fu, 2020). The 2 species also have extremely large Ne estimates based on the Stairway Plot results. It is known that sample size can affect demographic parameters estimated from RADseq data (Nunziata and Weisrock, 2018), and we therefore attribute the high Ne estimates to the small sample size, which makes these inferences unreliable.
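The constraint described above can be made explicit with a toy calculation (ours, not from the paper): with n diploid individuals there are 2n sampled chromosomes, and the folded SFS has only floor(2n/2) polymorphic categories.

```python
# Why 2 diploid individuals yield only 2 folded site-frequency categories.
def folded_sfs_categories(n_diploids: int) -> int:
    chromosomes = 2 * n_diploids      # 2N sampled chromosomes
    return chromosomes // 2           # minor-allele counts 1 .. floor(2N/2)

for n in (2, 10, 27):
    print(n, "diploids ->", folded_sfs_categories(n), "folded SFS categories")
# With 2 diploids there are only 2 categories, so at most 2 demographic
# episodes can be estimated, regardless of how many loci were sequenced.
```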
For C. formosanus, however, more events were recorded in the Stairway Plot analyses. That said, it is surprising that this species, which is regarded as the most common in Taiwan, is showing signs of decline, although the current Ne should still be large enough to sustain the population (Pérez-Pereira et al., 2022). The recent demographic decrease might be the result of extirpation events, possibly caused by changes in land use in Taiwan, as hypothesized after niche modelling in De Vivo and Huang (2022). It is known that land-use changes can influence helminths, although the effects seem to depend on the ecology of both host and parasite (e.g. Chakraborty et al., 2019; Portela et al., 2020).
Future directions
In this study, we evaluated the use of different genetic and analytical tools for estimating demographic history and population genetic structure, which informed us about potential declines and dispersal traits. From a conservation perspective, the negative trend shown by C. formosanus deserves more attention in the future, given its confirmation by both the Coalescent Skyline and Stairway Plots. A. taiwanensis should also be monitored, due to its smaller known geographical range compared to the other 2 evaluated species and its potential sensitivity to anthropic changes (Sup. Info). For G. chiashanus, additional samples should be collected to test with genome-wide data the decline inferred from the COXI Coalescent Skyline Plots and to see whether recent (last 100,000 years) declines occurred. In our datasets, sampling bias was present for some taxa and may have influenced the results. For future studies on the conservation genetics of parasites, we offer 3 suggestions that can help with evaluating a parasite's conservation status: (i) consider the target species' ecology, given that it can strongly influence population genetic structure (van Schaik et al., 2015; Radačovská et al., 2022) and Ne estimates (Criscione et al., 2005; Strobel et al., 2019), as well as the sampling strategy, since some parasites have life cycles that dictate when and how to collect them (e.g. van Schaik et al., 2015); (ii) try to obtain the geographically broadest and largest sample possible; for example, in this study the small sample size of the ddRADseq dataset did not allow us to estimate enough recent trends for 2 taxa, while the COXI dataset for A. taiwanensis was too biased towards one area to properly estimate the diversity of the species; (iii) choose the molecular and bioinformatic tools carefully, as a direct estimate of Ne for parasites can be tricky to calculate (e.g. Strobel et al., 2019; Carlson et al., 2020); given this, we suggest focusing on trends instead of raw numbers. Additionally, previous studies have shown possible limits of single-locus analyses (Ho and Shapiro, 2011), and we therefore suggest using as many loci as possible (see Peterson et al., 2012 and Nunziata and Weisrock, 2018 for potential protocols). However, single-locus data are often the most readily available kind of data for parasites (Selbach et al., 2019); therefore, if budget and resources are limited, Sanger sequencing can be pursued (but see Radačovská et al., 2022 for caveats with mitochondrial sequences in polyploid species). The use of repositories such as GenBank can be useful for retrieving previously released sequences, thereby increasing both the sample size and the number of loci used, in combination with software such as BEAST2 that can use such data. That said, it is crucial to check for possible population genetic structure before demographic trend analyses, since structure influences the results (Ho and Shapiro, 2011).
"year": 2023,
"sha1": "da551f4b5d9d635c532720034838f0f08fa0f410",
"oa_license": "CCBYNCND",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4E1ED3EFA5BD20C8D0CC3C38F1EB7CF2/S0031182023000641a.pdf/div-class-title-testing-the-efficacy-of-different-molecular-tools-for-parasite-conservation-genetics-a-case-study-using-horsehair-worms-phylum-nematomorpha-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "30d30a8ff943706946e9f76587a846f8afda2298",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Immune Cell Function Assay
The following protocol contains medical necessity criteria that apply for this service. The criteria are also applicable to services provided in the local Medicare Advantage operating area for those members, unless separate Medicare Advantage criteria are indicated. If the criteria are not met, reimbursement will be denied and the patient cannot be billed. Please note that payment for covered services is subject to eligibility and the limitations noted in the patient’s contract at the time the services are rendered.
I. Policy Description
Immune cell function assays involve measurement of peripheral blood lymphocyte response (intracellular ATP levels, proliferation) following stimulation to assess the degree of functionality of the cell-mediated immune response (Buttgereit et al., 2000).
For guidance on procedures utilizing flow cytometry, please refer to AHS-F2019 Flow Cytometry.
II. Related Policies

Policy Number    Policy Title
AHS-F2019        Flow Cytometry
AHS-M2091        Transplant Rejection Testing
III. Indications and/or Limitations of Coverage
Application of coverage criteria is dependent upon an individual's benefit coverage at the time of the request. Specifications pertaining to Medicare and Medicaid can be found in Section VII of this policy document.
The following does not meet coverage criteria due to a lack of available published scientific literature confirming that the test(s) is/are required and beneficial for the diagnosis and treatment of a patient's illness.
V. Scientific Background
Primary immunodeficiencies are a group of rare disorders in which part of the body's immune system is absent or functions incorrectly. These disorders occur in as many as 1:2000 live births and are most often categorized according to a combination of mechanistic and clinical descriptive characteristics (Bonilla et al., 2015). Specific cellular immunity is mediated by T cells, and defects affecting these T cells underlie the most severe immunodeficiencies. As antibody production by B cells requires intact T cell function, most T cell defects lead to combined (cellular and humoral) immunodeficiency (Butte & Stiehm, 2019).
In vitro studies of T cell function measure peripheral blood T cell responses to several different types of stimuli (Bonilla, 2008):
• Mitogens (such as the plant lectins phytohemagglutinin, concanavalin A, and pokeweed mitogen, or anti-CD3 antibodies).
• Specific antigens (such as tetanus and diphtheria toxoids or Candida albicans antigens).
Exposure of T cells to stimulus leads to their metabolic activation and polyclonal expansion (Fernandez-Ruiz et al., 2014). Response can be measured by indicators of proliferation, ATP synthesis and release, or expansion of specific subpopulations (Stiehm, 2017).
The evaluation of specific immune responses is essential for the diagnosis of primary immune deficiencies. Screening tests used to evaluate patients with suspected primary immune deficiencies are relatively inexpensive, rapidly performed, and reasonably sensitive and specific (Notarangelo, 2010; Oliveira & Fleisher, 2010). Abnormal screening test results indicate the need for more sophisticated tests. This stepwise approach ensures an efficient and thorough evaluation of the mechanisms of immune dysfunction that underlie the clinical presentation; it includes narrowing the diagnostic options before using the costly, sophisticated tests that might be required to arrive at a specific diagnosis (Bonilla et al., 2015). T cell mitogen responses that are absent or extremely low are a crucial element in the diagnosis of several primary immune deficiencies, most notably severe combined immunodeficiency (SCID) (Picard et al., 2015). Additionally, T-cell recognition of alloantigens is the primary and central event leading to the cascade that results in rejection of a transplanted organ (Vella, 2020). Several commercial assays based on the traditional assessment of T-cell stimulation have been developed to predict or assess transplant rejection.
The ImmuKnow assay measures the ability of CD4 T-cells to respond to mitogenic stimulation by phytohemagglutinin-L in vitro by quantifying the amount of adenosine triphosphate (ATP) produced and released from these cells following stimulation (Zhang et al., 2016). Because CD4 lymphocytes orchestrate cell-mediated immune responses through immunoregulatory signaling, measurement of intracellular ATP levels following CD4 activation is intended to estimate the net state of the immune system in immunocompromised patients (Chon, 2021) and is one of the few well-established strategies for functional immune monitoring in solid organ transplant recipients (Sottong et al., 2000).
The Pleximmune™ blood test measures the inflammatory immune response of recipient T-cells to the donor in a co-culture of lymphocytes from both sources (Ashokkumar et al., 2009; Ashokkumar et al., 2017; Sindhi et al., 2016). The sensitivity and specificity of the Pleximmune test for predicting acute cellular rejection were found to be 84% and 81%, respectively, in training-set/validation-set testing of 214 children. Early clinical experience shows that test predictions are particularly useful in planning immunosuppression in the setting of indeterminate biopsy findings, or in modifying protocol-mandated treatment when combined with all other available clinical information about an individual patient (Sindhi et al., 2016).
The iQue® Immune Cell Function Assay identifies immune cells based on cell surface markers or secreted soluble mediators. This assay quantifies cytokines, adhesion molecules, enzymes, and growth factor receptors, and measures cell phenotypes, cell function markers, cell viability, cell count, proliferation, and secreted effector cytokines in a single well. The iQue assay can be used to characterize T cells and measure various populations, including memory T cells, cytotoxic T cells, and natural killer cells (Intellicyt, 2021).
Clinical Utility and Validity
A population-based study comparing assay results in healthy controls and solid organ transplant recipients established three categories to define a patient's cell-mediated immune response: strong (≥525 ng/mL), moderate (226-524 ng/mL) and low (≤225 ng/mL) (Fernandez-Ruiz et al., 2014; Kowalski et al., 2006). Numerous authors have analyzed the predictive value of the ImmuKnow® (Viracor) assay for acute rejection, as summarized in a meta-analysis that found a relatively high specificity (0.75) but a low sensitivity (0.43), with significant heterogeneity across studies (Fernandez-Ruiz et al., 2014; Ling et al., 2012). The ImmuKnow® assay has been examined in clinical trials for its potential use in monitoring immunosuppression medication regimens in solid organ transplant patients.

Kowalski et al. (2006) performed a meta-analysis of 504 solid organ transplant recipients (heart, kidney, kidney-pancreas, liver, and small bowel) from 10 U.S. centers. The authors found that "A recipient with an immune response value of 25 ng/ml adenosine triphosphate (ATP) was 12 times more likely to develop an infection than a recipient with a stronger immune response. Similarly, a recipient with an immune response of 700 ng/ml ATP was 30 times more likely to develop a cellular rejection than a recipient with a lower immune response value (Kowalski et al., 2006)." The authors also hypothesized an "immunological target of immune function," created by the intersection of odds ratio curves at 280 ng/ml ATP, and concluded that "the Cylex ImmuKnow assay has a high negative predictive value and provides a target immunological response zone for minimizing risk and managing patients to stability (Kowalski et al., 2006)."

Wang et al. (2014) performed a meta-analysis of six studies which found "The pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) of ImmuKnow for predicting the risk of infection were 0.51, 0.75, 1.97, 0.67, and 3.56, respectively. A DOR of 13.81, with a sensitivity of 0.51, a specificity of 0.90, a PLR of 4.45, and an NLR of 0.35, was found in the analysis of the predictive value for acute rejection." The authors concluded, "Our analysis did not support the use of the ImmuKnow assay to predict or monitor the risks of infection and acute rejection in renal transplant recipients. Further studies are needed to confirm the relationships between the ImmuKnow assay and infection and acute rejection in kidney transplantation (Wang et al., 2014)."

Jo et al. (2015) analyzed CD4 T-lymphocyte ATP levels along with lymphocyte subsets in 160 samples from 111 post-allogeneic hematopoietic stem cell transplantation (alloHSCT) patients. In patients with stable status, ImmuKnow® levels at 6 months post-alloHSCT were found to be significantly higher than those tested within 6 months post-alloHSCT. ImmuKnow® results at 6 months post-alloHSCT showed a low positive correlation with natural killer cell count (r = 0.328), and values tested later than 6 months post-alloHSCT were positively correlated with CD4 T cell count (r = 0.425). However, ImmuKnow® levels during acute graft-versus-host disease (GVHD) or infection episodes were not significantly different from those during stable alloHSCT. The authors concluded that "the combined test of ImmuKnow levels and lymphocyte subsets may be helpful for immune monitoring following alloHSCT."
Ravaioli et al. (2015) aimed to "assess the clinical benefits of adjusting immunosuppressive therapy in liver recipients based on immune function assay results." A total of 100 patients received serial immune function testing via the ImmuKnow in vitro diagnostic assay (compared to 102 controls who received standard practice). The authors found that "based on immune function values, tacrolimus doses were reduced 25% when values were less than 130 ng/mL adenosine triphosphate (low immune cell response) and increased 25% when values were greater than 450 ng/mL adenosine triphosphate (strong immune cell response)" (Ravaioli et al., 2015). The authors also found that survival and infection rates were better in the treatment arm than in the control arm. Overall, the investigators concluded "Immune function testing provided additional data which helped optimize immunosuppression and improve patient outcomes" (Ravaioli et al., 2015).

Piloni et al. (2016) evaluated 61 lung recipients who underwent follow-up for lung transplantation between 2010 and 2014 in order to correlate ImmuKnow® values with functional immunity in lung transplant recipients. The authors found that 71 of 127 samples (56%) showed over-immunosuppression, with a mean ImmuKnow® level of 112.92 ng/ml (SD ± 58.2) vs. 406.14 ng/ml (SD ± 167.7) for the rest of the cohort. In the over-immunosuppression group, the authors found 51 episodes of infection (71%). The mean absolute ATP level was significantly different between patients with and without infection (202.38 ± 139.06 ng/ml vs. 315.51 ± 221.60 ng/ml). The authors concluded that "the ImmuKnow assay levels were significantly lower in infected lung transplant recipients compared with non-infected recipients and in RAS patients" (Piloni et al., 2016).

Chiereghin et al. (2017) evaluated symptomatic infectious episodes that occurred during the first year after an organ transplant. A total of 135 infectious episodes were studied, of which 77 were bacterial, 45 viral, and 13 fungal. Significantly lower median ImmuKnow® intracellular ATP levels were identified in patients with bacterial or fungal infections compared to infection-free patients, whereas patients with viral infection did not have a significantly different median ATP level compared to non-infected patients. The authors concluded that bacteria were responsible for most symptomatic post-transplant infections and that ImmuKnow measurements may be useful for "identifying patients at high risk of developing infection, particularly of fungal and bacterial etiology" (Chiereghin et al., 2017).

Liu et al. (2019) studied the potential of the ImmuKnow assay to diagnose infection in pediatric patients who have received a living-donor liver transplant. A total of 66 patients participated in this study and were divided into infection (n=28) and non-infection (n=38) groups. The researchers report that the "CD4+ T lymphocyte ATP value of the infection group was significantly lower compared with that of the non-infection group" (Liu et al., 2019). This suggests that for pediatric patients who have received a living-donor liver transplant, low CD4+ T lymphocyte ATP levels may be related to infection rates. The ImmuKnow assay may be a helpful tool in this scenario to predict infection.

Weston et al. (2020) used the ImmuKnow assay to adjust immunosuppression in heart transplant recipients with severe systemic infections.
In particular, if a patient developed an infection, the ImmuKnow assay was used to recommend adjustments in immunosuppression. This assay was used on 80 patients; thirteen of these patients developed a more serious infection. The researchers concluded that "Heart transplant recipients with severe systemic infections presented with a decreased ImmuKnow®, suggesting over immunosuppression. ImmuKnow® can be used as an objective measurement in withdrawing immunosuppression in heart transplant recipients with severe systemic infections (Weston et al., 2020)."

Ashokkumar et al. (2017) evaluated Pleximmune through the assessment of CD154+ T-cytotoxic memory cells. A total of 280 samples (158 training set, 122 validation set) from 214 children were examined. Recipient CD154+ cells induced by stimulation with donor cells were expressed as a fraction of those induced by human leukocyte antigen (HLA)-nonidentical cells, with a resulting immunoreactivity index (IR) ≥1 implying increased rejection risk. The authors found that "an IR of 1.1 or greater in posttransplant training samples and IR of 1.23 or greater in pretransplant training samples predicted liver transplant (LTx) or intestine transplant (ITx) rejection with sensitivity, specificity, positive, and negative predictive values of 84%, 80%, 64%, and 92%, respectively, and 57%, 89%, 78%, and 74%, respectively (Ashokkumar et al., 2017)." The authors concluded that "Allospecific CD154+T-cytotoxic memory cells predict acute cellular rejection after LTx or ITx in children. Adjunctive use can enhance clinical outcomes (Ashokkumar et al., 2017)." However, at the present time there is no consensus on the utility of these tests, despite the amount of literature devoted to determining their real value for predicting post-transplant complications (Clark & Cotler, 2020; Fernandez-Ruiz et al., 2014; Kowalski et al., 2006; Ling et al., 2012; Rodrigo et al., 2012).

Monforte et al. (2021) studied the prognostic value of ImmuKnow® for predicting non-cytomegalovirus (CMV) infections in lung transplant patients. Ninety-two patients were followed for 6 to 12 months after their lung transplant, and the assay was carried out at 6, 8, 10, and 12 months. Twenty-five percent of the patients developed non-CMV infections between 6 and 12 months after the transplant. At 6 months, 15.2% of patients had a moderate immune response and 84.8% had a low immune response. In the following 6 months, only one of the patients with a moderate immune response developed a non-CMV infection, compared to 28.2% of the low-immune-response patients. The ImmuKnow® assay had a sensitivity of 95.7%, specificity of 18.8%, positive predictive value (PPV) of 28.2%, and negative predictive value (NPV) of 92.9% for detecting a non-CMV infection. The authors conclude that "although ImmuKnow® does not seem useful to predict non-CMV infection, it could identify patients with a very low risk and help us define a target for an optimal immunosuppression" (Monforte et al., 2021).
In an open-label prospective cohort study, Xue et al. (2021) studied the use of the Cylex immune cell function assay for the diagnosis of infection after liver transplant in pediatric patients. A total of 216 infants with liver transplants were followed, and Cylex ATP values were measured before the liver transplant and at weeks 1, 2, 3, 4, 8, 12 and 24 afterwards. After surgery, 74.1% of the transplant patients had a diagnosed infection, 20.4% were clinically stable, and 5.6% experienced acute rejection. The median post-surgery Cylex ATP value in infant pediatric liver transplant (PLT) recipients was significantly reduced in the infection group compared to the stable group. ROC curve analysis determined a Cylex ATP cut-off value of 152 ng/mL for the diagnosis of infection. The authors conclude "In this study, we demonstrated that low Cylex ATP represented partly over-immunosuppression and had diagnostic value in infant PLTs with infections, which might assist individualized immunosuppression in PLT patients" (Xue et al., 2021).
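The accuracy metrics quoted throughout this section are linked by standard formulas; the illustrative sketch below (ours, not code from any cited study) derives likelihood ratios and predictive values from sensitivity, specificity and prevalence, and reproduces the Monforte et al. (2021) figures when their reported values are used.

```python
# Relationship between sensitivity/specificity and the derived metrics.
def diagnostics(sens: float, spec: float, prevalence: float):
    plr = sens / (1 - spec)   # positive likelihood ratio
    nlr = (1 - sens) / spec   # negative likelihood ratio
    dor = plr / nlr           # diagnostic odds ratio
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / ((1 - sens) * prevalence + spec * (1 - prevalence))
    return plr, nlr, dor, ppv, npv

# Monforte et al. (2021): sens 95.7%, spec 18.8%, ~25% infection prevalence
# gives PPV ~0.28 and NPV ~0.93 - a test useful mainly for ruling out risk.
print(diagnostics(0.957, 0.188, 0.25))
```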
VI. Guidelines and Recommendations

The American Academy of Allergy, Asthma & Immunology (AAAAI) and the American College of Allergy, Asthma & Immunology (ACAAI)
The American Academy of Allergy, Asthma & Immunology (AAAAI) and the American College of Allergy, Asthma & Immunology (ACAAI) published practice parameters for the diagnosis and management of primary immunodeficiency (Bonilla et al., 2015) which stated that: "Evaluation of specific immune responses is essential for diagnosis of PIDDs [primary immunodeficiency diseases]. Measurement of serum immunoglobulin levels and lymphocyte responses to mitogens are useful indicators of global B-and T-cell development and function." The guideline also lists "In vitro proliferative response to mitogens and antigens" as an advanced test used when "Abnormal screening test results indicate the need for more sophisticated tests" (Bonilla et al., 2015). The screening test indicated is flow cytometry to enumerate CD4 and CD8 T cells and NK cells.
Normal or abnormal T cell response to mitogen stimulation is listed in the diagnostic algorithm for the diagnosis of combined or syndromic immunodeficiencies. Specifically, the guideline states that "Infants with low TREC counts should have secondary screening by using flow cytometry to enumerate T-cell numbers and the proportion of naive cells. T-cell counts of less than 1500/mm3 or a proportion of naive cells of less than 50% should be followed up measuring the in vitro response to a mitogen, such as PHA." Mitogen response is also listed as a characteristic laboratory finding for WAS, AT-related disorders, Good syndrome, XLP1, MSMD, MyD88, WHIM and EV, and in the management of DGS and immuno-osseous dysplasias.
The International Society for Heart and Lung Transplantation (ISHLT)
Guidelines for the care of heart transplant recipients published in 2010 by the International Society for Heart and Lung Transplantation do not include ImmuKnow®.
An ISHLT consensus document for the management of antibodies in heart transplantation was published in 2018. This document does not mention the ImmuKnow or Pleximmune assays, but does state that "Solid-phase assays, such as the Luminex SAB assay, are recommended to detect circulating antibodies" (Kobashigawa et al., 2018).
An ISHLT consensus document for the antibody-mediated rejection of the lung was published in 2016. This consensus document does not mention the ImmuKnow or Pleximmune assays (Levine et al., 2016).
The American Society of Transplantation (AST)
The American Society of Transplantation does not include the use of the ImmuKnow assay in its publication: "Recommendations for Screening, Monitoring and Reporting of Infectious Complications in Immunosuppression Trials in Recipients of Organ Transplantation" (Humar & Michaels, 2006).
Educational guidelines for the management of kidney transplant recipients in the community setting and for infectious diseases in transplant recipients published in 2009 by the American Society of Transplantation (AST) also do not include ImmuKnow® (AST, 2009).
Third International Consensus Guidelines on the Management of Cytomegalovirus in Solid-organ Transplantation
The International Cytomegalovirus (CMV) Consensus Group of the Transplantation Society published an international consensus statement on the management of CMV in solid organ transplantation in 2018. In it, they note that "Clinical utility studies demonstrate that alteration of patient management based on the results of an immune-based assay is feasible, safe, and cost-effective" (Kotton et al., 2018).
VII. Applicable State and Federal Regulation
DISCLAIMER: If there is a conflict between this Policy and any relevant, applicable government policy for a particular member [e.g., Local Coverage Determinations (LCDs) or National Coverage Determinations (NCDs) for Medicare and/or state coverage for Medicaid], then the government policy will be used to make the determination. For the most up-to-date Medicare policies and coverage, please visit the Medicare search website: https://www.cms.gov/medicarecoverage-database/search.aspx. For the most up-to-date Medicaid policies and coverage, visit the applicable state Medicaid website.
Food and Drug Administration (FDA)
ImmuKnow® (Viracor, previously, Cylex) is an immune cell function assay cleared for marketing by the U.S. Food and Drug Administration (FDA) in April 2002 to detect cell-mediated immunity (CMI) in an immunosuppressed patient population. Cylex obtained 510(k) clearances from the FDA to market the Immune Cell Function Assay based on substantial equivalence to two flow cytometry reagents. The FDA-indicated use of the Cylex Immune Cell Function Assay is for the detection of cell-mediated immunity in an immunosuppressed population. A subsequent 510(k) marketing clearance for a device modification was issued by the FDA for this assay in 2010. There were no changes to the indications or intended use.
In August 2014, Pleximmune™ (Plexision, Pittsburgh, PA) was approved by FDA through the humanitarian device exemption process. The test is intended for use in the pre-transplantation and early and late post-transplantation period in pediatric liver and small bowel transplant patients for the purpose of predicting the risk of transplant rejection within 60 days after transplantation or 60 days after sampling.
Many labs have developed specific tests that they must validate and perform in house. These laboratory-developed tests (LDTs) are regulated by the Centers for Medicare & Medicaid Services (CMS) as high-complexity tests under the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88). LDTs are not approved or cleared by the U.S. Food and Drug Administration; however, FDA clearance or approval is not currently required for clinical use.
81560: Transplantation medicine (allograft rejection, pediatric liver and small bowel), measurement of donor and third-party-induced CD154+ T-cytotoxic memory cells, utilizing whole peripheral blood, algorithm reported as a rejection risk score
"year": 2020,
"sha1": "834b5db1d1eb7d4a4f828759295dc2f0fcdb511a",
"oa_license": "CCBY",
"oa_url": "https://www.qeios.com/read/63PNML/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5835894c0c30d30fbbf5685b3d5d805933e5d577",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Integrated land use and transportation modelling and planning: A South African journey
Confronted by poverty, income disparities and mounting demands for basic services such as clean water, sanitation and health care, urban planners in developing countries like South Africa face daunting challenges. This paper explores the role of integrated land use and transportation modelling in metropolitan planning processes aimed at improving the spatial efficiency of urban form and ensuring that public-sector investments in social and economic infrastructure contribute to economic growth and the reduction of persistent poverty and inequality. The value of such models is not in accurately predicting the future but in providing participants in the (often adversarial) planning process with a better understanding of cause and effect between different components of the urban system and in discovering common ground that could lead to compromise. This paper describes how an Urban Simulation Model was developed by adapting one of the leading microsimulation models (UrbanSim), originating from the developed world, to South African conditions, and how the requirements for microscopic data about the base year of a simulation were satisfied in a sparse data environment by introducing various typologies. A sample of results from three case studies in the cities of Tshwane, Ekurhuleni and Nelson Mandela Bay between 2013 and 2017 is then presented to illustrate how modelling supports the planning process by adding elements of rational analysis and hypothesis testing to the evaluation of proposed policies.
Introduction
Confronted by poverty, income disparities and mounting demands for basic services such as clean water, sanitation and health care, urban planners in developing countries like South Africa (SA) face daunting challenges (Cervero, 2013). Given that urban planning can be intensely controversial due to the involvement of many institutional and non-institutional stakeholders with divergent values and mandates (Waddell, 2011), the notion of evidence-based planning has emerged as a means of avoiding some of the conflict. Based on the premise that policy decisions can be improved by rational analysis and hypothesis testing, evidence can take the form of surveys, analyses of past observations or using mathematical models to simulate the likely future outcome of proposed policy scenarios. Integrated Land Use and Transportation (ILUT) models represent a specific class of mathematical models that have over the last three decades contributed to policy formulation by providing a means of rational analysis and hypothesis testing in the evaluation of proposed policies. The models were initially focused exclusively on land use and transportation issues but are increasingly being used to assess potential economic, social, and environmental impacts in search of more sustainable cities. The relative success of a proposed policy scenario in achieving one or more objectives can be evaluated by comparing a few predefined indicators or measures of performance. A policy scenario is a unique combination of policy alternatives and assumptions, such as estimates of population and employment growth in the study area. Policy alternatives may include anything from growth boundaries to strategic decisions about transportation and other infrastructure systems. A comprehensive list of which policy alternatives might be considered to address various economic, social, and environmental issues was compiled by Wegener (2007) from a study of ILUT projects in Europe.
The value of the "evidence" obtained from such models is not derived from accurately predicting the future, but from providing participants in the hypothesis-testing process with a better understanding of interactions between different components of the urban system, which ranks among the most complex production systems ever built. Furthermore, it facilitates finding common ground between different interest groups that could lead to compromise.
In this paper we describe a decade-long (2007-2017) journey of the development, behavioral and empirical validation (as defined by Waddell, 2011) and application of a South African ILUT model in 3 metropolitan municipalities to support evidence-based decision making. The model developed by the CSIR will be referred to as UrbanSim-SA to accentuate its resemblance to UrbanSim-E, developed as part of the much bigger EU-funded SustainCity project (2010-2013) involving collaboration between 12 research institutions.
In Section 2 we describe material aspects of context and government policy in South Africa to provide non-South African readers with a better appreciation of the policy scenarios that UrbanSim-SA must be able to respond to. In Section 3 we consider related work from published applications of UrbanSim outside the USA, where it was developed. In Section 4 we describe how UrbanSim was adapted to South African conditions and the reasons for linking it with a modified version of OpenTripPlanner as the transport model. Arguably the biggest challenges to applying microsimulation models anywhere in the world are the availability of, and the effort required to assemble, detailed data for the base year of a simulation at the level of individual parcels, buildings, households, etc. In Section 5 we describe how this was achieved in the sparse data environment characteristic of developing countries. Section 6 describes how both the model and the data preparation processes were empirically validated by simulating a period in the past (2001 to 2011) and comparing the results with the actual growth patterns that occurred during the same period according to two reputable sources. In Section 7 we provide examples of the results obtained from simulation projects in the cities of Tshwane, Ekurhuleni, and Nelson Mandela Bay between 2013 and 2017, each with a description of the local context and some of the policy scenarios that were simulated.
Context
In addition to the daunting challenges posed by poverty, income disparities and demands for basic services, planning in South Africa is also heavily influenced by history, specifically the ideology of apartheid introduced by the National Party government when it came to power in 1948. Apartheid called for the separate development of different racial groups in South Africa. On paper it appeared to call for equal development and freedom of cultural expression but forced different racial groups to live separately and develop unequally (South African History Online, 2016). It may come as no surprise then that current planning practices are aimed squarely at resolving the inherited spatial inefficiencies in urban form.
It is generally accepted that the biggest inefficiency is that poor households live on the periphery of towns and cities. Cervero (2013) notes that spatial mismatches between where the needy live and where formal jobs with livable wages are located, occur in all cities but that they are more pronounced in most of the developing world.
In these cities, the poor live mainly on the fringes and effectively trade off higher transportation costs against cheaper (and often illegal) housing, the opposite of what one would expect from traditional residential location theory, framed from a first-world perspective (Alonso, 1964). According to the National Household Travel Survey conducted by Statistics South Africa (Stats SA, 2013), almost all households (98.9%) from the lowest income quintile in South Africa spend more than 20% of wages per capita on public transport (Stats SA, 2015). For the very poor of the developing world, whatever savings accrue from illegally squatting and living in squalor are often negated by the expenses incurred to reach job opportunities as well as essential medical, educational and retail destinations (Cervero, 2013). The implication of this for modelling is that for these communities, the monetary cost of transportation may be more important than the generalized cost (see Section 4.2) and that the model must be capable of dealing with informal dwellings that are not part of the property market in the conventional sense.
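The distinction can be sketched with illustrative numbers (ours, not from the paper): generalized cost adds the value of travel time to the out-of-pocket fare, so the two measures can rank the same commute very differently for low- and high-income travellers.

```python
# Monetary vs. generalized cost of a commute (illustrative values in Rand).
def generalized_cost(fare: float, minutes: float, value_of_time_per_hour: float) -> float:
    return fare + (minutes / 60.0) * value_of_time_per_hour

fare, minutes = 30.0, 90.0     # one-way fare and travel time from the periphery
print(generalized_cost(fare, minutes, 5.0))   # low value of time:  37.5 (fare-dominated)
print(generalized_cost(fare, minutes, 60.0))  # high value of time: 120.0 (time-dominated)
# For poor households the monetary component dominates, so a model weighting
# travel time heavily can misrepresent their location and mode choices.
```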
Another challenge for the Government of South Africa is striking a balance between investment in "economic infrastructure," such as transportation and energy supply systems, and "social infrastructure and services," such as housing, water, sanitation, health and a variety of grants/subsidies aimed at alleviating the plight of the poor without expecting a monetary return on the investment. Public transportation has attracted massive investment since the promulgation of the National Land Transportation Act of 2009. As in most cities around the world, the new public transportation services (mostly Bus Rapid Transit) are subsidized, but probably more so as a service to the poor. During the first 20 years of democracy, the government supplied approximately four million "housing opportunities" - 903,543 serviced stands and 2,835,275 houses or social housing units (Department of Human Settlements, 2019). Despite talk of more compact and sustainable cities, it seems that little has changed, because in the pursuit of helping as many households as possible, housing projects had to be built on affordable land, which could only be found on the outskirts. In a way, government has also made a trade-off between cheaper land and higher operating costs in the form of subsidies. This could still be mitigated by managing land use in such a way that densities are increased to achieve the ridership levels necessary to contain subsidies. The implication for modelling is that the government acts as a public-sector developer with important decisions to make about the location and size of proposed housing projects, with larger projects offering economies of scale but requiring tracts of land which will most likely only be found on the fringes. We revisit this as part of the Ekurhuleni case study (Section 7.2).
Yet another challenge is to ensure that public-sector investment contributes to economic growth and the reduction of persistent poverty and inequality. In 2011, the National Treasury introduced the Built Environment Performance Plan (BEPP) as a new statutory planning instrument aimed at achieving these elusive outcomes (National Treasury, 2015). In summary, the strategy is aimed at promoting economic activity in the urban hub of marginalized areas on the periphery, changing urban form by way of walkable precinct plans, revitalizing the main activity area(s) in the city, linking these to the hubs by public transportation and promoting higher density, mixed-use development along the public transport corridors.
Lastly, it is important to note that ranked by GDP per capita (expressed in USD), South Africa occupies position 31 of 126 developing countries (Developing Countries Population, 2019). With a GDP per capita of about $6 600 compared to $40 000 to $80 000 for USA/Europe, South Africa is relatively poor with roughly 65% of the population considered to have a low income (Waldeck & van Heerden, 2017). This is the primary cause for the existence of informal settlements and has several implications for modelling residential and employment location choices as well as transportation mode choices.
Related work
In this section we explore possible similarities to, and differences from, other applications of UrbanSim in regions outside of the USA, where it was originally developed and implemented in several cities. A literature search found most applications in Europe (Schirmer, Zöllig, Müller, Bodenmann, & Axhausen, 2011; Patterson, Kryvobokov, Marchal & Bierlaire, 2010; Kryvobokov, Mercier, Bonnafous & Bouf, 2015; Di Zio, Montanari & Staniscia, 2010; Picard, de Palma & Kiarash, 2015) and Asia (Felsenstein, Lichter & Ashbel, 2014; Jin & Lee, 2018; Joo, Mehedy Hassan & Jun, 2011; Shi, Tong, Zhang & Tao, 2013). According to the classification provided by Developing Countries Population (2019), all these applications, except the one from China, were done in the developed world. The comparisons to these application projects are made under the headings of Country context and Data preparation, which are further unpacked for UrbanSim-SA in Section 5.
Country context
Based on a substantive body of work following an informal European UrbanSim Users group meeting at ETH, Zurich in 2008, Felsenstein, Axhausen, and Waddell (2010) identified the following five differences in the prevailing land-use transportation environment between the USA and Europe (listed verbatim with comments regarding the situation in South Africa appended in italics). First, a very different land-use environment exists in Europe. This makes for greater government regulatory controls over urban development at all levels. Because of its political history, SA sees itself as a "Developmental State" with the need for statutory planning requirements such as the Built Environment Performance Plans, briefly described in Section 2. The government also acts as a property developer in the provision of free/subsidized housing to the poor (about 60% of all households (Rust, 2012)).
Second, a very different attitude exists in European cities towards car ownership and dependence on public transport. In this respect, SA is probably closer to the USA, with 92% of white households owning one or more cars. Amongst black households the figure is much lower, at about 20%, mostly due to affordability rather than a lack of aspiration. Public transport has received a lot of attention in the last decade but, apart from Gautrain (fast intercity rail) and some new Bus Rapid Transit routes, public transport is viewed as something for those without other options.
Third, Europe has a very distinctive ethos with respect to housing tenure and property rights. This results in levels of homeownership, composition of housing stocks and "acceptable" patterns of residential density very different from those prevailing in the United States. In South Africa most households aspire to own their homes, but ownership statistics vary from roughly 30% at age 30 to 70% at age 65 (Africa.com, 2018). Development densities have increased over time, but sentiment is very much against multi-story housing. While some of the policy alternatives presented in Section 7 have set targets for transport corridor densities as high as 400 housing units per hectare (hu/ha), corresponding actual densities at present are closer to 25 hu/ha. With average (city-wide) densities as low as 10 hu/ha, South African cities are even less dense than cities in the USA. The composition of housing stock also differs, because about 18% of housing units in the metropolitan areas are informal (Stats SA, 2019b). Informal dwellings (sometimes referred to as "shacks") are mostly temporary, built from rudimentary materials and never compliant with building regulations. These can occur as one or more "backyard shacks" on a formal property (by arrangement with the owner) or as densely populated settlements, mostly on land that the occupants have no legal claim to (referred to as informal settlements).
Fourth, the major United States urban land-use issue, that of residential sprawl, features much less prominently in Europe where commercial (or employment driven) sprawl is high on the urban agenda. In this respect, South Africa is more like the USA with residential sprawl high on the agenda, especially from the point of view of attempts to increase corridor densities to achieve the required ridership levels to contain public transport subsidies.
Data preparation
Given that UrbanSim is a microscopic model that works with the attributes of every person, household, job and building in a city, it is well known that the process of preparing the input data for the base year of a simulation can be very time consuming. One publication from the list of applications in Europe and Asia (Schirmer, Zöllig, Müller, Bodenmann, & Axhausen, 2011) describes this process in great detail, pointing to difficulties associated with collecting data from different sources covering different periods of time, linking attributes to a common spatial unit for simulation, cleaning (for example by deleting duplicates) and imputing missing values. Surprisingly, this project (the Zürich case study of SustainCity) was the only one based on the parcel geometry of UrbanSim, with all the rest based on grid cell and zone geometries. One of the disadvantages of zone/grid cell geometries is that they are oversimplified representations of the real world, mostly containing multiple cadastral parcels and buildings, the attributes of which must then be aggregated or somehow accounted for. A common approach is to create up to four synthetic buildings (residential, government, industrial and commercial) per zone/grid cell, for example (Felsenstein et al., 2014) and (Patterson et al., 2010). The households obtained from a population synthesizer or small area statistics are then allocated to a zone/grid cell and from there to a single residential building. Even though this does not in itself aggregate or average household attributes like age of head, income or number of persons, the mere existence of households of any income group in the same building type is a form of behavioral aggregation, as the sketch below illustrates. This could not possibly work in South Africa, which has one of the highest Gini coefficients in the world. For UrbanSim to succeed in South Africa, we adopted the parcel geometry from the outset and defined 5 income categories for sub-model segmentation of the Household Location Choice Model (see 4.1.3). We eventually used 10 residential building types (derived from the settlement typology shown in Figure 2). The behavioral aggregation could, of course, be mitigated by creating more synthetic buildings (of different types), but it is not clear why one would introduce this additional complexity when parcel data is one of the most readily available datasets of all.
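A minimal sketch (hypothetical data, not UrbanSim code) of the aggregation problem: under a zone geometry, households of very different incomes collapse into one synthetic residential building, whereas the parcel geometry preserves each household's own building.

```python
# Zone vs. parcel geometry: what happens to within-zone income variation.
households = [
    {"id": 1, "income": 2_000,  "parcel": "A"},   # e.g. informal dwelling
    {"id": 2, "income": 90_000, "parcel": "B"},   # formal house in the same zone
]

# Zone geometry: both households are assigned to one synthetic building.
zone_building = {"zone": "Z1", "type": "residential",
                 "households": [h["id"] for h in households]}

# Parcel geometry: each household keeps its own building, so location-choice
# models can respond to income differences between dwellings directly.
parcel_buildings = {h["parcel"]: {"type": "residential", "household": h["id"]}
                    for h in households}

print(zone_building)     # income extremes share one synthetic building
print(parcel_buildings)  # income extremes remain distinguishable
```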
The literature review of related work has shown that while there are many similarities between the data preparation for applications in Europe/Asia and South Africa, there are two significant differences. Firstly, in South Africa we found it necessary to avoid behavioral aggregation by using the parcel geometry, whereas all the applications in Europe/Asia, except the Zürich case study, used zone or grid cell geometries. Secondly, all the applications in Europe/Asia, except for three (Kryvobokov et al., 2015), (Patterson et al., 2010) and (Shi et al., 2013), had ready access to buildings data from some agency of government. In South Africa this was not the case and we had to acquire data from a private-sector company, which derived it from satellite imagery (Geoterra Image, 2013). Even if buildings data were available from an official source, it would not include informal buildings, which by definition bypass all municipal development approval processes. These buildings (approximately 15% of all buildings) would still have to be obtained from private-sector sources.
UrbanSim-SA
The development of an Urban Simulation Model for SA started with a review of the state of the art in 2007. At the time, the objective was to develop a model that would be capable of representing cities as complex, adaptive and self-organizing systems, as described by Batty (2005). The study concluded that approaches based on understanding the behavior of agents, which closely resemble actors in everyday life, and relying on the behavior of the overall system to emerge from the interactions between agents, had the best chance of achieving the objective. UrbanSim was thus selected as the core of the Urban Simulation Model. A proof of concept phase then followed between 2009 and 2012, in which the Department of Science and Technology funded the development and application of the model in the cities of eThekwini, Nelson Mandela Bay and Johannesburg. This experience provided an affirmative answer to the biggest question at the time: whether it would be possible to collect the attributes of the 5 main tables required by UrbanSim (parcels, buildings, households, persons and jobs) in a sparse data environment, with no building data available from any of the cities. This, together with some behavioral validation, was enough to secure further funding to address outstanding issues such as the place of work (see Section 5.6) and to complete UrbanSim-SA.
For readers interested in the progression of ILUT models since 1960, a paper by Moeckel (2017) provides an excellent overview. Although his literature review was done from the perspective of determining how ILUT models deal with constraints that may influence household location choices (such as the price of a dwelling, travel time to work or monetary commute cost), it provides a comprehensive list of references, including those considered by the CSIR in 2007. Just out of interest, it seems that apart from SILO (Ziemke, Nagel, & Moeckel, 2016), no completely new models have been developed since 2007, when the CSIR selected UrbanSim.
UrbanSim
A very brief description of UrbanSim is provided in this section to define the context against which it was adapted as described in the next section. For more comprehensive descriptions, please refer to Waddell (2002, 2011). UrbanSim is a microscopic land-use model in which the attributes of every person, job, household and building in the city are represented by a separate record in the database. The persons, jobs, and households for the base year of simulation (which must be a census year) are generated by a population synthesizer, as described in Section 5.1. During the simulation, the base year population and jobs are adjusted for each subsequent simulation year by the Household and Employment Transition Models, respectively. These models grow the population and jobs each year according to city-wide control totals. Other options are available for simulating various life-stage events. The spatial resolution can also be microscopic, at individual property boundaries, but zone-based geographies can also be used.
UrbanSim recognizes the property market as an important construct of the urban system and accounts for the current stock of real estate (and its spatial distribution) as well as the supply and demand of new stock. Households and businesses represent the consumers of residential and non-residential stock, respectively. The choice of location by these agents is accounted for by the occupation of space in a building. Developer agents, representing the supply side, use occupancy rates as a feedback signal to determine the rate at which new stock can be built and sold. Supply and demand are mediated by price, another feedback signal that influences the return on investment calculation done by developers when choosing between potential developments.
Governments set policies that either regulate the use of land or influence development through pricing policies such as developer contribution fees. Governments also build infrastructure and provide social facilities such as libraries, clinics, community halls, parks, sports fields, and fire stations. Transportation infrastructure and the quality of social facilities influence the attractiveness of locations for different consumers. In UrbanSim-SA, we treat government housing projects as development events, each representing a project built on a specific parcel of land. The location and number of housing units per project are the product of a comprehensive planning process (not market forces) but the total number of units delivered per year is invariably limited by budget constraints.
Households have characteristics that influence their preferences for housing of different types at different locations. Similarly, businesses have preferences, varying by industry/economic sector and number of employees, for alternative building types and locations. Two models are responsible for modelling these choices, the Household Location Choice Model and the Employment Location Choice Model. Both are discrete choice models based on the pioneering, Nobel Prize-winning (2000) work of McFadden (1974) on Random Utility Maximization Theory. This approach determines the probability of a choice from a set of available alternatives based on the characteristics of the chooser and the attributes of the alternatives, with the probability depending on the relative utility that each alternative offers the chooser.
The systematic component of a utility is expressed as a linear combination of estimable coefficients multiplied by independent alternative-specific variables that may be interacted with the characteristics of the agent making the choice. The coefficients are estimated using maximum likelihood methods built into UrbanSim. For choice models, the estimation can be based on the stated or observed behavior of agents. All our work so far has been based on observed behavior derived from sources such as the census and household travel surveys.
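To make the random-utility calculation concrete, the following minimal Python sketch computes multinomial logit choice probabilities for a single chooser. The coefficients and variable values are hypothetical illustrations, not the estimated UrbanSim-SA coefficients reported later in this paper.

import numpy as np

# Hypothetical coefficients for three alternative-specific variables:
# ln(price), travel_cost_to_work, ln(jobs within 20 km)
beta = np.array([-0.8, -0.05, 0.6])

# One row of variable values per alternative (three candidate buildings)
X = np.array([[12.1, 35.0, 9.2],
              [11.4, 80.0, 8.1],
              [13.0, 20.0, 9.8]])

V = X @ beta                 # systematic utility: linear combination
P = np.exp(V - V.max())      # subtract max(V) for numerical stability
P /= P.sum()                 # P_i = exp(V_i) / sum_j exp(V_j)
print(P)                     # choice probability of each alternative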
Examples of alternative-specific variables often used in utility calculations include the distance, travel time and generalized cost of commuting. The term generalized cost refers to the sum of the monetary cost and the value of time spent undertaking the trip, which varies according to the traveler's income and the purpose of the trip.
The process of selecting explanatory variables usually starts by including some variables that are consistent with urban economic theory. Guided by various indicators which estimate the relative importance and statistical significance of each variable, variables are added or deleted to maximize the overall explanatory power of the model. Some judgement may be required to avoid situations like the following: If distance to employment is used as a variable in the Household Location Choice Model, then estimating the sub-models for low-income groups in a South African city will probably show that the variable is significant and that these households prefer to live far from work. Since this is more likely a remnant of apartheid than an actual observed preference, the simulated future city might be no different from the present. In such cases it may be necessary to omit the variable, define a more appropriate variable or estimate the model on a subset of data that will not perpetuate the situation.
Adapting UrbanSim
For this paper, adaptation is defined as additions to, or modifications of, the source code or configuration files that were necessary for UrbanSim to succeed in South Africa.
Data required for base year
By far the bulk of the effort was consumed by the development of 6 completely new models needed to generate microscopic building data and to place households and jobs into specific buildings for the base year of the simulation. The most important of these are discussed in Section 5.
Affordability constraints
In a review of ILUT models, Moeckel (2017) found that most models do not explicitly account for affordability constraints in the location choices made by households. While this also applies to UrbanSim, users around the world have partially solved the problem by introducing an interaction variable based on the product of household income and the unit price of dwellings. This raises the probability of lower-income households choosing cheaper dwellings (and vice versa) but is not quite the same as a constraint. From the outset, the Household Location Choice Model was set up to have 5 sub-models, one for each household income category. The advantage of such a configuration is that it allows completely different explanatory variables to be used for each income category, something that was expected to be important in South Africa, which has one of the highest Gini coefficients in the world (World Bank, 2011).
Despite these configuration changes, early results indicated that there were far too many inappropriate choices, with rich households living in shacks and poor households living in mansions. The problem was then solved by coding a filter, which ensured that the 30 alternatives from which a household would make a choice were all sampled from buildings that were affordable to households belonging to the specific income category.
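A minimal sketch of such a filter follows, assuming hypothetical pandas tables and a simple price-to-income ceiling in place of the per-income-category rule actually coded into UrbanSim-SA.

import pandas as pd

def sample_affordable_alternatives(household, buildings, n_alts=30, ratio=3.5):
    # Keep only buildings under the household's (hypothetical) price ceiling.
    affordable = buildings[buildings["market_value"]
                           <= ratio * household["annual_income"]]
    # The location choice model then evaluates at most n_alts sampled alternatives.
    return affordable.sample(min(n_alts, len(affordable)), random_state=0)

buildings = pd.DataFrame({"building_id": range(5),
                          "market_value": [200e3, 5e6, 450e3, 900e3, 12e6]})
household = {"annual_income": 180e3}
print(sample_affordable_alternatives(household, buildings))

Because the filter operates on the sampled choice set rather than on the utility function, it acts as a hard constraint instead of merely lowering the probability of unaffordable choices.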
Proximity to employment
Most UrbanSim models around the world are configured to use various measures of proximity to employment as explanatory variables in their Household Location Choice Models. The measures could be based on travel distance or the time/cost of travel by various modes of transport. The explanatory variables are usually expressed as a weighted average or logsum of these measures from the location of a dwelling being considered by the household to all jobs. By this definition, any dwelling close to an area with a high job density will present a relatively high utility. If the household chooses one of these dwellings, which could be far from the current place of work, the plan is probably to then look for a job closer to home. In South Africa, where employment is scarce (official unemployment is close to 30% (Stats SA, 2019c)), fewer households would relocate before first securing a new job. A more realistic representation of this behavior in the model therefore requires the measure of proximity to be based on the travel time/cost from the dwelling being considered to the existing job, rather than the travel time/cost from the dwelling being considered to all jobs.
The implementation of this change was complicated by the fact that there could be more than one employed person per household. Although not ideal, a first step was taken by calculating a travel_cost_to_work variable (by mode of transport) for the head of the household.
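A sketch of how such an interaction variable could be assembled, using illustrative table and column names rather than the actual UrbanSim-SA schema:

import pandas as pd

def travel_cost_to_work(households, jobs, skim):
    # Link each household to the job held by its head.
    hh = households.merge(jobs[["person_id", "work_zone"]],
                          left_on="head_person_id", right_on="person_id",
                          how="left")
    # Look up the zone-to-zone monetary cost (e.g., an OTP skim) for one mode.
    costs = skim.set_index(["from_zone", "to_zone"])["cost"]
    idx = pd.MultiIndex.from_arrays([hh["home_zone"], hh["work_zone"]],
                                    names=["from_zone", "to_zone"])
    return costs.reindex(idx).to_numpy()  # NaN where the head holds no job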
Demographic transition
Once the travel_cost_to_work variable had been implemented, a problem with the demographic transition models became apparent. The Household Transition Model was found to maintain the relationship between new households and new persons created every year, but the Employment Transition Model created new jobs with the correct employment sector but with a person_id of zero (meaning that the job was not yet linked to a person).
Since none of the new jobs (which eventually constitute most jobs) could be linked to a household via the person_id, it was not possible to calculate the travel_cost_to_work variable for many households. The solution was to code a new model, which randomly linked the new jobs to new persons with an employment status of employed, or unemployed but seeking employment. All unassigned jobs were then deleted because the Employment Transition Model would maintain the correct number of jobs per employment sector in the next simulation year. None of this affected the work location of the new jobs.
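A minimal sketch of this random linking step, with hypothetical column names and status codes:

import numpy as np
import pandas as pd  # inputs below are pandas DataFrames

def link_new_jobs_to_persons(new_jobs, persons, seed=0):
    # Candidates: employed persons or those unemployed but seeking work.
    candidates = persons.loc[persons["employment_status"].isin(
        ["employed", "seeking"]), "person_id"].to_numpy()
    rng = np.random.default_rng(seed)
    n = min(len(new_jobs), len(candidates))
    linked = new_jobs.iloc[:n].copy()
    linked["person_id"] = rng.choice(candidates, size=n, replace=False)
    # Jobs that could not be linked are deleted; the Employment Transition
    # Model restores the correct sector totals in the next simulation year.
    return linked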
Excess capacity
A last feature that required modification is that when development project proposals (for the construction of new buildings) are evaluated, large projects tend to yield the highest return on investment and often get selected, leading to land parcels being completely built up in one year. This is not a problem in proclaimed townships where farms/smallholdings have already been subdivided. Several cities, however, contained massive farm parcels on the periphery. If these parcels were built up, it had a substantial impact on future development because the high vacancy rate for that type of development inhibited similar developments from taking place elsewhere in the city (possibly for many years). The fact that the parcel was fully built up also prevented buildings of a different type from being constructed on the parcel in future years. An elegant solution to this problem would have been to subdivide the large parcels into something that resembles proclaimed townships, based for example on procedural city modelling (Lechner, Watson, Tisue, Wilensky, & Felsen, 2004) or example-based texture synthesis (Vanegas et al., 2009). While this would be desirable for 3D visualization of growth, another solution was adopted for computational benefits, which simply involved demolishing the excess capacity at the end of every simulation year and releasing the associated land for future development.
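The adopted end-of-year cleanup can be expressed in a few lines; the column names below are illustrative, not the actual UrbanSim table schema.

import pandas as pd

def demolish_excess_capacity(buildings: pd.DataFrame) -> pd.DataFrame:
    b = buildings.copy()
    # Shrink each building's capacity to what is actually occupied at year end...
    b["residential_units"] = b["occupied_units"]
    # ...and remove now-empty buildings outright, releasing the parcel's
    # remaining land for development proposals in later simulation years.
    return b[b["residential_units"] > 0]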
Transport model
The transport model used with UrbanSim-SA is described in detail in Waldeck & van Heerden (2017). For this paper we only provide a short summary of how an adapted version of OpenTripPlanner (OTP) can be used to calculate the lowest monetary cost of commuting between pairs of origin-destination zones for any of the available modes of transport during the morning peak. Readers interested in the rationale behind using monetary cost and distance variables as proxies for generalized cost variables (which depend on the value of time and congested state of the network) are referred to section 4.3 of Waldeck & van Heerden (2017).
Various data sources are required to run OTP, including a road network, origin-destination zones, public transport data, and the cost of private vehicle usage. A short description of each follows.
Road network
The road network was generated from OpenStreetMap (OSM), using the Geofabrik GmbH (2018) servers to obtain the file. In the case of simulating scenarios that involved future changes to the road network, typically driven by the Provincial Department of Transport, the OSM network was edited to reflect the planned changes; new roads were drawn in using JOSM. A new network file was provided as an input to OTP and the output, in the form of lowest cost commute options by mode, provided to UrbanSim for use in simulation years following the network change.
Origin-destination zones
To simplify matters, the origin-destination zones were identical to the analysis zones introduced later in Section 6. The lowest-cost trip for any pair of origin-destination zones (each with an area of about 3 km²) was based on the network distance between the centroids.
Public transport network
The public transport network was created from public transport itinerary data, which were obtained from the respective municipal transport departments. The raw data, in various formats ranging from Excel sheets to PDF files, were converted into the General Transit Feed Specification (GTFS) format and subsequently used in OTP. OTP combines the public transport network with the road network and builds a directed graph to be used in subsequent routings. Due to data limitations and the complexity of the minibus taxi industry, minibus taxis were included as a means of public transport with stops distributed at 2 km intervals along their routes, since these vehicles tend to stop anywhere along a route to pick up or drop off passengers. Taxis are licensed to operate on one or more routes, which were available from all 3 cities. The routes were not changed during the simulation period because route planning is known to occur ad hoc. The schedules, which include departure times and frequencies, were also obtained from the cities.
Private vehicle usage
Private vehicle usage (also referred to as the drive-alone option) was included by means of a fixed, per-kilometer cost, based on rates published by the Automobile Association. The calculations are described in Waldeck & van Heerden (2017).
Determining lowest-cost commute trips
Given the input data, the final step was to determine the lowest-cost commute trips between pairs of zones. This was achieved by adapting the Batch Processor in OTP to calculate the lowest monetary cost between origin-destination pairs, rather than the shortest travel time typically used in developed countries. A walking radius of 2 km was used for people to access public transport, and walking and cycling were modelled as not incurring a monetary cost.
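The combination step can be sketched as follows, assuming one zone-to-zone monetary cost matrix per mode; the matrices and mode names are illustrative, not OTP output formats.

import numpy as np

def lowest_cost_skim(mode_costs, walk_ok):
    # Take the cheapest motorized mode for every origin-destination pair...
    best = np.stack(list(mode_costs.values())).min(axis=0)
    # ...then zero out pairs reachable on foot or by bicycle (no monetary
    # cost), e.g., centroids within the 2 km walking radius.
    best[walk_ok] = 0.0
    return best

# Toy example with 3 zones
rng = np.random.default_rng(0)
costs = {"transit": rng.uniform(5, 30, (3, 3)),
         "drive_alone": rng.uniform(10, 60, (3, 3))}
print(lowest_cost_skim(costs, walk_ok=np.eye(3, dtype=bool)))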
Update frequency
In most cases OTP was run in the base year (2011), and then re-run for a later year only if there were substantial changes to the road or public transport networks that needed to be taken into account in future simulation years. In such cases, OTP was run again and the output provided to UrbanSim in the form of the usual travel_data table but in a separate folder from the base year so that parameterized variables would read from the correct folder for the year being simulated. Note that even in the simplest case of using the same travel_data for all years, the proximity to employment measures discussed in Section 4.1.4 are different for each year because the variables are updated according to the constantly changing location of households and jobs.
Data preparation
One of the challenges that comes with using the parcel version of UrbanSim is that it requires microscopic data about every person, household, job, building and land parcel in the city for the base year of a simulation. Much of the effort in preparing the base year data goes into establishing the relationships between these entities. For example, households, persons and jobs are obtained from the most recent census. A household contains one or more persons, some of whom may be employed (have a job). These relationships are easy to establish because households, persons and jobs share the serial number of the same enumeration form. But which households live in which (mostly residential) buildings, and which jobs take place in which (mostly office, retail or industrial) buildings?
The following sections describe the most important attributes of these entities and how the relationships were established in a sparse data environment characteristic of developing countries.
For readers who may be embarking on a journey with UrbanSim, several publications point to the fact that the data preparation phase is iterative. It is not uncommon to discover incorrectly classified parcels during the validation phase, taking one back almost to the start of the workflow. For this reason, it pays to automate as much of the process as possible, see for example (Schirmer et al., 2011). In our case we achieved similar objectives by extensive use of SQL and ArcGIS ModelBuilder scripts.
Synthetic population
The household and job agents used by UrbanSim were derived from a 10% sample of enumeration forms from the last census (2011) by a technique known as Iterative Proportional Updating (Ye, Konduri, Pendyala, Sana, & Waddell, 2009). This explains why the base year must always be a census year. The Population Synthesizer supplied with UrbanSim required minor modification because the enumeration form used by Stats SA differs substantially (in format and content) from the form used by the US Census Bureau. The modifications were limited to renaming data fields and the order/format in which they were read in.
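Iterative Proportional Updating extends classic iterative proportional fitting (IPF) so that household- and person-level control totals are matched simultaneously; the sketch below shows only the simpler IPF core on a two-way table, as a hint of the mechanics.

import numpy as np

def ipf(seed, row_targets, col_targets, iters=100, tol=1e-8):
    # Alternately rescale rows and columns of the seed cross-tabulation
    # (e.g., sampled households by income x household size) until its
    # margins match the small-area control totals.
    m = seed.astype(float).copy()
    for _ in range(iters):
        m *= (row_targets / m.sum(axis=1))[:, None]
        m *= (col_targets / m.sum(axis=0))[None, :]
        if np.allclose(m.sum(axis=1), row_targets, atol=tol):
            break
    return m

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ipf(seed, np.array([10.0, 20.0]), np.array([12.0, 18.0])))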
In terms of spatial resolution, Stats SA's sub place geography was found to provide the closest match to the average population of the BLOCK GROUP geography of the US Census Bureau, which was in turn used for the development and testing of the Iterative Updating Algorithm.
During the simulation, the population and jobs are adjusted by the Household and Employment Transition Models, which grow the households and jobs each year according to a set of exogenous control totals. The city-wide control totals for every simulation year were provided by IHS Markit (formerly known as IHS Global Insight) from their Component Cohort Model. Household and population totals were classified by race, income, and age groups. Employment totals were provided by one-digit Standard Industry Classification.
Parcel attributes
To use UrbanSim at the resolution of property boundaries, one needs to categorize the following attributes for each land parcel: land-use type, development template, improvement value and land value. Several other attributes are required to restrict development where parcels are in wetlands, on steep slopes, etc. These are simple to apply, except where restrictions such as flood lines cut through a parcel so that half of the parcel can be developed and the other half not. To avoid this, the data preparation workflow involves, at an early stage, an overlay of the parcel layer with restriction layers such as flood lines, hydrology and geology. This allows any part of a parcel within a flood line or pond/wetland to be labelled as "non-developable" by a script in the GIS. Municipal valuation rolls in South Africa provide a market-related value for parcels including all buildings on the parcel, without distinguishing between the value of the land and the value of the buildings (the so-called improvement value). This was resolved by introducing the notion of a "development cost," which includes the improvement value and the value of the land associated with each building on the parcel, as an attribute of buildings rather than parcels.
A critical step in enumerating the remaining attributes was the selection of a settlement typology based on a cluster analysis done by the Knowledge Factory (2006) on factors including socio-economic rank (income, property value, education and population group), life stage (age, household and family structure) and dwelling type (size, type and age of structure). The analysis, originally aimed at geodemographic segmentation for micro-marketing purposes, identified 10 clusters comprising 38 classes, represented in Figure 1 on axes of income and development density. From the outset, the 38 classes of the settlement typology were used as development templates. The constructs of development templates and corresponding development template components allow UrbanSim to configure virtually any development proposal for construction in the future. The template structure is robust enough to cater for anything from a single house on an infill lot to a mixed-use project with retail on the ground floor and apartments above. Backyard dwellings, for example, were represented by one formal building and one or more informal buildings on a parcel with a formal development template. These were limited to land-use types on which backyard dwellings have been observed in the past.
While the development templates worked well, it proved exceptionally difficult to find suitable predictors of the market value of a property until the clusters were introduced as a land-use typology. This was inspired by the observation that the cluster analysis done by the Knowledge Factory was based on factors that included household income, property value and dwelling type, so for residential properties at least one would expect to find a correlation between the newly defined land-use type and property value, which turned out to be true.
A potential disadvantage of using the clusters as a land-use typology was that it deviated from what municipal officials are used to. Fortunately, UrbanSim allows for the definition of several typologies in addition to development templates and land-use types. The plan type is one such typology that is used to define land use in the way that planners are accustomed to. In interactions with the metros, however, most officials seemed indifferent to the definition of land use, while one even welcomed it as a new way of thinking about their own data and processes.
Building attributes
The number of jobs that can be accommodated in a building depends on the floor space available in the building and the floor area required per job per sector of the economy. The total floor area is related to the total parcel area through the so-called "floor area ratio." The number of households that can be accommodated on a parcel, which determines the density typically expressed as housing units per hectare, also depends on the floor area per residential unit and the floor area ratio. The market value per residential unit may also depend on the floor area and parcel area. Unlike in the developed world, where much of this information is available from municipal building records, we had to obtain the type of building from one of 70 classes of the "GTI Building based land-use type" dataset (Geoterra Image, 2001, 2011) and the rest from observed average densities per building/land use/development template type. This involves a fair amount of analysis, for example, to exclude outliers caused by building projects in progress from the calculations.
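The capacity arithmetic reduces to a few lines; the densities used here are placeholders for the observed averages mentioned above.

def building_capacities(parcel_area_m2, far, m2_per_job=20.0, m2_per_unit=60.0):
    # Total floor area follows from the parcel area and the floor area ratio.
    floor_area = parcel_area_m2 * far
    return {"floor_area_m2": floor_area,
            "job_spaces": int(floor_area // m2_per_job),
            "residential_units": int(floor_area // m2_per_unit)}

print(building_capacities(parcel_area_m2=1000, far=0.5))
# {'floor_area_m2': 500.0, 'job_spaces': 25, 'residential_units': 8}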
Business entities
Information about business entities in South Africa is severely lacking and the best that can be done is to use jobs as a proxy for the behavior of businesses. Pseudo-buildings are created with enough floor space to accommodate the estimated number of jobs per economic sector, but there is no way, for example, to distinguish businesses trading in clothing from businesses trading in fast food, which prevents the modelling of agglomeration behaviors.
Relationships
The most challenging relationships to establish are those between households, jobs, and buildings. In the case of households, the synthesizer specifies the sub place where every household resides. The households within each sub place, about 1600 on average for the three major metropolitan municipalities in Gauteng according to the 2011 Census (Stats SA, 2019a), are allocated to specific buildings in that sub place on the basis of affordability considerations derived from the income of the household and the municipal valuation of the pseudo-property.
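One simple way to implement such an allocation is rank-matching within each sub place, as sketched below; this is a deliberately simplified stand-in for the actual rule (one household per building, illustrative column names).

import pandas as pd

def place_households_in_sub_place(households, buildings):
    # Sort households by income and residential units by municipal valuation,
    # then pair them off so richer households land in pricier buildings.
    hh = households.sort_values("income").reset_index(drop=True)
    bld = buildings.sort_values("valuation").reset_index(drop=True)
    n = min(len(hh), len(bld))
    hh.loc[:n - 1, "building_id"] = bld.loc[:n - 1, "building_id"].to_numpy()
    return hh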
In the case of jobs, the relationship between jobs and households is easily obtained by linking jobs to all the employed persons in a household. Since households are located from census information, this linkage provides the place of residence of the person holding the job, but the census (since 2011) provides no information whatsoever on the place of work of the person.
Place of work
The place of work issue was resolved by a deceptively simple method: building valuations are used as a proxy for the number of jobs that a building can accommodate, and jobs are allocated to buildings in such a way that the home-to-work travel distance distribution for the city as a whole matches the last municipal travel survey. Valuation rolls are generally of high quality because they are required by law as the basis for taxation and are open to dispute by any resident. The job capacity of a building is assumed to be proportional to its valuation, scaled by a building-type correction factor that adjusts the number of jobs per million of valuation for different types of buildings. This factor compensates for differences in the contribution of land and buildings to the overall valuation, different methods of construction, and different job densities resulting from different activities in the building. Considering the method of construction, a typical industrial building with a steel frame is much faster and cheaper to build than a multi-story concrete building with the luxury finishes characteristic of shopping malls. As an example of how business activities affect job density, consider a warehouse, which should have a comparatively low job density because most of the space in the building is used for the storage of goods.
The building-type correction factors were determined empirically through 3 to 4 iterations of the following process:
1. Multiply a copy of the municipal building valuations by the building-type correction factors for the following types of buildings: retail, commercial, industrial, institutional (mostly government, correctional services, defense, emergency services and police) and other (utilities, community services, health care, education, recreation). The correction factors are set to unity for the first iteration.
2. Disaggregate the total number of jobs in the city (from exogenous employment forecasts provided by IHS Markit, 2012) to individual buildings in proportion to the building valuation divided by the total valuation of all non-residential buildings in the city.
3. Compare the total number of jobs by building type to the employment forecasts by job sector, adjust the correction factors and iterate.
The last step requires the highest-level building-type classification to match the employment sectors, achieved by combining sectors such as "agriculture" and "electricity, gas and water supply" into "other." In a few cases it was necessary to introduce correction factors at sub-classifications of building type. Regional malls, for example, were found to have unrealistically high job densities, probably due to the cost of prime land, the method of construction (multi-story concrete) and luxury finishes. Another example is sports stadia, which are extremely expensive to build yet used infrequently, with very low job density. Sports facilities at universities and schools were also found to contribute significantly to the valuation but detract from job density because they employ relatively few people for maintenance purposes.
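A compact sketch of this calibration loop, with illustrative names ('buildings' is assumed to carry 'valuation' and 'building_type' columns):

import pandas as pd

def calibrate_correction_factors(buildings, target_jobs_by_type,
                                 total_jobs, iters=4):
    factors = pd.Series(1.0, index=target_jobs_by_type.index)  # step 1: unity
    for _ in range(iters):
        weight = buildings["valuation"] * \
            factors.loc[buildings["building_type"]].to_numpy()
        jobs = total_jobs * weight / weight.sum()          # step 2: disaggregate
        by_type = jobs.groupby(buildings["building_type"]).sum()
        factors = factors * target_jobs_by_type / by_type  # step 3: adjust
    return factors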
Once the building-type correction factors have been determined, the relationship between the place of work and the place of residence of the person holding a job is determined by first placing home-based jobs (about 12% for the City of Tshwane, according to the 2011 Census (Stats SA, 2019a)) in the same building as the place of residence. The remaining jobs are then placed so that the overall commute-distance distribution (including home-based jobs) has a median as close as possible to the median distance of work-related travel from the last household travel survey commissioned by the municipality (about 14 km in the City of Tshwane).
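One hypothetical way to reproduce this distribution-matching step is to sample workplaces with a distance-decay weight and tune the decay parameter until the simulated median matches the survey median, as sketched below; all inputs and the decay rule are illustrative, not the implemented algorithm.

import numpy as np

def simulated_median(lam, home_zone, bld_zone, dist, rng):
    # Distances per worker x candidate building, sampled with weight exp(-lam*d).
    d = dist[home_zone][:, bld_zone]
    w = np.exp(-lam * d)
    w /= w.sum(axis=1, keepdims=True)
    picks = np.array([rng.choice(len(bld_zone), p=p) for p in w])
    return np.median(d[np.arange(len(picks)), picks])

def calibrate_decay(home_zone, bld_zone, dist, target_km=14.0,
                    lo=0.0, hi=1.0, iters=20, seed=0):
    # Bisect lam: a larger lam pulls workers closer to home, so move lam up
    # while the simulated median still exceeds the survey median.
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if simulated_median(mid, home_zone, bld_zone, dist, rng) > target_km:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2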
Model validation
Empirical validation was done by simulating a period in the past (2001 to 2011) and comparing the growth forecasts with the actual growth that occurred during the same period according to two reputable sources, Stats SA and Geoterra Image (GTI). The analysis zones used for the comparison were derived from the 2001 sub places published by Stats SA by retaining or merging smaller sub places in built-up areas and subdividing larger sub places into areas of about 3 km². This was necessary to avoid complications that arose during the proof of concept phase from the Modifiable Areal Unit Problem when growth comparisons were based on areas that differ significantly in size. The solution required the household counts (2001 and 2011) to be adjusted by dasymetric mapping, using the dwelling counts of GTI as a proxy for how the population is distributed within each sub place. A similar correction would have been required anyhow because Stats SA changed the sub place boundaries between the 2001 and 2011 censuses. The resulting analysis zones (varying in number between about 800 and 1000 for the three case studies) are shown in Figure 3. The results of the empirical validation are shown in Figure 4 as a comparison of the growth in households forecast by the Urban Simulation Model to the actual growth in households according to GTI between the 2001 and 2011 censuses, as well as a comparison of the actual growth according to Stats SA and GTI (right). Upon investigation of the top outliers produced by UrbanSim, very few were found to lack logical explanations. Even with the outliers, Figure 4 suggests that the predictive accuracy of the model is comparable to the agreement between the actual growth figures measured by the different methods employed by two reputable organizations. While it is conceded that there is a 1-year difference between the observations of Stats SA and GTI (2013 release based on 2010 remote sensing), we regard the results as a satisfactory overall validation not only of the Urban Simulation Model but of the preceding data preparation processes.
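The dasymetric adjustment mentioned above is a simple proportional redistribution, sketched here with illustrative column names ('zones' has one row per sub place/analysis zone fragment, carrying GTI dwelling counts):

import pandas as pd

def dasymetric_households(sub_place_totals, zones):
    # Attach each sub place's census household total to its zone fragments.
    z = zones.merge(sub_place_totals, on="sub_place")
    # Each fragment's share of the sub place is its share of GTI dwellings.
    share = z["dwellings"] / z.groupby("sub_place")["dwellings"].transform("sum")
    z["households_est"] = z["households"] * share
    return z.groupby("analysis_zone")["households_est"].sum()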
Results
A sample of results obtained from simulations done for the Cities of Tshwane, Ekurhuleni, and Nelson Mandela Bay between 2013 and 2017 is presented next. In each case a brief description is provided as context, followed by a list of the policy scenarios that were simulated (in addition to the Trend Scenario defined as what most stakeholders in the participation process would regard as a given), followed by sample results for one of the scenarios and a short discussion.
Context
For the City of Tshwane, the objective of simulating various policy scenarios was to inform the development of their Capital Investment Framework (CIF) with specific reference to quantifying the future demand for services rendered by government through social facilities as a means of determining the capital budget requirements of such facilities.
Scenarios
In this instance, potential disagreements about what constitutes each of the scenarios were avoided by not attempting to specify all the scenarios at the outset, but by first focusing on the Trend Scenario. This scenario involves the fewest assumptions because it simply represents what most stakeholders in the participation process regard as a given, for example that the principles outlined in the Regional Spatial Development Framework (RSDF) will be adhered to and that all phases of the Integrated Rapid Public Transport Network (IRPTN) will be implemented. The other scenarios were developed later from the interactions that followed the presentation of results obtained from simulation. In this way it was not too difficult to specify and simulate the following scenarios:
1. Trend scenario with the most likely city-wide demographic and employment growth projections provided by IHS Markit in 2012 (since used extensively by various departments).
2. Pessimistic socio-economic growth scenario with higher population and lower employment growth projections provided by IHS Markit in 2016 (taking a dimmer view of regional migration and of the South African economy).
3. Priority areas scenario advocated by engineering groups based on the elevated cost of providing wastewater infrastructure in some catchments.
Model estimation
In this section we provide the results of estimating 3 of the 5 sub-models of the Household Location Choice Model (HLCM) for the Priority areas scenario to provide some insight into the explanatory variables that were found to be most significant. In Table 1, all variable names starting with "b_" are building variables (returning a one-dimensional array with a value for each building) while the remaining two are interaction variables returning a two-dimensional household x building array. A brief definition of the variables follows:
• b_ln_nhb_jobs_within_20_km: Natural logarithm of the number of non-home-based jobs within a 20 km network distance of the building.
• b_ln_proxim_ind: Measure of proximity based on the aggregated area of industrial land within the analysis zone in which the building is located.
• b_logsum_access_da: Logsum of travel cost by the "drive-alone" mode of transport between the zone in which the building is located and all other zones.
• b_hwy_3000: The building is located within 3 km of a highway. In this case not a network distance, but obtained from a simple GIS calculation.
• b_ln_avg_inc_zone: Natural logarithm of the average income of households in the analysis zone in which the building is located.
• b_ln_empden_zone_6: Natural logarithm of the density of jobs in the trade and financial services sectors of the economy within the analysis zone in which the building is located.
• h_x_b_da_travel_cost_to_work: Natural logarithm of the monetary cost of travel between the analysis zones representing the place of residence and place of work of the head of a household for the "drive-alone" mode of transport. See Section 4.1.4 for further discussion.
• h_x_b_transit_travel_cost_to_work: Same as above but for the "transit" mode of transport.
A comparison of Figures 5 and 6 (showing only the growth in households between 2011 and 2030) illustrates just how important the densification zones, as an expression of strategic intent, are in shaping the spatial form of the city. The two scenarios used identical demographic and employment projections and no changes whatsoever were made to the model specification that could have influenced where households choose to live or work. Yet the Priority areas scenario (Figure 6) anticipates a more concentrated development pattern that is better aligned with public transport corridors than the linear densification zones promoted by the RSDF in the Trend scenario (Figure 5). Consider for example the absence of development in the area corresponding to marker 2 in Figure 5. The absence of growth in the Winterveldt area (marker 1 in Figure 5) can be ascribed to the priority areas including a services-provision boundary, beyond which it becomes too expensive for the city to provide bulk wastewater services. The growth at marker 7 can be attributed to the release of a large tract of land for green-fields development in proximity to the N1 freeway. Figures 5 and 6 also illustrate the impact of government-funded housing projects, with significant growth projected at these sites (markers 3, 4, 5 and 6). In this case study, the location of the housing projects resulted from a prior BEPP planning process (alluded to in Section 2), with the number of housing units built per year limited by the available budget. In the next case study, we show how such location decisions have been investigated to inform future BEPP planning processes.
Context
For the City of Ekurhuleni, the objective of simulating spatial growth patterns was to understand the impact of various interventions. These included catalytic projects earmarked to stimulate economic growth; determining how best to unlock the economic benefits of implementing the Aerotropolis Master Plan; and determining the trade-off between large government-funded housing projects and smaller in-fill development.
Scenarios
As was the case in the City of Tshwane, the first instance involved only the principles set out in the Municipal Spatial Development Framework (MSDF) and only projects that were certain to be implemented. These projects included housing projects, new roads and an expansion of the public transport network. The other scenarios were again developed from the interactions that followed the presentation of results obtained from the trend scenario simulation. The following scenarios were identified:
1. Trend scenario with the most likely city-wide demographic and employment growth projections provided by IHS Markit in 2016.
2. Aerotropolis scenario with the same demographic projections as the Trend scenario, but higher employment projections as indicated in the aggressive option of the Aerotropolis Master Plan.
3. Housing projects scenario that sought to quantify the impact of accommodating about 100 000 households qualifying for free government housing in smaller projects located as close as possible to employment opportunities. The locations of the 139 best vacant parcels to accommodate the 100 000 households on the basis of their accessibility to employment are indicated in red in Figure 7, in relation to the number of jobs per analysis area in 2011 and the location of the proposed large projects (yellow in Figure 7 and orange in Figure 8).
Results
As a first step towards facilitating debate between departments, impact was defined simply as the average monetary commute cost of the 100 000 affected households in 2030, based on the actual place of residence and place of work for each household. A more comprehensive definition of impact would have to consider many other factors such as the cost to government and the cost to the environment. The cost to government would have to at least consider the capital and operating cost of bulk infrastructure and transit.
Based on this simple definition, the smaller-projects option reduced the average commute cost by an insignificant 5%. Upon investigating this counter-intuitive result, it was found that the apparently remote larger projects also had good accessibility to employment due to the well-connected and subsidized public transport network illustrated in Figure 8.
Context
The objective was to analyze and compare long-term growth patterns that could result from the implementation of specific spatial planning scenarios and how the resulting demand for services would impact on the Long-Term Financial Sustainability Plan being developed by the city.
Scenarios
Developing the spatial planning scenarios was simplified considerably by the fact that the project started shortly after the city had gone through a scenario development process through which a "Walking Together for Growth" scenario was adopted as the preferred future for the city. Recognizing that financial sustainability involves much more than spatial planning and demand forecasting, the following scenarios were simulated to enhance the city's prospects of achieving the desired future state:
1. Trend scenario with the most likely demographic and employment growth projections provided by IHS Markit in 2012.
2. Trend scenario with more optimistic demographic and employment growth projections, which would be more characteristic of the "Walking Together for Growth" scenario. The degree of optimism, expressed as a percentage increase in growth between 2011 and 2030, amounted to 10% for population, 20% for employment in manufacturing and trade, and 10% for employment in most other sectors. Some of this optimism has since materialized in the form of foreign direct investment in motor vehicle manufacturing.
3. Two other scenarios, which only differed in respect of which urban node would be prioritized for investment in bulk infrastructure and catalytic projects to attract external investment: Coega (marker 1 in Figure 9) or Jachtvlakte (marker 2 in Figure 9). Both scenarios were based on the optimistic demographic and employment growth projections.
The spatial distribution of growth is not proportional to the aggregate growth for the city. This is evident from a comparison between Figures 9 and 10, where the higher growth projections resulted in a visibly different distribution between nodes. This is simply a function of what residential and non-residential land is available for green-field and infill development. This not only confirms the value of urban simulation to quantify growth in spatially explicit terms but hints at the considerable influence that municipalities have on future urban form through the strategic release of land and the provision of bulk infrastructure. Although one would expect a take-up of residential development near employment opportunities, especially by less affluent households wanting to spend as little as possible on transportation to work, this is not always the case. A comparison between Figures 11 and 12 shows that while the industrial development in the Coega node (marker 1) did indeed result in a take-up of residential development in the area, there was also take-up in the N2 node (marker 3). This might be an unintended consequence of the comprehensive transportation system that the model was based on, in the sense that it enables households to commute over greater distances and thus have greater choice in deciding where to live and where to work. Given such a choice, many households (especially those with higher incomes) could well prefer not to live close to industrial areas. The N2 node is favored by property developers because of its accessibility to the N2 freeway and its clean-industry character. There was some confirmation of this in the observation that the average household income for zones that grew in the N2 node was indeed higher than in the Coega node.
Conclusions
Given that this work represents the first fully interactive land-use transportation modelling done in South Africa, UrbanSim-SA has by no means been institutionalized in the sense of being formally adopted by the cities to support the planning processes alluded to in Section 2. The objective was to take some early adopters in each of the cities through the simulation process to prove the concept. Rather than using the model in the preparation of the MSDF, CIF and BEPP as suggested in the introduction, various aspects of plans already prepared without the model were investigated and validated. A surprising finding of the literature review of UrbanSim applications in Europe/Asia was that all except one used zone/grid cell geometries. For the work reported herein we used the parcel geometry from the outset, to avoid behavioral aggregation as discussed in Section 3.2 and because it provided us with a mechanism to model backyard dwellings by combining formal and informal template components into a new template for the parcel. Another factor that contributed to this approach was that the drivers/sponsors of the case study projects were invariably urban planners (known as Town and Regional Planners in South Africa) with a keen interest in the classification and consumption of land at parcel level.
The results obtained from all three case studies confirmed that policy decisions can be influenced by using ILUT modelling to add elements of rational analysis and hypothesis testing to a controversial decision-making process. In the City of Tshwane case study, "rational analysis" was introduced simply by comparing the increase in density (hu/ha) within the densification zones proposed by the Trend and Priority Areas scenarios over the simulation period. The results clearly showed that the Priority Areas scenario had a much better chance of increasing public transport ridership levels to contain subsidies. This, together with the projected savings in the cost of providing bulk wastewater infrastructure, provided invaluable support to proponents of the Priority Areas scenario. The Ekurhuleni case study probably provides the best example of hypothesis testing, where different institutional stakeholders held opposing views about the best location for government-funded housing projects. Some supported large projects, motivated by the economies of scale that could be achieved in management and construction, even though such projects would require tracts of land that would most likely only be found on the fringes. Another department promoted smaller infill projects as close as possible to employment opportunities. By considering the two views as different hypotheses, the modelling concluded that the expected benefit to the 100 000 affected households would be nowhere near as significant as one might intuitively think. The fact that this could be attributed to the availability of subsidized public transport (especially rail) in proximity to the large projects not only demonstrated cause and effect to the participants but also surfaced preconditions for the success of policy decisions. For example, if the decision went the way of large projects but commuters were to lose the benefit of subsidized transport, for example due to concerns about personal safety on trains, history would show that infill projects might have been the better policy decision.
Based on the sample results from the three case studies, as well as the results that were not presented here, cities in South Africa were found to be unique from an ILUT modelling point of view. The model specification for one city (including the selection of explanatory variables) will simply not work in another city, even though the context appears to be similar. Calculated densities for development templates of the same Knowledge Factory typology differed substantially, and new sub-classes even had to be created in some cities to resolve anomalies. What did, however, consistently stand out is the considerable influence that municipalities have on future urban form through the strategic release of land and the provision of bulk infrastructure. If this is used as a tool to guide development, municipalities should guard against creating artificial shortages that will inflate the price of land, except if it serves a valid policy goal and land for lower-income housing can be provided by different means, for example state-owned land.
While it has been possible to adapt UrbanSim to succeed in South Africa, the applicability of the work to other developing countries will depend entirely on the availability of a good census (preferably two) and finding substitutes for the Knowledge Factory and Geoterra Image datasets, without which the work in South Africa would not have been possible. If this data can be collected, the South African journey has shown that ILUT modelling adds an element of rational analysis and hypothesis testing that can contribute to policy formulation and analysis.
Data
Three of the data sources used in our journey are subject to license agreements. These include the Knowledge Factory Cluster Plus product, the Geoterra Image Building-Based Land-Use product and the IHS Markit Population and Employment projections. The details of the three companies are provided in the references to assist researchers that may want to reproduce our results.
"year": 2020,
"sha1": "4655a380630e7c9057a6f18f146e39fa366654cc",
"oa_license": "CCBYNC",
"oa_url": "https://www.jtlu.org/index.php/jtlu/article/download/1635/1487",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4655a380630e7c9057a6f18f146e39fa366654cc",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Geography"
],
"extfieldsofstudy": [
"Business"
]
} |
Current status of low dose multi-detector CT in the urinary tract
Over the past several years, advances in the technical domain of computed tomography (CT) have influenced the trend of imaging modalities used in the clinical evaluation of the urinary system. Renal collecting systems can be illustrated more precisely with the advent of multi-detector row CT through thinner slices, high speed acquisitions, and enhanced longitudinal spatial resolution, resulting in improved reformatted coronal images. On the other hand, a significant increase in exposure to ionizing radiation, especially in radio-sensitive organs such as the gonads, is a concern with the increased utilization of urinary tract CT. In this article, we discuss the strategies and techniques available for reducing radiation dose for a variety of urinary tract CT protocols, with clinical examples. We also review scan parameter optimization, such as reducing the number of acquisition phases in CT for hematuria evaluation and lowering tube potential, when possible, in CT angiography of renal donors.
INTRODUCTION
In the past decade, developments in computed tomography (CT) technology have changed the trend of imaging modalities used in the evaluation of the urinary system. The introduction of multi-detector row CT (MDCT) allows us to depict the renal collecting systems accurately through thinner section imaging, faster scanning, improved longitudinal spatial resolution, and the subsequent better quality of reformatted coronal images. With these advances, MDCT has largely replaced plain film radiography, excretory urography and tomography for a variety of urinary tract disorders such as urolithiasis, renal masses and mucosal abnormalities of the renal collecting system, ureters and bladder [1][2][3][4] .
However, an exponential increase in use due to the dramatic evolution of CT over the past decade has also resulted in a substantial increase in exposure to ionizing radiation. From the standpoint of urinary tract evaluation with CT scanning, it is important to bear in mind that follow-up or recurrent CT imaging (e.g., for kidney stones) as well as multiphase contrast-enhanced CT increases the radiation dose to patients. Leusmann et al [5] estimated a 35% relative recurrence rate of urinary calculi over 10 years. Katz et al [6] reported that 4% (176/4562) of patients who underwent CT examinations for renal colic had three or more CT examinations, resulting in radiation exposure of 20-154 mSv. In addition, most CT protocols for evaluation of the urinary tract, such as CT urography or CT for renal donors, consist of the acquisition of two or more phases of contrast enhancement, which increases the radiation dose to patients.
It is, therefore, important to initiate strategies and efforts to reduce the radiation dose associated with CT scanning of the urinary tract. In this article, we discuss the strategies and techniques available for reducing radiation dose for a variety of urinary tract CT protocols, with clinical examples.
URINARY TRACT
In order to maintain a favorable risk-to-benefit ratio, it is necessary to ensure that CT is indeed indicated for the clinical information desired. Table 1 summarizes the major clinical indications for CT examinations of the urinary tract. Once CT scanning has been justified, efforts should be made to keep the radiation dose as low as reasonably achievable (the ALARA principle) while maintaining the diagnostic confidence of the interpreting radiologists. There are two clinical scenarios for dose reduction in urinary tract CT. Firstly, the high inherent contrast between radio-opaque renal calculi and urinary tract soft tissue allows radiologists to diagnose urinary tract calculi at a lower dose and much higher noise. Similarly, the higher contrast between the contrast-opacified urinary tract and the adjacent soft tissue can also allow lesion detection and characterization on lower-dose, noisier images. Secondly, dose reduction is also important for CT performed for the evaluation of healthy and younger renal donors.
Urolithiasis
Following its introduction by Smith et al [1], unenhanced helical CT has become the preferred diagnostic method for the evaluation of urolithiasis in patients with acute flank pain referred for urinary stone disease. Patients with urinary stone disease may undergo multiple CT examinations due to recurrence of stone disease, which increases their cumulative radiation dose. Since urolithiasis is a mostly non-fatal disease that is common in younger patients, radiation dose should be reduced. Previous studies have shown that a low dose CT protocol may be suitable for the evaluation of patients with suspected urinary stone disease due to the high contrast between urinary tract stones and the adjacent, relatively lower attenuation soft tissue [7][8][9][10][11][12][13][14]. Table 2 shows the radiation dose, scanning parameters, and diagnostic performance of CT examinations using low dose protocols in patients with urinary tract calculi. The protocol used at our institute for the evaluation of renal calculi is shown in Table 3 (Figures 1 and 2). Katz et al [6] reported that the mean effective doses for a single conventional stone protocol using single-detector row CT and MDCT were 6.5 and 8.5 mSv, respectively, which is 1.8-17 times higher than the doses used in the low dose protocols.
Adjusting CT scanning parameters
Tube current is the most commonly adjusted scanning parameter for reducing radiation dose in CT. There is a direct linear relationship between tube current and radiation dose. Reduction of tube current by half cuts the radiation dose associated with CT by half. Tube current for dose reduction can be adjusted with the manual selection of a lower fixed tube current or with automatic exposure control.
Previous studies have reported the usefulness of low dose CT protocols with substantial reductions in tube current, which showed sensitivities and specificities close to those of standard dose CT in assessing urolithiasis [11][12][13][14]. Tack et al [11] and Poletti et al [14] evaluated low dose CT examinations using a tube current of 30 mA to decrease radiation dose for stone protocol CT. They demonstrated that low dose CT at 30 mA had 90-97% sensitivity and 94-100% specificity, similar to standard dose CT at 120 or 180 mA, for correctly identifying renal stones as well as alternative diagnoses. Kluner et al [13] documented 97% sensitivity and 95% specificity for the detection of urinary calculi using an ultra-low dose protocol performed at a mere 6.9 mA. Recently, Jellison et al [15] reported the use of a very low dose CT protocol performed at 7.5 mA for the detection of distal ureteral calculi in a cadaveric model, with a decrease of up to 95% in radiation dose, to a level equivalent to the dose of a single kidney-ureter-bladder radiograph.
The limitation of the initial low dose CT studies using a fixed tube current was that a single lower tube current is not appropriate for obese patients, with the possibility of missing an alternative clinical diagnosis because of insufficient image quality. The introduction of automatic tube current modulation techniques helps in these circumstances [16][17][18]. These techniques modulate tube current on the basis of the size, shape, geometry and attenuation of the body region being scanned, while preserving image quality. Three types of automatic tube current modulation methods have been described for non-cardiac CT scanning: modulation based on projection angle (angular or xy-axis modulation), modulation along the longitudinal direction or patient length (z-axis or longitudinal modulation), or modulation in both angular positions and longitudinal directions (combined or xyz-axis modulation) [16].
Kalra et al [17] found that use of the z-axis modulation technique (Auto mA, GE Healthcare, Milwaukee, Wisconsin, USA) (noise indices of 10.5-12, 10-380 mA) resulted in a 43%-66% reduction in radiation dose without compromising stone depiction in patients undergoing follow-up CT for kidney stones when compared with a previous fixed tube current (200-300 mA) technique. Mulkens et al [18] reported on the usefulness of low dose CT using a combined modulation technique (CareDose 4D, Siemens Medical Solutions, Forchheim, Germany) for the evaluation of urolithiasis, even in overweight and obese patients. In this study, low dose CT examinations (6-MDCT, 51 effective mA at 110 kV; 16-MDCT, 70 effective mA at 120 kV) had a sensitivity of 96%-99%, specificity of 92%-94%, and accuracy of 94%-95% for the detection of kidney stones with a 51%-64% reduction in radiation, compared to standard dose CT examinations.
Pitch is another adjustable parameter. In single-detector row helical CT, as pitch increases the radiation dose decreases if all other parameters are held constant. Each reconstructed image is taken from a wider section of the patient, giving equal noise at the expense of z-axis resolution. Diel et al [19] reported that increasing the pitch on unenhanced helical CT for suspected renal colic to 2.5:1 or 3.0:1 was an effective method of reducing radiation dose with high diagnostic accuracy and acceptable image quality. Modern MDCT scanners take the reconstructed image data from a fixed "slab" of the patient. As pitch is increased, fewer projections make up the image, increasing the noise. To compensate for this increase in pitch, most modern scanners automatically increase the tube current to maintain constant image noise, and therefore a relatively constant radiation dose. Conversely, when the pitch is decreased, the tube current is automatically decreased too. Most vendors also let their automatic tube current modulation techniques increase or decrease the tube current automatically according to any change in the pitch in order to maintain constant image noise and radiation dose.
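The first-order relationships discussed in this section can be summarized in a few lines of code. The sketch below is only a back-of-the-envelope illustration, assuming dose scales linearly with mAs and, for single-detector helical CT, inversely with pitch; the function name and reference technique values are hypothetical:

```python
def relative_dose(ma, rotation_time_s, pitch,
                  ref_ma=200.0, ref_time_s=1.0, ref_pitch=1.0):
    """First-order CT dose scaling: dose ~ (mA x rotation time) / pitch.

    Returns the dose relative to a reference technique. This ignores
    kVp, collimation and scanner-specific factors, so it is only a
    rough estimate of the trends described in the text.
    """
    mas = ma * rotation_time_s
    ref_mas = ref_ma * ref_time_s
    return (mas / ref_mas) * (ref_pitch / pitch)

# Halving tube current halves the dose:
print(relative_dose(100, 1.0, 1.0))   # -> 0.5
# Raising pitch from 1.0 to 2.5 (single-detector CT) cuts dose to 40%:
print(relative_dose(200, 1.0, 2.5))   # -> 0.4
```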
Simulated dose reduction technique
Most studies on low dose CT protocols have been limited in assessing the effect of multiple levels of tube current reduction on overall diagnostic accuracy, as repeat CT images cannot be acquired in the same patient at a variety of radiation exposures due to ethical considerations. Previous studies have reported the use of a simulated dose reduction technique to obtain simulated low dose or lower tube current images at multiple dose levels in the same patient [20][21][22][23]. This technique, therefore, helps avoid repeat exposure of patients in dose-related research. Mayo et al [20] introduced the use of this technique to modify the noise and simulate reduced tube current images in single-detector helical chest CT. This software operates by adding simulated noise to the source raw data of the CT acquisition. Frush et al [21] adopted this in abdominal CT of pediatric patients and showed that abdominal MDCT using computer-simulation techniques in the urinary tract resulted in a 33%-67% reduction in radiation dose. Karmazyn et al [22] reported the usefulness of computer-simulated dose reduction techniques for determining the diagnostic threshold for MDCT detection of renal stones in children. In their study, use of the 80 mA setting for all children and 40 mA for children weighing 50 kg or less did not significantly affect the diagnosis of renal stones. Ciaschini et al [23] recently assessed the use of simulated low dose images at 100% (177 mA), 50% (88 mA), and 25% (44 mA) of the original tube current by using simulation software. These authors demonstrated sensitivities of 92%, 83%, and 67% for the 100%, 50%, and 25% tube current reconstructions, respectively, for the detection of all calculi. There was no difference between the full dose CT images and the 50% and 25% lower dose images for the detection of urinary stones greater than 3 mm. However, sensitivity for 3 mm or smaller calculi, which were deemed not clinically important due to their high probability of spontaneous passage, was reduced in the 25% radiation dose CT images.
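The principle behind these simulation tools can be illustrated with a toy example. Quantum noise in CT scales approximately as 1/√mAs, so a lower-dose acquisition can be mimicked by injecting zero-mean Gaussian noise whose variance makes up the difference. Note that the cited tools operate on raw projection data rather than reconstructed images; the sketch below is a simplified image-domain stand-in with assumed parameter values:

```python
import numpy as np

def simulate_low_dose(image, sigma_ref, mas_ref, mas_target, seed=0):
    """Approximate a lower-dose CT image by noise injection.

    Assuming noise sigma scales ~ 1/sqrt(mAs), the noise expected at
    mas_target < mas_ref is sigma_ref * sqrt(mas_ref / mas_target);
    independent Gaussian noise with the difference of the variances
    is added to reach that level.
    """
    sigma_target = sigma_ref * np.sqrt(mas_ref / mas_target)
    sigma_add = np.sqrt(sigma_target**2 - sigma_ref**2)
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma_add, size=np.shape(image))

# Toy example: flat 64x64 "image"; halving mAs should raise the total
# noise from 10 HU to about 14.1 HU (the injected component alone ~10).
full_dose = np.zeros((64, 64))
half_dose = simulate_low_dose(full_dose, sigma_ref=10.0,
                              mas_ref=200, mas_target=100)
print(round(half_dose.std(), 1))  # ~10.0, the injected noise component
```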
CT FOR EVALUATION OF HEMATURIA
Multi-detector computed tomography urography (MDCTU) offers considerable advantages in the evaluation of the upper urinary tract compared to excretory urography due to higher contrast resolution and the ability to perform high quality three-dimensional rendering of the urinary tract [24]. A variety of CT urography techniques have been evaluated for producing adequate opacification of the urinary tract at the lowest radiation exposure [25][26][27][28][29][30][31][32][33][34][35][36][37]. Due to multiphase scanning, with some CT urography protocols patients undergoing MDCTU may receive a radiation dose as much as three or four times higher than that with a single phase abdominal CT examination. Nawfel et al [25] reported a mean effective dose of 14.8 ± 3.1 mSv with three-phase MDCTU protocols, which was about 1.5 times higher than the conventional excretory urography dose of 9.7 ± 3 mSv. Table 3 summarizes the effective radiation dose for different MDCTU protocols. A two-pronged strategy is applied to reduce radiation exposure with MDCTU protocols, which includes reducing the number of acquisitions and optimizing scan parameters.
Reducing the number of acquisitions
There is no consensus on the optimal protocol for MDCTU. Previous studies have reported the use of 2-4 phase scanning for MDCTU, as different components of the urinary tract opacify with contrast at different time points following administration of iodinated contrast agents (Table 4). The most commonly described MDCTU protocol comprises a three-phase protocol, which typically consists of non-contrast (for the detection of hemorrhage and stones), nephrographic (for renal parenchymal evaluation), and excretory phases (for assessing the collecting system, ureters and urinary bladder). The double excretory or corticomedullary phase is optionally acquired instead of the nephrographic phase. Some studies have described acquiring arterial phase images for patients who may require surgery [31]. As some CT protocols for renal or urinary tract evaluation require acquisition of two or more scan series, they are associated with higher radiation dose. In such circumstances, modifications of the contrast injection protocol or scanning parameters may be required to reduce radiation dose. For example, for certain scan series, the scan length can be reduced or confined to the most important region of interest.
A major disadvantage of the multi-phase scanning techniques most commonly performed in many institutions is high radiation exposure, as well as the increased time required to interpret a large number of images. In an effort to overcome these important issues, Chai et al [32] proposed the use of a split-bolus technique that allows the reduction of radiation dose by reducing the total number of scanning phases in a single intravenous injection. The authors described the use of a small bolus of intravenous contrast medium (30 mL) after acquisition of images in the non-contrast phase. After 5 to 10 min, a larger bolus of contrast medium (100 mL, 2 mL/s) was administered and images were acquired 100 s after the administration of the second contrast bolus. This split-bolus technique allowed the combination of two phases of information, that is, the nephrographic phase from the larger second bolus and the excretory phase from the smaller initial bolus, into a single acquisition. Raptopoulos et al [33] proposed a modification of the split-bolus multi-detector CT urographic approach, combining arterial and excretory phases using 30 mL of contrast material for urinary tract opacification for the excretory phase, and reinjecting 70-100 mL of contrast approximately 2-3 min later and scanning for the corticomedullary phase 60 s after the start of the last contrast injection. In our institute, this split-bolus technique is used for MDCTU (Table 5), in which a bolus of 40 mL contrast medium (Iopamidol 370 mg%, Bracco Diagnostics, Princeton, NJ, USA) is injected at a rate of 3 mL/s after acquisition of images in the non-contrast phase. Then, 250 mL of saline infusion is given to the patient over 10 min. After 10 min, 80 mL of contrast medium is injected at a rate of 3 mL/s and scanning starts after administration of the second contrast medium.
Although split-bolus MDCTU has led to reduced scan series, this technique has been criticized because opacification of the kidneys and the urinary tract can be diminished due to the lower volumes of contrast medium used [34]. Kekelidze et al [30] recently described a triple-bolus protocol designed to combine all renal contrast-enhancement phases in a single post-contrast CT acquisition. The authors used a triple-bolus protocol in which 30 mL of contrast medium was administered at 0 min, followed by an injection of 50 mL at 7 min and 65 mL at 8 min, with CT scanning starting at 8.5 min. In their study, triple-bolus MDCTU allowed visualization of renal parenchymal, excretory, and vascular contrast-enhancement phases in a single phase. The radiation dose for the triple-bolus acquisition (9.8 mSv) was 44% less than that for conventional CT urography composed of the three-phase protocol (23.4 mSv).
A hybrid technique has also been considered to reduce radiation exposure, particularly for the excretory phase, if more than a single excretory-phase acquisition is required for complete opacification of the urinary tract. This technique is a combination of CT and conventional excretory urography or CT digital radiography during the excretory phase in a single imaging session. It requires only a single intravenous injection of contrast material for both parts of the examination with hybrid imaging techniques. CT images can be acquired after a conventional urography or, more frequently, conventional urographic images are obtained subsequent to a CT examination. Sudakoff et al [35] reported that hybrid imaging accomplished with a series of three enhanced CT digital radiography images delivers an effective radiation dose of only 1.6 mSv.
Optimization of scan parameters
Attempts have been made to reduce radiation dose by modifying scanning parameters for the individual phases, similar to the low dose CT protocols used for the evaluation of urolithiasis [28,29]. Kemper et al [28] demonstrated that a low dose MDCTU protocol using 70 effective mA at 120 kVp can provide acceptable image quality for the excretory phase with a 64% dose reduction compared to standard dose CT in their 75-kg porcine model.
One strategy for reducing the radiation dose during MDCTU includes lowering the tube voltage. Use of low tube voltage CT reduces the radiation dose, as the tube output is proportional to the square of the tube voltage. In addition, iodine attenuation increases as tube voltage decreases because the energy in the X-ray beam moves closer to the k-absorption edge of iodine. The use of low tube voltage techniques has been described for contrast-enhanced CT examinations such as CT urography and CT angiography [36,37].
However, a reduction in tube voltage also results in a large increase in image noise. Yanaga et al [29] assessed the feasibility of MDCTU using a combination of a low tube voltage of 80 kVp and an adaptive noise reduction filter. In this study, the quality of post-processed, filtered 80-kVp images was comparable with that of 120-kVp images for evaluation of the upper urinary tract, with a 59% reduction in the mean effective dose using this technique. The authors noted, however, that evaluation of the pelvic ureter and urinary bladder was not sufficient, and that a compensatory increase in tube current is necessary to allow 80-kVp scanning. Furthermore, automatic exposure control techniques help to decrease radiation dose by 20%-45% without compromising the image quality in the abdomen and pelvis [38][39][40] (Figure 3).
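A quick arithmetic check of the square-law approximation stated above (illustrative only; real tube output deviates somewhat from a pure square law, and the compensatory mA increase discussed above is ignored):

```python
def dose_ratio_kvp(kvp_low, kvp_ref):
    """Approximate relative dose when lowering tube voltage at fixed mAs,
    assuming tube output scales with the square of the tube voltage."""
    return (kvp_low / kvp_ref) ** 2

# ~0.44, i.e. roughly a 56% dose reduction at fixed mAs, of the same
# order as the 59% mean effective dose reduction reported for 80-kVp
# MDCTU with an adaptive noise reduction filter.
print(round(dose_ratio_kvp(80, 120), 2))
```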
CT ANGIOGRAPHY FOR RENAL DONORS
MDCT is now routinely used in the preoperative evaluation of living renal donors for transplantation. The multiphase scanning protocol usually includes non-contrast, arterial, venous and excretory phases which are generally obtained to assess renal and abdominal vasculature, and the urinary excretory tract to exclude urinary tract and retroperitoneal disorders. Given the fact that living renal donors are relatively young, "disease free and healthy" individuals, radiation dose reduction assumes a more critical dimension. Strategies for reducing radiation dose with renal donor protocol CT have been proposed for preoperative CT examinations and include a reduction in the number of acquired series or phases and use of lower kilovoltage settings [41][42][43][44] . Table 6 summarizes radiation doses for different CT protocols in the evaluation of living renal donors. The CT protocol used for living renal donors at our institute is shown in Table 7 (Figure 4).
Reducing the number of phases
The appropriateness of clinical indications for the multiphase renal donor protocol should be monitored closely to reduce the radiation dose. Previous studies have assessed the feasibility of reducing the number of acquired phases [41,42]. Caoili et al [26] reported a radiation dose reduction by omitting the non-contrast series or replacing the excretory phase with a localizer radiograph or an abdominal radiograph. Namasivayam et al [41] reported that venous phase MDCT acquisition is not necessary for the evaluation of renal vein anatomy, as the arterial phase can provide information on renal vein anomalies and help identify small left renal veins. Zamboni et al [42] proposed combined vascular-excretory phase imaging with a split-bolus contrast injection technique. In their study, the CT protocol comprised low dose non-contrast phase scanning of the abdomen and pelvis (120 kVp and 50-150 mA with the automatic exposure control technique), and a combined vascular-excretory phase after split-bolus contrast medium injection, with 50 mL of contrast material injected at 2.5 mL/s followed by a 100 mL bolus at 4-6 mL/s with bolus tracking. Arterial phase scanning of the abdomen started 5 s after CT attenuation in the abdominal aorta reached a predetermined threshold, and venous phase scanning of the abdomen and pelvis was initiated at a 20 s delay from the predetermined threshold. An additional localizer or conventional radiograph was acquired if there was an abnormality in the excretory system on the vascular phases. Namasivayam et al [43] also reported on the use of a low kilovoltage, triple-bolus, single-phase CT protocol for the evaluation of renal donors, in which non-contrast images are excluded and excretory phase images are replaced with localizer radiography acquired at 80 kVp. Furthermore, the arterial and venous phases are combined into one phase acquired with differential contrast enhancement in the arteries and veins.
Lowering tube potential
The presence of high inherent tissue contrast allows the use of higher background noise without affecting the diagnostic quality of the images. As described earlier, low peak kilovoltage techniques also increase iodine attenuation in vessels and consequently improve vessel conspicuity [37,44]. Based on this rationale, lower kVp values have been used in the evaluation of renal donors undergoing renal CT angiography, while decreasing the amount of iodinated contrast medium. Sahani et al [45] reported no difference in image quality between 120 and 140 kVp, whereas use of 100 kVp resulted in greater noise, although with diagnostically acceptable images and substantial radiation dose reduction compared with CT at 120 or 140 kVp in the evaluation of renal donors. When the tube potential is decreased to reduce radiation dose, it may be necessary to increase the tube current to obtain acceptable image quality. On the other hand, with increased image contrast at lower tube potential, images with greater noise may be acceptable for diagnostic evaluation. A reduction in tube potential in adults, particularly those with large body habitus, should be performed carefully because increased image noise and streak artifacts can have an adverse effect on the diagnostic acceptability of CT images.
MISCELLANEOUS
Scan length is another important determinant of radiation dose to patients undergoing CT scanning. A larger scan length delivers radiation to a larger area of the body, thus increasing the radiation dose to patients. Radiation dose can be reduced by restricting the scan length to the region of interest, for example, scanning from the top of the kidneys instead of the top of the liver for evaluation of kidney stones or for CT urography. Scanning only the upper abdomen or the kidneys for the corticomedullary and nephrographic phases, and scanning from the kidneys to the bladder for the excretory phase, can help reduce radiation dose with multiphase imaging. It is also imperative that all efforts are made to review prior imaging in order to minimize repetition of CT scanning for CT urography. It is likewise important to assess complete opacification and adequate distension of the collecting system, combining oral or intravenous hydration, diuretics, and abdominal compression devices, and to ensure optimal timing of image acquisition. Similarly, appropriate triggering of CT following contrast injection is important for CT angiography in the evaluation of renal donors. In the case of non-opacification of a portion of the ureter or the remaining collecting system, localizer radiography or even conventional radiography can be obtained instead of transverse CT images. In instances where repeat transverse CT images are deemed unavoidable or necessary, a substantially smaller scanning length should be used at reduced radiation dose.
Use of noise reduction filters to improve the quality of "noisier" images can also allow radiation dose reduction. Singh et al [46] showed that two-dimensional adaptive noise reduction filters (SharpView CT, Linkoping, Sweden) can allow a 25%-30% reduction in tube current or radiation dose while maintaining the detectability of small urinary tract calculi. Similarly, recent publications on non-filtered back projection reconstruction techniques, such as adaptive statistical iterative reconstruction (ASiR, GE Healthcare), show that they reduce image noise and thus enable scanning at a reduced radiation dose [47,48]. At our institution, we reduce the radiation dose for patients undergoing CT scanning of the urinary tract by 30%-40% on an ASiR-enabled CT scanner compared to older, non-ASiR capable CT equipment. Such a dose reduction is generally accomplished with the use of a 30%-40% lower tube current.
CONCLUSION
MDCT has virtually replaced conventional imaging techniques for the evaluation of urinary tract abnormalities. This is partly due to impressive improvements in CT technology which allow isotropic resolution with faster scan coverage in a single, short breath-hold, and high diagnostic performance. However, increasing use of CT necessitates the assessment, and if necessary, reduction of radiation dose. Therefore, all efforts should be made to optimize the radiation dose necessary for adequate imaging through the collaboration of all parties including the radiologist, medical physicist, technologist, and manufacturers.
An Algorithm for Producing Fuzzy Negations via Conical Sections
In this paper we introduce a new class of strong negations generated via conical sections. The paper focuses on the fact that simple mathematical and computational processes generate new strong fuzzy negations through purely geometrical concepts such as the ellipse and the hyperbola. Well-known negations, like the classical negation and the Sugeno negation, are produced via the suggested conical sections. Strong negations are a structural element in the production of fuzzy implications; thus, we obtain a machine for producing fuzzy implications, which can be useful in many areas, such as artificial intelligence and neural networks. A notable feature of this work on strong fuzzy negations is the contrast between the simplicity of the construction and the significance of its results: innovative results may therefore be derived for use in the literature of this specific field of mathematics, and they are generated in an effortless, concise, and self-evident manner.
Introduction
Fuzzy negations play a crucial role in the creation of De Morgan triples and also in the construction of fuzzy implications (c.f. [1][2][3]). As a general admission, the construction of fuzzy negations is the foundation stone of the construction of De Morgan triples and fuzzy implications. Gottwald in [4] presented a theoretical approach to the production of fuzzy negations and one important result ([4], Theorem 5.2.1). A constructive geometrical method of generating strong fuzzy negations via conical sections is presented in this paper, followed by the proof of the method and several final remarks, as well as some examples of the strong negations produced via conical sections. It should be particularly emphasized that this specific manner of geometrical construction of strong negations covers the already known strong negations, such as the Sugeno negations, as well as the classical negation. It should be noticed that Proposition 3 in paragraph 3 Main Results, and particularly Equation (14), constitutes a new result that might be used in the production of implications merely from negations (c.f. [2,3]), and by extension in reaching conclusions. In general, the algorithmic approach to negations is virtually non-existent in the applications of fuzzy sets and implications, without any reasons explaining such a gap. The specific new process for producing negations presented here aims to address this gap, since the production is simple and easily understandable. Changing just one parameter in Equation (14), as demonstrated in the examples in paragraph 4 Special Cases, is sufficient to produce a negation that differs from the negations already known, such as the Sugeno or Yager negations. These considerations have led the authors to extend their research toward an algorithmic treatment of the subject, namely the application of the present method to real data, which the authors intend to pursue in a future study.
The key to this effortless production lies in the symmetry of a strong negation about the straight line y = x, which is an essential attribute of functions satisfying the condition N(N(x)) = x. All conical sections centered at the origin satisfy this symmetry under the condition that their axes are oriented at π/4 or 3π/4.
The presentation of the authors' line of reasoning, along with their methods of proof and their conclusions, follows the aforementioned pattern. The principal definitions and theorems needed to understand the basic theory and proofs are set out in paragraph 2 Preliminaries. Then, in paragraph 3 Main Results, the basic propositions and the respective results leading to Proposition 3 are stated; in particular, Equation (14) presents the new result. Finally, specific examples of negations through conical sections are presented in the next paragraph, while the study concludes with remarks and conclusions, as well as possible applications of the method presented above.
Fuzzy Negation
A fuzzy negation N is a generalization of the classical complement or negation ¬. The classical negation truth table consists of the two conditions ¬1 ≡ 0 and ¬0 ≡ 1. The fuzzy negation covers the gap between 0 and 1, thus maintaining the intuitive perspective of negation, as well as the concept of complementarity. The definitions and theorems laid out in this paragraph present the axiomatic foundation of fuzzy negation and establish the background of the tools used to support this study. The following definitions and theorems can be found in any introductory textbook on fuzzy logic (see [1,[4][5][6][7]).
Definition 1. A function N: [0, 1] → [0, 1] is called a fuzzy negation if it is decreasing and satisfies N(0) = 1 and N(1) = 0 (N1). A fuzzy negation may additionally satisfy: N is strictly decreasing (N3); N is continuous (N4). A fuzzy negation N is called strong if the following property (N5) is met:
N(N(x)) = x, for every x ∈ [0, 1].
Definition 2. The dual negation based on a fuzzy negation N is given by (see [2], p. 124)
N_d(x) = 1 − N(1 − x).
Remark 1. The above directly leads to the consequence that the N5 property is equivalent to the equation N(x) = N⁻¹(x). The latter equation implies that the graphs of the functions N and N⁻¹ coincide; in other words, the graph of N is symmetric about the straight line y = x. Following Theorem 3, the construction of a strong fuzzy negation is a simple matter using the function ϕ. Nevertheless, a different mode of construction is attempted in the context of the present study, which avoids the use of the ϕ function. The main reasons for adopting this procedure were, first and foremost, the creation of negations from parts of conical sections, which the ϕ function might not easily ensure, and secondly the aim that the mode of constructing the negations be based on geometrical methods.
Production of Fuzzy Negations via Conical Sections
Consider the following special form of conical sections:
ax² + by² + 2cxy + dx + ey + f = 0. (1)
Equation (1) should satisfy the main properties of a negation, namely N(0) = 1 and N(1) = 0. This means that Equation (1) is verified by the points A(1, 0) and B(0, 1), while the point O(0, 0) should not verify it. These considerations lead to the following proposition.
Proposition 1. If (1) satisfies the condition (N1), then it takes the form
ax² + by² + 2cxy − (a + f)x − (b + f)y + f = 0, f ≠ 0. (2)
Proof of Proposition 1. Since (1) satisfies the basic property of a negation, the conical section of Equation (1) should pass through the points A(1, 0) and B(0, 1); thus, the relations a + d + f = 0 and b + e + f = 0 result. Furthermore, f ≠ 0 since the point O(0, 0) does not verify (1). Simple calculations generate Equation (2).
An essential requirement for a part of a conical section to constitute a strong negation is that the basic property N5 be satisfied, namely that N(N(x)) = x, x ∈ [0, 1], holds. To achieve this, the conical section defined by Equation (2) should have the straight line y = x as an axis of symmetry; see Remark 1. This observation is used to prove the following proposition.
Proposition 2. The equation
ax² + ay² + 2cxy − (a + f)(x + y) + f = 0, f ≠ 0, (5)
is a conical section which has the straight line y = x as an axis of symmetry and passes through the points (1, 0) and (0, 1).
Proof of Proposition 2. If the conical section of Equation (2) is to be symmetric about the straight line y = x, the coefficients of the variables x and y must remain the same when x and y are interchanged. This leads to the conclusion that a = b, and thus Equation (2) becomes
ax² + ay² + 2cxy − (a + f)(x + y) + f = 0,
which is Equation (5).
Purely for reasons of simplicity and without imposing any restriction on the generality, dividing Equation (5) by f transforms it into the equivalent form
k(x² + y²) + 2mxy − (k + 1)(x + y) + 1 = 0, (6)
where k = a/f and m = c/f. For the remaining part of the paper, Equation (6) is used instead of (5). Two cases are derived from Proposition 2 and Equation (6). The first case is k = 0. Consequently, Equation (6) transforms into
2mxy − (x + y) + 1 = 0. (7)
If Equation (7) is solved for y, simple calculations yield the relation
y = (1 − x)/(1 − 2mx). (8)
Due to the symmetry, the solution for x is given by the same equation,
x = (1 − y)/(1 − 2my). (9)
The procedure stated above leads to the Sugeno negation, which, as is known, constitutes a strong fuzzy negation; relabeling the parameter (replacing −2m by m) gives the general formula
N(x) = (1 − x)/(1 + mx), m > −1. (10)
The Sugeno negation has been used in many articles (c.f. [8]) and in many applications, a fact that renders its further explanation redundant; its interest for the present study lies in the fact that the Sugeno negations form part of the geometrical construction via conical sections.
The second case is k ≠ 0. In this case, the symmetry of the conical section with respect to the straight line y = x must be combined with central symmetry with respect to O(0, 0), namely no parallel translation of the axes. This holds only when k = −1, in which case Equation (6) acquires the form
−(x² + y²) + 2mxy + 1 = 0, (11)
or, equivalently, x² + y² − 2mxy = 1. Through easy calculations, solving Equation (11) as a quadratic in y gives the two branches y = mx ± √((m² − 1)x² + 1); the branch passing through the points (0, 1) and (1, 0), for m ≤ 0, is
N(x) = √((m² − 1)x² + 1) + mx, (14)
while for the second branch, Equation (15), see Remark 2 below. Once more, due to the symmetry of the conical section about the straight line y = x, if Equation (11) is solved for the variable x, it generates the same formula, namely
x = √((m² − 1)y² + 1) + my. (16)
According to Equations (14) and (16), and Remark 1, the function defined by Equation (14) satisfies N(x) = N⁻¹(x) and is a strong fuzzy negation.
Summarizing the results stated above, the following proposition is established with reference to Equation (6).
Proposition 3. Equation (6), namely
k(x² + y²) + 2mxy − (k + 1)(x + y) + 1 = 0,
expresses conical sections which, for k = 0, produce strong fuzzy negations with the formula N(x) = (1 − x)/(1 + mx), m > −1, known as the Sugeno negations, while for k = −1 it expresses conical sections which produce strong fuzzy negations with the formula N(x) = √((m² − 1)x² + 1) + mx, m ≤ 0.
Remark 2. As far as the N(x) of the aforementioned proposition is concerned, since it forms a segment of a conical section having the origin of the axes as its center of symmetry, the segment symmetric to it with respect to this center belongs to the same conical section. This symmetric segment, after the appropriate translation (a single unit upwards and a single unit towards the right), equally constitutes a strong negation; in particular, it is symmetric to N(x) with the straight line y = 1 − x as an axis of symmetry. Therefore, it has the form N_d(x) = 1 − N(1 − x) and, by Definition 2, it is the dual negation. For instance, let N(x) = √((m² − 1)x² + 1) + mx, x ∈ [0, 1], m ≤ 0. Then the function
f(x) = −√((m² − 1)x² + 1) + mx, x ∈ [0, 1], m ≤ 0, (15)
equally constitutes a segment of the same conical section, and 1 + f(x − 1) is nothing more than the dual negation, namely
N_d(x) = 1 + m(x − 1) − √((m² − 1)(x − 1)² + 1).
Certain examples are given below for the explanation and verification of the aforementioned theoretical results.
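Before turning to the worked examples, the construction can be checked numerically. The following Python sketch (our own illustration; the function name and test grid are not part of the paper) implements the family of Equation (14) and verifies the boundary property N1 and the involution property N5 for several admissible values m ≤ 0:

```python
import numpy as np

def conic_negation(m):
    """Strong negation from Equation (14): N(x) = sqrt((m^2 - 1) x^2 + 1) + m x, m <= 0."""
    def N(x):
        return np.sqrt((m * m - 1.0) * x * x + 1.0) + m * x
    return N

xs = np.linspace(0.0, 1.0, 1001)
for m in (0.0, -0.5, -1.0, -2.0):
    N = conic_negation(m)
    assert abs(N(0.0) - 1.0) < 1e-12 and abs(N(1.0)) < 1e-12  # boundary (N1)
    assert np.max(np.abs(N(N(xs)) - xs)) < 1e-9               # involution (N5)
print("N1 and N5 verified for the sampled values m <= 0")
```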
Strong Negation via Circle
For m = 0, Equation (11) takes the form x² + y² = 1, which yields
N(x) = √(1 − x²), x ∈ [0, 1],
a negation belonging to the Yager class and a segment of the unit circle; see Figure 1.
Strong Negation via Line
If m = −1, Equation (11) takes the form x² + y² + 2xy = 1, i.e., (x + y)² = 1, which yields
N(x) = 1 − x, x ∈ [0, 1],
which is the classical negation; see Figure 2.
Strong Negation via Ellipse
If −1 < m < 0, Equation (11) represents a segment of an ellipse. For example, for m = −1/2, the resulting negation is
N(x) = √(1 − (3/4)x²) − x/2, x ∈ [0, 1];
see Figure 3.
Strong Negation via Hyperbola
If m < −1, Equation (11) represents a segment of a hyperbola. For example, for m = −2, the resulting negation is
N(x) = √(3x² + 1) − 2x, x ∈ [0, 1];
see Figure 4.
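All four special cases above are instances of Equation (14). A short, self-contained check (again an illustration of ours, with hypothetical naming) confirms that the listed closed forms for the circle, line, ellipse and hyperbola agree with the general formula:

```python
import math

def N(m, x):
    """Equation (14): N(x) = sqrt((m^2 - 1) x^2 + 1) + m x."""
    return math.sqrt((m * m - 1.0) * x * x + 1.0) + m * x

special = {
    0.0:  lambda x: math.sqrt(1.0 - x * x),                  # circle (Yager class)
    -1.0: lambda x: 1.0 - x,                                 # line (classical negation)
    -0.5: lambda x: math.sqrt(1.0 - 0.75 * x * x) - x / 2,   # ellipse
    -2.0: lambda x: math.sqrt(3.0 * x * x + 1.0) - 2 * x,    # hyperbola
}
for m, closed_form in special.items():
    for i in range(11):
        x = i / 10.0
        assert abs(N(m, x) - closed_form(x)) < 1e-12
print("special cases agree with Equation (14)")
```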
Discussion
The purpose of this paper was to propose an algorithm for the production of fuzzy negations, based on which algorithms for the production of implications might also be proposed. Indeed, an algorithm for producing negations via conical sections has been found. The latest developments in the theory of fuzzy implications (if . . . X . . . then . . . Y . . .) indicate that a fuzzy negation is enough to generate an algorithmic process for the production of fuzzy implications (c.f. [2,3]). For example, suppose that in the context of an application the Yager fuzzy implication is selected, which is generated in the following way: I(x, y) = f⁻¹(x · f(y)), where f is a decreasing function; if f is replaced by a fuzzy negation, then an algorithm for producing fuzzy implications is automatically generated. If in the place of f we place the negations of Equation (14), then, since a strong negation is its own inverse, we obtain the algorithm
I(x, y) = N(x · N(y)), with N(x) = √((m² − 1)x² + 1) + mx, m ≤ 0.
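A minimal sketch of this algorithm in Python is given below, assuming the Yager-style construction with f taken to be the strong negation of Equation (14); because a strong negation is its own inverse (Remark 1), no separate inversion step is needed. Varying the single parameter m then yields a whole family of implications:

```python
import math

def conic_negation(m):
    """Strong negation of Equation (14); requires m <= 0."""
    return lambda x: math.sqrt((m * m - 1.0) * x * x + 1.0) + m * x

def implication_from_negation(N):
    """I(x, y) = f^{-1}(x * f(y)) with f = N; since N is strong,
    N^{-1} = N and this reduces to I(x, y) = N(x * N(y))."""
    return lambda x, y: N(x * N(y))

I = implication_from_negation(conic_negation(-2.0))
print(I(0.0, 0.3))            # 1.0: falsity implies anything
print(I(1.0, 1.0))            # 1.0
print(round(I(1.0, 0.0), 6))  # 0.0
print(round(I(0.7, 0.4), 3))  # an intermediate truth value
```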
Conclusions
The above procedure has enabled the generation of a new class of fuzzy negations. Its importance lies in the fact that the reasoning process is improved, since for a given application it becomes possible to choose the most appropriate implication from a wider class. In most studies and applications of fuzzy sets or fuzzy negations, e.g., in the prediction of future events, as well as in the application of fuzzy negations in neural networks (c.f. [9][10][11]), the classical negation N(x) = 1 − x is implicitly used. Replacing this negation with the negations produced via conical sections offers a new approach to the application of fuzzy implications. It would be important to find out in the future, both for the authors and the readers of this paper, whether the aforementioned negations produced via conical sections, and by extension the implications introduced, correspond to any kind of application.
Global targetome analysis reveals critical role of miR-29a in pancreatic stellate cell mediated regulation of PDAC tumor microenvironment
Background: Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive malignancies, with nearly equal incidence and mortality rates in patients. Pancreatic stellate cells (PSCs) are critical players in the PDAC microenvironment that promote the aggressiveness and pathogenesis of the disease. Dysregulation of microRNAs (miRNAs) has been shown to play a significant role in the progression of PDAC. Earlier, we observed a PSC-specific downregulation of miR-29a in the PDAC pancreas; however, the mechanism of action of the molecule in PSCs is still to be elucidated. The current study aims to clarify the regulation of miR-29a in PSCs and identifies functionally important downstream targets that contribute to tumorigenic activities during PDAC progression.
Methods: In this study, using an RNAseq approach, we performed transcriptome analysis of paired miR-29a overexpressing and control human PSCs (hPSCs). Enrichment analysis was performed with the identified differentially expressed genes (DEGs). miR-29a targets in the dataset were identified and utilized to create network interactions. Western blots were performed with the top miR-29a candidate targets in hPSCs transfected with miR-29a mimic or scramble control.
Results: RNAseq analysis identified 202 differentially expressed genes, which included 19 downregulated direct miR-29a targets. Translational repression of seven key pro-tumorigenic and -fibrotic targets, namely IGF-1, COL5A3, CLDN1, E2F7, MYBL2, ITGA6 and ADAMTS2, by miR-29a was observed in PSCs. Using pathway analysis, we find that miR-29a modulates effectors of IGF-1-p53 signaling in PSCs that may hinder carcinogenesis. We further observe a regulatory role of the molecule in pathways associated with PDAC ECM remodeling and tumor-stromal crosstalk, such as INS/IGF-1, RAS/MAPK, laminin interactions and collagen biosynthesis.
Conclusions: Together, our study presents a comprehensive understanding of miR-29a regulation of PSCs and identifies essential pathways associated with PSC-mediated PDAC pathogenesis. The findings suggest an anti-tumorigenic role of miR-29a in the context of PSC-cancer cell crosstalk and advocate for the potential of the molecule in PDAC targeted therapies.
Keywords: Pancreatic cancer, PDAC, PSCs, microRNA, miR-29a, Protein interaction network, RNAseq, Desmoplasia, Tumor microenvironment, ECM
Background
Despite considerable advancement in the knowledge of pathogenesis and therapeutics of pancreatic ductal adenocarcinoma (PDAC) in recent years, the disease continues to remain one of the deadliest malignancies. PDAC ranks as the seventh leading cause of cancer-related deaths worldwide [1] and the fourth in the United States [2]. This rapidly metastatic cancer is characterized by abundant desmoplastic reactions around pancreatic tumors mediated by the pancreatic stellate cells (PSCs) [3][4][5]. PSCs remain in a quiescent state in the normal pancreas, with a low extracellular matrix (ECM) producing capacity. During pancreatic injury or inflammation, PSCs are activated by pro-inflammatory cytokines and growth factors to differentiate into myofibroblasts expressing alpha smooth muscle actin (α-SMA) [3,6,7]. The transformed and activated stromal PSCs interact with the tumor cells, proliferate, and produce ECM proteins and growth factors promoting fibrosis, pancreatitis and pancreatic cancer [4,8,9].
MicroRNAs (miRNAs) are a class of small (~22 nucleotides long), non-coding RNAs in multicellular organisms, which modulate key cellular mechanisms of proliferation, metabolism and apoptosis via post-transcriptional regulation of hundreds of genes [10]. miRNAs are initially generated as primary transcripts (pri-miRNA) from inter- and intragenic chromosomal regions, predominantly via RNA polymerase II mediated transcription, and are then further processed by the Drosha RNase III enzyme to produce short hairpin pre-miRNAs [11]. Pre-miRNAs are exported to the cytoplasm by exportin 5, where they are further processed by the RNase III enzyme Dicer, in a complex, to generate mature miRNA. Mature miRNA, along with Argonaute 2, forms an RNA-induced silencing complex and binds to the 3′-UTRs of the target gene mRNAs with imperfect complementarity to cause their degradation or translational suppression [11,12]. Accumulating evidence has shown the involvement of miRNAs in the regulation of pathological processes of a variety of diseases including oncogenesis [12][13][14]. Studies have further demonstrated the association of dysregulated miRNAs in stromal cells with progression of different types of cancer, including pancreatic cancer, indicating the potential of miRNAs in developing targeted therapies [15][16][17][18][19][20].
In our previous work, we found microRNA-29a (miR-29a) to be predominantly an anti-fibrotic molecule in PDAC, where miR-29a was significantly downregulated in activated PSCs and fibroblasts of murine and human PDAC as compared to normal pancreas, resulting in enhanced stromal extracellular matrix (ECM) deposition in the PDAC microenvironment [21]. In addition, co-culture of pancreatic cancer cells with miR-29a overexpressing PSCs resulted in a significant reduction in the colony formation ability of the cancer cells and in stromal deposition [21]. Thus, given the anti-fibrotic and tumor suppressive role of miR-29a in PSC-mediated PDAC progression, in the current study we sought to decipher the mechanism of miR-29a in PSC regulation by identifying key downstream target genes of the molecule that also have critical functional implications in stromal remodeling and PDAC pathogenesis. Here we show for the first time that miR-29a concatenates genes belonging to key pathways associated with the PDAC microenvironment, indicating the importance of the molecule in PSC-mediated PDAC stromal accumulation and suggesting the potential of miR-29a as a therapeutic target for normalization of PDAC stroma.
Cell culture
Primary human pancreatic stellate cells (hPSCs) (3830, ScienCell Research Laboratories, Carlsbad, California) were cultured in Dulbecco's Modified Eagle Medium (DMEM, 11965092, Life Technologies, Carlsbad, CA) supplemented with 10% FBS in a humidified 5% CO2 incubator at 37°C. hPSCs were authenticated using short tandem repeat profiling and were regularly tested for mycoplasma contamination (MycoAlert, Lonza). All cells used in this study were below passage 9.
RNA extraction
Total RNA from cultured cells were extracted using the RNeasy plus Mini kit (74,134, Qiagen, Venlo, Netherlands) following manufacturer's protocol. The concentration and purity of the extracted RNAs were measured using a Nanodrop 2000 Spectrophotometer (Thermo Fisher Scientific, Carlsbad, CA).
RNAseq
For RNAseq, the quality and integrity of the extracted RNA were evaluated on a Bioanalyzer 2100 (Agilent Technologies, CA). Samples with RNA Integrity Number (RIN) > 7.0 were used for RNAseq. cDNA libraries were prepared using the TruSeq RNA library kit (Illumina Inc., San Diego, CA). The libraries were amplified and then sequenced on an Illumina HiSeq 2000 instrument (San Diego, CA) with 100 bp paired end reads per sample. The quality of the sequence data was analyzed using FastQC [22]. The reads were mapped to the human genome (hg38) using STAR (v2.5) [23]. Uniquely mapped sequencing reads were assigned to genes based on Gencode 25 using featureCounts (v1.6.2) [24]. Genes with read counts per million (CPM) < 0.5 in two or more samples were filtered out, and gene expression profiles were normalized using the trimmed mean of M values (TMM) method. Differentially expressed genes (DEGs) were identified using a cutoff of false discovery rate (FDR)-adjusted p-value < 0.05 and an absolute linear fold change (FC) > 2.
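For illustration, the filtering and significance criteria described above can be sketched as follows. This is not the pipeline used in the study: TMM normalization and the negative-binomial models of dedicated tools are replaced here by simple CPM scaling and a Welch t-test, and all data are synthetic:

```python
import numpy as np
from scipy import stats

# Toy count matrix: 1000 genes x (3 control + 3 miR-29a OE) libraries.
rng = np.random.default_rng(1)
counts = rng.poisson(50, size=(1000, 6)).astype(float)

# 1) CPM transform; drop genes with CPM < 0.5 in two or more samples,
#    i.e. keep genes with CPM >= 0.5 in at least 5 of the 6 samples.
cpm = counts / counts.sum(axis=0) * 1e6
cpm = cpm[(cpm >= 0.5).sum(axis=1) >= 5]

# 2) Per-gene log2 fold change and p-value (Welch t-test on log2 CPM,
#    a simple stand-in for the models used by dedicated DE tools).
log_cpm = np.log2(cpm + 1.0)
ctrl, oe = log_cpm[:, :3], log_cpm[:, 3:]
log_fc = oe.mean(axis=1) - ctrl.mean(axis=1)
p = stats.ttest_ind(oe, ctrl, axis=1, equal_var=False).pvalue

# 3) Benjamini-Hochberg FDR adjustment.
order = np.argsort(p)
scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
fdr = np.empty_like(p)
fdr[order] = np.minimum.accumulate(scaled[::-1])[::-1]

# DEG call: FDR < 0.05 and |linear FC| > 2, i.e. |log2 FC| > 1.
deg = (fdr < 0.05) & (np.abs(log_fc) > 1.0)
print(int(deg.sum()), "differentially expressed genes")
```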
Target prediction, functional enrichment and network analysis
Conserved miR-29a target genes were obtained using TargetScan (v7.1). The hypergeometric model was adopted to assess the significance of the overlap between the DEGs and the predicted miR-29a targets.
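A minimal sketch of such an overlap test uses the hypergeometric survival function; all numbers below are placeholders chosen for illustration, not values from the study:

```python
from scipy.stats import hypergeom

# Placeholder numbers: population of expressed genes (M), predicted
# conserved miR-29a targets among them (n), DEGs drawn (N_draw), and
# DEGs that are also predicted targets (k).
M, n, N_draw, k = 15000, 800, 202, 20

# P(overlap >= k) if DEGs were drawn at random without replacement.
p_enrich = hypergeom.sf(k - 1, M, n, N_draw)
print(f"hypergeometric enrichment p = {p_enrich:.3g}")
```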
Functional enrichment analysis of gene ontology (GO) terms and KEGG pathway analysis were performed using R packages to investigate the biological functions and pathways of the identified genes. The protein-protein interaction networks of the genes were explored using the STRING database, version 11 [25].
Quantitative real time PCR (qRT-PCR)
RNA was reverse transcribed to cDNA using the High Capacity cDNA Reverse Transcription kit (4368814, Thermo Fisher Scientific, Carlsbad, CA) with random primers for genes or a custom primer pool for miRNA (Thermo Fisher Scientific, Carlsbad, CA). To measure mature miR-29a expression, TaqMan qRT-PCR reactions were set up using TaqMan Fast Advanced Mastermix (4444557, Applied Biosystems, Foster City, CA) with TaqMan probe and primers for mature miR-29a (002112, Applied Biosystems, Foster City, CA) or U6 snRNA (001973, Applied Biosystems, Foster City, CA). To assay the mRNA levels of genes, qRT-PCRs were performed with PowerUp SYBR Green Mastermix (A25742, Applied Biosystems, Foster City, CA) and custom primers (Table S1). miRNA and mRNA qRT-PCR were normalized to U6 and ACTB, respectively. Samples were run in triplicate in a 10 μl final volume on an ABI 7500 Real-Time PCR machine with standard settings. Relative expressions were analyzed using the ΔΔCT method.
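The ΔΔCT computation referenced above is simple enough to state explicitly; the sketch below uses hypothetical CT values for a miR-29a (target) versus U6 (reference) comparison:

```python
import numpy as np

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak ddCT method: fold change = 2^-(dCT_treated - dCT_control)."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical triplicate CT values for mimic-transfected vs scramble
# cells: a ~2.9-cycle drop in dCT corresponds to ~7-fold upregulation.
fold = relative_expression([22.1, 22.3, 22.0], [18.0, 18.1, 17.9],
                           [25.2, 25.0, 25.1], [18.1, 18.0, 18.2])
print(round(fold, 1))  # ~7.3
```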
Western blot
Protein lysates were prepared with RIPA Buffer (PI-89900, Thermo Fisher Scientific, Carlsbad, CA) and quantified using the BCA Protein Assay Kit (23225, Pierce Biotechnology, Waltham, CA). Equal amounts of total protein were loaded onto NuPAGE 4-12% Bis-Tris gels (NP0323, Invitrogen, Carlsbad, CA). After electrophoresis, the gels were electrotransferred onto polyvinylidene fluoride membranes, blocked with 5% dry non-fat milk and incubated overnight at 4°C with specific primary antibodies. The membranes were washed and then probed with corresponding HRP-conjugated goat anti-mouse (31430, Thermo Fisher Scientific, Carlsbad, CA) or goat anti-rabbit (31460, Thermo Fisher Scientific, Carlsbad, CA) antibodies at 1:5000 dilution. To develop the blots, an ECL detection kit (34096, Thermo Fisher Scientific, Carlsbad, CA) was utilized and the images were captured on an Amersham Imager 600 (GE Healthcare, Chicago, IL). Densitometry analysis was performed using ImageJ software to quantify each protein band, which was then normalized against the loading control GAPDH. The primary antibodies used in this study included anti-IGF-1 (ab9572, Abcam, Cambridge, MA), anti-
Statistical analysis
All data are expressed as mean ± standard error of the mean (SEM) of three independent experiments. Statistical analysis was performed by ANOVA or Student's t test. Statistical significance is indicated as *p < 0.05, **p < 0.01 or ***p < 0.001.
RNAseq and identification of DEGs
RNAseq libraries were constructed using RNAs from control and miR-29a overexpressing hPSCs to generate a global miR-29a targetome. Overexpression of miR-29a in the transfected hPSCs was verified by qPCR (Fig. 1a). Sequencing was performed with 2 × 100 bp paired end reads. This yielded sequence reads ranging from 17 to 34 million pairs, of which 90-92% aligned to the hg19 genome assembly (Table 1). Quantile normalization with log2 transformation of counts per million (CPM) was performed, and the quality and depth of the raw sequencing reads were verified for differential expression testing between the control and miR-29a overexpressing PSCs. For identification of DEGs, genes were plotted in a volcano plot by their log10 P values with FDR (q value) < 0.05 against log2 fold change (FC) (Fig. 1b). This identified 90 downregulated and 106 upregulated genes with FDR < 0.05 and log2FC < −1 or > +1, respectively (Table S2). Next, inputting the DEG IDs into the TargetScan database, we identified 20 putative direct miR-29a targets among the identified DEGs, 19 of which were downregulated and one was upregulated (Fig. 1c). Among the downregulated miR-29a targets, IGF-1 exhibited the highest fold change, followed by COL5A3, E2F7, CLDN1, and MYBL2. DPYSL3 was the only upregulated target that met the screening criteria.
GO term enrichment and pathway analysis of downregulated genes
GO analysis of the DEGs with an FDR < 0.05 revealed that the downregulated (target and non-target) genes were significantly enriched in several PDAC-relevant biological processes such as regulation of mitosis and cell cycle, cell migration and motility, cellular adhesion, cell proliferation, extracellular matrix organization and cytokine signaling (Table 2). Among the 19 miR-29a predicted downregulated target genes, IGF-1, CLDN1 and ITGA6 were enriched in regulation of cell motility/migration (Table 2). COL5A3, ADAMTS2, ITGA6, LAMC1 and IGF-1 associated with mechanisms of ECM remodeling. While ITGA6 and IGF-1 are negative regulators of apoptosis, E2F7 and MYBL2 contribute to the regulation of the cell cycle (Tables 2 and 3). In addition, the pathways enriched for miR-29a overexpressing PSCs included IGF-1 signaling, Tp53 signaling, the collagen pathway, integrin-laminin interactions, RAS/MAPK signaling and cytokine signaling, as depicted in Table 3. Thus, the GO and pathway enrichment analyses indicate that miR-29a modulates effectors of signaling pathways associated with crucial mechanisms of ECM remodeling and tumor-stromal crosstalk, suggesting a potential role of the molecule in PSC-mediated regulation of the PDAC tumor microenvironment (TME).
Validation analysis using qPCR and Western blots
Among the identified DEGs from the RNAseq, we selected all 19 down- and one upregulated miR-29a targets, along with a subset of 24 additional DEGs, to validate the RNAseq results using qRT-PCR. The expressions of 43 of the 44 tested genes matched well between the RNAseq and qPCR analyses (Table 4, Fig. 2a).
Fig. 1 RNAseq analysis of miR-29a overexpressing hPSCs. a qPCR analysis for miR-29a expression in hPSCs transfected with miR-29a mimics (29a OE) as compared to hPSCs transfected with scramble control (CTRL). Numerical data are represented as average fold change (ΔΔCT) ± standard error of the mean (SEM); ***p < 0.001; n = 6. b Volcano plot of DEGs (log FC > 1 or < −1, FDR < 0.05) in hPSCs overexpressing miR-29a compared to controls. The horizontal axis represents log2 fold change between miR-29a overexpressing and control hPSCs. The negative log10 of the q-value is plotted on the vertical axis. Each point on the graph represents one gene. c A hierarchically clustered heatmap showing the expression patterns of the differentially expressed miR-29a direct target genes in the three replicates for each of the miR-29a overexpressing (OE1, OE2 and OE3) and control (Control 1, Control 2, Control 3) samples. Red and blue represent up- and downregulation respectively, and the color intensity represents the level of fold change.
Based on pathway analyses and available literature, IGF-1, COL5A3, CLDN1, E2F7, MYBL2, ITGA6 and ADAMTS2 were the most prominent miR-29a targets involved with one or more essential signaling mechanisms associated with TME regulation (Tables 2 and 3). Therefore, we next sought to determine whether miR-29a had a translational impact on these genes in PSCs. Our western blot analysis showed that protein levels of each of the seven selected targets were significantly diminished in miR-29a overexpressing PSCs (Fig. 2b). The most robust depletion was observed for ITGA6, followed by ADAMTS2 and IGF-1, respectively. All three of these significantly downregulated target genes associate with ECM remodeling or fibrotic mechanisms. ITGA6 is a member of the integrin family, which are heterodimeric cell surface receptors comprising α and β chains [26]. Alpha-6-containing integrins (α6/β1 and α6/β4) are the primary receptors for laminins, including laminin 1 (LAMC1), a major ECM component [26]. Further, the ECM, in interaction with cellular integrins, forms a scaffold and plays an essential role in cell proliferation, migration/invasion and survival [26]. ADAMTS2, belonging to the ADAM metallopeptidase with thrombospondin type 1 motif (ADAMTS) family, is responsible for processing of collagen type I, II, III and V precursors (pro-collagens) into mature collagen by excision of the aminopropeptide, which is essential for generation of collagen monomers and assembly of mature collagen fibrils [27,28]. Inhibition of ADAMTS2 has been shown to reduce stromal deposition and modulate TGF-β1 signaling [27,29]. IGF-1 plays an essential role in fibrotic processes in different organs including the pancreas, liver and lung [30][31][32]. Recent reports demonstrate the association of IGF-1 in PSCs with promotion of stromal accumulation and basal growth rate in PDAC [33], as well as miR-29a-mediated regulation of the gene [34]. Interestingly, each of the seven tested targets has been shown to exhibit pro-tumorigenic effects. Together, the observations suggest an anti-fibrotic and tumor suppressive function of miR-29a in PSC-mediated PDAC pathogenesis.
Network interactions of the downregulated miR-29a targets
To determine whether the identified downregulated direct miR-29a target genes formed a network of interactions, we next analyzed the genes utilizing the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. We included a few additional nodes to construct the network. We observed three distinct networks in the interactome, consisting of the insulin/IGF, RAS/MAPK and laminin signaling pathways (Fig. 3). IGF-1, a member of the IGF family, is one of the key regulators of the insulin/IGF pathway. IGF-1 is a direct downregulated miR-29a target in our dataset, which interacts with other effectors of the pathway including IGF-1R, INSR, IGFBP4, IGFBP5 and FSTL1 (Fig. 3). Interestingly, PTPN1, an oncogene in the pathway, is also a predicted direct miR-29a target; however, our RNAseq data did not show differential expression for this gene with miR-29a overexpression, which could be an effect specific to the PSCs. Nonetheless, insulin/IGF signaling is a key driver in tumor-stromal interactions, metastasis and PDAC progression [33]. IGF-1 secreted by activated PSCs and fibroblasts in PDAC stroma promotes cancer cell migration, invasion and metastasis via the IGF-1 receptor (IGF-1R) [33,35]. In fact, the RAS/MAPK pathway identified in our study consisted of interactions of IGF-1 and IGF-1R with other genes in the pathway including NRAS, HRAS, KRAS, SOS1 and RAF1. It is well documented that the MAPK signaling cascade bridges the crosstalk between ECM-mediated extracellular signaling through growth factors and their receptors, such as IGF-1/IGF-1R, and the subsequent intracellular response to allow cancer cell proliferation and migration [36]. IGF-1-bound, activated IGF-1R phosphorylates insulin receptor substrates (such as IRS1, IRS2 and Shc). The Src homology 2 (SH2) domains of these substrates are recognized by signaling molecules that activate the intracellular effectors such as RAS, RAF and SOS and the RAS/MAPK pathway [37,38]. Interestingly, in our previous study, we observed significant downregulation of NRAS with miR-29a overexpression in PDAC cell lines [39]. In the current study, miR-29a overexpression also resulted in moderate downregulation of NRAS in PSCs (logFC = −1.01); however, the role of NRAS in PSCs is unknown. Nonetheless, it is apparent that miR-29a modulates extracellular IGF-1/IGF-1R signaling in PSCs and intracellular NRAS expression in pancreatic cancer cells, which indicates a functional role of the molecule in tumor-stromal crosstalk via the insulin/IGF-RAS/MAPK signaling mechanism in PDAC.
The identified interactome further consisted of three miR-29a targets, namely ITGA6, LAMC1 and FSTL1, that associate with laminin interactions, which are salient to the pancreatic ECM and desmoplasia [40][41][42]. LAMC1 encodes the laminin γ1 chain isoform; laminins are essential non-collagenous ECM glycoproteins, integral to basement membrane assembly and crucial for intra- and extracellular communication that modulates cellular behavior [43]. Laminin interactions, including those of LAMC1, have been shown to promote oncogenesis via processes including cancer cell migration, differentiation and metastasis [44][45][46][47]. Cytoplasmic laminin expression correlates with poor patient prognosis in pancreatic cancer [48], and laminin has been shown to be one of the most efficient ECM proteins at promoting cell adhesion-mediated drug resistance [49]. Further, ECM-integrin interactions are found to be crucial for adhesion-mediated drug resistance and resistance to chemotherapy [50,51].
Discussion
In our previous studies, we observed significant loss of miR-29a in several PDAC cell lines [21,39]. In addition, miR-29a was globally repressed in PDAC tumor tissues, as well as in a PSC- and epithelial cell-specific manner [21]. We further demonstrated that TGF-β1, via SMAD3 signaling, negatively regulates miR-29a expression in PSCs and upregulates several ECM proteins including collagens, laminin and fibronectin [21]. In the current study, using RNAseq, we characterize the mechanism and pathway interactions by which miR-29a contributes to PSC-mediated regulation of the ECM and tumor-stromal crosstalk. This will allow for a comprehensive understanding of the therapeutic applicability of the molecule in the context of the PDAC stroma. RNAseq analysis of miR-29a overexpressing PSCs and controls identified a number of DEGs, which included predicted direct and indirect targets of the molecule. Because miRNAs primarily regulate genes either by mRNA decay or translational repression, we focused on the direct targets that were downregulated with miR-29a overexpression. We validated the translational repression of the targets IGF-1, COL5A3, CLDN1, E2F7 and MYBL2, which exhibited the highest fold changes in the RNAseq dataset, along with ITGA6 and ADAMTS2, which have functional relevance in stromal regulation. Our western blot analysis indicated the strongest repression of ITGA6, ADAMTS2 and IGF-1 protein levels with miR-29a overexpression in PSCs (Fig. 2b). Among these identified direct targets, the association of IGF-1 and COL5A3 with PSCs in PDAC has been reported previously [33,52]. Network analysis with the targets identified three overlapping pathways related to IGF, RAS/MAPK signaling and laminin interactions. IGF-1 secreted by activated PSCs and CAFs via the sonic hedgehog pathway activates IGF-1R in cancer cells, triggering phosphorylation of insulin-receptor or Src substrates to promote PDAC metastasis via intracellular pathways such as RAS/MAPK [37,53]. In addition, high IGF-1 together with low IGFBP3 expression is associated with enhanced risk of PDAC [54]. As expected, patients with advanced clinical stages (II and III) of PDAC had higher levels of IGF-1R and low IGFBP3 and exhibited poor prognosis [54]. Interestingly, IGF-1R expression in these patients was associated with high stromal abundance, suggesting the regulation of tumor-stromal crosstalk via IGF/IGF-1R signaling [54]. Another identified miR-29a target, CLDN1, is a tight junction protein that facilitates cell-ECM communication and EMT in various cancer types [55][56][57]. The gene has been shown to be a contributor to tumor-stroma crosstalk in pancreatic cancer [58]. Although the regulation of CLDN1 in PSCs has not been reported previously, studies have shown the gene to be under the regulation of IGF-1 signaling [59,60]. Upregulation of collagens, including COL5A3, is a salient feature of fibrosis and of the malignant tumor stroma, including that of PDAC [52,61,62]. Collagens are abundantly expressed in the PDAC ECM, and collagen V, by binding α2β1 integrin receptors, stimulates migration, proliferation and metastasis in PDAC [63]. Interestingly, ADAMTS2, another identified downregulated miR-29a target, primarily functions to process collagen I, II, III and V precursors into mature molecules [27,28]. The gene promotes fibrosis via activation of TGF-β signaling [64]. Evidently, miR-29a plays an anti-fibrotic role in PDAC by influencing ECM deposition via modulation of multiple targets in the collagen pathway.
In addition to these genes that directly regulate the tumor microenvironment and desmoplasia, the top targets identified from our dataset included two additional genes, E2F7 and MYBL2, which play essential roles in cell cycle regulation. E2F7 is associated with poor patient outcome in several types of cancer including PDAC [65][66][67] and has been shown to be essential for mouse embryonic survival [68]. Inhibition of E2F7 increased the percentage of prostate cancer cells in G1 phase, reducing cellular proliferation [67]. Similarly, MYBL2 is a transcription factor which promotes cell proliferation and differentiation by fostering cell cycle entry into the S and M phases, and it is dysregulated in several types of cancer [39,69,70]. A recent study demonstrated the regulatory role of MYBL2 in promoting PDAC desmoplasia and PSC growth through sonic hedgehog and adrenomedullin via paracrine and autocrine signaling [71]; however, the role of the gene in PSCs has not been reported. A negative feedback regulatory mechanism between miR-29a and MYBL2 influencing the activation of PSCs is possible, but this requires future validation. Nonetheless, the identified set of miR-29a target genes exhibits a pro-fibrotic and tumorigenic function in PDAC desmoplasia and progression via multiple targeted pathways, although the PSC-specific function of some of the identified target genes, such as E2F7, CLDN1, MYBL2 and ADAMTS2, has not been studied previously. Together, the observations in the current study signify that overexpression of miR-29a may lead to inhibition of PSC-induced pro-fibrotic and desmoplastic effects by targeting these genes to impair signaling mechanisms such as the sonic hedgehog, IGF, RAS/MAPK, collagen metabolism and laminin pathways, perturbing the cellular responses through which they promote PDAC progression. As mentioned above, the IGF-1 signaling axis is a key mechanism that promotes PDAC tumor-stromal crosstalk and drug resistance. In our RNAseq dataset, we observed the most robust downregulation of the IGF-1 gene among all miR-29a targets. It is possible that, in addition to IGF-1 alone, miR-29a regulates IGF signaling by modulating multiple components of the pathway in PSCs, such as indirect regulation of genes including IGF-1R and INSR and direct targeting of some others. It is worth noting that MYBL2 and E2F7 are miR-29a targets that sit at the functional convergence of the p53-IGF-1 pathways. Stromal p53 has been implicated as a key component that reprograms activated pancreatic and hepatic stellate cells to transform them into quiescent states [72,73]. Depletion of p53 in stromal cells caused faster and more aggressive tumor development with enhanced invasion and metastasis of cancer cells, suggesting a paracrine mechanism of p53 in tumor progression [74,75]. In addition, studies have reported the occurrence of inactivating p53 mutations in fibroblastic stromal cells and their association with promoting tumor progression and cancer cell metastasis in several types of carcinogenesis [74], although the molecular mechanisms are still unclear. MYBL2 is a downstream effector of the p53 pathway [69]. With p53 mutations, MYBL2 repression is uncoupled, allowing enhanced binding of the molecule with MuvB and FOXM1 and leading to activation of mitotic genes [69,76].

Fig. 3 Network analysis for miR-29a predicted targets. Network interaction of miR-29a targets identified by RNAseq was constructed using the STRING database. The genes highlighted in black circles are the predicted miR-29a targets.
FOXM1 is an essential component of Akt signaling, which functions in the context of both the tumor stroma and cancer cells to promote tumorigenesis [77][78][79][80]. Interestingly, the Akt pathway is under inverse regulation by IGF-1 signaling [79,81,82]. Similarly, E2F7 is a crucial transcription factor, which promotes E2F1-p53 dependent apoptosis and cell-cycle arrest [68,83]. In our RNAseq data with miR-29a overexpressing PSCs, we found E2F1 to be one of the indirect downregulated targets. In addition, E2F7 has also been shown to be activated by Akt signaling in carcinomas [83][84][85]. Although the exact mechanisms of MYBL2 and E2F7 in PSCs are still to be understood, our results suggest that dysregulation of miR-29a in PSCs derepresses genes such as IGF-1, MYBL2 and E2F7, which may in turn disrupt stromal p53 regulation, promoting PSC-mediated tumor proliferation.
GO analysis showed that the direct and indirect miR-29a downregulated targets were enriched in crucial cellular and molecular functions associated with PDAC stromal remodeling and proliferation. The biological processes consisted of those related to cell cycle regulation, collagen formation, ECM organization and immune signaling (Table 2). Our study further identified interconnected networks comprising essential pathways in PDAC stromal regulation and desmoplasia (Table 3). Although a single miRNA is known to target hundreds of genes, resulting in their post-transcriptional repression, the predominant phenotypic effect of a miRNA can be systematically analyzed in a context-specific manner on the basis of the functional network of its differentially expressed targets. Our analysis using PSCs identifies a number of miR-29a target genes that are crucial players in PDAC stromal remodeling and tumor-stromal crosstalk, suggesting the importance of the molecule in regulating these pathways to modulate the PDAC microenvironment and tumor progression.
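For context, the over-representation p-values reported by most GO enrichment tools derive from a hypergeometric test; a minimal sketch with hypothetical counts is shown below, noting that dedicated tools typically add multiple-testing corrections.

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_background, n_annotated, n_study, n_overlap):
    """P(observing >= n_overlap study genes annotated to a GO term by chance)."""
    # Survival function at n_overlap - 1 gives P(X >= n_overlap).
    return hypergeom.sf(n_overlap - 1, n_background, n_annotated, n_study)

# Hypothetical counts: 20,000 background genes, 400 annotated to
# "extracellular matrix organization", 250 miR-29a targets, 25 overlapping.
p = go_enrichment_p(20000, 400, 250, 25)
print(f"enrichment p-value: {p:.2e}")  # small p => over-representation of the term
```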
Conclusion
The current study is the first to use an RNAseq platform for a comprehensive characterization of the PSC transcriptome under the regulation of miR-29a. In PDAC, activated PSCs foster cancer cell migration via a desmoplastic reaction characterized by increased collagen, laminin and other ECM deposition, resulting in fibrosis. Our data identified altered expression of a number of novel genes under miR-29a regulation, including IGF-1, COL5A3, CLDN1, E2F7, MYBL2, ITGA6 and ADAMTS2, and related pathways, such as the insulin-IGF, RAS/MAPK, laminin and collagen pathways, in PSCs that are dysregulated in, or associate with, PDAC tumor-stromal crosstalk and ECM remodeling. Given the functional relationship among the identified miR-29a targets in our PSC dataset, it is likely that restoration of miR-29a in PSCs will strengthen the interconnected tumor-suppressive networks and diminish the pro-tumorigenic ones in the PDAC microenvironment, causing global regulation of the network functions to hinder disease progression. Since our conclusions are primarily based on computational analysis, future investigations aimed at delineating the mechanistic relationship of miR-29a, its targets and related pathways in PSCs as well as cancer cells would allow for a deeper comprehension of the associated pathological changes in tumor-stromal crosstalk in PDAC. This would be essential to assess the therapeutic modalities of miR-29a and its target networks in the disease. Nonetheless, our data in the current report identify novel genes and networks under the regulation of miR-29a in PSCs, bolstering an anti-tumorigenic function of the molecule in the context of the PDAC stroma. These findings suggest that targeted upregulation of miR-29a may hold great therapeutic value for efficacious PDAC treatment.
Additional file 1: Table S1. Primers for qPCR validation of differentially expressed genes in hPSCs.
Additional file 2: Table S2. Differentially expressed genes as identified by RNAseq analysis in miR-29a overexpressing hPSCs as compared to control cells. | 2020-07-13T14:21:02.326Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "44434955df1f6a2da2c2e7e20b548808e3359c44",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-020-07135-2",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "44434955df1f6a2da2c2e7e20b548808e3359c44",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
254469293 | pes2o/s2orc | v3-fos-license | Children’s Sex and the Happiness of Parents
Demographers are interested in sex preferences for children because they can skew sex ratios and influence population-level fertility, parenting behavior, and family outcomes. Based on parity progression ratios, in most European countries there are no sex preferences for a first child, but there is a strong preference for mixed-sex children. We hypothesize that mixed-sex preferences also influence parental happiness. Parents' disappointment with a second child of the same sex as the first could have negative effects on parents and children. We use longitudinal data from the German Socio-Economic Panel and the British Household Panel Study to examine parental happiness by the children's sex and analyze whether these effects differ by the parent's sex, age, nativity, and educational attainment. The results are only partially consistent with predictions from parity progression ratios. As expected, parental happiness does not depend on the sex of the first child. We find weak evidence suggesting that two boys decrease happiness, but the findings are not consistent across the German and British data or across subpopulations. Moreover, two girls do not reduce happiness. Although sex preferences influence fertility, they appear to have little impact on happiness, perhaps because of unobserved positive factors associated with having same-sex children.
Introduction
Demographers have long been concerned with parents' sex preferences for their children. Sex preferences can substantially increase the level of fertility if couples continue to have children until they have the desired number of sons, daughters, or the mix of the two that they want (Bongaarts and Potter 1983). More broadly, sex preferences are a public issue because son preference combined with sex-selection technologies has led to skewed sex ratios in many parts of Asia (Park and Cho 1995; Li et al. 2000; Johansson and Nygren 1991), and recently similar skews were documented among immigrant populations in Canada (Almond et al. 2013). Although there has been much less research on this topic in European countries, sex preferences still exist (Andersson et al. 2006; Brockmann 2001; Hank and Kohler 2000) and can be a very important factor affecting the level of fertility in low-fertility settings (Wood and Bean 1977; Bongaarts 2001). In fact, many studies have found a consistent preference for having at least one child of each sex in many European countries. These sex preferences are documented from either parity progression ratios, that is, the proportion going on to have an additional child based on the sex composition of existing children, or intentions to have another child given a particular sex composition (Andersson et al. 2006; Brockmann 2001; Hank and Kohler 2000).
In traditional contexts with strong son preferences, daughters often receive differential treatment in terms of nutritional and health resources within the household (Lundberg 2005; Strauss and Thomas 1995; Thomas 1994). There is little evidence of this type of sex-based resource discrimination in Europe and North America; this is thought to be because public and private pension provision has eroded son preference, parents can afford to treat all of their children well, and the gender revolution led to cultural change in childrearing practices. In fact, Taubman (1991) finds few differences in the treatment of boys and girls in terms of bequests, transfers, and education. Given the seemingly equal treatment of boys and girls, it is notable that there are differences in some aspects of parenting behavior and family outcomes based on the sex of children (Lundberg 2005). For example, in the USA, fathers spend more time with sons than daughters, and mothers with sons report greater marital happiness (Raley and Bianchi 2006). However, no research has investigated whether the preference for mixed-sex children in Europe translates into differential parental happiness.
In this paper, we examine whether we can learn about sex preferences from parents' happiness after a child is born, and whether parents' happiness maps onto their behavior as measured by parity progression. Prior research on European countries suggests that there is no preference for boys or girls, except after the birth of the first child, when the preference is for a child of the other sex. We therefore hypothesize that the sex of the first child does not influence parental well-being, but that if the first two children are of the same sex, parental well-being should drop. We test our predictions using nationally representative longitudinal data sets from Germany and Britain.
The topic is important because if parents are disappointed by the sex composition of their children (which parity progression ratios suggest is the case when the second child is the same sex as the first), the parents' subjective well-being and mental health could suffer and translate into negative effects on children. Prior research on fertility and parental well-being has focused on the association between the number of children, the timing of children, and whether the associations between the birth of a child and changes in well-being are permanent or transitory. The evidence is mixed. According to a review by McLanahan and Adams (1987), no study (at that time) had found that parents were better off by any conventional measure of well-being than childless people. A more recent review by Hansen (2012) also states that most of the evidence suggests that people are better off without having children. Research focusing directly on subjective well-being, or happiness, however, is not as bleak about the potential impact of children on happiness. Kohler et al. (2005) analyze Danish twins and document a strong increase in happiness among those who have one child. Clark et al. (2008) conduct a longitudinal analysis that combines all parities and find that while the birth of a child increases happiness, the impact is short-lived. Margolis and Myrskylä (2011) and Aassve et al. (2012) both show that the association between the number of children and happiness varies across countries and is sensitive to context. For our analysis, a particularly important study is Myrskylä and Margolis (2014), as their analysis covers the same two countries as the current paper, Germany and the United Kingdom, and its findings provide important motivation for our analysis. Myrskylä and Margolis (2014) analyze the longitudinal association between happiness and fertility and report the same global finding as Clark et al. (2008): children increase happiness only temporarily. However, the association varies by parity, so that although the increase in happiness associated with a first birth is large, the increase associated with the second birth is only about half that of the first. It is possible that the increase in happiness associated with a first birth is particularly large because, assuming no gender preferences, there can be no disappointment. With the second child, however, there is a close to 50 % chance of disappointment if the parents desire one of each sex. Margolis and Myrskylä (2015) also show that parental happiness is an important predictor of further parity progression, suggesting that the links between fertility and happiness are both short and long term and have consequences for further fertility behavior. However, it is also possible that even strong sex preferences do not translate into differences in parental happiness due to countervailing forces. Consider the explicit prediction of this paper that parents with two same-sex children are less happy than parents with one of each. Two same-sex children may also positively influence parental happiness. Potential mechanisms could include mundane issues such as increased possibilities to pass on clothes and other items from the older sibling to the younger. If two same-sex children elicit the desire to have an additional child, this may also influence happiness by binding the parents together through an unfinished agenda.
Finally, whatever disappointment there may be with the sex of the second child, this effect may be short-lived, as are the effects of many important life events (Clark et al. 2008).
Sex Preferences in Europe
Given that people are increasingly unlikely to claim sex preferences outright, demographers infer sex preferences from behavior. Parity progression analyses examine the proportion of parents who have another child given the number and sex combination of existing children. By measuring actual behavior, we can infer sex preferences. The second and less common way in which sex preferences are documented is by examining fertility intentions given the sex of existing children. Using survey data, one can estimate the likelihood of intending to have an additional child (or children) given the set of children that one already has.
Compared to research on children's sex preferences in Asia, Africa, and the Americas, there is much less research on sex preferences in Europe. There are two main findings from the European context. First, there are few sex preferences affecting progression to a second birth. For a first child, there is no effect of sex on parity progression in Denmark, Finland, Norway, and Sweden (Andersson et al. 2006). There is a slight preference for daughters in Portugal and in eastern Germany (Brockmann 2001; Hank and Kohler 2000) and a slight preference for boys in western Germany (Hank and Kohler 2000), although the West German finding was not replicated by Brockmann (2001). Second, there is a consistent desire to have mixed-sex families (Hank and Kohler 2000). This is found in Austria, Belgium, the Czech Republic, East Germany, Hungary, Italy, Latvia, Lithuania, Slovenia, Spain, Sweden, Denmark, and Switzerland, but not in Finland, France, West Germany, Norway, Poland, and Portugal (Andersson et al. 2006; Hank and Kohler 2000) or the UK (Dahl et al. 2006).
Consequences of Sex Preferences
At the macro level, sex preferences can have large effects on the population level of fertility, especially in low-fertility settings. Strong desires for a particular sex composition lead to substantially higher fertility. For example, if couples have children until they have at least one son, the TFR will be 1.94. If they continue until they have at least one daughter, it will be 2.06. If they want one daughter and one son, they will have on average three children (Bongaarts and Potter 1983).
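These TFR figures follow from a simple stopping-rule argument; the short simulation below reproduces them, assuming a probability of a male birth of roughly 0.515 (a conventional sex ratio at birth, not a quantity estimated in this paper).

```python
import random

def mean_children(stop_rule, p_boy=0.515, trials=200_000, max_births=25):
    """Average completed family size under a sex-composition stopping rule."""
    total = 0
    for _ in range(trials):
        boys = girls = 0
        while not stop_rule(boys, girls) and boys + girls < max_births:
            if random.random() < p_boy:
                boys += 1
            else:
                girls += 1
        total += boys + girls
    return total / trials

print(mean_children(lambda b, g: b >= 1))             # ~1/0.515 ≈ 1.94
print(mean_children(lambda b, g: g >= 1))             # ~1/0.485 ≈ 2.06
print(mean_children(lambda b, g: b >= 1 and g >= 1))  # ~3 (one of each)
```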
The micro-level effects of sex preferences have received less attention but are not unimportant. Fathers' investments in children tend to be somewhat higher in families with sons; fathers spend more time with sons than daughters, and they more often stay in a marriage if there are sons (Raley and Bianchi 2006). There are two reasons for these patterns. First, parent-child relationships may be symmetric or asymmetric based on the gender of parents and children, especially if there is gender differentiation within the family (Williamson 1976). While the differences in labor market outcomes by sex in Europe have been declining, there continue to be important differences in work hours (Antecol 2000; Sayer 2005) and also in how much family and care work adult children do for their parents (Bolin et al. 2008; Haberkern and Szydlik 2010). Therefore, parents might want one daughter and one son in order to ensure a child filling each of these roles. Another reason is that parents might derive different pleasure from seeing sons and daughters grow up and doing certain activities with them (Bulatao 1981; Friedman et al. 1994; Hoffman and Manis 1979). For example, some activities, such as sports, outdoor activities, or shopping, may be enjoyed more with sons or with daughters. Parents' happiness could differ if they get to do more or less of the things they value, based on the sex combination of the children. In addition, parents may value variation.
The Current Study
In this paper, we examine three research questions. First, we examine whether the sex of the first child or the sex of the first two children affects parity progression in the UK and Germany. Here we replicate the findings from prior research on this topic. Second, we examine whether the sex of the first child and of the first two children affects parents' happiness. Our first hypothesis is that parents' happiness will vary based on children's sex, and that these differences will mirror the preferences for mixed-sex families that we observe in parity progression. An alternative hypothesis is that there will be no effect of children's sex on parents' happiness. Perhaps there will be a short-term effect when the child's sex becomes known, but parents will get over this short-term disappointment and we will see few strong medium-term effects on happiness. We test whether sex preferences are evident in each year until the children are ten years old.
Third, we examine whether the effects of children's sex on parental happiness differ by the parent's sex, education level, nativity, and age. Parents may derive more enjoyment from doing activities with children of the same sex; therefore, we test whether men and women have different happiness responses to different sex combinations of children. Sex preferences may also be more or less distinct by the education level of parents: those with less education and more entrenched traditional gender roles may have stronger sex preferences for children than those with less traditional gender norms. The effect of children's sex on the happiness of parents might also vary by nativity. We would expect immigrants to Europe to have stronger son preferences if they come from societies where such beliefs are widespread. Finally, we analyze the sex preferences revealed by parental happiness by the age at which one becomes a parent or has the second child. Assuming parents desire one of each sex, a second child of the same sex as the first may be a particularly large disappointment for older parents, who have less time and fewer opportunities to have a third child.
Data
We use the German Socio-Economic Panel (SOEP) and the British Household Panel Survey (BHPS). The SOEP is a representative longitudinal study of households including Germans living in the old (West) and new (East) German states, foreigners, and recent immigrants to Germany (Wagner et al. 2007). The SOEP started in 1984, with the new German states added in 1991. The BHPS is an annual survey that started in 1991, consisting of a nationally representative sample of households. We chose these two countries as the context for the study because they are two of the largest countries in Europe; together, they cover a broad range of fertility regimes, from lowest-low fertility (Germany had a TFR below 1.3 in the early 1990s) to moderately high modern European fertility (the UK has had a TFR between 1.6 and 1.9 since 1991); they both have large immigrant populations with which to test whether patterns vary by nativity; and they have high-quality data with which to test our hypotheses. Both data sets are very large, representative surveys with many measures of happiness, life satisfaction, and the number and sex composition of children. The surveys also have long enough follow-up to examine short- and medium-term effects of children's sex on parental happiness and parity progression.
Our analysis draws on survey waves from 1984 to 2013 in Germany and 1991-2012 in the United Kingdom. For the parity progression analysis, we include individuals whose first or second birth was observed within the time window. In the SOEP, there were 6001 observed first births and 2988 observed second births. In the BHPS, the corresponding numbers are 7012 first births and 3113 second births. These are the respondents for whom children's sex might have some effect on happiness or future reproductive behavior. After exclusion of those who had twins, triplets, or missing data, the resulting sample sizes are 4810 respondents (SOEP) and 5042 (BHPS) for progression to parity two, and 2482 (SOEP) and 2013 (BHPS) for progression to parity three.
Sample sizes for the happiness regressions are larger than in the parity progression analysis because we are able to include persons who had children before they enrolled in the sample. For example, if someone had a child 9 years before entering the survey, we observe this person's happiness 9 years after the birth and include them in the analysis for all lags of less than 10 years. The sample sizes for the happiness regressions are therefore 5532 and 5299 persons for the first birth analysis in the SOEP and BHPS, respectively, and 4115 and 4247 persons for the second birth analysis in the SOEP and BHPS, respectively.
To answer the question of how the sex of the child or children influences happiness, we analyze happiness from one year before the birth of the first or second child (to capture effects during pregnancy, when the sex of the child may already be known) until 10 years after the birth. With an increasing lag, the sample sizes get smaller. To increase statistical power, we also include parents whose children were born before the observation period. Figure 1 illustrates the sample selection process for both the parity progression analysis and the happiness analyses. The samples are approximately equal in the analysis of parity progression and of happiness in the year when the child was born. For the analysis of happiness in the year before the child is born, the sample is slightly smaller. In the analysis of long-term effects of child sex on happiness, the longer the lag, the more recent parents are excluded and the more parents who had children in the years preceding the start of the survey are included.
Measures
We analyze two dependent variables: parity progression and subjective well-being of parents. Parity progression is derived from births of children, which are indicated by a change in the number of biological children reported in the birth biography data. The birth biography data also include the sex of the child. We exclude stepchildren or adopted children from the analysis because they are not observed in the data.
We use two slightly different questions to measure parental well-being in the German and British data. In the German sample, our measure is based on the question, "How satisfied are you with your life, all things considered?", with responses ranging from zero (completely dissatisfied) to ten (completely satisfied). In the British sample, parental well-being is measured with the question, "Have you recently been feeling reasonably happy, all things considered?", with responses ranging from one (much less happy than usual) to four (more happy than usual). The BHPS also includes another question on happiness: "How dissatisfied or satisfied are you with your life overall?", with answers ranging from one (not satisfied at all) to seven (completely satisfied). The latter life satisfaction question, however, was not asked consistently across waves, so we focus on the general happiness question, which was measured consistently through all the BHPS waves. Although questions about happiness and life satisfaction are not exactly the same, they both capture positive aspects of subjective well-being. Moreover, prior research suggests that associations between childbearing and these two subjective well-being variables are highly similar (Myrskylä and Margolis 2014). We rescale the 4-point general happiness variable used in the BHPS, which takes the values 1, 2, 3, and 4, to range from zero to ten (by subtracting 1 and multiplying by 10/3) to allow comparison of the magnitude of the coefficients across the BHPS and SOEP.
Other variables used are the sex of the parent, age at birth of the first or second child, education at the time of the birth of the first or second child, and country of origin. In the German data, education is measured in years; in the British data, it is measured with six categories. The different SOEP and BHPS variables are mapped into a dichotomous high/low education level. Country of origin is classified in both countries into native born versus non-native born; in the SOEP data, natives include those who migrated before 1949. The sample sizes are not large enough to distinguish between different countries of origin.
Method
Our methodological approach rests on the assumption that child sex is exogenously determined. While recent research has shown that in Western countries there may be subpopulations with skewed sex ratios at birth (Almond et al. 2013), suggesting sex-selective abortion, these deviations apply to small subpopulations. At the total population level, we argue that the assumption of an exogenously determined distribution of children's sex, while perhaps not strictly true, is still close to reality. Under this assumption, we do not need to worry much about unobserved selection or control variables. Therefore, our models are simple.
In the analysis of parity progression, we use Cox proportional hazard models to estimate the relative risk of having an additional child depending on the sex distribution of existing children. We estimate separate models for progression to parity two and progression to parity three. For progression to parity two, the key predictor is an indicator for whether the first child was a boy; only age at first birth is included as a control variable. For progression to parity three, we again control only for age at second birth, and the key predictors are whether the first two children were both boys, both girls, or one of each. In both regressions, individuals are censored at the end of the survey or at age 45, whichever happens first. To analyze whether the impact of child sex on further parity progression depends on individual characteristics, we also estimate models that interact child sex with the sex of the parent; education (high vs. low); age of the parent (<25, 25-34, 35+); and country of origin.
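A minimal sketch of the parity-three progression model, written with the lifelines library on simulated data, illustrates this specification; the variable names and the simulated hazard ratio are hypothetical stand-ins for the SOEP/BHPS variables, not this paper's estimates.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
sex_mix = rng.choice(["mixed", "two_boys", "two_girls"], size=n, p=[0.5, 0.25, 0.25])

# Hypothetical data-generating process: same-sex children raise the rate
# of a third birth by roughly a factor of 1.4 relative to mixed-sex children.
base_rate = 0.08 * np.where(sex_mix == "mixed", 1.0, 1.4)
time_to_birth = rng.exponential(1 / base_rate)
censor_time = rng.uniform(1, 15, size=n)  # end of survey or reaching age 45

df = pd.DataFrame({
    "duration": np.minimum(time_to_birth, censor_time),
    "event": (time_to_birth <= censor_time).astype(int),  # 1 = third birth observed
    "two_boys": (sex_mix == "two_boys").astype(int),      # reference: mixed sex
    "two_girls": (sex_mix == "two_girls").astype(int),
    "age_second_birth": rng.uniform(20, 40, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # exp(coef) for two_boys/two_girls ~ hazard ratios as in Table 2
```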
The analysis of the effect of children's sex on happiness is based on a series of regressions in which a separate model is estimated for happiness before and after the birth of the child. To capture the potential effect on happiness during pregnancy, we estimate a model in which happiness in the interview preceding the birth of the child (year "−1") is regressed on the sex of the child. A separate model is estimated for first births, using an indicator for whether the child was a boy as the predictor, and for second births, using indicators for whether the children were both boys or both girls as predictors. Next, we estimate similar models regressing happiness on the sex distribution of children, but now analyzing happiness in the interview after the birth of the child (year "0"). We continue estimating such regressions until the 11th interview after the birth of the first or second child (year "10"). We then graph the point estimates of happiness over the years from −1 to +10 to analyze how, if at all, happiness changes in response to the sex of the children. As in the parity progression analysis, we control only for age at birth; in addition, we estimate interaction models by sex of the parent, age, education, and country of origin. We use linear regressions, since others have found that treating life satisfaction as ordinal versus cardinal makes little difference (Ferrer-i-Carbonell and Frijters 2004).
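The year-by-year happiness regressions can be sketched as follows with statsmodels; the data frame and variable names are hypothetical placeholders, and the rescaling line applies the BHPS transformation described in the Measures section.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
# Hypothetical long-format panel: one row per parent-year around a second birth.
df = pd.DataFrame({
    "years_since_birth": rng.integers(-1, 11, size=n),
    "two_boys": rng.integers(0, 2, size=n),
    "age_at_birth": rng.uniform(20, 40, size=n),
    "happiness_raw": rng.integers(1, 5, size=n),      # BHPS 1-4 scale
})
df["two_girls"] = np.where(df["two_boys"] == 0, rng.integers(0, 2, size=n), 0)
df["happiness"] = (df["happiness_raw"] - 1) * 10 / 3  # rescale 1-4 to 0-10

# One regression per lag, as described above; controls only for age at birth.
for lag in range(-1, 11):
    sub = df[df["years_since_birth"] == lag]
    fit = smf.ols("happiness ~ two_boys + two_girls + age_at_birth", data=sub).fit()
    print(lag, round(fit.params["two_boys"], 3), round(fit.params["two_girls"], 3))
```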
In the analysis of the impact of a same-sex versus mixed-sex distribution of children on happiness, we keep the parents who have a third birth in the analysis after the birth of the third child as well. This is important because otherwise the group that has two children would, over time, converge to a group that is simply content with the two children they have, as those who are not content move on to parity three, and this would potentially bias our estimates of the effect of the sex distribution of children on happiness.
Results
First, we examine sample characteristics for the BHPS and SOEP. Table 1 presents descriptive statistics including the sample sizes, demographic characteristics, and information about the happiness of respondents and children's sex. The sample sizes for the German and British data at the time of the first child are both larger than five thousand and are smaller for the analysis of the second child. Just over half of respondents have a boy as their first child, corresponding to normal sex ratios at birth that are slightly above 1.0. Among those that have two children, approximately a quarter have two boys, another quarter two girls, and half have one of each. Average levels of happiness are high, mostly around 7 on a 0-10 scale. Most births occur to respondents between the ages of 25 and 34, with smaller proportions to older and younger respondents. Between two-thirds and three-quarters of respondents go on to have a second child within ten years, and between 30 and 40 % of those having a second child go on to have a third. About four in ten Germans and two in ten of those in the UK have a high level of education at the time of a birth. Last, the majority of German respondents are native born, while about half of respondents in the UK are immigrants.
Next, we examine whether the sex of the first child, and the sex of the first two children, predicts progression to the next higher order birth. Table 2 presents results from Cox proportional hazard models predicting progression to a second and third birth, based on the sex of the existing children. The first two columns show the hazard of having a second birth based on the sex of the first child. We find that in both the UK and Germany there is no difference in the likelihood of having a second child based on the sex of child one. The hazard ratios are 1.03 and 1.04 and are not significantly different from 1.0.
We further examine whether there are differences in parity progression to a third birth, given having two boys or two girls, relative to having one child of each sex. In both countries, we find strong effects of sex composition on progression to a third birth. In the UK, there is a 34 % higher hazard of a third birth given that the first two children are boys and a 55 % higher hazard given two girls. In Germany, the respective hazards are 39 and 36 % higher for two boys and two girls than for mixed-sex children. In sum, there do not seem to be sex preferences in progression to a second child, but there are strong preferences for mixed-sex children, which appear as a higher likelihood of progression to a third birth.
Second, we examine the effects of children's sex on parents' happiness. Figure 2a-d presents the results graphically, for those who have one or two children and who may progress to a higher order birth, first in Germany and then in the UK. The lines chart the effects on happiness at each time period, where zero is the year in which the parent reports the child, and the series runs through ten years after the child is born. Figure 2a shows the effect on happiness of having a boy relative to a girl. There are no differences in Germany in parental happiness given the sex of the first child, as is evident from the absence of dots denoting significant differences from zero. Figure 2b examines the effect of the first child's sex on the happiness of British parents. There is just one time period in which a sex difference can be detected: one year after the child is reported, those reporting boys have slightly lower happiness than those reporting girls. But there are no differences found for any other years. Moreover, Figs. 3 and 4 show the results by parental characteristics, and these are as flat as those obtained with the full sample. Overall, these results suggest that there are no large sex preferences. Figure 2c, d charts the happiness of parents who have two boys (solid black line) and two girls (dotted gray line) relative to those with mixed-sex children. In Germany (Fig. 2c), those who have two boys have significantly lower happiness than those having one boy and one girl, but only in the year reporting the child and the year after. In contrast, those reporting two girls have slightly higher happiness two and five years after having a child than those with mixed-sex children. In the UK (Fig. 2d), there are no differences in happiness between those with two girls and those with one boy and one girl. However, those with two boys are slightly less happy than parents of mixed-sex children in the year before the child and the year after the child, and are then happier than mixed-sex parents five years after the birth. These differences are not large in magnitude and only appear in a small number of years.
We also estimated the effects of children's sex distribution on happiness by parental characteristics. The results are shown in Figs. 3 and 4 of the "Appendix". We note that in the SOEP, the decrease in happiness associated with two boys is most consistently observed among those aged 25-34 (a significant decline in the year before, the year of, and the year after the birth), and the magnitude of the happiness decline is largest among immigrants. These results are in contrast to the BHPS data, which show no decline in happiness among parents aged 25-34 who had two boys; and among migrants, the decline is observed only in the fourth year after the birth. Looking at the effect of two girls on happiness, the SOEP results suggest slight increases in happiness rather than decreases. However, none of the observed patterns of increased happiness following the birth of a second girl are replicated in the BHPS data.
Overall, the results are largely mixed, and in the handful of cases where the point estimates are significant, the magnitudes are not very large. Moreover, the significant effects are in most cases observed only for two boys and not for two girls, or vice versa, and in only one of the data sets. This is in sharp contrast to the parity progression results, which were consistently in line with prior research and expectations. This leads us to conclude that the differences in happiness generated by variation in the sex distribution of children are quite small. In sum, although we expected parents with same-sex children to be less happy, there is little evidence to support this.
Discussion
Demographers are interested in sex preferences for children because they can skew sex ratios and influence population level fertility, parenting behavior, and family outcomes. In modern European countries, where gender equality is relatively high and parents' reliance on children in older age has been eroded by public and private pensions, strong preferences for mixed-sex children still emerge in parity progression. Based on parity progression ratios, in most European countries, there are no sex preferences for a first child, but a strong preference for mixed-sex children.
We hypothesized that mixed-sex preferences also influence parental happiness. The topic is important, as parents' disappointment with a second child of the same sex as the first could reduce their well-being; however, we found only small differences in happiness following the birth of a second child, and these appear in only a very small number of years. We also tested whether the results are stronger among subpopulations of parents (by age, education, sex, and nativity). For this analysis, the results were mixed and of very small magnitude. While mixed-sex preferences influence fertility (the likelihood of having a third birth), they have little impact on parental happiness. The fact that there are no differences in parental happiness by the sex of a first child maps onto the fact that there are no differences in parity progression to a second child. However, there is a divergence between the results for parental happiness and parity progression after a second birth. It is possible that even strong sex preferences do not translate into differences in parental happiness due to countervailing forces, as two same-sex children may also positively influence parental happiness. On the one hand, parents may be disappointed to have a second child of the same sex as the first. But on the other hand, this disappointment may be short-lived, as has been found for many other life events (Clark et al. 2008). It could also be that positive factors emerge from having same-sex children, such as the ability to reuse gendered children's items or participate in similar gendered activities. Another potential explanation for the disparity between the small effect of sex composition on happiness and the large effect on subsequent childbearing is that parents perceive that they will benefit in the long run from having at least one child of each sex. Parents' thoughts about long-run happiness and the sex composition of children can be tested with other data, but could not be tested in our study.
We hypothesized that two children of the same sex would decrease parental happiness, as parity progression ratios after two children suggest disappointment with the sex of the second child. We consider the null finding, that is, little if any effect on parental happiness, to be positive. A predicted (but not observed) decline in parental happiness would have indicated a decline in the well-being of the parent and, indirectly, perhaps also a decline in the well-being of the child or both children, as parental well-being is an important determinant of child development. The lack of such a decline in parental well-being is a welcome null finding, as it suggests that parents are happy with the children they have, regardless of their sex.
"year": 2016,
"sha1": "fa62f8c8d49d888f82e874b72580bee907a9d1e9",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10680-016-9387-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "fa62f8c8d49d888f82e874b72580bee907a9d1e9",
"s2fieldsofstudy": [
"Psychology",
"Economics"
],
"extfieldsofstudy": []
} |
257741646 | pes2o/s2orc | v3-fos-license | Design and Identification of a Novel Antiviral Affinity Peptide against Fowl Adenovirus Serotype 4 (FAdV-4) by Targeting Fiber2 Protein
Outbreaks of hydropericardium hepatitis syndrome caused by fowl adenovirus serotype 4 (FAdV-4) with a novel genotype have been reported in China since 2015, with significant economic losses to the poultry industry. Fiber2 is one of the important structural proteins on FAdV-4 virions. In this study, the C-terminal knob domain of the FAdV-4 Fiber2 protein was expressed and purified, and its trimeric structure (PDB ID: 7W83) was determined for the first time. A series of affinity peptides targeting the knob domain of the Fiber2 protein were designed and synthesized on the basis of the crystal structure using computer virtual screening technology. A total of eight peptides were screened using an immunoperoxidase monolayer assay and RT-qPCR, and they exhibited strong binding affinities to the knob domain of the FAdV-4 Fiber2 protein in a surface plasmon resonance assay. Treatment with peptide number 15 (P15; WWHEKE) at different concentrations (10, 25, and 50 μM) significantly reduced the expression level of the Fiber2 protein and the viral titer during FAdV-4 infection. P15 was found to be an optimal peptide with antiviral activity against FAdV-4 in vitro and no cytotoxic effect on LMH cells at concentrations up to 200 μM. This study thus identified a class of affinity peptides, designed using computer virtual screening technology, that target the knob domain of the FAdV-4 Fiber2 protein and may be developed as a novel, potentially effective antiviral strategy for the prevention and control of FAdV-4.
Introduction
Fowl adenoviruses (FAdVs), belonging to the Aviadenovirus genus of the Adenoviridae family, are grouped into 5 species (FAdV-A to -E) and further divided into 12 serotypes (FAdV-1 to 8a and 8b to 11) on the basis of restriction enzyme digestion profiles and serum cross-neutralization assays [1,2]. Hepatitis-hydropericardium syndrome (HHS) associated with fowl adenovirus serotype 4 (FAdV-4) infection was first reported in Angara Goth, Pakistan, in 1987 and is known as Angara disease [3]. HHS subsequently spread to other Asian countries, where it became endemic, but also to some Arabian countries and parts of Latin America [4], including reports from the Middle East [5], Russia [6], Slovakia [7], Central and South America [8,9], Japan [10], and Korea [11]. HHS has been commonly reported to be caused by FAdV-4 in China and has been associated with high mortality.
Reagents and Antibodies
His-tag monoclonal antibody was purchased from Proteintech (Wuhan, China). The rabbit polyclonal antibody and the mAb 5A5 against the FAdV-4 Fiber2 protein were prepared in our laboratory. The affinity peptides were screened with molecular docking in our laboratory using SYBYL-X 2.1.1 software. Additional reagents, including DAPI, were purchased from Beyotime (Shanghai, China). Positive chicken serum against FAdV-4 was prepared in our laboratory.
Expression and Purification of the Knob Domain of the Fiber2 Protein
The knob domain of the FAdV-4 Fiber2 protein was determined to be aa 279-479 through homology comparison and modeling on the basis of existing crystal structures of adenovirus fiber proteins. The knob domain (amino acids 279-479) was expressed in E. coli BL21(DE3) cells by induction with a final concentration of 0.1 mM isopropyl-β-D-thiogalactoside (IPTG) overnight at 16 °C. The protein was first purified using a Ni-NTA column (Merck Millipore, Darmstadt, Germany) and further purified using a gel-filtration column, Superdex200 Increase 10/300 GL (GE, Fairfield, CT, USA). The purity and bioactivity of the protein were determined with 12.5% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blot with a His-tag monoclonal antibody (dilution 1:5000).
Crystallization, Data Collection, and Structural Determination of Fiber2
Crystallization of the knob domain of the FAdV-4 Fiber2 protein was carried out at room temperature (25 °C) using the sitting-drop vapor-diffusion method, with an equal volume of the target protein at 15 mg/mL and various crystallization reagents from crystallization screening kits (Hampton). A picture of the FAdV-4 Fiber2 knob crystal is shown in Figure S1. The crystals were flash-frozen in liquid nitrogen using a cryoprotection solution containing 20% glycerol in the crystallization solution. X-ray data sets were collected at a wavelength of 0.97 Å and a temperature of 100 K on beamline BL19U1 at the Shanghai Synchrotron Radiation Facility (SSRF). All data sets were processed using HKL-2000 [40]. The structure was determined with molecular replacement using Phaser [41], with the structure of a chicken embryo lethal orphan (CELO) short-fiber knob (PDB ID: 2VTW) as the search model. The structure was refined using phenix.refine [42] and manually adjusted with Coot [43]. The atomic coordinates and structure factors of the knob domain of the FAdV-4 Fiber2 protein have been deposited in the Protein Data Bank (PDB ID: 7W83). The statistics of data collection and refinement are summarized in Table 1.
[Table 1. Data collection and refinement statistics for the crystal of the knob domain of FAdV-4 Fiber2; Ramachandran disallowed regions, 0.8%; PDB ID, 7W83. a R_merge = Σ(|I − <I>|)/Σ(I), where I is the observed intensity and <I> is the average intensity of all measured observations equivalent to reflection I. b Numbers in parentheses represent values in the highest-resolution shell.]
Analytical Ultracentrifugation
Sedimentation velocity experiments were performed in a Beckman Coulter ProteomeLab XL-A analytical ultracentrifuge (Beckman Coulter Inc., Brea, CA, USA) at 25.0 °C and 50,000 rpm in standard two-sector cells using an An-60 Ti rotor. Samples were equilibrated in the rotor at 25.0 °C for at least 1 h prior to the collection of 144 scans over a 2.5 h period. Initial analyses were performed in SEDFIT using a continuous c(s) model with a resolution of 120 and S ranging from 0 to 15.
The Design and Virtual Screen of the Peptides
The design and screening methods for all the peptides have been described previously [44,45]. The original docking peptide conformations were designed using the Biopolymer/Build/Build Protein module in SYBYL-X 2.1.1 software on the basis of the crystal structure of the C-terminal knob domain of the FAdV-4 Fiber2 protein. A peptide library consisting of a series of random peptides of different lengths (2-9 amino acids) was then built after hydrogenation, MMFF94 charge addition, and energy gradient optimization. The best active pocket region was selected by analyzing the crystal structure of the knob domain of the FAdV-4 Fiber2 protein docked with the peptide library using the Surflex-Dock program in SYBYL-X 2.1.1. Finally, the docking results were comprehensively evaluated and analyzed using the software's scoring system. The affinity peptides were synthesized by standard Fmoc solid-phase peptide synthesis (Fmoc-SPPS) and purified using reversed-phase high-performance liquid chromatography (RP-HPLC). The syntheses of the affinity peptides were carried out by GL Biochem Ltd. (Shanghai, China).
Immunoperoxidase Monolayer Assay
FAdV-4 virus, at a multiplicity of infection (MOI) of 0.01, was premixed with 200 µM peptides at 37 °C for 1 h and then inoculated onto near-confluent LMH cell monolayers at 37 °C in 5% CO2. The virus inoculum was removed after 1 h, and the infected cells were maintained in DMEM with 2% FBS for 24 h. After removing the supernatant, the cells were washed twice with PBS and fixed with ice-cold methanol at −20 °C for 30 min. The plates were then blocked with 5% skim milk at 37 °C for 1 h, followed by incubation with positive serum against FAdV-4 at 37 °C for 1 h and HRP-conjugated rabbit anti-chicken IgG (H + L) at 37 °C for 40 min. Between each step, the plates were washed with PBST six times. The results were developed using an AEC Peroxidase Substrate kit (Solarbio) and observed with an inverted microscope.
Real-Time Quantitative Polymerase Chain Reaction
The FAdV-4 copy number was determined using a real-time quantitative polymerase chain reaction (RT-qPCR). Viral DNA was extracted from the cells with a TaKaRa Mini Viral RNA/DNA Extraction kit Version 5.0 (TaKaRa Biotechnology Co., Ltd., Dalian, China) according to the manufacturer's protocols. The FAdV-4 Hexon gene was used as an indicator for the presence of viral DNA. The standard positive plasmid was serially diluted 10-fold from 10^-2 to 10^-8 to generate the standard curve, and the copy numbers of the samples were quantified using the absolute quantification method. The primers for the RT-qPCR were designed using Primer 5.0 software (Table 2). The standard curves were generated, and the quantity of viral DNA in the samples was calculated.
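A minimal sketch of the absolute quantification step is given below: Cq values from the 10-fold plasmid dilution series define a linear standard curve, from which sample copy numbers are interpolated. All numerical values are hypothetical placeholders rather than data from this study.

```python
import numpy as np

# Hypothetical standard curve: Cq values for plasmid standards at 10^2..10^8 copies.
log10_copies = np.arange(2, 9)                       # 10-fold dilution series
cq_standards = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.5, 13.2])

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(log10_copies, cq_standards, 1)
efficiency = 10 ** (-1 / slope) - 1                  # ~1.0 means 100% PCR efficiency

def copies_from_cq(cq):
    """Interpolate the absolute copy number of the Hexon target from a sample Cq."""
    return 10 ** ((cq - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.2%}")
print(f"sample with Cq 21.0 ≈ {copies_from_cq(21.0):.2e} copies")
```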
Surface Plasmon Resonance Assay
The equilibrium dissociation constant (KD) of each peptide was determined using a Biacore X100 instrument (General Electric Company, Fairfield, CT, USA). All experimental procedures were performed in accordance with the Biacore X100 manual. The knob domain of the FAdV-4 Fiber2 protein was covalently coupled to a CM5 chip using the EDC/NHS method at the optimal pH, which was determined in a pre-experiment. The running buffer (HBS-EP, pH 7.4, filtered with a 0.22 µm filter) was then allowed to flow over the CM5 chip. The peptides to be tested were diluted with HBS-EP in 2-fold serial dilutions and injected from low to high concentration to observe the response signal changes in real time. In each cycle, the peptide solution flowed over the chip for 120 s at a constant flow rate of 30 µL/min, after which HBS-EP was injected for 120 s to dissociate the peptides from the protein. Then, 0.25% SDS was used to completely elute the peptides. Ultimately, the kinetic dissociation constants of the binding reactions were calculated and analyzed using Biacore X100 Evaluation software, version 2.0.2 (General Electric Company, Fairfield, CT, USA).
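As an illustration of the affinity calculation, an equilibrium (steady-state) analysis fits the plateau responses of the concentration series to a 1:1 binding isotherm, Req = Rmax·C/(KD + C); the sketch below shows such a fit with scipy on made-up response values, whereas the Biacore software additionally fits full kinetic models.

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(conc, rmax, kd):
    """1:1 Langmuir steady-state response: Req = Rmax * C / (KD + C)."""
    return rmax * conc / (kd + conc)

# Hypothetical 2-fold peptide dilution series (µM) and equilibrium responses (RU).
conc = np.array([0.78, 1.56, 3.125, 6.25, 12.5, 25.0, 50.0])
req = np.array([7.1, 13.0, 22.5, 35.0, 48.2, 58.9, 66.0])

(rmax, kd), _ = curve_fit(binding_isotherm, conc, req, p0=[80.0, 10.0])
print(f"Rmax ≈ {rmax:.1f} RU, KD ≈ {kd:.1f} µM")
```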
Cytotoxicity Assay
The cytotoxicity of the peptides toward LMH cells was assayed using the CCK-8 method. The LMH cells were seeded in 96-well plates at a density of 5 × 10^3 cells/well at 37 °C in 5% CO2. When the LMH cells formed a monolayer, the medium was discarded and replaced with FBS-free DMEM containing different concentrations of a peptide (0, 1.56, 3.125, 6.25, 12.5, 25, 50, 100, and 200 µM). Then, 10 µL of CCK-8 reagent was added to each well after 24, 48, and 72 h of incubation, and the plates were further incubated at 37 °C for 1 h. The absorbance of each well at 450 nm was read with a microplate analyzer.
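Cell viability from CCK-8 absorbances is conventionally expressed relative to untreated controls after blank subtraction; a short sketch with hypothetical OD450 readings is given below.

```python
import numpy as np

def viability_percent(od_treated, od_control, od_blank):
    """Percent viability relative to the untreated control, blank-corrected."""
    return 100.0 * (np.asarray(od_treated) - od_blank) / (od_control - od_blank)

# Hypothetical OD450 readings: triplicate wells at 200 µM peptide after 48 h.
print(viability_percent([1.21, 1.18, 1.25], od_control=1.24, od_blank=0.08))
# values near 100% indicate no detectable cytotoxicity at this concentration
```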
TCID50 and Indirect Immunofluorescence Assay
FAdV-4 virus (MOI = 0.01) was mixed with different concentrations of peptide at 37 °C for 1 h and then inoculated onto near-confluent LMH cell monolayers at 37 °C in 5% CO2. The virus inoculum was removed after 2 h, and the infected cells were maintained in DMEM with 2% FBS. Virus infectivity was assessed by measuring the 50% tissue culture infectious dose (TCID50) at various time points. Cytopathic effects (CPEs) were observed under the microscope, and TCID50 values were calculated by the Reed-Muench method. Meanwhile, virions in the infected cells were detected with IFAs at 24 h.p.i. IFAs were performed with the mAb 5A5 prepared in our laboratory (1:500 dilution) as the primary antibody and FITC-conjugated goat anti-mouse IgG as the secondary antibody. The cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI, Beyotime, Shanghai, China) for 5 min in the dark. The samples were observed with an inverted fluorescence microscope (Olympus, Tokyo, Japan). The expression level of the FAdV-4 Fiber2 protein was measured by Western blot using rabbit polyclonal antibodies against the FAdV-4 Fiber2 protein, and the results were visualized with ECL reagent. Images were obtained with a chemiluminescence imaging system (Fusion FX7; VILBER, Paris, France).
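Because the Reed-Muench endpoint calculation is purely arithmetic, a compact sketch is given below. The well counts are invented; only the method is as described.

```python
# Reed-Muench TCID50 calculation on invented well counts.
# Dilutions ordered from 10^-1 (strongest) to 10^-8, 8 wells per dilution.
log_dilutions = [-1, -2, -3, -4, -5, -6, -7, -8]
infected      = [8, 8, 8, 7, 5, 2, 0, 0]   # wells showing CPE (mock data)
total_wells   = 8

uninfected = [total_wells - i for i in infected]
# Accumulate: infected sum upward from the most dilute row, uninfected downward.
cum_inf   = [sum(infected[i:]) for i in range(len(infected))]
cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

# Locate the pair of dilutions bracketing 50% infection.
for i in range(len(pct) - 1):
    if pct[i] >= 50.0 > pct[i + 1]:
        pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])  # proportionate distance
        log_tcid50 = log_dilutions[i] - pd            # 10-fold dilution steps
        print(f"50% endpoint dilution = 10^{log_tcid50:.2f}")
        print(f"Titer = 10^{-log_tcid50:.2f} TCID50 per inoculum volume")
        break
```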
Statistical Analysis
All statistical analyses were performed using GraphPad Prism version 8.0 (GraphPad Software, San Diego, CA, USA). Three replicates were included in each experiment, and each experiment was independently repeated at least three times. Data are expressed as mean ± SEM. Statistical significance was determined with an unpaired t test when only two groups were compared or with a one-way analysis of variance (ANOVA) when more than two groups were compared. Significance was reported at p < 0.05 (*), p < 0.01 (**), p < 0.001 (***), or p < 0.0001 (****).
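For readers reproducing the analysis outside Prism, an equivalent sketch with SciPy is shown below; the measurement arrays are placeholders, not data from this study.

```python
# Sketch of the described tests with SciPy; the arrays are placeholder values.
import numpy as np
from scipy import stats

def stars(p):
    """Map a p value to the significance labels used in the paper."""
    for cutoff, label in [(1e-4, "****"), (1e-3, "***"), (1e-2, "**"), (0.05, "*")]:
        if p < cutoff:
            return label
    return "ns"

ctrl = np.array([5.1, 4.8, 5.3])
g1   = np.array([3.2, 3.5, 3.1])
g2   = np.array([2.1, 2.4, 2.0])

t, p_two   = stats.ttest_ind(ctrl, g1)       # two groups: unpaired t test
f, p_anova = stats.f_oneway(ctrl, g1, g2)    # >2 groups: one-way ANOVA
print(f"t test p={p_two:.4g} ({stars(p_two)}); ANOVA p={p_anova:.4g} ({stars(p_anova)})")
```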
Expression and Purification of Recombinant Protein Fiber2
The knob domain of the FAdV-4 Fiber2 protein was expressed in E. coli BL21(DE3) cells and purified on a Ni-NTA column. A schematic of the Fiber2 protein is shown in Figure 1A. The SDS-PAGE and Western blot results show that the protein was purified with an apparent molecular mass of 23 kDa (Figure 1B). The purified protein eluted at 14.78 mL from a Superdex200 Increase 10/300 GL column (Figure 1C), suggesting that the protein might exist as a trimer. Moreover, protein that was not boiled before SDS-PAGE migrated at about 65 kDa, indicating that the eluted protein was trimeric (Figure 1D). Sedimentation velocity analytical ultracentrifugation (AUC) further confirmed that the molecular weight of the protein in solution was about 65.0 kDa, consistent with the native trimeric form of the protein (Figure 1E).
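The trimer inference from gel filtration rests on the roughly linear relationship between log molecular weight and elution volume over a column's separating range. The sketch below illustrates the estimate; the calibration standards and their elution volumes are hypothetical, since the paper does not report its calibration data.

```python
# Sketch estimating apparent molecular mass from a gel-filtration calibration curve
# (log10 MW is roughly linear in elution volume over the separating range).
# The calibration standards and volumes below are hypothetical, not the paper's.
import numpy as np

std_mw_kda = np.array([440, 158, 75, 44, 29])          # mock calibration proteins
std_ve_ml  = np.array([11.0, 13.2, 14.6, 15.6, 16.5])  # mock elution volumes

slope, intercept = np.polyfit(std_ve_ml, np.log10(std_mw_kda), 1)
apparent_mw = 10 ** (slope * 14.78 + intercept)        # elution volume from the paper

monomer_kda = 23.0
print(f"Apparent MW ~ {apparent_mw:.0f} kDa -> ~{apparent_mw / monomer_kda:.1f}-mer")
```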
Crystal Structure of the Knob Domain of FAdV-4 Fiber2
Optimal crystals were obtained with a reservoir solution containing 0.1 M sodium acetate trihydrate, pH 7.0, and 12% w/v polyethylene glycol 3350. The crystal structure was solved by molecular replacement using a monomer of the knob domain of the CELO short fiber as a search model (PDB ID: 2VTW). The asymmetric unit contained two monomers, which formed two trimers upon generation of a symmetry mate (Figure 2A). Each monomer formed an anti-parallel β-sandwich with a topology similar to other known structures of adenovirus fiber knob domains. The β-sandwich comprised two β-sheets, ABCJ and GHID (Figure 2B), following the nomenclature proposed for the knob domain of the HAdV-5 fiber [46].
Design and Synthesis of Peptides on the Basis of the Crystal Structure of the Knob Domain of the FAdV-4 Fiber2
A region of the knob domain of the FAdV-4 Fiber2 protein comprising 81 amino acids was selected as the docking active pocket on the basis of the structure obtained above (Figure 3). The docking active pocket included the top of the trimer and the region between the two chains (sequences are shown in Figure S2). A virtual library of linear peptides with a capacity of 24,000 was obtained after hydrogenation, MMFF94 charge addition, and energy gradient optimization. After molecular docking between the selected docking pocket and the peptide library with the Surflex-Dock program in SYBYL-X 2.1.1 software, the affinities between the peptides and the knob domain of the FAdV-4 Fiber2 protein were evaluated using the total score values, which were calculated with a consensus score function. Subsequently, the 30 peptides with the highest total score values were synthesized for subsequent experiments (Table 3).
Figure 3. The docking active pocket is colored in red. The trimeric protein is colored in gray.
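The final selection step reduces to ranking the docked library by total score and keeping the top candidates. The sketch below shows only that generic ranking step with invented peptide names and scores; the actual scores came from SYBYL-X's proprietary consensus scoring function.

```python
# Generic sketch of the selection step: rank docked peptides by total score, keep top 30.
# Peptide names and scores are invented placeholders for SYBYL-X consensus scores.
import random

random.seed(0)
library = {f"pep{i:05d}": random.uniform(0.0, 12.0) for i in range(24000)}  # mock scores

top30 = sorted(library.items(), key=lambda kv: kv[1], reverse=True)[:30]
for rank, (name, score) in enumerate(top30[:5], start=1):  # show the first few
    print(f"{rank:2d}. {name}  total score = {score:.2f}")
print(f"... {len(top30)} peptides selected for synthesis")
```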
Interaction between the Peptides and the Knob Domain of the Fiber2 Protein
To analyze the interactions between the peptides and the knob domain of the Fiber2 protein, in silico analyses were performed with the View/Surfaces and Ribbons/Create/MOLCAD module in SYBYL-X 2.1.1 software and with PyMOL. The hydrogen bonds (yellow dotted lines) between P15 and the knob domain of the FAdV-4 Fiber2 are shown in Figure 5A, and the docking figures of the other peptides are shown in Figure S4. The key amino acids at the binding site between P15 and the knob domain of the FAdV-4 Fiber2 were Glu-125, Pro-126, Ser-135, Val-138, Gly-140, Asn-146, and Thr-173. These hydrogen bonds contributed considerably to the interaction between P15 and the protein. The electrostatic and hydrophobic interactions between P15 and the active site of the knob domain of the FAdV-4 Fiber2 protein are shown in Figure 5B. Given the oppositely charged residues on the interaction surface, electrostatic attraction between the peptide and the protein may also play an important role in the binding of P15.
The hydrophobic force may not be the dominant force, as the interaction surface between P15 and the protein was neither strongly lipophilic nor strongly hydrophilic, as analyzed with SYBYL-X 2.1.1. Furthermore, surface plasmon resonance (SPR) assays were performed to determine the affinity between nine peptides (P1, P5, P6, P10, P14, P15, P25, P27, and P30) and the knob domain of the Fiber2 protein. Ovalbumin (OVA) was chosen as an unrelated control protein, and its interaction with P15 was also measured to evaluate the selectivity of the interaction between P15 and the knob domain of Fiber2. As shown in Figure S5A,B, the response level of OVA-P15 was between 0 and 4 RU, similar to that of OVA-HBS buffer; these responses therefore appear to be background signals, and the affinity fit did not yield a rational result (Figure S5C). In contrast, the KD value between P15 and the knob domain of the Fiber2 protein was 1.44 µM (Figure 5C). The results for the other peptides are shown in Table 4. The KD values of the other eight peptides with the knob domain of the Fiber2 protein ranged from 0.57 to 23.97 µM, indicating strong binding affinities of the peptides for the protein.
Cytotoxicity of P15 in LMH Cells
To analyze the effects of P15 at different concentrations on the viability of LMH cells at different time points, cytotoxicity assays were performed using the CCK-8 method. The LMH cells were treated with P15 at increasing concentrations (0, 1.56, 3.125, 6.25, 12.5, 25, 50, 100, and 200 µM), and cytotoxicity was evaluated with CCK-8 after 24, 48, and 72 h of incubation. The results indicate that P15 at any of the tested concentrations had no cytotoxic effect on the LMH cells for up to 72 h of incubation, compared with the untreated group (Figure 6).
P15 Inhibits the Infectivity of FAdV-4 In Vitro
To further validate the anti-FAdV-4 effect of P15, we analyzed FAdV-4 Fiber2 protein expression and virus proliferation at different time points. FAdV-4 virus (MOI = 0.01) was premixed with peptides at 37 °C for 1 h, and the mixture was then inoculated onto LMH cell monolayers at 37 °C for 1 h. The cells were maintained in fresh DMEM containing 2% FBS without peptides. The expression level of the Fiber2 protein at 48 h.p.i. was measured by Western blot. As shown in Figure 7A, Fiber2 protein expression in every P15-treated group (10, 25, and 50 µM) was reduced compared with the untreated group. The viral titers of FAdV-4 were significantly decreased in the P15 treatment groups compared with the untreated group at 24, 48, 72, and 96 h (Figure 7B). The results indicate that viral proliferation was inhibited in a dose-dependent manner during P15 treatment. Moreover, the results are consistent with those of the immunofluorescence assays (IFAs) (Figure 7C). Taken together, P15 showed a pronounced antiviral effect against FAdV-4 infection.
Discussion
FAdV-4 is the predominant etiological agent of HHS, which has imposed a heavy economic burden on the poultry industry. There has been abundant research on developing vaccines to prevent the disease but almost none on the development of antiviral drugs [47]. Therefore, an effective antiviral agent for preventing FAdV-4 infection is urgently needed. In this study, the C-terminal knob domain of the FAdV-4 Fiber2 protein was expressed and purified, the highly pure protein was crystallized, and its structure (PDB ID: 7W83) was determined. On the basis of this structure, affinity peptides that specifically target the knob domain of the Fiber2 protein were designed and characterized. Eight peptides with potential antiviral activity were identified by IPMA and RT-qPCR screening. Treatment with P15 significantly reduced Fiber2 protein expression and viral titers during FAdV-4 infection, as determined by Western blot, IFA, and viral titer experiments. Therefore, P15 (WWHEKE) was confirmed to have an antiviral effect on FAdV-4 in LMH cells in vitro.
With precise design, peptides can bind the active site of a target protein and thereby exert a defined function [45,48]. Peptides designed with molecular docking and computational virtual screening can be targeted to a protein site with high accuracy. Commonly used high-throughput screening methods include phage display, mRNA display, combinatorial chemistry, and computational virtual screening [49]. Compared with the other high-throughput screening methods, computational virtual screening has clear advantages, such as lower cost and shorter development time [50-52]. Programs for docking peptide ligands to proteins include AutoDock Vina [53], Glide [54], GOLD [55], Surflex-Dock [56], and GalaxyPepDock [57]. Surflex-Dock represents an advance in flexible molecular docking: one study showed that it matched the best methods in each category for root-mean-square deviation (RMSD) accuracy of docked ligands and for docking speed, and that its scoring was significantly more accurate, with a 5- to 10-fold lower false positive rate at an equivalent true positive rate [58]. Nevertheless, given the complexity of real docking problems, it may be necessary to try different docking methods to find the best one for each protein, and we chose the theoretically most appropriate method in this study. The same reasoning applied to the selection of the docking active pocket. The receptors of most adenoviruses, including HAdV-37 [59], HAdV-D26 [60], HAdV-5 [61], and CAV-2 [62], have been identified, and fiber-receptor complex structures have been determined [63,64]. It has been shown that the C-terminal domain interacts with the receptors [46,64,65]. However, the receptor interaction of FAdV-4 Fiber2 has not yet been reported. Because other adenoviruses usually engage their receptors through the knob domain, we selected this C-terminal knob domain, which is exposed on the virion surface and thus more likely to interact with other molecules, as the docking pocket. We targeted most of the top of the trimer and the region between two chains, one of several possible choices among regions of the knob domain of the FAdV-4 Fiber2; other locations remain potential sites. The favorable results indicate that our docking strategy was suitable and effective and suggest a route for designing antiviral peptides against FAdV-4 and other pathogens. Additionally, we acknowledge that our experimental results show no direct correlation between affinity and antiviral activity. Some peptides used in the study, such as P22, exhibited affinity for the Fiber2 protein but had no antiviral effect (Figure S6). We therefore infer that the antiviral effect of a peptide depends not only on the strength of its affinity but also on its target site.
Our work provides the crystal structure of the C-terminal knob domain of the FAdV-4 Fiber2 protein for the first time. Furthermore, structural characterization by gel-filtration chromatography and AUC revealed the trimeric form of the protein. The crystal structure showed that the asymmetric unit contained two monomers, which could form two trimers upon generation of a symmetry mate. Each monomer formed an anti-parallel β-sandwich with a topology similar to other known adenovirus fiber knob domains. As previously reported by Zhang et al. [32], changes in the tail and knob regions of Fiber2 result in virulence differences among strains. Thus, the crystal structure of the C-terminal knob domain of Fiber2 may provide a structural basis for studying differences in virulence among strains. Treatment with P15 significantly inhibited virus proliferation, probably through binding of the peptide to the C-terminal knob domain of the FAdV-4 Fiber2 protein. The binding site may be of great significance for studying the effect of Fiber2 on FAdV-4 proliferation.
Although P15 significantly inhibited the proliferation of FAdV-4 in LMH cells, as shown by IPMA, RT-qPCR, Western blot, IFA, and viral titer experiments, its inhibition mechanism remains unknown. Many antiviral drugs work by blocking one or more steps of the viral life cycle, including attachment to target cells [66], entry [67], replication [68,69], and release [70]; identifying the specific stage at which P15 exerts its antiviral activity would therefore help clarify its mechanism. Moreover, in vivo experiments are needed for a more comprehensive assessment of the antiviral potential of P15.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/v15040821/s1. Figure S1: The crystal picture of the FAdV-4 Fiber2 knob. Figure S2: The sequences of the docking active pocket; the sequences of the docking active pockets are colored in yellow. Figure S5: The response of P15 to OVA and HBS buffer and the affinity fit result. (A) The response level of OVA-P15 was between 0 and 4 RU. (B) The HBS buffer gave almost the same response as OVA-P15. (C) The affinity fit of OVA-P15 did not yield a rational result. Figure S6: (A) The real-time binding and fitting curve between the purified Fiber2 trimer protein and P22, obtained with an SPR assay. The KD value was fitted and calculated with the appropriate model using Biacore X100 Evaluation software, version 2.0.2 (General Electric Company, Fairfield, CT, USA). (B) The IPMA image of P22. Scale bars: 200 µm. | 2023-03-26T15:17:44.635Z | 2023-03-23T00:00:00.000 | {
"year": 2023,
"sha1": "8ab34928e5d3e8309e322251cc57f23d1e7f7159",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/15/4/821/pdf?version=1679640005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63141dd3386e043bbb14fe3d43f5325a45860129",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25462229 | pes2o/s2orc | v3-fos-license | Phospholipid Transfer Protein Deficiency Protects Circulating Lipoproteins from Oxidation Due to the Enhanced Accumulation of Vitamin E*
Vitamin E is a lipophilic anti-oxidant that can prevent the oxidative damage of atherogenic lipoproteins. However, human trials with vitamin E have been disappointing, perhaps related to ineffective levels of vitamin E in atherogenic apoB-containing lipoproteins. Phospholipid transfer protein (PLTP) promotes vitamin E removal from atherogenic lipoproteins in vitro, and PLTP deficiency has recently been recognized as an anti-atherogenic state. To determine whether PLTP regulates lipoprotein vitamin E content in vivo, we measured α-tocopherol content and oxidation parameters of lipoproteins from PLTP-deficient mice in wild type, apoE-deficient, low density lipoprotein (LDL) receptor-deficient, or apoB/cholesteryl ester transfer protein transgenic backgrounds. In all four backgrounds, the vitamin E content of very low density lipoprotein (VLDL) and/or LDL was significantly increased in PLTP-deficient mice, compared with controls with normal plasma PLTP activity. Moreover, PLTP deficiency produced a dramatic delay in the generation of conjugated dienes in oxidized apoB-containing lipoproteins as well as markedly lower titers of plasma IgG autoantibodies to oxidized LDL. The addition of purified PLTP to deficient plasma lowered the vitamin E content of VLDL plus LDL and normalized the generation of conjugated dienes. The data show that PLTP regulates the bioavailability of vitamin E in atherogenic lipoproteins and suggest a novel strategy for achieving more effective concentrations of anti-oxidants in lipoproteins, independent of dietary supplementation.
The oxidation theory of atherogenesis has received wide support from a number of different lines of evidence (1,2). In particular, treatment of hypercholesterolemic animals with a variety of potent synthetic anti-oxidants has resulted in inhibition of the progression of atherosclerosis (3). However, a direct relationship between the susceptibility of LDL to oxidation and the extent of atherosclerosis has not been found in all studies, and attempts to prevent atherogenesis by feeding diets enriched in "natural" anti-oxidants have provided mixed and sometimes disappointing results (2,3). Recently, it was shown that feeding large doses of vitamin E to apoE-deficient mice decreased the progression of atherosclerosis (4,5). However, with a few exceptions (6,7), the administration of vitamin E in human trials has been negative (8-12). An important issue that has not been addressed in such studies is the actual concentration of vitamin E in atherogenic lipoproteins. Recently, mice with α-tocopherol transfer protein deficiency were shown to have reduced vitamin E content in lipoproteins and moderately increased susceptibility to atherosclerosis (13). However, little is known of the physiological mechanisms regulating the turnover and levels of vitamin E in plasma lipoproteins.
The plasma phospholipid transfer protein (PLTP) mediates both net transfer and exchange of phospholipids between lipoproteins (14). PLTP can also bind and transfer several other amphipathic lipids, including unesterified cholesterol, diacylglycerides, and lipopolysaccharides (15). PLTP has been shown in vitro to facilitate the transfer of vitamin E from VLDL to HDL (16,17) and from lipoproteins into tissues (16,17), but it is not known whether PLTP regulates vitamin E levels in lipoproteins or tissues in vivo. PLTP knock-out (PLTP0) mice were recently shown to be resistant to atherosclerosis, in part related to decreased secretion and levels of apoB-containing lipoproteins (18). The decrease in secretion and levels of apoB lipoproteins was demonstrated by crossing the PLTP deficiency trait into apoE-deficient and apoB transgenic backgrounds. However, an anti-atherogenic effect of PLTP deficiency was also seen in LDL receptor knock-out mice, even though plasma levels of apoB lipoproteins were identical to controls, indicating an additional anti-atherogenic mechanism of PLTP deficiency. In this study we have investigated the hypothesis that PLTP has a physiological role in transferring vitamin E between lipoproteins: this hypothesis predicts an increased content of vitamin E in apoB-containing lipoproteins in PLTP-deficient mice and a decreased susceptibility to oxidation. Such findings would provide a plausible novel anti-atherogenic mechanism of PLTP deficiency, beyond the effects of lowering BLp levels (18).
Lipid and Protein Measurements-Total cholesterol, phospholipids, and triglycerides were assayed using commercially available enzymatic kits, i.e. CHOD-PAP (Roche Molecular Biochemicals), PAP 150 (BioMérieux), and Triglyceride (Roche Diagnostic Systems-Hoffman-La Roche) kits, respectively. Total lipid was calculated as the sum of cholesterol, phospholipids, and triglycerides. Proteins were measured using bicinchoninic acid reagent (Protein Assay Reagent, Pierce) (27).
α-Tocopherol Quantitation in Isolated Lipoproteins-Lipophilic compounds were extracted from lipoprotein fractions with an ethanol/hexane solution (1:3, v/v), as previously described (28). The hexane fraction was evaporated under nitrogen and finally recovered in a methanol-acetonitrile-chloroform solution (25:60:15, v/v). α-Tocopherol was assayed by high-performance liquid chromatography (29) on a Beckman Gold system equipped with a Brownlee Spheri-5 RP 18 column connected to a diode array detector (model 168, Beckman). α-Tocopherol acetate was added to each sample as an internal standard before the extraction.
α-Tocopherol Quantitation in Vascular Tissue-Mice were anesthetized by intraperitoneal sodium pentobarbital injection, and the thoracic and abdominal aorta was rapidly removed and transferred into a saline solution. After the loose connective tissue was carefully removed, arteries were homogenized in a micropotter with 500 µl of saline containing 50 mM ascorbic acid and 500 µl of an ethanolic solution containing 50 mg/liter butylated hydroxytoluene (BHT). The extraction procedure was conducted as previously described (30). Briefly, 1 ml of ethanol and 300 µl of 10 M KOH were added, and saponification was conducted at 80 °C for 30 min with intermittent shaking. The saponified solution was cooled on ice water and mixed with 4 ml of hexane, 2 ml of distilled water, and 10 µl of a 100 µg/ml ethanolic solution of tocopheryl acetate (Fluka 95250) as an internal standard. The mixture was shaken vigorously for 1 min, and the upper layer was collected after low speed centrifugation and evaporated. The extract was finally dissolved in 100 µl of methanol containing 5 mM ammonium acetate. α-Tocopherol was analyzed by liquid chromatography-mass spectrometry on a Nucleosil C18/5 µm, 2.0 × 250 mm column (Macherey-Nagel, Düren, Germany) using 5 mM ammonium acetate in CH3OH as the eluant at a flow rate of 0.4 ml/min. Positive ion electrospray ionization-mass spectrometry was performed on an MSD 1100 mass spectrometer (Agilent Technology, Waldbronn, Germany). The voltages of the aperture and capillary were set at 80 and 3500 V, respectively, and the flow rate of the drying gas was 8 liters/min. Ions at m/z 431 and 490 were used to measure α-tocopherol and tocopherol acetate, respectively. The α-tocopherol level was determined by comparison with a standard curve obtained with known amounts of α-tocopherol (Fluka 89550).
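Quantitation against a standard curve with internal-standard normalization follows a simple pattern: fit the analyte/IS peak-area ratio against known amounts, then invert for each extract. The sketch below illustrates this; all peak areas, amounts, and tissue weights are invented.

```python
# Sketch of internal-standard quantitation: fit a calibration of analyte/IS peak-area
# ratio vs. known alpha-tocopherol amount, then invert for a tissue extract.
# All peak areas, amounts, and tissue weights are invented for illustration.
import numpy as np

std_amount_ng = np.array([50, 100, 250, 500, 1000])       # spiked alpha-tocopherol
std_ratio     = np.array([0.11, 0.21, 0.54, 1.05, 2.12])  # area(analyte)/area(IS), mock

slope, intercept = np.polyfit(std_amount_ng, std_ratio, 1)

sample_ratio = 0.78                  # measured area ratio in an aorta extract (mock)
amount_ng = (sample_ratio - intercept) / slope
tissue_mg = 12.4                     # wet weight of the homogenized artery (mock)
print(f"alpha-tocopherol: {amount_ng:.0f} ng -> {amount_ng / tissue_mg:.1f} ng/mg tissue")
```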
Conjugated Diene Formation-LDL from LDLR0/PLTP0, LDLR0, apoBTg/CETPTg/PLTP0, and apoBTg/CETPTg mice was isolated by a two-step procedure: the ultracentrifugally isolated 1.006 < d < 1.063 plasma fraction was passed through a Superose 6 column on an FPLC system (31), and 1-ml fractions containing only LDL were pooled. In the case of apoE0/PLTP0 and apoE0 mice, we used total apoB-containing particles that were ultracentrifugally isolated from total plasma as the d < 1.063 fraction. Isolated lipoproteins were oxidized at 37 °C in the presence of either copper sulfate (5 µM) or 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH, Wako Pure Chemical Industries), and the formation of conjugated dienes was monitored at 234 nm over a 20-h period.
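Lag phases for such kinetics are conventionally estimated by intersecting the steepest tangent of the propagation phase with the initial baseline. A minimal sketch of that tangent construction is shown below on a synthetic A234 time course; the paper does not state which software performed its lag-phase determinations.

```python
# Sketch of lag-phase estimation from an A234 time course (tangent method).
# The sigmoidal curve below is synthetic, a stand-in for spectrophotometer data.
import numpy as np

t = np.arange(0, 1200, 5.0)                             # minutes
a234 = 0.2 + 0.6 / (1.0 + np.exp(-(t - 300) / 40.0))    # mock oxidation curve

slopes = np.gradient(a234, t)
i = int(np.argmax(slopes))                              # steepest (propagation) point
baseline = a234[:10].mean()                             # initial absorbance
lag = t[i] - (a234[i] - baseline) / slopes[i]           # tangent meets baseline
print(f"Estimated lag phase ~ {lag:.0f} min")
```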
Measurement of Anti-oxidized LDL Autoantibodies-Autoantibody titers to epitopes of oxidized LDL were measured as described previously (32). Diluted plasma samples were added to microtiter wells coated with either malondialdehyde-modified LDL (MDA-LDL) or copper-oxidized LDL. Bound autoantibodies were then detected with either anti-mouse IgG or anti-mouse IgM antibodies coupled to alkaline phosphatase and visualized in the presence of a chemiluminescent substrate, with results expressed in relative light units per 100 ms (32).
Preparation of Human Plasma Phospholipid Transfer Protein-PLTP was partially purified from fresh human plasma. All purification steps were performed on an FPLC system (Amersham Biosciences) according to the sequential procedure previously described (33). Briefly, the d > 1.21 g/ml plasma fraction was isolated by a 48-h, 45,000 rpm ultracentrifugation step performed in a 50-Ti rotor. The resulting infranatant was fractionated successively by hydrophobic interaction chromatography on a Phenyl-Sepharose CL-4B column (Amersham Biosciences) and by affinity chromatography on a heparin-agarose column (Amersham Biosciences), yielding approximately 1000-fold purification of PLTP as compared with the plasma d > 1.21 g/ml fraction (33).
Effect of PLTP Deficiency on the Plasma Distribution and Arterial Content of Vitamin E-In chow-fed mice in the C57Bl6 background, PLTP deficiency resulted in the redistribution of α-tocopherol among the plasma lipoproteins (Fig. 1). Although plasma levels of α-tocopherol did not differ significantly between PLTP0 and control mice (0.18 ± 0.01 versus 0.24 ± 0.02 mg/liter, respectively, NS), the α-tocopherol content of HDL was significantly reduced, and the α-tocopherol content of LDL was significantly increased, in PLTP-deficient animals (Fig. 1, top panel). A 2-fold increase in the α-tocopherol to lipid ratio was observed in LDL from PLTP0 mice compared with wild type controls (Fig. 1, bottom panel), whereas no significant change was observed in the α-tocopherol to lipid ratio in HDL, reflecting the reduced levels of HDL in PLTP0 mice. The level of vitamin E was decreased in aorta, with α-tocopherol to artery weight ratios that were significantly lower in PLTP0 mice than in control mice of both sexes (Fig. 2). These findings show an essential role of PLTP in determining vitamin E levels in lipoproteins and vascular tissues. They are consistent with the hypothesis that PLTP transfers vitamin E from apoB-containing lipoproteins (BLps) into HDL, and from lipoproteins into tissues (16,17).
Effects of PLTP Deficiency on Plasma Levels and Distribution of Vitamin E in Hyperlipidemic Mice-To determine whether accumulation of vitamin E in BLps might contribute to the athero-protective effect of PLTP deficiency (18), we further investigated the effects of the PLTP deficiency trait on plasma α-tocopherol levels and lipoprotein distribution in hyperlipidemic plasmas of LDLR0, apoE0, and apoB/CETPTg backgrounds. Total plasma α-tocopherol levels were significantly higher in LDLR0/PLTP0 mice than in LDLR0 mice (6.7 ± 0.9 versus 4.7 ± 0.7 mg/liter, respectively, p < 0.005). Moreover, lipoprotein analysis showed that concentrations of α-tocopherol were significantly increased in both VLDL and LDL but unchanged in HDL (Fig. 3). The increase in vitamin E in VLDL and LDL was demonstrated both as a plasma concentration and as a ratio to total lipids. Total α-tocopherol was dramatically increased in plasma of apoE0/PLTP0 mice compared with apoE0 mice (2.5 ± 0.4 versus 0.9 ± 0.2 mg/liter, respectively, p < 0.05), and there was a 5-fold increase in the α-tocopherol to lipid ratio in the VLDL fraction, the major atherogenic lipoprotein in these animals (Fig. 3). Even though the increase in plasma α-tocopherol did not reach statistical significance in apoBTg/CETPTg/PLTP0 mice compared with apoBTg/CETPTg mice (15.5 ± 1.6 versus 11.5 ± 2.4 mg/liter, NS), significant increases in the vitamin E content of the atherogenic lipoproteins (VLDL and LDL) were observed (Fig. 3). This result suggests that CETP, although related to PLTP, does not substitute for it in transferring vitamin E out of apoB-containing lipoproteins (34). These studies show a major role of PLTP in determining the concentration of vitamin E in atherogenic lipoproteins.
Effect of PLTP Deficiency on Conjugated Diene Generation in Copper-oxidized ApoB-containing Lipoproteins-To establish whether the increased vitamin E content of atherogenic lipoproteins from PLTP-deficient animals rendered them less susceptible to oxidation, we isolated the VLDL plus LDL fraction and measured the generation of conjugated dienes in the presence of either copper sulfate (Fig. 4, a, c, and d) or AAPH (Fig. 4b). The formation of conjugated dienes, monitored at 234 nm over a 20-h period, was remarkably delayed by PLTP deficiency in all genetic backgrounds (Fig. 4), and similar observations were made when lipoproteins were oxidized with either copper or AAPH (Fig. 4, a and b). The lag phase of conjugated diene formation was 45 ± 15 min versus 150 ± 30 min in LDL particles from LDLR0 and LDLR0/PLTP0 mice, respectively; 30 ± 15 min versus 75 ± 15 min in LDL from apoBTg/CETPTg and apoBTg/CETPTg/PLTP0 mice; and 45 ± 15 min versus 180 ± 30 min in VLDL+LDL particles from apoE0 and apoE0/PLTP0 mice. The differences were all highly significant (p < 0.001).
Effect of Exogenous, Purified PLTP on the Vitamin E Content and Oxidizability of ApoB-containing Lipoproteins from PLTP-deficient Mice-To confirm a direct role of PLTP in determining the distribution of α-tocopherol and the oxidizability of apoB-containing lipoproteins, plasma from LDLR0 or LDLR0/PLTP0 mice was incubated for 2 h at 37 °C in the presence or absence of purified exogenous PLTP. As observed previously, the oxidation susceptibility of LDL isolated from LDLR0/PLTP0 mice was markedly reduced compared with LDL from LDLR0 mice (Fig. 5). This difference was reversed by the addition of PLTP to the LDLR0/PLTP0 plasma. In parallel, the α-tocopherol content of the VLDL+LDL fraction from LDLR0/PLTP0 plasma was significantly reduced in the presence of PLTP, to levels similar to those observed in plasma from LDLR0 mice expressing normal levels of PLTP (Table I). Lag phase and α-tocopherol values did not vary significantly when LDLR0 samples were supplemented with PLTP, indicating that α-tocopherol was already equilibrated among the lipoproteins in mice expressing PLTP. The reversal of the abnormalities of vitamin E content and oxidizability of apoB-containing lipoproteins when PLTP-deficient plasmas were supplemented with purified PLTP proves that the observed effects (Figs. 3 and 4) are a direct consequence of PLTP action in plasma.
Effects of PLTP Deficiency on Circulating Levels of Anti-oxidized LDL Autoantibodies-Autoantibodies to epitopes of oxidized LDL are known to rise progressively over time in cholesterol-fed LDLR0 mice (35,36), their titer correlates with the extent of atherosclerosis (32,36,37), and the baseline titer of autoantibodies to malondialdehyde-modified LDL (MDA-LDL), a model of oxidized LDL, is a predictive marker of atherosclerosis (35,38). To determine whether the increased vitamin E content of apoB-containing lipoproteins in PLTP-deficient animals might be associated with decreased oxidation of LDL in vivo, we measured the titers of IgG and IgM autoantibodies against MDA-LDL and copper-oxidized LDL (Cu-LDL). In each of the three hyperlipidemic backgrounds, PLTP deficiency was accompanied by a significant reduction (50-81%) in the titer of IgG autoantibodies, using either MDA-LDL or Cu-LDL as model epitopes of oxidized LDL (Fig. 6). Such a drop in the autoantibody titer in PLTP0 animals was not systematically observed with the IgM isotype. Indeed, although the IgM titer was significantly reduced in apoE0/PLTP0 mice, titers were increased or unchanged in the LDLR0 and apoBTg/CETPTg backgrounds (Fig. 6).
DISCUSSION
We have previously shown that PLTP deficiency provides protection against atherosclerosis in apoE0 and apoBTg/CETPTg mice, due in part to decreased hepatic production of apoB and decreased plasma levels of atherogenic lipoproteins (18). However, PLTP deficiency also conferred protection in LDLR0 mice even though apoB levels were not decreased, suggesting that PLTP deficiency had other anti-atherogenic properties. The present study demonstrates a novel in vivo role of PLTP in determining the concentration of vitamin E in BLps and suggests that increased vitamin E in BLps represents an additional mechanism by which PLTP deficiency protects against atherosclerosis in mice. PLTP deficiency led to an increased concentration of vitamin E in VLDL and/or LDL, and the magnitude of the effect on the vitamin E to lipid ratio varied from an increase of 70% in LDLR0 mice to 500% in apoE0 mice. In normolipidemic mice there was also a decrease in vitamin E content in HDL and aorta. These results are consistent with a physiological role of PLTP in transferring vitamin E from VLDL to HDL and then into tissues. The accumulation of vitamin E in BLps in PLTP-deficient mice was the most consistent and dramatic effect, and it was associated with a marked reduction in the susceptibility of these particles to oxidative modification. This provides a cogent explanation for the previously observed anti-atherogenic effect of PLTP deficiency in LDL receptor KO mice, in which there was no change in BLp levels. Although the magnitude of the effect of vitamin E accumulation on atherogenesis appears to be only moderate (5,6), our findings illustrate the important principle that the concentration of anti-oxidants in the relevant BLps is determined by factors acting beyond the dietary intake. In addition to the reduced secretion of BLps, they identify an additional anti-atherogenic mechanism that can be anticipated from PLTP inhibition and further support the idea that PLTP inhibitors or combined PLTP/CETP inhibitors may have a role as anti-atherogenic drugs (18,39).
The role of different genes in the absorption, transport, and tissue uptake of dietary α-tocopherol (the ingested form of vitamin E with the greatest biological activity) has been elucidated by various genetic deficiency states. Thus, intestinal BLp assembly is essential for vitamin E absorption, as patients with abetalipoproteinemia become deficient in vitamin E (40,41). Following delivery of vitamin E in chylomicrons to the liver, vitamin E is incorporated into VLDL for hepatic secretion; the key role of the α-tocopherol transfer protein in this process is illustrated by human and murine genetic deficiency states (40-43). Based on in vitro studies, PLTP had been proposed to mediate the transfer of vitamin E from VLDL into HDL and from lipoproteins into tissues. The present study in PLTP-deficient mice provides direct evidence for an essential role of PLTP in these processes in vivo, and the most consistent finding in PLTP deficiency was the significant increase in the vitamin E content of VLDL and/or LDL. The plasma α-tocopherol transfer activity clearly reflects a specific property of PLTP. The involvement of the related plasma cholesteryl ester transfer protein (CETP) in this process was ruled out by demonstrating a similar vitamin E enrichment of BLps in apoB/CETPTg mice with PLTP deficiency. Reconstitution of PLTP in PLTP-deficient plasma indicated that the major effects of PLTP on vitamin E distribution in lipoproteins likely reflect a direct action in plasma. However, the incorporation of vitamin E into HDL and tissues still occurred in the absence of PLTP, indicating additional mechanisms of vitamin E transport. In this regard, both lipoprotein lipase (44) and the cellular LDL receptor (45) have been shown to contribute to the uptake of α-tocopherol by peripheral cells. In addition, the recent demonstration that ABCA1 can promote efflux of vitamin E from cells (46) suggests that hepatic ABCA1 could represent an alternative pathway for incorporation of vitamin E into HDL in the liver.
Recent studies have shown that PLTP deficiency reduces atherosclerosis in all three of the commonly used, atherosclerosis-susceptible hyperlipidemic mouse models (18). In part this was related to reduced secretion and levels of apoB-lipoproteins, but this could not explain reduced atherosclerosis in LDLR0/PLTP0 mice, where VLDL and LDL levels were unchanged. In the light of recent studies (18), high levels of vitamin E in apoB-containing lipoproteins from PLTP-deficient animals are a plausible mechanism to explain decreased atherosclerosis in LDLR0/PLTP0 mice, and increased vitamin E levels in BLps would likely contribute to decreased atherosclerosis in other mouse models. This is especially likely in the apoE0/PLTP0 mice, where a profound reduction in atherosclerosis was observed (18) and vitamin E content in VLDL was increased 5-fold (Fig. 3). Interestingly, quantitatively similar increases in the vitamin E content of apoB-containing lipoproteins and moderate decreases in atherosclerosis susceptibility were reported in apoE0 mice fed vitamin E-supplemented diets (4,5). In those studies, changes in atherogenesis were of similar magnitude to the apoB-independent, atheroprotective effects of PLTP deficiency (18). Although the relative decrease in atherosclerosis in LDLR0/PLTP0 mice was statistically significant only at the earlier time point (18), similar temporal effects of transgene expression have been repeatedly observed in mouse atherosclerosis studies, and differences in atherosclerosis in mice deficient in lymphocytes (47) or MCP-1 receptor (48), or overexpressing apoA-I (49), were much more marked at earlier than at later time points. One interpretation is that vitamin E is more important for the rate of lesion initiation than for the rate of lesion progression.
Consistent with the evidence that the increased content of vitamin E helped to retard atherogenesis through reduction in the generation of oxidized LDL, we noted a decrease in IgG autoantibody titers to both MDA-LDL and Cu-LDL. A close correlation between such antibody titers and both the extent of lesion formation and the level of oxidized LDL has been previously observed (32,50,51). The changes in titers of IgM autoantibodies were more variable, presumably reflecting the admixture of both T cell-dependent as well as non-T cell-dependent "natural" antibodies (52).
FIG. 5. Effect of the supplementation of total plasma with purified PLTP on the vitamin E content of LDL. Pooled plasmas (200 µl) were incubated for 2 h at 37 °C in the absence or in the presence of purified PLTP (10 µl of a partially purified, 2.5 µg/ml PLTP preparation). The LDL-containing fraction from distinct samples was isolated by ultracentrifugation, and the formation of conjugated dienes was monitored at 234 nm over a 20-h period. Each value corresponds to mean ± S.D. of three to four determinations, each of them conducted with pooled plasmas from three to four animals.
The oxidation theory of atherogenesis had its origins in an attempt to understand the mechanisms by which LDL could promote macrophage foam cell formation, because native LDL is not taken up in sufficient amounts to make foam cells (1). Oxidative modification of LDL facilitates its uptake into macrophages by scavenger receptors, such as SR-A and CD36 (53,54), and facilitates aggregation and uptake by additional pathways (55). The existence of oxidized epitopes in atherosclerotic lesions, as well as studies with CD36 and SR-A knock-out mice, has generally supported an important role of oxidative modification in atherogenesis.
Most animal studies with potent lipophilic anti-oxidants, such as probucol, have consistently shown a protective effect of these agents against atherosclerosis (reviewed in Ref. 3). However, the results of intervention studies with less potent vitamin anti-oxidants, such as vitamin E, have been mixed. In one study of vitamin E-fed apoE KO mice, the mice were dramatically protected from lesion formation; the plasma levels of vitamin E correlated inversely with the extent of atherosclerosis and with the urinary excretion, plasma, and arterial levels of F2 isoprostanes, which are decomposition products of lipid peroxidation (4). In contrast, most of the human studies, which have used lower doses of anti-oxidants, have been negative (3). Although there have been two small trials suggesting a benefit of vitamin E administration (6,7), five trials using vitamin E have been negative (8-12). A potential shortcoming of the human studies is that doses of vitamin E may have been too low to be effective and, equally important, that susceptible individuals may not have been studied. For example, Meagher et al. (56) recently showed that even high doses of vitamin E did not reduce isoprostane levels in healthy subjects. In contrast, vitamin E supplementation to subjects undergoing hemodialysis, a condition known to be associated with enhanced oxidative stress, was associated with a 50-70% reduction in cardiovascular events (7). These observations emphasize the lack of knowledge of the determinants of the bioavailability of anti-oxidants at relevant sites such as the apoB-containing lipoproteins. The present study indicates that PLTP represents one such factor determining vitamin E concentration in BLps. It is interesting to note that PLTP deficiency is athero-protective even though vitamin E content was moderately reduced in the aorta of PLTP0 mice. This finding indicates a major role of anti-oxidant concentration in BLps in determining atherosclerosis. Our data suggest that PLTP could play a role in the discordance observed between the susceptibility of LDL to oxidation ex vivo and the dietary intake of vitamin E (57). For instance, no significant relationship was noted between the dietary intake and plasma concentration of α-tocopherol in type 2 diabetics (58), a population with increased PLTP levels (59,60), decreased vitamin E content of LDL (61), and increased susceptibility of apoB particles to oxidation (61).
FIG. 6. Effect of PLTP deficiency on the circulating levels of anti-oxidized LDL autoantibodies. IgG (left panels) and IgM (right panels) levels of anti-oxidized LDL autoantibodies were determined using malondialdehyde-modified LDL (upper panels) or copper-oxidized LDL (lower panels) as models of oxidized LDL. Data are mean ± S.E. of n = 4 animals. *, p < 0.05 versus PLTP+/+; Mann-Whitney test.
In conclusion, in addition to the previously reported effect of PLTP deficiency on hepatic secretion of apoB-containing lipoproteins (18), the results of the present study provide another molecular mechanism by which PLTP deficiency protects against atherogenesis. By impairment of α-tocopherol transfer activity, PLTP deficiency results in the enhanced accumulation of vitamin E in atherogenic apoB lipoproteins and a resultant decrease in their susceptibility to oxidative modification. If PLTP plays a similarly important role in humans, then PLTP inhibition may be a novel strategy to decrease atherosclerosis. | 2018-04-03T01:01:45.002Z | 2002-08-30T00:00:00.000 | {
"year": 2002,
"sha1": "b266f46ac4ea1d73e4d1fd8f4211a7dd3a928a80",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/277/35/31850.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "cfb510c7a69a733f70179ea425b28edd2594443a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
244909772 | pes2o/s2orc | v3-fos-license | Disorder at the Start: The Contribution of Dysregulated Translation Initiation to Cancer Therapy Resistance
Translation of cellular RNA to protein is an energy-intensive process through which synthesized proteins dictate cellular processes and function. Translation is regulated in response to extracellular effectors and the intracellular availability of amino acids. Most eukaryotic mRNAs rely on the 7-methylguanosine (m7G) nucleotide cap to recruit the translation machinery, and the uncoupling of translational control that occurs in tumorigenesis plays a significant role in cancer treatment response. This article provides an overview of the mammalian translation initiation process and the primary mechanisms by which it is regulated. How deregulation of initiation supports tumorigenesis, and how initiation at a downstream open reading frame (ORF) of Tousled-like kinase 1 (TLK1) leads to treatment resistance, is then outlined.
INTRODUCTION
The process of translation initiation begins with the formation of two protein complexes that assemble in parallel and converge at the 5′ end of the mRNA. The ternary complex, which comprises eukaryotic translation initiation factor 2 (eIF2), GTP, and initiator methionyl-tRNA (Met-tRNAi), is the preliminary step in the assembly of the 43S pre-initiation complex (PIC). Together with the 40S ribosomal subunit and the eukaryotic translation initiation factors eIF1, 1A, 3, and 5, the ternary complex binds to form the PIC. Independently, a protein complex assembles at the 5′ end of the mRNA through recognition of the m7G cap by the cap-binding protein eIF4E. Recruitment of the RNA helicase eIF4A and the scaffolding subunit eIF4G results in the formation of the eIF4F complex. Binding of eIF4B to eIF4A stimulates unwinding of the mRNA immediately downstream of eIF4F, which facilitates loading of the PIC [1]. Through its interactions with eIF4E and the poly(A)-binding protein (PABP), eIF4G bridges the 3′ and 5′ ends of the mRNA, forming a closed-loop conformation that helps spatially localize the translation machinery for subsequent rounds of protein synthesis on the same mRNA [1] (Figure 1).
Recruitment of the PIC is facilitated by eIF4A-mediated unfolding of the mRNA and by the affinity of eIF3 for RNA and for eIF4G. The PIC is loaded onto the mRNA in an open conformation that is permissive to scanning for the start AUG trinucleotide, which can base-pair with the complementary sequence in Met-tRNAi [2]. The search proceeds linearly in the 5′-3′ direction, presumably because eIF4A unwinds RNA structures at the leading edge of the PIC while eIF4B at the trailing edge restricts reverse movement of the PIC. Identification of the first AUG in an optimal sequence context establishes stable pairing with the anticodon of Met-tRNAi, leading to the eviction of eIF1 and the hydrolysis of eIF2-GTP to eIF2-GDP by eIF5. Full engagement of Met-tRNAi at the P site of the 40S ribosomal subunit shifts the PIC to a closed state, which halts further scanning of the mRNA [2]. Subsequent release of eIF2-GDP and eIF5 makes way for joining of the 60S ribosomal subunit, catalyzed by the GTPase eIF5B, and for formation of the 80S translation initiation complex. Dissociation of eIF5B-GDP and departure of eIF1A signal entry of the translation initiation complex into the protein synthesis phase (Figure 1) [1-3].
Translation initiation is a highly regulated cellular activity that responds to the availability of molecular factors and nutrients and to hormone and stress signaling. It is tightly controlled at multiple steps in the process; described here are the major regulatory nodes and our current understanding of how deregulation of initiation shifts the cellular proteome toward one conducive to tumorigenesis.
REGULATION OF TERNARY COMPLEX FORMATION
Control of translation at the stage of ternary complex assembly is among the earliest in translation initiation. eIF2 is essential for loading of Met-tRNAi onto the 40S ribosomal subunit during assembly of the 43S PIC. Because Met-tRNAi has a higher affinity for GTP-bound than GDP-bound eIF2, the GDP-GTP exchange factor eIF2B is an important player in regulating ternary complex formation. eIF2 is a heterotrimeric protein composed of α, β, and γ subunits. Cellular kinases activated by stress, such as protein kinase R-like endoplasmic reticulum kinase (PERK) in the unfolded protein response, GCN2 in amino acid deprivation, RNA-activated protein kinase (PKR) in viral infection, and HRI in metabolic stress, can phosphorylate eIF2α at serine 51. GDP-bound phospho-eIF2-Ser51 avidly binds and sequesters eIF2B, reducing the pool of free eIF2B available for recycling eIF2-GDP to eIF2-GTP [4]. As a result, phosphorylated eIF2 tamps down global translation initiation in stress. Through interaction with eIF2 and protein phosphatase 1 (PP1), GADD34 facilitates the dephosphorylation of eIF2 by PP1 [5], which restarts protein synthesis during cell recovery.
Despite the repression of global translation initiation in stress, protein synthesis from stress-response transcripts remains largely unaffected. There are two prevailing mechanisms by which translation of these transcripts occurs. First, incorporation of phosphorylated eIF2 into the PIC reduces the scanning fidelity of the complex, leading to bypass of upstream start sites. ATF4 is a transcription factor critical to the expression of genes that drive a prosurvival cellular program in response to stress [6]. With two inhibitory upstream ORFs, translation of ATF4 is limited in non-stressed cells. However, phospho-eIF2 can contribute to leaky scanning, and initiation at downstream ORFs leads to synthesis of proteins such as ATF4 that would otherwise not be made in homeostasis [7]. Additionally, sequestration of eIF2B by phospho-eIF2 limits the supply of Met-tRNAi-engaged PICs, overriding initiation at upstream ORFs; association of the scanning 40S ribosomal subunit with a ternary complex farther downstream then initiates translation at the next ORF.
Second, the alternate initiation factor eIF2A competes with eIF2 for loading of initiator tRNA (tRNAi) on the 40S complex. eIF2 is the predominant player in translation initiation, but following its phosphorylation and sequestration, contribution of eIF2A to ternary complex formation increases significantly as it is refractory to eIF2 inhibitory kinases [8,9]. Although a direct demonstration of interaction is lacking, eIF2A can recruit an alternative initiator tRNA, Leu-tRNAi, and initiate translation at non-AUG triplets such as CUG and UUG that would otherwise be discriminated against. As alternative ORFs do not contribute significantly to the translational landscape in non-stressed cells, an incremental increase in eIF2A can lead to a relatively large increase in alternate ORF expression.
Certain stress-response transcripts evade translational repression through initiation at IRES or at sequence-specific elements in the 5′ UTR. Notably, a switch from cap-dependent to cap-independent translation of the prosurvival factors HIF-1α, VEGF, and BCL2 is induced by hypoxia [10]. Such initiation occurs independently of the canonical ternary complex and relies on eIF5B to deliver the tRNAi.
LENGTH AND COMPLEXITY OF 5 ′ UTR
When present in a favorable sequence context, translation begins at the first AUG near the 5′ end of the mRNA. However, the length and structural complexity of the 5′ UTR can influence the rate of translation initiation: initiation efficiency varies inversely with the length of the 5′ UTR and with its ability to self-anneal and form complex structures [3]. Self-assembly of single-stranded RNA into stable secondary structures impedes movement of the PIC and limits initiation at the projected start site. The helicase activity of eIF4A is vital for resolving short double-stranded regions at the 5′ end to allow loading of the 43S PIC, and mRNAs with structured 5′ ends rely on eIF4A for efficient initiation. However, eIF4A alone is insufficient to unwind structures of high complexity during scanning, and participation of cellular helicases such as DHX29 and DDX3 is vital for resolving them [11].
ROLE OF SEQUENCE IN START CODON SELECTION
Traveling downstream from the 5′ end of the mRNA, the PIC scans the sequence base-by-base to locate the start site. Translation often starts at the first AUG encountered; however, its selection as a start site is dictated by the context in which the trinucleotide resides [12]. When present within the Kozak consensus sequence, (A/G)CCAUGG, the trinucleotide is considered to be in an optimal context for translation initiation [13]. In particular, the nucleotide at the −3 position relative to the AUG determines the efficiency of start codon selection, with a preferential occupancy of an A in yeast and mammals [14]. Often an AUG in a poor context escapes detection by the scanning machinery in favor of a downstream AUG in a better contextual frame. A non-optimal sequence context, or near-cognate start triplets such as CUG or UUG in a favored consensus, can lead to inconsistencies in start codon use, and this variability in translation initiation accounts for part of the diversity of protein isoforms. By computational analysis, nearly half of the transcripts in mammalian cells have an upstream ORF, and ribosomal profiling suggests that many are translated in homeostasis [15]. In instances where the downstream AUG(s) is in-frame without an intervening stop codon, leaky scanning at the upstream AUG can yield protein isoforms that differ in the N-terminal region but are otherwise identical. N-terminally differing isoforms, when sorted into separate cellular compartments, can have distinct cellular targets and functions. On the other hand, an upstream AUG residing in a near-optimal sequence context significantly limits initiation of the main ORF from the downstream site. Control of initiation through uORFs regulates the translation of tumorigenic proteins such as ATF4 in homeostasis. A stop codon between the upstream and downstream AUGs, or an intervening RNA sequence prone to self-anneal into stable structures, can stall the PIC that skipped the upstream AUG, or stall ribosomes, attenuating reinitiation at the downstream ORF and yielding a paucity of the main protein isoform. Often this is not the case in normal cells, as ternary complexes are abundant and reassociate to form a translation-competent complex for initiation at the downstream start codon.
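The two most informative Kozak positions named above (a purine at −3 and a G at +4) lend themselves to a compact illustration of context-dependent start selection. The scoring rule, toy sequence, and threshold below are deliberate simplifications for illustration, not a validated predictor:

```python
def kozak_score(seq: str, i: int) -> int:
    """Toy context score (0-2) for an AUG starting at index i."""
    score = 0
    if i >= 3 and seq[i - 3] in "AG":            # purine at the -3 position
        score += 1
    if i + 3 < len(seq) and seq[i + 3] == "G":   # G at the +4 position
        score += 1
    return score

def scan_for_start(seq: str, min_score: int = 2):
    """5'->3' scan mimicking leaky scanning: weak-context AUGs are skipped."""
    for i in range(len(seq) - 2):
        if seq[i:i + 3] == "AUG" and kozak_score(seq, i) >= min_score:
            return i
    return None

utr = "GGCUUAUGCUAACCACCAUGGCA"   # hypothetical 5' UTR
print(scan_for_start(utr))       # -> 17: the weak-context AUG at index 5 is bypassed
```

Lowering min_score to 0 makes the scanner report the first AUG regardless of context, the behavior expected of the eIF1-compromised PICs described below.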
INFLUENCE OF INITIATION FACTORS IN START SITE SELECTION
Translation initiation factors that form the PIC also play important roles in selection of the start codon [12]. Structural studies indicate that the interaction of eIF1 with Met-tRNAi allosterically prevents its precise engagement at the P site of the 40S ribosomal subunit, thus keeping the PIC in an open conformation conducive to scanning [16]. This imperfect fit of the Met-tRNAi is also suggested to discriminate against suboptimal anticodon pairing. Upon accurate base-pairing at an AUG in the preferred consensus, eIF1 is released and the PIC adopts a closed conformation, which signals the termination of scanning and the beginning of protein translation. Yeast eIF1 mutants that interact poorly with the PIC initiate protein synthesis at near-cognate codons and at codons in unfavorable consensus sequences [17]. Such eIF1 mutants increase the complexity of protein isoforms arising from the same mRNA, emphasizing the importance of the eIF1-PIC interaction in discriminating alternative translation starts. eIF3 plays a pervasive role in initiation events: it recruits the PIC to the 5′ end of the mRNA through its affinity for eIF4G and for the mRNA, and increased expression of eIF3 increases global translation, including of transcripts that are not abundantly translated in homeostasis. In yeast, mutations in eIF3 disrupt initiation, as the PIC fails to adopt a closed conformation upon recognition of the start codon.
eIF4E AND PHOSPHORYLATION BY MNK KINASES
The 5′ end cap structure is directly recognized by eIF4E, the rate-limiting component in formation of the eIF4F complex. Unexpectedly, an increase in eIF4E does not increase the global translation rate; rather, it alters the cellular proteome by preferentially upregulating translation of transcripts with long and structured 5′ UTRs, many of which encode growth-promoting or malignancy-associated proteins [18,19]. eIF4E is considered a central player in carcinogenesis, as high levels of the protein induce oncogenic transformation in mouse fibroblasts [20], while antisense-mediated decrease in eIF4E reverses the aggressive proliferative phenotype of Ras-transformed fibrosarcomas [21]. The function of eIF4E is primarily regulated by its interaction with 4E binding protein 1 (4EBP1), as sequestration by 4EBP1 limits eIF4E availability for cap recognition and formation of the eIF4F complex. The activity of eIF4E is also regulated in part through phosphorylation at serine 209 by the activated mitogen-activated protein kinase-interacting kinases MNK1 and MNK2. The MNK enzymes are activated by RAS signaling through the downstream effector extracellular signal-regulated kinase (ERK) and by p38 MAP kinase. MNK1/2 knockout mice, however, develop normally [22], and the lack of a phenotype in eIF4E phospho-mutant (S209A) transgenic animals suggests that the modification does not play a significant role in growth and development [23]. Unlike wild-type eIF4E, expression of the phospho-mutant (S209A) in MEFs increases resistance to cellular transformation, and non-phosphorylatable eIF4E mutant mice (eIF4E S209A) and MNK1/2-deficient mice are resistant to tumor progression despite a genetic background that would otherwise promote invasive cancers [23,24]. Phosphorylated eIF4E preferentially upregulates translation of a specific set of transcripts involved in epithelial-mesenchymal transition (EMT) and metastasis, which supports the observation of increased tumor invasiveness and distant metastasis in mutant mice [25]. Mechanistic studies in a reticulocyte lysate cell-free system indicate that MNK augments the eIF4E-eIF4G interaction and the assembly of eIF4F, and that phosphorylation of eIF4E preferentially facilitates the translation of mRNAs with a cap and stem-loop structure at the 5′ end [26]. The mechanism by which the modification promotes translation of select mRNAs is debated, but phosphorylation-dependent eIF4E sumoylation and the resulting increase in eIF4F stability [27] are suggested to contribute to the relative increase in translation of mRNAs with structural complexity in the 5′ UTR.
TRANSLATION REGULATION BY mTOR-4EBP1 SIGNALING
The other major signaling pathway that regulates initiation is the mammalian target of rapamycin (mTOR)-4EBP1 pathway.
Nutrients, insulin, and growth factors activate the pathway, and signals relayed through mTOR complex 1 (mTORC1) phosphorylate the downstream effectors ribosomal protein S6 kinase and 4EBP1. Phosphorylation of 4EBP1 reduces its binding affinity for eIF4E and increases eIF4E availability for eIF4F formation. Although activation of the MNK-eIF4E pathway and of the mTOR-4EBP1 pathway both promote formation of the eIF4F complex, the differential effects of MNK inhibition and mTOR inhibition suggest that the eIF4F complexes assembled through each pathway target different transcript types [28]. It is also conceivable that phospho-eIF4E engages specific eIF4G and/or eIF4A isoforms on structurally defined 5′ UTRs and thus exhibits a predilection for certain transcripts. It is not surprising, then, that cell death initiated through inhibition of either pathway alone is less effective than inhibition of both.
DEREGULATION OF TRANSLATION INITIATION IN CANCER
Initiation is a rate-limiting step in translation of capped mRNA, as eIF4E is the least abundant of the translation initiation factors. Altered levels of molecular factors and/or signaling pathways in cancer cells increase the synthesis of non-conventional protein isoforms resulting from aberrant translation initiation (Figure 2). Through altered abundance, cellular location, or activity of regulators of the cell cycle, apoptosis, DNA damage response and repair, and lineage fidelity, the proteome disproportionately favors reprogramming and survival of cancer cells despite a non-conducive environment [29,30].
A majority of head and neck squamous cell carcinomas (HNSCCs) are driven by an overactive AKT-mTOR signaling pathway, owing to the increased prevalence of PI3K mutations and PTEN loss [31,32]. The increase in phosphorylated 4EBP1 downstream of PI3K-AKT-mTOR signaling is an important modification that supports eIF4F formation, and inhibitors of mTOR, such as rapamycin and its synthetic analogs, which increase the dephosphorylated levels of 4EBP1, showed promise in the treatment of HNSCC as short-term monotherapy prior to definitive treatment [33]. In a meta-analysis of 11 clinical trials, however, mTOR inhibitors as single-agent therapy failed to show a significant tumor response, and the better partial tumor response seen in combination with chemotherapy and/or radiotherapy needs additional evaluation to validate the sensitizing effect [34]. Rather than biallelic mutations, haploinsufficiency accounts for reduced 4EBP1 in a number of head and neck tumors [35], and TCGA database analysis demonstrates a correlation between reduced 4EBP1 expression and adverse survival outcome. Supporting this finding is the demonstration that 4EBP1/2 knockout mice are conducive to tumor growth, whereas mutant mice expressing a 4EBP1 that is non-phosphorylatable by mTOR show limited tumor progression [35]. The central initiation factor in cap-dependent translation, eIF4E, is oncogenic when overexpressed, and multiple lines of evidence support its role in tumor formation in vivo. Correspondingly, high eIF4E in HNSCC tumors and adjacent margins carries poor prognostic implications [36,37]. However, instead of a singular increase in eIF4E or a decrease in 4EBP1, there is increasing consensus that the ratio of eIF4E to 4EBP1 is an improved indicator of patient survival [38,39]. mRNA expression analyses (TCGA database) support the higher predictive value of the dual mRNA signature in HNSCC (high eIF4E and low 4EBP1) relative to each marker independently (Figure 3). An increase in phosphorylated 4EBP1 and/or eIF4E is linked to poor prognosis in a variety of cancers, including head and neck cancer [40]. Earlier MNK1/2 antagonists were potent suppressors of metastasis, but adverse effects limited their clinical translation. The MNK1/2 inhibitor Tomivosertib exhibits an acceptable safety profile and, in a phase II investigation, extended progression-free survival in checkpoint inhibitor-refractory non-small cell lung cancer [41]. The inhibitor is currently in clinical evaluation for a number of advanced solid malignancies, including HNSCC (NCT03616834).
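The dual-marker comparison described above can be reproduced on any expression-plus-survival table with a few lines of standard survival analysis. The file name and column names below are placeholders for an assumed TCGA-style extract, and the median split is one simple choice of stratification, not the authors' exact method:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table with normalized mRNA levels and survival.
df = pd.read_csv("hnscc_expression_survival.csv")
df["signature"] = (df["EIF4E"] > df["EIF4E"].median()) & \
                  (df["EIF4EBP1"] < df["EIF4EBP1"].median())

kmf = KaplanMeierFitter()
for flag, grp in df.groupby("signature"):
    kmf.fit(grp["os_months"], event_observed=grp["os_event"],
            label="high eIF4E / low 4EBP1" if flag else "rest")
    kmf.plot_survival_function()

res = logrank_test(df.loc[df.signature, "os_months"],
                   df.loc[~df.signature, "os_months"],
                   df.loc[df.signature, "os_event"],
                   df.loc[~df.signature, "os_event"])
print(f"log-rank p = {res.p_value:.4f}")
```

A single-gene split (say, on EIF4E alone) can be run the same way for comparison against the dual signature.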
Global translation initiation is suppressed when eIF2α is inactivated by phosphorylation. Increased phosphorylation of eIF2α at Ser51 sequesters the GDP-GTP exchange factor eIF2B and, in turn, limits formation of the ternary complex. However, increased phospho-eIF2 and limited ternary complex availability promote leaky scanning and skipping of upstream AUGs; reassembly with the 40S complex downstream, once a ternary complex becomes available, promotes initiation at the downstream ORF instead. Although eIF2A is a poor eIF2 competitor, this alternate carrier of the initiator tRNA adopts a more prominent role in cancer cells, as it is refractory to the eIF2α-inactivating kinases [9]. eIF2A interaction with Leu-tRNAi alters the cellular proteome, as initiation occurs at unconventional ORFs. The use of non-canonical start sites in transcripts that promote growth and dedifferentiation is accepted to underlie tumorigenesis [42]. FGF2 mRNA has multiple upstream ORFs, and initiation at AUG and non-AUG codons generates a variety of protein isoforms that are pro-angiogenic and pro-tumorigenic [43,44]. Non-canonical initiation codons are also responsible for different isoforms of oncogenic MYC in stress and in eIF4E-transformed cells [45,46]. It is of interest, then, that increased expression of PKR (eIF2AK2), a kinase that phosphorylates eIF2α, and of eIF2A, the recruiter of the alternate tRNAi, occurs in a majority of head and neck cancers (TCGA dataset) and correlates significantly with poor survival of HNSCC patients (Figure 4).
The stress response to viral infection inhibits protein synthesis through the interferon-induced eIF2-inactivating kinase PKR. Viruses override this translation inhibition to establish an environment conducive to viral replication and pathogenesis. Translational recovery in HPV infection occurs through the E6 oncoprotein, which facilitates dephosphorylation of eIF2α by GADD34-PP1 [47]. Reversal of eIF2 phosphorylation, and of the cellular rewiring of translation, is suggested as one reason for the improved treatment outcomes observed in HPV+ HNSCC patients.
The helicase activity of eIF4A is instrumental in unwinding the mRNA 5′ end for docking of the PIC. By stabilizing eIF4A interaction at complex mRNA structures and suppressing unwinding, inhibitors of eIF4A effectively target a subset of oncogenic mRNAs to limit tumor growth. Although eIF4A1 or eIF4A2 expression in HNSCC tumors (TCGA dataset) did not correlate with overall survival (data not shown), a significant correlation with progression-free survival was observed (Figure 5). Despite the potent anti-tumor activity of the eIF4A inhibitor silvestrol, pharmaceutical logistics limited its clinical evaluation. The inhibitor eFT226 (Zotatifin), suggested to have a similar mechanism of action to silvestrol, is the first eIF4A inhibitor to enter clinical trials (NCT04092673) for advanced solid malignancies.
The DEAD-box helicase DDX3 supports translation through recruitment of eIF3 at the 5′ end of the mRNA, and it plays a critical role in unwinding complex RNA structures that impede scanning and ribosome transit. Increased expression of DDX3 promotes an aggressive phenotype in head and neck tumor cells by bypassing upstream inhibitory ORFs and initiating translation of ATF4, a transcription factor that also regulates expression of genes associated with epithelial-mesenchymal transition [11]. In a helicase-independent manner, DDX3 drives expression of amphiregulin (AREG) and a secretory phenotype that stimulates growth of oral squamous cell carcinoma cells through an autocrine-paracrine mechanism [48]. Correspondingly, increased DDX3 expression, wild-type or mutant, correlates with adverse prognosis in HNSCC [11,48]. Inhibitors of DDX3 activity are anti-tumoral in multiple cancer types; however, considering the prevalence of mutant DDX3 and the helicase-independent functions of DDX3, pharmacological agents that suppress DDX3 expression could have wider clinical applicability.
Recently, epigenetic and transcriptomic changes have gained prominence in disease prognostication, while disordered translation control underpinning treatment challenges has garnered less attention. We briefly elaborate on a non-canonical TLK1 isoform selectively expressed in an eIF4E-rich cellular milieu and its role in limiting cancer treatment efficacy.
TOUSLED-LIKE KINASE 1
TLK is evolutionarily conserved in metazoans, and there are two homologs in humans: TLK1 and TLK2. TLKs are constitutively expressed in cells, but their activities peak in interphase. The first identified targets of TLKs were the histone chaperones ASF1A and ASF1B and, expectedly, the kinases participate actively in chromatin assembly [49]. During replication halts in response to genotoxic stressors, TLK activity is transiently inhibited through phosphorylation by CHK1, fitting the concept that kinase activity and DNA replication are intricately interconnected [50]. Not surprisingly, then, loss of TLK1/2 or inhibition of their activity results in unstable chromatin due to replication fork collapse and an increase in flawed transcription from heterochromatic regions of the genome [51,52]. The presence of in-frame AUGs decoded by Met-tRNAi appears to contribute to TLK1 isoform diversity in cells. There are three in-frame AUGs upstream of the coiled-coil motifs. AUG1 resides in a less-favored sequence context (TTGAUGA), and translation initiation at the following AUG, AUG2, in a near-Kozak consensus (GCAAUGG), may contribute to the full-length isoform present in most cell types (Figure 6A). Relative to the long isoform, initiation further downstream at AUG3, also in a favored context with an A at the −3 position (ACAAUGC), encodes a third variant that lacks a significant portion of the N-terminal region but shares identity from the two coiled-coil dimerization motifs through to the C-terminal catalytic region. The short isoform lacks the putative nuclear localization sequence (NLS; RGRKRK) in the N-terminal region but retains a potential NLS (LAKRK) between the coiled-coil motifs. Translation of the short isoform correlates clearly with increased levels of available eIF4E [53]. Active mTOR signaling during recovery from doxorubicin-induced DNA breaks leads to an increase in phospho-4EBP1 in murine mammary epithelial cells and to preferential translation of the shorter ORF. Abundant availability of eIF4E to drive eIF4F-dependent repeat initiation, promiscuous PIC scanning due to stress-activated phosphorylation of eIF2, and limited assembly of the ternary complex are factors suggested to drive TLK1 initiation at the downstream AUG3. Multiple studies in various cell types show that overexpression of the short variant augments cellular repair defenses, leading to rapid recovery from genomic damage [54,55]; in accordance, abundance of this isoform in breast tumors correlates with poorer patient prognosis [56,57]. Tumor cells are reliant on the DNA repair machinery to adapt to a higher burden of genomic breaks due in part to elevated oxidative stress, a high metabolic rate, and a hypoxic environment. Mechanistically, TLK activity is critical to homology-directed repair of double-strand breaks [58] and to the stabilization of replication forks [51]. Despite the absence of the established NLS, overexpression of the shorter variant improves DNA repair kinetics, suggesting that nuclear localization directed either by the downstream NLS or by dimerization with the full-length form contributes to the reparative phenotype. Alternatively, the variant orchestrates cellular recovery through target proteins in the cytoplasm or regulates the transit of factors that promote DNA repair between cellular compartments.
Although the short TLK1 isoform phosphorylates many of the same target proteins as the full-length protein in vitro [59], the altered ratio of isoforms in cancer cells warrants investigation into the preferred in vivo substrates of isoform 3 that contribute to resistance to genotoxic chemotherapeutics and radiation.
Crosstalk between mesenchymal and epithelial cells during development influences cell fate, and the exchange is vital to homeostasis in adulthood. Genes specifically upregulated in mammary epithelium-adjacent fibroblasts include the TLKs, and conditional loss of TLK1 or TLK2 in transgenic animals induces hyperproliferation of the ductal epithelium [60]. The mechanism by which TLK1/2 regulates this crosstalk, however, is a subject of ongoing investigation. Deregulated signaling from the tumor microenvironment is acknowledged to affect tumor growth, and TLK-depleted fibroblasts, when co-cultured with human breast cancer cells, increased cancer cell proliferation. In corollary, human fibroblasts overexpressing TLK1 limit growth of oral squamous cell carcinoma cells in co-cultures (Figure 6B). Reciprocal signaling between cancer-associated fibroblasts and cancer cells also controls tumor behavior, and selectivity in smart therapeutics that suppress TLK1 in cancer cells while augmenting its expression in normal cells could improve cancer treatment response as well as limit normal tissue toxicity [54,61].

FIGURE 6 | (A) Schematic of TLK1. In-frame translational start codons decoded by Met-tRNAi generate isoforms that differ in the N-terminal region. Unlike AUG1, trinucleotides AUG2 and AUG3 are in near-optimal sequence context. Isoform 3, derived from translation initiation at AUG3, is devoid of the putative NLS. (B) Lentivirus-RFP transduced SCC40 cells were puromycin-selected and RFP+ cells were sorted by FACS. Primary human foreskin fibroblasts (HFF) were transduced with control adenovirus or adenovirus-TLK1 (MOI 1000), and 24 h after transduction cells were trypsinized and resuspended with SCC40 RFP+ cells at a 1:3 ratio. Cells were co-cultured and time-lapse images acquired in the Incucyte (Sartorius) incubator. Quantification of RFP surface area from a representative experiment is shown.
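Quantification of RFP+ surface area from time-lapse frames, as in Figure 6B, can be approximated with a few lines of image analysis. This is a minimal sketch, assuming RGB frames saved to disk under hypothetical file names; the Incucyte software's own segmentation is certainly more elaborate:

```python
import numpy as np
from skimage import io, filters

def rfp_area(frame_path: str, um_per_px: float = 1.24) -> float:
    """Return RFP+ area (um^2) in one frame via Otsu thresholding."""
    img = io.imread(frame_path)              # expected shape (H, W, 3)
    red = img[..., 0].astype(float)          # red channel carries the RFP signal
    mask = red > filters.threshold_otsu(red)
    return float(mask.sum()) * um_per_px ** 2

# Hypothetical frames acquired every 2 h over 72 h of co-culture.
areas = [rfp_area(f"cocult_t{t:03d}.tif") for t in range(0, 72, 2)]
print(areas)
```

Plotting such per-frame areas over time gives growth curves of the RFP+ cancer cells under the different fibroblast conditions.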
A plethora of factors participate in translation initiation, and multiple regulatory nodes in this nexus tightly control the process. Deregulation of initiation is a hallmark of cancer, and translation of otherwise repressed ORFs promotes oncogenesis and contributes to aggressive, treatment-recalcitrant tumor phenotypes. An overactive AKT-mTOR pathway, reduced expression of 4EBP1, upregulated eIF2 inhibitory kinases, and increased expression of eIF2A in HNSCC patients correlate with poor prognosis, suggesting that redirection to non-canonical initiation renders tumors refractory to treatment. mTOR inhibitors have been extensively evaluated in HNSCC, and despite an improved early outcome, the development of drug-recalcitrant tumor cells leads to recurrence [34,62]. Tumor heterogeneity within the same tumor mass contributes to treatment refractoriness and tumor recurrence. Beyond genetic and epigenetic changes, the tumor microenvironment plays an important role in cancer recurrence. Tumor niches exposed to hypoxia, nutrient deficiency, oxidative stress, and/or shifts in pH reprogram cellular processes to improve cell fitness, and stress-induced deregulation of initiation promotes oncogenic progression. Hypoxic stress induces breast cancer aggressiveness through alternative initiation of the pluripotency factors SNAIL1, NANOG, and NODAL, and, like hypoxia, mTOR inhibition induces phosphorylation of eIF2α to drive the synthesis of de-differentiation factors [63]. It is therefore appealing to speculate that an increase in cellular plasticity could underlie the limited response to mTOR inhibitors in HNSCC, underscoring the need to therapeutically target multiple deregulated translation hubs, including phospho-eIF2, to curtail the development of intrinsically resistant tumor cells.
AUTHOR CONTRIBUTIONS
GS-D conceived the idea, conducted the analysis, and wrote the paper. | 2021-12-07T14:23:24.917Z | 2021-12-07T00:00:00.000 | {
"year": 2021,
"sha1": "f598b3553fb740a115275ba399dd9c1c15342f2b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/froh.2021.765931/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "f598b3553fb740a115275ba399dd9c1c15342f2b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
258179940 | pes2o/s2orc | v3-fos-license | Well posedness of linear parabolic partial differential equations posed on a star-shaped network with local time Kirchhoff's boundary condition at the vertex
The main purpose of this work is to provide an existence and uniqueness result for the solution of a linear parabolic system posed on a star-shaped network, which presents a new type of Kirchhoff's boundary transmission condition at the junction. This new type of Kirchhoff's condition, which we call here the local-time Kirchhoff's condition, induces a dynamical behavior with respect to an external variable that may be interpreted as a local time parameter, designed to drive the system only at the singular point of the network. The seeds of this study point towards a forthcoming theoretical inquiry into a particular generalization of Walsh's random spider motions, whose spinning measures would select the available directions according to the local time of the motion at the junction of the network.
Introduction
We are given a terminal time T > 0, an integer I (with I ≥ 2), and a star-shaped compact network N_R made of I rays (R_i)_{1≤i≤I} of common length R > 0 that meet at the junction point {0}. On each ray, the unknown u_i of the system (1) satisfies the linear parabolic equation

∂_t u_i(t, x, l) + a_i(t, x, l) ∂_x^2 u_i(t, x, l) + b_i(t, x, l) ∂_x u_i(t, x, l) + c_i(t, x, l) u_i(t, x, l) = f_i(t, x, l), (t, x, l) ∈ (0, T) × (0, R) × (0, K),

supplemented at the vertex by the local time Kirchhoff's boundary condition, whose principal part reads

∂_l u(t, 0, l) + Σ_{i=1}^{I} α_i(t, l) ∂_x u_i(t, 0, l) + ⋯ , (t, l) ∈ (0, T) × (0, K).

In order to simplify our study, we have assumed in our framework that all the rays R_i have the same length R > 0, that a Neumann boundary condition holds at x = R, and that a Dirichlet boundary condition ψ_i holds at l = K. Of course, a more general network setting could be treated with similar tools: one could for instance consider more general rays, and/or a mix of Neumann and Dirichlet boundary conditions or local-time Kirchhoff's boundary conditions at the vertices, etc.
Let us explain the main motivation that grounds our study of the system (1). Initially introduced by J. Walsh in [22], Walsh's Brownian spider motion is a continuous process on a set of I rays embedded in R² emanating from {0}. Roughly speaking, to each ray R_i we associate a weight α_i that corresponds (very) heuristically to the probability for the process to visit R_i when it leaves the junction point {0}. Inside each ray and apart from the junction point, the process behaves like a Brownian motion. However, because the trajectories of the Brownian motion are not of bounded variation, this intuitive description does not make rigorous sense: starting from {0}, the process visits all the rays at once (all the rays are visited on any arbitrarily small time interval).
As a generalisation of Walsh's Brownian motion, diffusions on graphs were introduced in the seminal works of Freidlin and Wentzell [5] for a star-shaped network N R and then for general graphs in Freidlin and Sheu [4].
Given I pairs (σ_i, b_i)_{i∈[[1,I]]} of mild diffusion coefficients from [0, +∞) to R satisfying the following condition of ellipticity: ∀ i ∈ [[1, I]], σ_i > 0, and given positive constants (α_1, ..., α_I) satisfying Σ_{i=1}^{I} α_i = 1, it is proved in [5] that there exists a continuous Feller Markov process (x(·), i(·)) valued in N_R whose generator is given by the following operator:

L f(x, i) = (σ_i²(x)/2) ∂_x² f_i(x) + b_i(x) ∂_x f_i(x),

with domain contained in the set of functions f ∈ C²(N_R) satisfying Kirchhoff's condition Σ_{i=1}^{I} α_i ∂_x f_i(0) = 0. In the above, for k = 0, 1, 2, ..., C^k(N_R) denotes the k-th order continuous class space on the junction network, namely the set of continuous functions on N_R admitting continuous derivatives up to order k along each ray.
Thereafter, it is shown in [4] that there exists a one-dimensional Wiener process W, defined on a probability space (Ω, F, P) and adapted to the natural filtration of (x(·), i(·)), such that the process x(·) satisfies the following stochastic differential equality:

dx(t) = b_{i(t)}(x(t)) dt + σ_{i(t)}(x(t)) dW(t) + dℓ(t), 0 ≤ t ≤ T.
In the above equality, the process ℓ(·) has increasing paths, starts from 0, and increases only at the times when the process stands at the junction point:

∫_0^t 1_{{x(s) > 0}} dℓ(s) = 0, 0 ≤ t ≤ T.

Moreover, the following Itô's formula is proved in [4]:

df(x(t), i(t)) = [ b_{i(t)} ∂_x f_{i(t)} + (σ_{i(t)}²/2) ∂_x² f_{i(t)} ](x(t)) dt + σ_{i(t)}(x(t)) ∂_x f_{i(t)}(x(t)) dW(t) + ( Σ_{i=1}^{I} α_i ∂_x f_i(0) ) dℓ(t),

for any sufficiently regular f. The process ℓ(·) can be interpreted as the local time of the process (x(·), i(·)) at the junction point {0}; indeed, ℓ may be recovered as the limit of suitably renormalized occupation times of shrinking neighborhoods of the junction. In a forthcoming work [14], we aim at constructing a spider diffusion process satisfying uniqueness in law, with a random spinning measure (α_i)_{i∈[[1,I]]} that may depend on the own local time of the spider process at the junction {0}. In this framework, we conjecture that the underlying process (x(·), i(·), ℓ(·)) exists and satisfies the following Itô's rule:

df(x(t), i(t), ℓ(t)) = [ b_{i(t)} ∂_x f_{i(t)} + (σ_{i(t)}²/2) ∂_x² f_{i(t)} ](x(t), ℓ(t)) dt + σ_{i(t)}(x(t)) ∂_x f_{i(t)}(x(t), ℓ(t)) dW(t) + [ ∂_l f(0, ℓ(t)) + Σ_{i=1}^{I} α_i(ℓ(t)) ∂_x f_i(0, ℓ(t)) ] dℓ(t),   (3)

for sufficiently regular f. In the above, we use the data set (σ_i, b_i, α_i(·))_{i∈[[1,I]]}, where the spinning weights α_i(·) now depend on the local time variable. The proof of the existence of such a process satisfying (3) is planned to be performed using the original construction of concatenation of solutions for martingale problems by Stroock and Varadhan in [18]. The more difficult proof of a criterion that ensures the uniqueness of a weak solution (or uniqueness in law) will be achieved using a PDE argument: this last important issue gives the justification for this contribution.
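To fix ideas on the heuristic description above, the following crude Euler sketch simulates a Walsh spider path by running a reflected Brownian motion for the radial part and resampling the ray label from the spinning weights each time the path returns to a small numerical neighborhood of the junction. The threshold eps and the resampling rule are simulation artifacts: the true process changes rays on an uncountable set of excursion times and admits no such pathwise construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def walsh_spider(alpha, T=1.0, n=200_000):
    """Crude Euler sketch of a Walsh Brownian spider motion on I rays."""
    alpha = np.asarray(alpha)
    dt = T / n
    # Radial part: |W_t| has the law of a reflected Brownian motion.
    x = np.abs(np.cumsum(rng.normal(0.0, np.sqrt(dt), n)))
    ray = np.empty(n, dtype=int)
    current = rng.choice(len(alpha), p=alpha)
    eps = 1e-3                                        # numerical junction width
    for k in range(n):
        if x[k] < eps:                                # back at the junction:
            current = rng.choice(len(alpha), p=alpha)  # resample the ray
        ray[k] = current
    return x, ray

x, ray = walsh_spider([0.5, 0.3, 0.2])
print(np.bincount(ray) / len(ray))   # occupation frequencies, ~ alpha in mean
```

The local-time-dependent generalization conjectured in (3) would replace the fixed weights by alpha(l), with l an accumulated, suitably renormalized occupation time of the eps-neighborhood of the junction.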
The precise understanding of the diffraction of Walsh's random motions may have important applications, for instance if one is interested in describing the diffusive behavior of particles subjected to scattering (or diffraction), for which very little physical information is known. As an example, we mention [1]: the theory of quantum trajectories states that quantum systems can be modelled as scattering processes and that these scattering effects may occur in prescribed directions emanating from a single point. Note that light scattering is a phenomenon that has long attracted many scientists for its importance in advanced photonics technologies such as on-chip interconnections, refined bio-imaging, solar cells, heat-assisted magnetic recording, etc. (for an account of all these topics and the importance of the scattering phenomenon, see e.g. [17]).
There have been several works on linear and quasilinear parabolic non-degenerate equations of the form (1), but without involving any dependence with respect to some 'external variable', that present a classical formulation of the boundary Kirchhoff's condition

Σ_{i=1}^{I} α_i(t) ∂_x u_i(t, 0) + ⋯ = 0.

For linear equations, up to our knowledge, one of the most relevant works is the one by Von Below in references [19,20,21]. Essentially, it is shown in [19] that, under natural smoothness and strong compatibility conditions, linear boundary value problems defined on a star-shaped network that involve a linear boundary Kirchhoff's condition at the junction point are well-posed. The proof relies on a particular linear transformation that, mutatis mutandis, permits to retrieve the classical framework of parabolic systems. Note that this approach increases the dimension of the original problem and cannot be adapted directly to the framework of this contribution, at least to the best of our abilities. We revisit the result of [19] in Section 3 by presenting another path for the construction of solutions: namely, we follow the main ideas presented by the second author in [15] (in a quasi-linear parabolic framework) and proceed to the proof of the convergence of elliptic schemes, as was successfully performed in [15] for the existence of classical solutions in suitable Hölder spaces for non-degenerate quasi-linear parabolic systems.
Let us recall that in [20] the strong maximum principle for semilinear parabolic operators with Kirchhoff's condition was proved, while in [21] the author studied the classical global solvability for a class of semilinear parabolic equations on ramified networks, where a time-dynamical condition is prescribed at each node of the underlying network. Compared to the results stated in [19] (when no dependency on the 'external variable' l is involved), our methodology permits us to re-state the well-posedness of the problem in the fully linear case with a weakening of the necessary compatibility conditions for the data at the boundary and also of the required regularity of the coefficients at the junction point {0}. We will also investigate useful bounds for the derivatives of the solution, especially the key term |∂_t u(t, 0)|; these bounds play a crucial role in the construction of the solution to the system (1).
In the linear setting, let us mention also another approach that was developed by M. K. Fijavž, D. Mugnolo and E. Sikolya in [8]: their idea is to combine semigroup theory with variational methods in order to understand how the spectrum of the operator relates to the structure of the network. We will not investigate these issues in this contribution.
Parabolic (or elliptic) equations posed on networks can also be analyzed in terms of viscosity solutions. To our knowledge, the first results on viscosity solutions for Hamilton-Jacobi equations on networks were obtained by Schieborn in [16] for the Eikonal equation. Later investigations have been discussed in many contributions on first order problems [2,6,9], elliptic equations [10] and second order problems with vanishing diffusion at the vertex [11]. In contrast, and mainly because of the difficulty of the subject, second order Hamilton-Jacobi equations on networks with a non-vanishing viscosity at the vertices have, to the best of our expertise, seldom been studied in the literature.
The construction of the solution of the system (1) involving the local time variable l is achieved by proving the convergence of a parabolic scheme that uses a discretization grid corresponding to the variable l (see Section 4).
Using classical arguments, we prove that uniqueness for solutions of system (1) holds true for solutions that have enough regularity (see Theorem 2.6). Under mild assumptions, we will see that classical solutions of the system (1) belong to the class C^{1,2} in the interior of each edge and C^{0,1} in the whole domain (with respect to the time-space variables (t, x)). Since the variable l dynamically drives the system only at the junction point {0}, through the presence of the derivative ∂_l u(t, 0, l) in the local time Kirchhoff's boundary condition, one can expect a regularity of class C¹ for l → u(t, 0, l), and this is indeed the case (see our main Theorem 2.4 and point iv) in Definition 2.1). Inside each ray, because of the lack of information on the dependency of the solution w.r.t. the variable l, we believe there is very little hope of proving the existence of a partial derivative with respect to l in the classical sense. However, we manage to prove that the solution of the system admits a square integrable generalized derivative ∂_l u with respect to the variable l (see again Theorem 2.4 and point v) in Definition 2.1).
Recall that the roots of our study of system (1) are grounded in our inquiry regarding the possible construction of a Walsh-spider diffusion living on N_R having a spinning measure that selects directions with respect to its own local time. Having this in mind, one should remember that the local time at the junction point {0} exists only if the diffusion coefficients are non-degenerate.
Clearly, both problems are deeply connected. From the PDE technical aspect pointing towards the construction of the corresponding Walsh-spider diffusion, the main challenge is to obtain Hölder continuity of the partial functions l → ∂_t u_i(t, x, l), ∂_x u_i(t, x, l), ∂_x² u_i(t, x, l) for any x > 0. We will show that such regularity is guaranteed by the central assumption of ellipticity of the diffusion coefficients on each ray, together with the mild dependency of the coefficients and free terms with respect to the variable l.
The paper is organized as follows. In Section 2 we introduce all the necessary material needed for our purposes and we announce our main Theorem 2.4. We also state a comparison theorem (Theorem 2.6) that will be of constant use in the proofs. Without involving the additional local time variable l at this stage, but under somewhat weaker assumptions, we provide in Section 3 another proof of the main result obtained in [19] for parabolic systems that involve a standard boundary Kirchhoff's transmission condition. In particular, by adapting the same methods as those employed in [15], we manage to derive interesting bounds for the solution and its partial derivatives. Finally, in Section 4, we prove our main result, concluding to the well-posedness of system (1).
Introduction and Main results
In this section we state our main result, Theorem 2.4, regarding the solvability of the parabolic problem (1) involving the local-time Kirchhoff's boundary condition at the junction point, posed on a star-shaped compact network.
2.1. Notations and preliminary results. Let us start by introducing the main notations as well as some preliminary results.
Let I ∈ N* be the number of edges and R > 0 be the common length of each ray. The bounded star-shaped compact network N_R is defined as the union N_R = ∪_{i=1}^{I} R_i, where each ray R_i is isometric to the segment [0, R]. The intersection of all the rays (R_i)_{1≤i≤I} is called the junction point and is denoted by {0}.
We identify all the points of N_R by couples (x, i), with x ∈ [0, R] and i ∈ [[1, I]], all the couples (0, i) being identified with the junction point {0}. The 'space domain' where the PDE system (1) will be studied is Ω := N_R × [0, K], whereas the 'time-space domain' will be denoted by Ω_T := [0, T] × N_R × [0, K], where T > 0 denotes some fixed horizon time.
For the functional Hölder spaces that will be used in the sequel we will use standard notations (see e.g. Chapter 1.1 of [13]). We recall that for any bounded Lipschitz domain O of R^n, W^{1,∞}(O) is the set of real-valued bounded Lipschitz functions endowed with the norm |·|_{W^{1,∞}(O)}.
We write, for any f ∈ W^{1,∞}(O), |f|_{W^{1,∞}(O)} := sup_O |f| + ess sup_O |∇f|.
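As a point of reference for the Hölder classes appearing below, recall the standard parabolic seminorm of [13] (reproduced here as a reminder, not quoted from the source):

```latex
[f]_{\frac{\alpha}{2},\alpha;\,O_T}
  = \sup_{(t,x)\neq(s,y)\in O_T}
    \frac{|f(t,x)-f(s,y)|}{|t-s|^{\alpha/2}+|x-y|^{\alpha}},
\qquad
|f|_{C^{\frac{\alpha}{2},\alpha}(O_T)}
  = \sup_{O_T}|f| \;+\; [f]_{\frac{\alpha}{2},\alpha;\,O_T}.
```

The exponent α/2 in time reflects the usual diffusive scaling; the triple-index classes such as C^{α/2, α, α/2} used in this paper add a third Hölder index, for the variable l, on the same pattern.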
Let us now give the definition of the class of regularity for our solution of the PDE system (1).

Definition 2.1. We say that f is in the class C^{1+α/2, 2+α, α/2}_{{0}}(Ω_T) if:
(i) the continuity condition holds at the junction point {0}: f_i(t, 0, l) = f_j(t, 0, l) for all i, j ∈ [[1, I]] and all (t, l);
(ii) for all i ∈ [[1, I]], the map (t, x, l) → f_i(t, x, l) has a Hölder regularity in the class C^{α/2, α, α/2};
(iii) for all i ∈ [[1, I]], the map (t, x, l) → f_i(t, x, l) has an incremented Hölder regularity in the interior of each ray R_i and belongs to C^{1+α/2, 2+α, α/2}((0, T) × (0, R) × (0, K));
(iv) at the junction point {0}, the map l → f(t, 0, l) is continuously differentiable;
(v) finally, for all i ∈ [[1, I]], on each ray R_i, f admits a generalized derivative with respect to the variable l in L²((0, T) × (0, R) × (0, K)).
In the same way, we define the class C^{1,2,0}_{{0}}(Ω_T) analogously to i)-ii)-iii)-iv)-v) but without any additional Hölder regularity; Lip^{2,0}_{{0}}(Ω) is defined analogously to i)-ii)-iii)-iv)-v) but removing the dependence on the time variable and with Lipschitz regularity.
Let us recall a very useful lemma of interpolation. The main ingredients of its proof can be found in Lemma 2.1 of [15] ; for the convenience of the reader, we provide a sketch of the proof at the end of this work (see Appendix A).
for some given constants ν_1, ν_2, ν_3 ∈ R_+ and α, β, γ ∈ (0, 1).

One of the main important technical issues, when one wants to study the well-posedness of the system (1), is to characterize the regularity with respect to the variable l of the derivatives ∂_t u, ∂_x u, ∂_x² u of some possible generalized solution u. Here, we will see that the smoothness of a generalized solution (the term generalized applying only on each ray R_i separately, leaving the junction {0} out) is determined only by the smoothness of the coefficients and free terms.
We state the following important lemma.
Assume that the coefficient a is elliptic, a(t, x, l) ≥ a > 0, and that the coefficients and free terms (a, b, c, f) have Hölder regularity in the class C^{α/2, α, α/2}. Then u belongs to the class C^{1+α/2, 2+α, α/2}. Remark that one could choose C^{α, β, γ}((0, T) × (0, R) × (0, K)) as the class of regularity for the coefficients and free terms and show that the solution belongs to C^{1+α, 2+β, γ}((0, T) × (0, R) × (0, K)). For the reader's convenience, we have used here the classical terminology given in [13], pointing towards a possible extension of the well-posedness of systems similar to (1) to the quasi-linear framework.
Proof. Introduction: a short reminder on the interior regularity of weak solutions in the classical case.
When there is no dependency with respect to the variable l, results on the interior regularity of weak solutions of parabolic equations may be found for example in Theorem 12.1 III of [13].
Before getting into all the details of the proof, let us provide, for the convenience of the reader, a short reminder of the main ideas that lead to the result of the classical case given in Theorem 12.1 III of [13].
Now, assume that w is some continuous representative of a generalized solution of the last problem (5) and that w belongs to the class W^{1,2}_2. Associated to w, we introduce a regularized parabolic Dirichlet problem posed on U, where L∂U denotes the lateral boundary surface of U ⊂ (0, T) × (0, R).
Here, in the setting of this classical parabolic Dirichlet problem, we have regularized the value of w, for instance by the standard use of a family of convolution kernels (ξ_n) that tend weakly to the Dirac mass, in order to ensure the regularity of the solution v_n at the boundary.
Classical arguments guarantee that the solution v_n is in the class C^{1+α/2, 2+α}(U). Then, the classical Schauder's estimates (written for v_n), combined with the use of Ascoli's theorem, ensure that, up to a subsequence, the sequence (v_n) converges locally uniformly to some v ∈ C^{1+α/2, 2+α}(U). Hence, we conclude as in Theorem 12.1 III of [13] for the classical interior regularity of the weak solution w.
In order to adapt these arguments to our setting, we see that the key point is to obtain a Schauder's type estimate for the parametric parabolic problem (that involves the variable l).
Step 1. Proof of a Schauder's type estimate.
In the sequel and for the proof itself, we consider data (a, b, c, f) ∈ C^{α/2, α, α/2} and we assume the ellipticity assumption for the leading coefficient a: a(t, x, l) ≥ a > 0.
By taking a closer look at the estimates given in [7], Chapter 8, Section 10 (see also [13] IV-§10, where the dependence of the constant on the distance to the boundary is less apparent), we observe the non-decreasing behavior of C(l) with respect to δ(l); hence we are allowed to choose C(l) = C > 0 independent of l ∈ [0, K] in (9).
Fix now (l, q) ∈ [0, K]² and denote by v(·, l) and v(·, q) two solutions of the parametric problem (7), with respective parameters l and q. Remark that v(·, l) − v(·, q) solves a parabolic problem with unknown function w, in which the free term F is built from the increments of the coefficients between the parameters l and q. Using the classical Schauder's estimate on suitable nested open sets, we see that there exists a constant C > 0, independent of (l, q), controlling the corresponding Hölder norms. We can conclude finally that v ∈ C^{1+α/2, 2+α, α/2}(O) and that there exists a positive constant C > 0, depending only on the data δ, α, a, and the C^{α/2, α, α/2} norms of the coefficients, such that the announced estimate (8) holds.

Step 2. Proof of the interior regularity for weak solutions of (4).
We are now in position to adapt the arguments exposed in the introduction of the proof to our context, taking into account the dependency on l.
Let u ∈ W^{1,2,0} be a generalized solution of the parametric parabolic problem in the statement of the lemma.
In order to adapt the arguments exposed in the introduction of the proof to our context, we introduce naturally a parabolic problem with parameter l ∈ [0, K], posed on some connected open subset U = (s, s′) × (z, r) satisfying U ⊂⊂ (0, T) × (0, R) (see e.g. the classical results on the solvability of Problems 5.4' and 5.4 in [13]). Now note that the regularized map u_n(·, l) belongs to the class C^∞([0, T] × [0, R]) and that it is always possible to perform the convolution in such a way that u_n satisfies the required uniform bounds. Thus, the result of this discussion gives us insurance that there exists a finite constant C := sup_{n≥0} C(n) < +∞ independent of n. With the same arguments as those used to prove the Schauder's estimates (11), it is not hard to check the corresponding estimate for v_n, where, as before, M > 0 stands for some constant depending only on the data.
Hence, just like for the reminder in the introduction of the proof, combining (13) together with the Schauder's estimates (14) allows us to apply Ascoli's theorem: up to a subsequence, (v_n) converges locally uniformly in the class C^{1+α/2, 2+α} for any l ∈ (l_1, l_2).
This limit v is also a generalized solution, in the sense that the weak formulation holds for any φ ∈ C_c^∞(O) (the class of infinitely differentiable functions with compact support strictly included in O).
Formally speaking, the previous limit v depends on the set O: in order to emphasize its dependence on O, let us denote it v_O for a moment. Since O may be arbitrarily taken in U, we may consider (O_p) an increasing sequence of pavements converging to U as p tends to infinity. The preceding shows that we can attach to this sequence a doubly indexed subsequence (v_{n_k^{(p)}})_{(k,p)∈N*×N*}, which satisfies that, for any p, the sequence (v_{n_k^{(p)}})_k converges as k → +∞. Proceeding to a diagonal extraction, we now consider (v_{n_p^{(p)}})_{p∈N*}. By construction, for any q ∈ N*, (v_{n_p^{(p)}})_{p≥q} is a subsequence of (v_{n_k^{(q)}})_{k∈N*}, and as such converges on O_q. Since the latter holds true for any q, the family (v_{O_q})_q has to be consistent, and our subsequence (v_{n_p^{(p)}})_{p∈N*} converges locally uniformly in the class (16). Moreover, for any l ∈ (0, K), the convolution regularization (u_n(·, l)) converges pointwise to u(·, l). In turn, (12) shows that (v_{n_p^{(p)}}(·, l))_p converges to u(·, l) on the lateral surface L∂U, ensuring the boundary identification for all l ∈ (0, K). We now proceed to show that u and v coincide, which will in turn finally imply that u ∈ C^{1+α/2, 2+α, α/2}(U × (0, K)).
Denote by K a dense countable subset of (0, K). Fix l ∈ K and let {φ_ε ∈ C^∞([0, K]), ε > 0} denote a family of smooth functions converging in the sense of distributions to the Dirac distribution δ_l as ε ↓ 0.
With the same arguments, we obtain that u(·, l) and v(·, l) are two weak solutions of the same parabolic problem on the domain U = (s, s′) × (z, r), possessing the same boundary conditions on L∂U. From the weak uniqueness in the class W^{1,2}_2((s, s′) × (z, r)) that follows from our assumptions (see for instance the weak uniqueness result stated in [13], Theorem 9.1, Chapter IV), we deduce the existence of some negligible set N_l ⊂⊂ [s, s′] × [z, r] outside of which u(·, l) and v(·, l) coincide. Using now the key assumption that u ∈ C^{α/2, α, α/2}, we can conclude, by the continuity of both u and v with respect to the variables (t, x), that u(·, l) = v(·, l) everywhere on U. The density of K in (0, K) and the continuity of both u and v with respect to the variable l yield u = v on U × (0, K). Now observe that U has been arbitrarily taken in (0, T) × (0, R), so that in fact u ∈ C^{1+α/2, 2+α, α/2}((0, T) × (0, R) × (0, K)), which concludes the proof of the lemma.
Assumptions and main results.
In this subsection, we introduce the data involved in the PDE system (1) with the required assumptions, and we state our main Theorem 2.4. Next, we proceed to the proof of a comparison theorem for the PDE system (1). Because the result is of particular importance for our forthcoming probabilistic inquiry into the construction of Walsh's spider motions whose spinning measure depends on the local time, we shall conclude the subsection with the statement of an extension of Theorem 2.4 to the case of an unbounded star-shaped network.
The proof is given in the Appendix.
Existence and uniqueness for a Parabolic PDE with Kirchhoff's local time condition.
For the rest of these notes, we fix α ∈ (0, 1).
We introduce the data D of the problem: the coefficients and free terms on each ray, the coefficients at the junction point, and the boundary data. We assume that the data D satisfies assumption (H), which gathers ellipticity and regularity requirements together with compatibility conditions at the boundaries. We can now state the main central result of this work, which asserts the unique solvability of the parabolic linear PDE system (1) posed on N_R and having a dynamical 'local-time Kirchhoff's boundary condition' at the junction point {0}.
Theorem 2.4. Assume that the data D satisfies assumption (H). Then, the system (1) is uniquely solvable in the class C^{1,2,0}_{{0}}(Ω_T) introduced in Definition 2.1. Next, we give the definitions of super and sub solutions for the system (1), and we prove a comparison theorem.
Assume that the data D satisfies assumptions (H).
Proof. Let λ(K, R) = λ > C(K, R), where C(K, R) is some constant whose expression will be given later (the definition of C(K, R) is given in (18)). First fix s ∈ (0, T) and ℓ ∈ (0, K). We argue by contradiction and we assume that the supremum of the (suitably λ-penalized) difference v − u is positive, where the supremum is taken over all points of the corresponding domain. Using the continuity and the terminal boundary conditions satisfied by u and v in the assumptions of the theorem, the supremum above is then reached at a point (t_0, (x_0, i_0), l_0). Assume first that x_0 = R. Using the fact that u is a super solution, whereas v is a sub solution, we obtain from the Neumann boundary condition at x = R two inequalities of opposite strict signs, and hence a contradiction.
Suppose now that x_0 ∈ (0, R); then the optimality conditions on the directional derivatives with respect to the variables t and x hold at (t_0, x_0, l_0). Using now the fact that v is a sub solution while u is a super solution of (1) on the ray R_{i_0}, together with the positivity assumption on the zero order coefficient, we obtain an estimate of the form (17). Therefore, using (17) and the defining property of λ, we obtain a contradiction.
Assume finally that x_0 = 0, so that v_i(t_0, 0, l_0) = v_j(t_0, 0, l_0) = v(t_0, 0, l_0) and u_i(t_0, 0, l_0) = u_j(t_0, 0, l_0) = u(t_0, 0, l_0). Using the regularity with respect to the variable l of both u and v at {0} (coming from condition iv) in the definition of C^{1,2,0}_{{0}}(Ω_T)), we obtain the optimality condition in the variable l. By definition of (t_0, x_0, l_0) = (t_0, 0, l_0), we have also that, for all i ∈ [[1, I]] and h ∈ [0, R], the penalized difference evaluated along the ray R_i at distance h does not exceed its value at the junction. Therefore, we may apply a first order Taylor expansion with respect to the variable x in the neighborhood of the junction point {0}. Now, using the ellipticity assumption on the coefficients (α_i)_{1≤i≤I} ((H) a)-(ii)), observing that the coefficient r is non-negative, and also using the fact that v is a sub solution while u is a super solution of (1) at {0}, we obtain an inequality which yields a contradiction.
All cases lead to contradictions, resulting in the fact that, for all 0 ≤ t ≤ s, the penalized difference v − u is non-positive on the corresponding domain. Using the continuity of u and v w.r.t. the variables (t, l), and letting s ↑ T and ℓ ↑ K, we deduce finally that v(t, (x, i), l) ≤ u(t, (x, i), l) for all (t, (x, i), l) ∈ Ω_T.
Existence and uniqueness for the solution of a parabolic PDE with Kirchhoff's local time condition on an unbounded domain. We conclude this section by stating a theorem regarding the uniqueness and the solvability for the system (1), posed this time on an unbounded star-shaped network N_∞, whose rays are copies of the half-line [0, +∞). The parameter l involved in the dynamic Kirchhoff's local time boundary condition at {0} is also considered to evolve in an unbounded half-line, namely we now have l ∈ [0, +∞).
Our concern comes from an upcoming research work [14]: we will need this result in order to prove uniqueness in law for a kind of generalized Walsh spider diffusion living on the star-shaped network N_∞ that selects its direction at the junction point {0} according to its own local time at the junction (see the Itô's rule (3) given in the Introduction).
For our purposes, we introduce the corresponding data D_∞. We assume that the data D_∞ satisfies the following assumption (H_∞): a) an ellipticity condition for the terms (a_i, α_i)_{i∈[[1,I]]}; b) a bounded Lipschitz regularity for the coefficients on each ray and for the coefficients at the junction point {0}; in particular, the map l → g(0, l) is Lipschitz bounded continuous (as are the coefficients and free terms in the assumptions b)-(i), b)-(ii)); c) a compatibility condition at the boundaries. Similarly to the definitions involved in Theorem 2.4, we give the definition of the class of regularity for a solution of system (1), now extended to an unbounded domain denoted by Ω_T^∞. We say that f belongs to this class if:
(i) the following continuity condition holds at the junction point {0}: f_i(t, 0, l) = f_j(t, 0, l) for all i, j ∈ [[1, I]];
(ii) for all i ∈ [[1, I]], the map (t, x, l) → f_i(t, x, l) has a regularity in the class C^{α/2, α, α/2}_loc;
(iii) the map (t, x, l) → f_i(t, x, l) has a regularity in the interior of each ray R_i in the class C^{1+α/2, 2+α, α/2}_loc;
(iv) at the junction point {0}, the map (t, l) → f(t, 0, l) is continuously differentiable with respect to l;
(v) for all i ∈ [[1, I]], on each ray R_i, f admits a generalized locally integrable derivative with respect to the variable l.
We have the following theorem, whose proof is postponed to the Appendix.
Theorem 2.8. Assume that the data D_∞ satisfies assumption (H_∞). Then the system (1), posed on N_∞ with l ∈ [0, +∞), is uniquely solvable in the class introduced just above.

3. The main result obtained by Von Below in [19] revisited.
Up to our knowledge, the first result obtained for linear parabolic equations posed on networks, involving Kirchhoff's type boundary conditions at the vertices, was obtained by Von Below in [19]. Essentially, it is proved in this paper that a linear parabolic problem on a network, with a classical Kirchhoff's condition at the junction, is well posed under natural smoothness and compatibility conditions; a related phenomenon was also observed in [15].
Our main objective is to ensure the well-posedness of the system (1), where a new variable l comes into play. If one wishes, as is natural, to exploit and adapt ideas similar to those of the classical approach, by following the techniques of proof given in [13] and performing the same kind of transformations as in [19] for example, the following issues would surely have to be considered: i) obtain an explicit solution on the half line with constant coefficients using the heat kernel.
This relates to the joint density of the reflected Brownian motion and its local time; ii) obtain the solvability with general coefficients on the half line (e.g. as in Chapter IV, Section 7 of [13]), taking good care of the fact that we do not have a uniformly parabolic operator in the variables (x, l); iii) adapt the theory of linear parabolic systems to the linear operator involved by the system, as in Chapter VII of [13], which would lead to very long polynomial calculations.
As already mentioned in the Introduction, in this paper we prefer to choose another path and dig into the recent ideas of [15], where the second author obtained classical solvability in Hölder spaces for quasi-linear parabolic systems posed on a star-shaped network, with a homogeneous Neumann or Kirchhoff's condition denoted by F, by constructing and studying a convergent elliptic scheme.
Let us now give some insights on the methodology used in [15]. First, elementary arguments show that the elliptic quasi-linear problem is well posed (see [10] or Appendix B in [15]). The data of the system satisfies the classical assumptions of uniform ellipticity, with quadratic growth in the gradient variable, given in Chapter VI of [13], whereas the boundary condition F is assumed to be increasing with respect to the gradient at the junction point {0}. The main key is to obtain first a bound for |∂_t u| in the whole domain (see Lemma 4.1 in [15]). Note that in the quasilinear context of [15], the price to pay was to consider homogeneous coefficients. Let us also mention that all the bounds for the solution are completely independent of Kirchhoff's boundary condition F (see Lemmas 4.1 and 4.2 in [15]).
Following these ideas, the construction of the solution of system (1) will be done via a convergent approximation scheme, namely a parabolic discretization scheme built on a discrete grid with respect to the variable l: 0 = l_0 < l_1 < · · · < l_m = K.
Getting accurate expressions of the bounds for the derivatives of the solution of each such l_p-step parabolic problem is of crucial importance in order to guarantee the convergence of the sequence towards a non-exploding solution as the mesh-size of the l-grid tends to zero. This section is entirely devoted to the matter of getting expressions of these bounds that are good enough (see Theorem 3.5).
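To fix ideas, one natural form of the l_p-step problem replaces the l-derivative at the vertex by a backward difference quotient on the grid above. The display below is a sketch of this idea under that assumption; it is not the authors' exact scheme, and the dots stand for the remaining lower order terms of the Kirchhoff condition:

```latex
% Sketch of the l_p-step problem on the grid 0 = l_0 < ... < l_m = K:
\partial_t u^{p}_i + a_i(\,\cdot\,,l_p)\,\partial_x^2 u^{p}_i
  + b_i(\,\cdot\,,l_p)\,\partial_x u^{p}_i + c_i(\,\cdot\,,l_p)\,u^{p}_i
  = f_i(\,\cdot\,,l_p) \quad \text{in } (0,T)\times(0,R),
\qquad
\frac{u^{p}(t,0)-u^{p-1}(t,0)}{l_p-l_{p-1}}
  + \sum_{i=1}^{I}\alpha_i(t,l_p)\,\partial_x u^{p}_i(t,0) + \cdots = 0 .
```

In this way, each step p solves a classical parabolic problem on N_R with a standard (non-dynamical) Kirchhoff-type vertex condition, fed by the trace u^{p-1}(·, 0) computed at the previous step.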
With this purpose in mind, we follow the same line of arguments as in [15]: we construct an elliptic system designed to converge to the parabolic problem. The linear character of our system permits us to simplify some of the arguments, since Bernstein's estimates are no longer needed to find a bound for the gradient term; also, up to a bit of extra burdensome technicalities, the strong assumption on the homogeneity of the coefficients that was needed in [15] is not required here. A central key is to obtain a uniform bound of the approximated time derivative n|u^{k-1}(t, 0) − u^k(t, 0)| of the elliptic scheme at the junction point {0}. This is done in Proposition 3.3, where we provide a refined uniform bound independent of the coefficients appearing on the rays: this refined bound will be crucial to ensure the convergence of our l-step parabolic scheme in Section 4.
In the whole remainder of this section, we consider a data set D′ consisting of the coefficients, free terms, and initial condition of the time-discretized problem. We assume furthermore that the data D′ satisfies the following assumption: a) an ellipticity condition for the terms (a_i, α_i)_{i∈[[1,I]]} and λ; b) compatibility conditions for the initial condition g. We consider a parabolic system posed on the star-shaped network N_R, discretized in time into a family of elliptic systems (E_k)_{1≤k≤n}, with u^0_i(x) = g_i(x). Applying inductively classical results on elliptic partial differential equations (see e.g. Theorem 2.1 of [10]) gives us insurance that at each step k ∈ [[1, n]] the elliptic system (22) admits a unique solution (u^k_i)_{i∈[[1,I]]} in the class C²(N_R). A map h in the class C²(N_R) is a super (resp. sub) solution corresponding to E_k if the corresponding inequalities hold at each point. The elliptic comparison theorem holds true in the class C²(N_R) (see Theorem 3.3 in [15]): if f is a super solution and v a sub solution, then f ≥ v in the whole domain N_R.
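For orientation, a Rothe-type time discretization with uniform step T/n, as used in [15], would lead to elliptic systems of roughly the following shape; this display is a plausible sketch under that assumption, not the exact system (22):

```latex
% Plausible Rothe-type step (uniform time step T/n); not the exact (22):
(E_k):\qquad
a_i(t_k,x)\,\partial_x^2 u^{k}_i + b_i(t_k,x)\,\partial_x u^{k}_i
  - (\lambda + n)\,u^{k}_i \;=\; -\,n\,u^{k-1}_i - f_i(t_k,x),
\qquad x\in(0,R),\ i\in[[1,I]],
```

completed by continuity and a Kirchhoff-type condition at the junction {0} and a Neumann condition at x = R; the strictly positive zero order coefficient λ + n is what makes each E_k uniquely solvable by the classical elliptic theory invoked above.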
For a fixed n ∈ N*, we will denote in the sequel by (L^k_i)_{i∈[[1,I]], k∈[[1,n]]} the family of operators, each acting on φ ∈ C²([0, R]) and defined accordingly. Using this notation, h being a super (resp. sub) solution corresponding to E_k implies L^k_i h ≥ 0 (resp. ≤ 0) for all x ∈ (0, R).
Let us now turn to the boundary conditions needed to apply the comparison theorem.
Moreover, it is also clear that ∂_x φ^k(R) = 0 ≥ 0. For the Kirchhoff condition, we need an inequality at the junction point that is guaranteed whenever the corresponding condition on the constants holds. In conclusion of this analysis, we have shown by induction that (φ^k_i)_{i∈[[1,I]]} is a super solution.
3.2. Uniform bound for the approximated time derivative.
where the quantities involved are as displayed above and where C is a universal constant (namely, one can choose C = 1188).
Proof. Step 1. Adaptive approximation of the identity
For technical reasons that will become clear later, we need to introduce approximations of the identity function. These approximations will be used to enforce the Kirchhoff condition for the super and sub solutions constructed in the proof.
In order to simplify the notation, let us fix for a moment k ∈ [[1, n]] and drop any reference to it. Let θ > 0 be a small parameter. We introduce an interpolation polynomial P^k_θ that satisfies the important facts listed below. The polynomial P^k_θ is constructed so that the θ-approximation of the identity ψ^k_θ it defines is a twice-differentiable function.
An elementary study of the polynomial Q(x) = 3x^5 − 8x^4 + 6x^3 shows that it takes only positive values and satisfies x − Q(x) ≥ 0 for any x ∈ [0, 1]. In particular, rewriting P^k_θ, we see that the stated inequality holds for all x ∈ [0, θ]. Hence, from the bound (33), we see that it is possible to choose θ_0 > 0 small enough so that the required smallness condition holds. Observe also that there is a universal constant C > 0 (a rough computation gives C ≥ 66 × 18 = 1188, as announced in the statement of the proposition) for which the corresponding estimate is satisfied.
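The displayed definition of ψ^k_θ does not survive above; the following closed form is only our reconstruction, consistent with the properties of Q just listed and ignoring the adaptive k-dependence of the actual construction:

```latex
% A C^2 theta-approximation of the identity built from
% Q(x) = 3x^5 - 8x^4 + 6x^3, which satisfies Q(0)=Q'(0)=Q''(0)=0 and
% Q(1)=Q'(1)=1, Q''(1)=0 (our reconstruction, not the source's display):
\[
  \psi_\theta(x) \;=\;
  \begin{cases}
    \theta\,Q\!\left(x/\theta\right), & 0 \le x \le \theta,\\[2pt]
    x, & \theta \le x \le R,
  \end{cases}
\]
% so that \psi_\theta \in C^2([0,R]), \psi_\theta(0) = \psi_\theta'(0) = 0,
% and 0 \le x - \psi_\theta(x) \le \theta on [0,R].
```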
Here we made use once again of the bound (33), for n above a fixed threshold, where (ε_n(θ)) is a sequence of functions vanishing as θ goes to 0 (non-uniformly w.r.t. n) and (M^θ_{k,n})_{k∈[[1,n]]} is a well-chosen, purposely designed, uniformly bounded sequence of positive numbers.
Step 2. Analysis. To that end, the main idea is to apply, for each k ∈ [[1, n]], the comparison theorem to E_k with a super solution of the type φ^k_{i,θ} displayed above. Observe also that for all x ∈ (0, R) the identity (46) holds; using all the previous estimates together with the induction hypothesis, we see what is needed in order to guarantee L^k φ^k_θ ≥ 0. Let us now turn to the boundary conditions needed to apply the comparison theorem.
The continuity condition at the junction point {0} is satisfied: indeed, it is satisfied for the first summand, and since ψ_{i,θ}(0) = ψ_{j,θ}(0) = 0 for all (i, j) ∈ [[1, I]]², we have the required equality. It is also clear that the Neumann inequality holds for all x in the vicinity of R (satisfied when k = 1 because of the compatibility condition b)(ii)).
We now look at the Kirchhoff condition. Since the initial condition (g_i)_{i∈[[1,I]]} satisfies the compatibility conditions, the required inequality is guaranteed whenever the stated condition on the constants holds. In conclusion of our analysis, the family of functions (φ^k_{i,θ})_{k∈[[1,n]]} defined in (39) is assured to be a super solution if the sequence (M_{k,n})_{k∈[[0,n]]} satisfies the initialization condition (40) together with (47) and (48).
Step 3. Synthesis
In view of our previous analysis, we construct a purposely designed sequence (M^θ_{k,n})_{k∈[[0,n]]} by setting its values as displayed above. Defined in this way, the sequence (M^θ_{k,n})_{k∈[[0,n]]} satisfies (40), (47) and (48). Now, using the explicit expression of M^θ_{k,n} and letting θ tend to 0 in the previous inequality finally yields the announced bound, for all i ∈ [[1, I]] and k ∈ [[1, n]], for large enough n ∈ N*.
Unfortunately, the previous inequality does not give a sufficient bound at the junction point {0} for our purposes. In order to ensure the convergence of the parabolic scheme involving the local time variable l, we need a more refined bound on the time derivative at the junction point {0}. This is the subject of the next subsection, where we refine the previous analysis to get a better bound at the junction point {0}.
3.3. Refined estimates for the approximated time derivative at the junction point.
In this subsection, we prove a specific estimate for the approximated time derivative at the junction point. Moreover, it is notable that the same structural equation (50) will also be used as the key ingredient to show that the accumulated time spent by the spider motion at the junction point has Lebesgue measure zero (the non-stickiness condition), which is a crucial step in order to prove an Itô formula for the spider motion in the presence of discontinuities of the driving coefficients at the junction point.
where C(g) is the constant defined in (31).
Proof. As already mentioned, the main idea is to perform the same computations carried out in the proof of Proposition 3.2, but replacing the construction of the sequence (M_{k,n}) by the construction of a sequence of functions (v_{k,n}) whose values at x = 0 depend only crudely on the parameters in the Kirchhoff condition. Such a sequence of functions will naturally explode as n tends to infinity on the interior of each branch (except at the junction point {0}), which is just enough for our purposes.
and by defining v_{k,n} for k ∈ [[1, n]] inductively as the unique solution in C²([0, R]) of a well-posed second order ordinary differential equation, where the source term κ^θ_{k,n} is defined by induction. Here the constant K and the error term ε_n(θ) are the same as those appearing in the proof of Proposition 3.2. Note that from the explicit form of v^θ_{k,n} it is easy to show the stated monotonicity by induction; in particular, v^θ_{k,n} is an increasing function. Recall the definition of our family of approximations of the identity ψ^k_θ introduced in (36). Following the proof of Proposition 3.2, we show by induction that the corresponding maps are respectively super and sub solutions of the corresponding elliptic problems.
Initialization holds true thanks to the conditions imposed on g at x = 0 and x = R, the expression of the constant v^θ_{0,n}, and the fact that ψ^0_θ ≡ id. Following the same computations as in Step 2 (Analysis) of the proof of Proposition 3.2, we are going to show that the conditions needed to ensure the comparison on the whole domain are satisfied.
Clearly, since g, u^{k−1}, (ψ_{i,θ})_{i∈[[1,I]]} and the constant family (v^θ_{k,n})_{i∈[[1,I]]} all belong to C²(N_R), we verify that φ^k_θ ∈ C²(N_R). Moreover, using the fact that ∂_x v^θ_{k,n}(0) = 0, arguments similar to those used in the proof of Proposition 3.2 ensure that the boundary inequality at the junction point holds true. More precisely, one uses the Kirchhoff condition satisfied by u^{k−1}, the initial values, and the positivity of the derivative of v^θ_{k,n}.
We now focus on the remaining inequality involving the operator L^k_i on each edge. For k = 1, the initialization condition L¹φ^1_θ ≥ 0 reduces to an inequality which is clearly satisfied because of our definition of v^θ_{1,n}(0) and because v^θ_{1,n} is an increasing function.
Let us now turn to the case k > 1 and fix k ∈ [[2, n]]. Our induction hypothesis asserts the bound at the previous step. Dropping any reference to the branch index i, and using this notation together with the notation of the proof of Proposition 3.2, we carry out the corresponding computations. In conclusion, we ensure the required inequality for any k. The same type of computation may be performed to prove that the corresponding family is a sub solution. In particular, since ψ^k_θ(0) = 0, and remembering our prescribed initial condition on v^θ_{k,n}(0), we conclude the desired two-sided bound at the junction point. The result of the proposition then follows by letting θ tend to zero in the right-hand side. Next, with the quantities set as above, integrating between 0 and R, integrating the gradient term by parts, and using the results of Propositions 3.1 and 3.2, the ellipticity of a and the assumptions (H′) on the coefficients (a, b, c, f), we obtain the stated inequality. On the other hand, from the results of Propositions 3.1 and 3.2, we see that the corresponding pointwise bound holds for all x ∈ [0, R], where C_0 and C_1 are given respectively in (24) and (30). Hence, we are in a position to use Grönwall's lemma, which gives the announced estimate. 3.5. The main result of Von Below [19] revisited. The results of the preceding subsections lead us to gather uniform estimates on the sequence (u^k)_{k∈[[0,n]]} and its partial derivatives. As shown below, arguments similar to those used in the proof of Theorem 2.2 in [15] guarantee the convergence of the elliptic scheme (E_k)_{k∈[[0,n]]}. In turn, this allows us to state the following theorem, which is somewhat a refined version, in the case of a star-shaped network, of the main result obtained by Von Below in [19].
Theorem 3.5. Assume that the data D′ satisfy assumptions (H′). Then the parabolic system admits a unique classical solution. Moreover, there exist constants (C_0, C_1, C_2, C_3), depending only on R, T and the data D′, such that the stated bounds hold, with the expressions of C_0, C_1, C_2, C_3 given respectively by the displayed formulas, where C stands for the universal constant of Proposition 3.2. Proof. The proof uses exactly the same arguments as the proof of Theorem 2.2 in [15], which is given in the quasi-linear parabolic context with a fully non-linear Kirchhoff boundary condition at the junction point {0}. For the convenience of the reader we shall outline the main steps of the proof, without lingering too much on the details.
Uniqueness (pointwise) is a straightforward consequence of the comparison theorem (Theorem 2.4 of [15]), which remains applicable in our linear framework.
Let n ≥ (⌊|c|_∞⌋ + 1) ∧ |c|²_∞. Consider the subdivision (t^n_k = kT/n)_{0≤k≤n} of [0, T], and let (u^k)_{0≤k≤n} be the solution of the elliptic scheme E_k defined in (22). From the estimates obtained in Propositions 3.1, 3.2 and 3.4, there exists a constant M > 0, independent of n, such that the corresponding uniform bounds hold. Define the following sequence (v^n)_{n≥0} in C^{0,2}([0, T] × N_R), piecewise differentiable with respect to the time variable. The uniform upper bounds in (59) then yield a constant M_1, independent of n and depending only on the data of the system, such that the corresponding estimate holds for all i ∈ [[1, I]]. Using Lemma 2.2, we deduce that there exists a constant M_2(α) > 0, independent of n, such that for all i ∈ [[1, I]] we have a global Hölder condition. We then deduce from Ascoli's theorem that, up to a subsequence denoted in the same way, (v^n) converges; since v^n satisfies the continuity condition at the junction point, so does the limit. We now focus on the regularity of v in the interior of each ray R_i. We prove that v ∈ C^{1+α/2, 2+α}((0, T) × int N*_R) and satisfies the equation on each edge. Using once again (59), there exists a constant M_3, independent of n, such that for each i ∈ [[1, I]] the corresponding bound holds. Hence, up to a subsequence abusively denoted by the same subscript n, we obtain weak convergence against test functions in C^∞_c((0, T) × (0, R)), the set of infinitely differentiable functions on (0, T) × (0, R) with compact support. We now prove that the limit equation holds for any ψ ∈ C^∞_c((0, T) × (0, R)). Using that (u^k)_{k∈[[1,n]]} is the solution of (22) and satisfies the equation on each ray R_i, together with assumption (H′) and the Hölder equicontinuity in time of (v^n_i, ∂_x v^n_i), we obtain a constant M_4(α), independent of n, controlling the error terms. For the Laplacian term, we write, for all i ∈ [[1, I]] and each (t, x) ∈ (t^n_k, t^n_{k+1}) × (0, R), the corresponding decomposition. Using again the Hölder equicontinuity in time of (v^n_i, ∂_x v^n_i), the uniform bound on |∂²_x u_{i,k}| over [0, R], and the fact that the coefficients a_i are almost everywhere differentiable with respect to the variable x, we conclude with an integration by parts. Therefore, for any ψ ∈ C^∞_c((0, T) × (0, R)), the weak formulation passes to the limit. Using Theorem III.12.2 of [13], we finally obtain the interior Schauder regularity, and we deduce that v_i satisfies the equation on each edge. Remark now, from the estimates (59), that ∂_t v^n_i and ∂²_x v^n_i are uniformly bounded with respect to n. Since t ↦ ∂_t v_i(t, x) ∈ C((0, T)) and t ↦ v_i(t, x) is Lipschitz continuous on [0, T] uniformly w.r.t.
x ∈ [0, R] (this can be seen because t ↦ v^n_i is equi-Lipschitz continuous and there is uniform convergence of v^n_i to v_i), we obtain that ∂_t v_i is bounded, so that ∂_t v_i ∈ L^∞((0, T) × (0, R)). The same argument may be used to obtain ∂²_x v_i ∈ L^∞((0, T) × (0, R)). Similar arguments show that v satisfies the linear Kirchhoff boundary condition at the junction point {0}.
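The display defining the interpolating sequence (v^n) in the proof above is likewise missing; a standard choice matching the stated properties (piecewise differentiability and equi-Lipschitz continuity in time), offered only as a plausible reconstruction, is the linear-in-time interpolation of the elliptic iterates:

```latex
% Plausible form of the interpolation v^n (our reconstruction): on each ray,
\[
  v^n_i(t,x) \;=\; u^k_i(x)
      + \tfrac{n}{T}\,(t - t^n_k)\,\bigl(u^{k+1}_i(x) - u^k_i(x)\bigr),
  \qquad t \in [t^n_k, t^n_{k+1}],
\]
% so that \partial_t v^n_i = (n/T)(u^{k+1}_i - u^k_i) on each time slab,
% which is exactly what the approximated time derivative bounds control.
```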
4. Proof of the main result
In this entire section, we work under the assumption (H) for the data D.
Let n ∈ N*. We introduce the following grid of [0, K]: G^K_n := { l_p := Kp/n | p ∈ [[0, n]] }. We consider the sequence (u^p)_{p∈[[0,n]]} built by induction, constructed so that at each step p ∈ [[0, n − 1]], u^p solves a backward parabolic scheme (in the variable l) on the star-shaped network N_R. The sequence (u^p)_{p∈[[0,n]]} is initialized with the backward initial condition. For any p ∈ [[0, n − 1]], let us define for a while the auxiliary quantity displayed above, which remains positive as long as B_p ≥ |f|_∞.
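The displays specifying the scheme (P_p) do not survive above; the junction identity recalled verbatim later in the proof of Theorem 2.4 pins down its structure at {0}, and a hedged sketch of the p-th step (our reading, with the interior equation assumed to carry the coefficients frozen at l = l^n_p) is:

```latex
% Hedged sketch of the p-th step (P_p) of the backward scheme in l.
% Interior equation on each ray R_i, with l frozen at l^n_p (assumption):
\[
  \partial_t u^p_i - a_i(t,x,l^n_p)\,\partial_x^2 u^p_i
      + b_i(t,x,l^n_p)\,\partial_x u^p_i + c_i(t,x,l^n_p)\,u^p_i
      \;=\; f_i(t,x,l^n_p),
\]
% junction condition, as recalled verbatim in the proof of Theorem 2.4:
\[
  n\bigl(u^{p+1}(t,0) - u^p(t,0)\bigr)
      + \sum_{i=1}^{I} \alpha_i(t,l^n_p)\,\partial_x u^p_i(t,0)
      - r(t,l^n_p)\,u^p(t,0) \;=\; \varphi(t,l^n_p) + \beta^n_p.
\]
```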
In view of all the previous conditions, we may then set the constant displayed above. Proof. Recall from Theorem 3.5 that t ↦ u^p(t, 0) ∈ W^{1,∞}([0, T]). Note also that the constant M(g) of the statement corresponds to the constant (31) of Proposition 3.3, now taking the parameter l into account.
For p ∈ [[0, n − 1]], we make use of the estimate (58) in Theorem 3.5, with the definitions (62)-(63) and λ = n coming from the problem P_p. We obtain the corresponding bound, where we made use of the stated elementary inequality, and the conclusion clearly follows. Proof. From the result of Theorem 3.5, there are constants L_p such that the stated bounds hold. Since the coefficients and their weak derivatives are uniformly bounded with respect to l, and |u^p| is uniformly bounded, the idea is to follow the arguments exposed in Theorem 2.2 VI in the monograph [13]. More precisely, Theorem 2.2 VI in [13] states that, in the context where the coefficients (a, b, c, f) are continuously differentiable and a is elliptic with lower bound a̲ > 0, the relevant norms can be estimated in terms of the quantities sup |v(t, ·)|, the ellipticity constant a̲, the upper bounds of the coefficients (a, b, c, f) and their derivatives, and the supremum of |∂_t v| at the boundary. For the solution of (60), we cannot apply the result of Theorem 2.2 VI in [13] directly on each branch, because the coefficients involved in (P_p) and the values of u^p at the boundaries x = 0, x = R possess only Lipschitz continuous regularity w.r.t. the time variable t. However, we may smooth by convolution the terms t ↦ u^p(t, 0), t ↦ u^p(t, R), together with the coefficients (a, b, c, f). Then, using standard notation for the convolutions with ε as upper index, we may consider a solution with smooth Dirichlet boundary conditions u^{p,ε}(·, 0), u^{p,ε}(·, R) on the time-edge of the parabolic cylinder. Well-known results (see for example Theorem 3.4' in [13]) ensure that the solution w^p_ε satisfies the conditions of Theorem 2.2 VI in [13]. Now note that the smoothed data are uniformly bounded w.r.t. ε in C¹ norm; namely, using transparent notation, the corresponding bounds hold. Also, it is easy to check after some lines of calculation (e.g. using arguments similar to those in the proof of Theorem 2.2 VI of [13], but in our much simpler case) that w^p_{ε,i} converges to u^p_i. Therefore, using first this convergence, the norm |u^p_i(·, 0)|_{W^{1,∞}([η,τ])} is also estimated in terms of the quantities |a, b, c, f, u^p(·, 0), u^p(·, R)|_{W^{1,∞}}, |∂_x u^p|_∞ and |u^p|_∞; the same holds true for |u^p_i(·, R)|_{W^{1,∞}([η,τ])}. We refer to equation 2.6 in the proof of Theorem 2.2 VI in [13] for the exact expression of the upper bound, which is uniform w.r.t. [η, τ] × K: recall that |∂_x u^p|_∞ and |u^p|_∞ are uniformly bounded with respect to p and that we have obtained a uniform bound for the boundary time derivative; the remaining quantities are treated similarly. From the results of Propositions 3.3 and 3.2, or using a standard interpolation lemma, we get the following:
4.6. Uniform bound for the term n|u^{p+1} − u^p|_∞. Our concern is to obtain a uniform bound for the term n|u^{p+1} − u^p|_∞. Importantly, note that we obtain a uniform bound for n|u^{p+1} − u^p|_∞ only for p ∈ [[0, n − 2]], and not for p = n − 1 (contrary to the bounds gathered for |u^p|, |∂_t u^p|, |∂_x u^p| and |∂²_x u^p|, which hold for all p ∈ [[0, n]]). Because of the lack of first order compatibility conditions w.r.t. l at the boundary, it does not seem reasonable to expect the bound below to be satisfied for p = n − 1.
Proof. We will show by induction that, for a well-chosen constant B_p to be produced later, which can be chosen independently of p ∈ [[0, n − 2]] and of n, the following map is a super solution. At x = R, the condition is trivial, whereas at the junction point we remark that it is sufficient to satisfy the displayed inequality. Let p ∈ [[0, n − 2]] be fixed, and first choose B_p satisfying the stated condition, which is finite in view of our previous estimates.
Making use of the uniform upper bounds obtained for u^p and its derivatives, we are going to see that it is also possible to produce the constant B_p so that the required condition holds on each ray R_i. Because of the Lipschitz regularity of the coefficients with respect to the variable l and the upper bounds obtained for the derivatives of u^p, we have the stated estimate for all (t, x) ∈ (0, T) × (0, R). Therefore, with the displayed choice of B_p, we obtain that κ^n_p is a super solution. The same arguments may be applied to construct a sub solution of the form −u^{p+1}_i − (t + 1)B_p/n with the same constant B_p, and we see that this constant can be chosen independently of p ∈ [[0, n − 2]] and of n.
Gathering both facts together yields the announced result.
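The following one-line gloss records how the two facts combine; it assumes (our reading) that the super solution is the mirror image u^{p+1}_i + (t + 1)B_p/n of the displayed sub solution:

```latex
% Gloss (our reading): the barriers  u^{p+1}_i \pm (t+1) B_p / n  sandwich
% u^p_i by the comparison theorem, hence
\[
  \bigl|u^{p+1}_i(t,x) - u^p_i(t,x)\bigr| \;\le\; \frac{(t+1)\,B_p}{n}
  \quad\Longrightarrow\quad
  n\,\bigl|u^{p+1} - u^p\bigr|_\infty \;\le\; (T+1)\,B_p,
\]
% uniformly in p \in [[0, n-2]] and in n, which is the announced bound.
```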
Proof of Theorem 2.4.
Proof. Uniqueness is a direct consequence of the comparison theorem (Theorem 2.6).
Using once again (69), there exists a constant B_3, independent of n, such that the corresponding bound holds for each i ∈ [[1, I]]. Hence, up to a subsequence abusively denoted in the same way by n, the corresponding terms converge when tested against ψ(t, x, l) dl dx dt.
We now prove that the limit equation holds for any ψ ∈ C^∞_c((0, T) × (0, R) × (0, K)). Using that (u^p)_{p∈[[0,n]]} is the solution of (60) and satisfies the corresponding equation on each ray R_i, combined with the Lipschitz regularity of the coefficients and free terms (a, b, c, f) w.r.t. the variable l (assumption (H)) and the uniform upper bound obtained in (69), we obtain that there is a constant B_4, independent of n, controlling the error terms. In turn, this leads to the expected result, namely that the weak formulation tested against ψ(t, x, l) dl dx dt vanishes in the limit. Now, using the key Lemma 2.3, which gives a result on interior regularity for weak parabolic solutions depending on the parameter l, we conclude that on each ray R_i, v_i has the stated interior regularity. Moreover, from the estimates (69), recall that ∂_t v^n_i and ∂²_x v^n_i are uniformly bounded with respect to n. Since t ↦ ∂_t v_i(t, x, l) ∈ C((0, T)) and t ↦ v_i(t, x, l) is Lipschitz continuous on [0, T] uniformly w.r.t. (x, l) ∈ [0, R] × [0, K] (this can be seen because t ↦ v^n_i is equi-Lipschitz continuous and there is uniform C^{0,1,0} convergence of v^n_i to v_i), we obtain that t ↦ ∂_t v_i(t, x, l) is bounded on (0, T), uniformly w.r.t. the variables x and l and independently of the truncation level. Therefore, ∂_t v_i ∈ L^∞((0, T) × (0, R) × (0, K)) (using the arbitrary choice of the truncation level in (0, K)). The same argument may be used to obtain ∂²_x v_i ∈ L^∞((0, T) × (0, R) × (0, K)). We conclude finally that v belongs to the class C^{1+α/2, 2+α, ·}. Therefore, for any fixed q ∈ (1, +∞), by reflexivity of L^q((0, T) × (0, R) × (0, K)), there exists a subsequence (n_q) such that ∂_l v^{n_q}_i converges weakly in L^q((0, T) × (0, R) × (0, K)) to some ξ_i ∈ L^q((0, T) × (0, R) × (0, K)). Because of the strong convergence of (v^{n_q}_i) to v_i and the almost-everywhere uniqueness of weak derivatives, we identify ξ_i = ∂_l v_i a.e. in (0, T) × (0, R) × (0, K), and ∂_l v_i ∈ L^q((0, T) × (0, R) × (0, K)). This shows that the weak limit ∂_l v_i belongs to the stated class. Recall first that for all p ∈ [[0, n − 1]] and all t ∈ (0, T): n(u^{p+1}(t, 0) − u^p(t, 0)) + Σ_{i=1}^{I} α_i(t, l^n_p) ∂_x u^p_i(t, 0) − r(t, l^n_p) u^p(t, 0) = φ(t, l^n_p) + β^n_p. We conclude then that the limit v is in the stated class and satisfies the junction condition. Finally, for x < y we write the decomposition ∂_x u(t, x, l) − ∂_x u(s, x, l) = (1/(y − x)) ∫_x^y (∂_x u(t, x, l) − ∂_x u(t, z, l)) dz + (1/(y − x)) ∫_x^y (∂_x u(t, z, l) − ∂_x u(s, z, l)) dz + (1/(y − x)) ∫_x^y (∂_x u(s, z, l) − ∂_x u(s, x, l)) dz.
Using the uniform Hölder condition in time satisfied by u, with respect to the time variable t, we have |∫_x^y (∂_x u(t, z, l) − ∂_x u(s, z, l)) dz| ≤ 2ν_1 |t − s|^α |y − x|.
Assuming that |t − s| ≤ ((3R/2)^{1+γ} γν_3/ν_1)^{1/α} ∧ 1, and minimizing the right-hand side of the last inequality in y ∈ [0, R], for y > x, we get that the infimum is reached at the displayed value, and then the stated Hölder bound follows, where the constant C(ν_1, ν_3, γ) depends only on the data (ν_1, ν_3, γ) and is given by the displayed expression. For the cases y < x and x ∈ [R/2, R], we argue similarly. We conclude by exchanging the roles of l and t, to obtain the required Hölder condition, in the variable l, satisfied by ∂_x u, and that completes the proof.
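The display being minimized is lost above, so the following is orientation only and not the source's computation: interpolation arguments of this shape minimize, over the window width h = y − x, a bound of the generic form A h^γ + B h^{-1}, and the 1 + γ exponents in the surviving constant are consistent with that trade-off:

```latex
% Generic trade-off (orientation only): minimizing F(h) = A h^\gamma + B/h
% over h > 0 gives
\[
  h_\star = \Bigl(\tfrac{B}{\gamma A}\Bigr)^{1/(1+\gamma)},
  \qquad
  F(h_\star) = c(\gamma)\, A^{1/(1+\gamma)}\, B^{\gamma/(1+\gamma)},
\]
% so a contribution B \propto \nu_1 |t-s|^\alpha yields a Hölder gain of
% exponent \alpha\gamma/(1+\gamma), the shape of gain such arguments produce.
```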
Proof of Theorem 2.8.
Proof. For the sake of conciseness, we omit the details and outline the main arguments needed for the proof.
Uniqueness: Let u and v be two solutions of (20) in the stated class. This implies the displayed inequality for all (t, (x, i), l) ∈ [0, T] × N_γ × [0, γ]. Therefore, for all (t, (x, i), l) ∈ [0, T] × N_γ × [0, γ], the corresponding bound holds; the case of the opposite sign is treated in the same way. Existence: For the sake of simplicity, we introduce a system of the form (1) on the stated domain, with Dirichlet boundary conditions at x = γ rather than Neumann boundary conditions.
More precisely, consider the following system: ∂_t u_i(t, x, l) − a_i(t, x, l) ∂²_x u_i(t, x, l) + b_i(t, x, l) ∂_x u_i(t, x, l) + c_i(t, x, l) u_i(t, x, l) = f_i(t, x, l), for (t, x, l) ∈ (0, T) × (0, γ) × (0, γ), together with the junction condition involving ∂_l u(t, 0, l). Leaving aside some technical details, we affirm that it is possible to apply the classical barrier method in order to obtain a global bound for the gradient that involves only the gradient at the boundary x = γ. Hence, whenever the compatibility condition u_i(t, x, γ) = g_i(x, γ), (t, x) ∈ [0, T] × [0, γ], is satisfied, the system (71) is uniquely solvable in the relevant class. Denote by u^γ the unique solution of (71). Following the same arguments as those in the proof of Theorem 2.4, we get that, up to a subsequence, (u^γ) converges locally uniformly, as γ goes to +∞, to some map u which solves (20). In order to prove this statement, the following facts are needed: the upper bounds given in Subsections 4.1, 4.2 and 4.3 imply that u ∈ C | 2023-04-18T01:16:30.899Z | 2023-04-17T00:00:00.000 | {
"year": 2023,
"sha1": "211b590dfa9702adec3a7228b59de43d5c0a74fc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "211b590dfa9702adec3a7228b59de43d5c0a74fc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
238834074 | pes2o/s2orc | v3-fos-license | Research on the Reading Comprehension and Aesthetic Experience of Poster Design Based on Gadamer’s Philosophy
Based on Gadamer’s philosophical viewpoint, the research scope starts from the domain of philosophical interpretation and extends to the field of poster design communication and aesthetics. In this paper, on the one hand, the author explains that art and beauty are basic ways of existence, and artistic experience surpasses the natural science method and is close to philosophical experience and historical experience; On the other hand, the paper clarifies and explains the understanding and being understood in poster design, as well as the understanding characteristics of aesthetic experience and artistic experience. While analyzing some hermeneutic phenomenon, this paper discusses the aesthetic function of art, and applies the combination of this function and aesthetic perception to the category of design. In consideration of the relationship between the two, this paper studies and discusses how to enhance the effect of poster design through effective methods, which would only deepen viewers’ understanding of the work, but also enable viewers to enjoy the beauty of different degrees, so as to achieve the purpose of promoting information exchange and emotional communication, and further deepen the design connotation of the work. The communication function of poster design works needs to be realized in the application and appreciation, and its content expression needs to be realized in the subject understanding activities. Designers need to feel the unity of the meaning and form of the works in the poster design practice and experience, and explore the purpose and intention of the works of poster design in the aesthetic experience of art, thereupon then get the understanding of reality and the perception of the future of design, and then achieve the lofty pursuit of artistic aesthetics.
Hans-Georg Gadamer is a contemporary German philosopher and aesthetician, and one of the founders and main representatives of modern philosophical hermeneutics and hermeneutic aesthetics.
His main philosophical ideas are respected and cited for reference by many disciplines, including art. In his main work, Truth and Method, he takes the experience of art as a breakthrough point in order to understand artistic experience and the spiritual sciences beyond self-consciousness. These propositions and viewpoints can inspire our artistic thinking and design practice. Gadamer regards aesthetics as a part of philosophical hermeneutics. He thinks that art reveals our existence, and that art and beauty are basic ways of existence; artistic experience transcends the method of natural science and is close to the category of philosophical experience and historical experience, so art becomes the starting point of hermeneutics. The language of art is established through the following facts: it enters into dialogue with everyone's self-understanding, and it always does so with the help of its own simultaneity in the present. "Symbol does not simply abandon the confrontation between the conceptual world and the perceptual world. In other words, symbol also reminds people of the disharmony between form and essence, performance and content" (Gadamer, 2004, p. 101).
"Whether an artistic work has the nature of language" is still a question of understanding: can artistic language be understood? Poster image reading is based on visual culture. As people understand and review a design work, they would obtain their own perspective and focus. Such reading is a kind of creative reading. With the advent of the era of picture reading, image has become one of the important means for people to gain information. Gadamer raised the aesthetic experience of image to the height of philosophy, which helps us to abandon the dependence on absolute truth and treat traditional text and vision with a pluralistic and open attitude, including poster design.
Horizon Fusion: Image Form and Connotation Translation of Poster Design
We all know that all expressions of world experience are delivered by language. To express meaning, one must first present language. Any rational explanation that can be understood must have the character of language.
Obviously, what philosophical hermeneutics studies is still a kind of language event, that is, the problem of translating one language into another. Therefore, what philosophical hermeneutics studies is how to handle the relationship between two languages. Of course, the translation here is based on comprehension. As a communication art, poster design works convey their content and significance to viewers in a special language form.
In design works, the material of artistic language is diverse, and the meaning expressed is multiple. The language of poster design expresses rich, multiple and specific meanings. The dialogue between people and poster images is generally placed in a world of dialogue and hermeneutics. The difference between the artistic language of design and general written language is that viewers can obtain infinite meaning in visual reading.
The understanding of artistic works is a process of "fusion of horizons", the fusion of the horizon of the aesthetic subject with that of the work of art itself. This fusion creates a new horizon beyond the two. The difference between artistic language and the general concept lies in its infinite meaning. Gadamer believes that the language of art means the overload of the meaning of the work itself. The difference between the language of art and everything that can be translated by concept rests with its finiteness, which is likewise grounded in this overload of meaning.
Gadamer's philosophical hermeneutics holds that real understanding is the "fusion of horizons" between the text and the viewer. This fusion produces a kind of historical truth and the truth of historical understanding, so as to reach the degree of historical validity in understanding, namely "effective history". Among all poster design languages, because of the identity of purpose conveyed by image, text, color, theme, carrier and other elements, posters offer the most direct experience of this "effective history". Formed by the interaction between viewers and the objects of understanding, the fusion of horizons prompts works to generate new meanings constantly. With the continuous expansion of viewers' horizons, the meaning of art breeds infinite possibilities. Facing an artistic work, ordinary understanding focuses only on the subject's objective understanding of the object, while Gadamer regards understanding as a two-way interaction between subject and object. This means that the understanding of poster design language is no longer the unilateral mapping of the subject onto the object, but understanding in a broad sense, the mutual infiltration of the ideas of people and works, which then achieves the "fusion of horizons" of which Gadamer speaks. It also shows that readers' understanding of the language of poster works is open and inclusive. It should be noted that, in order to avoid deviation and misunderstanding in viewers' interpretation of images, the designer should take into account differences in cultural background, world outlook, nationality, language, cultural form and dialogue relationships. Designers also need to pay attention to reading comprehension that focuses on images of the visible world, and to the meanings images take on from different viewers' perspectives, with the linguistic role of invisible factors taken into account. Only in this way can designers fully understand the real demands hidden in a series of works, and express reasonable logic through metaphor behind the images of different propositions. Although some logics are non-realistic reflections, they are all image languages in line with visual logic.
Furthermore, in design we often regard the object of understanding as another subject in dialogue with "I", which really makes hermeneutics break through traditional epistemology, just as Gadamer expressed in Truth and Method: "tradition is not something we inherit, but something created by ourselves, because we explain the process of tradition, and we're involved in this process, so we further limit the tradition" (Gadamer, 2004, p. 363). Traditional epistemology often assumes that there is an eternal essence behind the phenomenon being recognized. Gadamer makes us re-understand tradition: tradition is no longer regarded as a dead past, but as a living, real process taking place in us. On the traditional view, the cognitive subject should abandon its subjectivity completely and attend to the recognized object in the attitude of a pure spectator, so as to insist on the objectivity of knowledge. As an object of understanding, the poster has been transformed, in hermeneutics, into mutual understanding among subjects. In epistemology, no matter how the subject understands the object, it will not change the object; but in hermeneutics, subject and object interact in dialogue. This kind of serious dialogue always changes both sides, and finally reaches the fusion of horizons.
The Mirror Image of Language: Image Reading Comprehension and Experience Generation
In essence, any artistic work has the character of language, which is the basis on which all artistic works can be understood and the reason why all artistic works exist and spread. "All art conventions are the conceptual forms of creating and expressing some kind of vitality or emotion" (Langer, 2013, p. 40). The creation of poster images can be regarded as the transformation and symbiosis of words and images. This kind of creation embodies a series of relationships between subject and object, visual symbols and meaning, such as reproduction, meaning indication and graphic information transmission. Among them, words and themes are signifiers. In their connection with images, images assume the role of speakers. Although the language of words and that of visual figures differ, we can get inspiration and harmony from the indication of the signifier. In semiotics, the relationships between words and ideas, discourse and thought, all revolve on the same hinge, which connects symbols with index images, arbitrary codes with "natural" codes (Mitchell, 2012, p. 72). The understanding of graphic images is also included in the category of psychology. Wittgenstein once explained hieroglyphics with the model of the linguistic picture theory. Gadamer said: "if I want to create a kind of philosophical hermeneutics, then its prehistory has shown that the science of 'understanding' constitutes its starting point" (Gadamer, 2004, p. 380).
In addition to the science of understanding, a hitherto unknown element also needs to be added, that is, the experience of art. Art, like all the sciences of history, is a way of interpreting through experience, by which we directly participate in the understanding of the form, content and theme of posters.
Experience is named Erlebnis in German. According to Gadamer's research, although the word "experience" began to appear in the works of some writers and theorists after the 1830s, it became a concept distinct from "experience" in the sense of Erfahrung only after the 1870s. Generally speaking, in the creation of posters, we often associate the expression of graphics with literary language. Whatever the style and theme, the main expressive language of a poster work depends on visual graphics or words. As a language of information expression, the motivation of the visual graphics in posters to be experienced and interpreted must be clear and accurate, so that effective interpretation can be achieved between the transformation and presentation of meaning. Gadamer believes that understanding is a part of universal human experience and takes place in many fields of human life. "If something has not only been experienced, but also has the significance of continued existence, then it belongs to experience. What becomes experience in this way completely obtains a new state of existence in terms of artistic expression" (Gadamer, 2004, p. 079).
If something is known as experience, or evaluated as an experience, the accumulation of its own meaning makes it form a unified whole of meaning. Gadamer said: "the relationship between life and experience is not the relationship between something general and something special. The unity of experience, defined by its intentional content, exists more in a direct relationship with the whole or totality of life" (Gadamer, 2004, p. 086).
There are different kinds of experience, and the experience here is the continuous existence of the subject, a present progressive tense. It is direct, a "direct giving"; it gains and absorbs in directness. At the same time, its "directness is prior to all interpretation, processing or communication, and only provides clues for interpretation and materials for creation" (Gadamer, 2004, p. 382). We understand that the theme of a poster can be approached through many ways of thinking and many visual means. Although the graphics on each poster have their basic visual schema, their creative motivation or language mirror can be interpreted through the viewer's viewing and thinking, which requires us, when facing the theme and function, to expand our divergent thinking on the graphics as much as possible. In this way, we choose the accurately expressed language that can elicit people's deep thinking, touch their hearts, and inspire a kind of symbolic meaning through the connection between things. This kind of association can draw on life experience and accumulated knowledge, or on direct connection and indirect reasoning.
Synchrony and Relevance: Implicit Narration and Meaning Reconstruction
The meaning expressed in an artistic work does not simply equal the meaning that the artist wants to express in the process of creating the work. We cannot reduce the expression of an artistic work to what the author actually thinks in the work. Gadamer believes that an artistic work is an organic unity, which also has its own timeliness.
Vision itself is a collection of various experiences, including image experience, psychological experience, perceptual experience and life experience. The world has wrapped our visual and psychological perception in various systems of experience by means of images. The formal language and narration of posters constitute a system of meaning. As a structurally rich visual whole and logical statement, it constitutes the discourse and spiritual direction of images. James Elkins, a famous American art historian, believes that "pictorial symbols are believed to carry meaning and have a clear internal structure of symbols in narration and as a whole" (Elkins, 2020, p. 87).
For Gadamer's philosophical hermeneutics, the reason why the way of understanding art is so special and important lies in its simultaneity and contemporaneity. He thinks: "perhaps the intention of the author of an artistic work is to convey creative ideas to the public of his time, but the real existence of an artistic work lies in the meaning of the work itself, which fundamentally transcends any historical restrictions. In this sense, works of art have a kind of immediacy that is not limited by time" (Gadamer, 2004, p. 394). Among them, the way of statement is sometimes explicit, sometimes implicit. Gadamer's discussion on the essence of art proceeds from the perspective of ontology. The essence of an artistic work has the characteristics of time and randomness; it is the specific significance of the work for a specific aesthetic subject in a specific environment. He believes that art is closely related to human existence and self-understanding; as a kind of game, art opens a free world for us; and the continuous openness of art makes the relationship between art and reality more prominent. It is reasonable that art is higher than reality but not divorced from reality. In any case, when we say that poster design works tell us something, and that they thus belong to the embryonic form of something we must understand, our conclusion is not a metaphor. On the contrary, it has valid and arguable significance.
Common sense refers to such well-known things as all people can see in their daily life. They are combined into a complete collective, which is related to truth and statement, as well as to the way and form of statement (Gadamer, 2004, p. 035). Art that only focuses on appearance can only produce lies under any circumstances, because the visual gaze cannot observe the material state in depth, let alone penetrate the mental state completely, and the things covered by world representation are far beyond what it reveals (Ascott, 2012, p. 082). So, what is aesthetic experience? In short, aesthetic experience is an internal state that has been integrated and transcended, composed of the aesthetic subject and the object. "Before we get experience, we must first enter into such a naked vacuity. This kind of experience can't be conceptualized… Having aesthetic experience means passing through the domain of cognition and entering the domain of power" (Gadamer, 2004, p. 089).
Experiencing art and understanding art belong in essence to the same process. This process consists of three links or stages: from perception to appearance, from appearance to destruction, and from destruction to reconstruction. Everyone's interpretation of an artistic work contains his own personal knowledge and experience. The pre-structure of understanding, or prejudice, demonstrated by philosophical hermeneutics, is all the more applicable to artistic experience. Therefore, understanding what an artistic work tells is a kind of self-encountering. However, artistic experience, as an encounter with reliable things, has a surprising familiarity. It is experience in the authentic sense, which must constantly take up the task contained in experience: integrating this experience into people's overall understanding of the world and the self.
The Call of Order: Aesthetic Experience and Meaning Overload
Gadamer said: "aesthetic experience is not only juxtaposed with other experiences, but also represents the essential type of general experience. The artistic work in this kind of experience category is a self-made world.
Just like this, aesthetic experience also abandons every connection with reality. The prescriptiveness of artistic works seems to lie in aesthetic experience. That is to say, the power of artistic works makes the subject of experience get rid of his life context and at the same time return to the whole of his existence. In the experience of art there lies a kind of fullness of meaning, which not only belongs to this special content or object but also represents the whole meaning of life. An aesthetic experience always embodies an experience that contains an infinite whole. It is precisely because aesthetic experience does not form an open unity of experience process with other experiences, but directly represents the whole, that the meaning of this experience becomes an infinite meaning" (Gadamer, 2004, p. 090). Every experience is produced in the continuity of life and is synchronically associated with the whole of one's own life. In poster design, as experience enters deeply into the whole of life consciousness, it immediately dissolves and blends in like ice and snow. In this way, experience realizes a kind of transcendence that surpasses all prejudices and every meaning that people think exists.
Gadamer believes that every work of art should be understood, and understanding is the ontological existence of the whole world; the understanding characteristics of aesthetic experience and artistic experience point to many hermeneutic phenomena.
The aesthetic experience of a work always contains the experience of an infinite whole. Gadamer believes that artistic works which come from a past, strange world and have been handed down to our present time are not merely objects of historical appreciation in aesthetics; they not only indicate what they expressed at that time, but also speak to the thought of today. He said: "We can correctly conclude that art will never be satisfied with a 'pure aesthetic' way like a flower or an ornament" (Gadamer, 2004, p. 204). Therefore, in the study of works of art, Gadamer transformed the so-called aesthetic problem into the problem of artistic experience. Obviously, what he focuses on in artistic works is the whole world experience of human beings, not just a kind of aesthetic pleasure and experience. Understanding poster design works from this perspective, we will experience the content of the world we live in.
Through the historical investigation of the concept of experience, Gadamer found what kind of affinity exists between the general structure of experience and the mode of aesthetic existence. Aesthetic experience is not only one kind of experience among all the categories of experience, but also reflects the essence of experience itself.
Just as this kind of work of art is a self-made world, aesthetic experience is a kind of experience far removed from all realistic connections. Artistic works seem to be stipulated as a kind of aesthetic experience; that is to say, the power of artistic works releases the person who joins in aesthetic experience from the net of life and returns him to his whole existence. So it is with the aesthetic experience of poster design. Poster design activities always begin with aesthetic experience, but aesthetic experience does not end with the completion of poster design activities. In the sense of reading and appreciation, the final products of poster design activities actually become the beginning of another round of aesthetic experience, or open the possibility of a new round of aesthetic experience. The higher the artistic achievement of a work, the more opportunities for aesthetic experience it generates and expands, and the wider the aesthetic space. There is always a fullness of meaning in the experience of excellent poster works, which belongs not only to the special content or object but also to the whole meaning of the reader's feeling.
Conclusion
Experience is the core issue in aesthetics, especially in art aesthetics. Paul de Man once said that the real theme of aesthetics is experience, which is a process. The aesthetic experience of poster design is not only the driving force but also the hub of art appreciation, criticism and communication. Without experience, the creation of poster design is unimaginable; similarly, without experience, the appreciation and criticism of poster design is unimaginable. Therefore, aesthetic experience in poster design is a spiritual activity running through the whole process of creation, appreciation, consumption and communication. In the appreciation of poster design works, traditional reading is an individual reading behavior with full autonomy, which is distinctly independent and personalized in each link of the reading process. Maurice Merleau-Ponty once said, "when we are obsessed with the world of deep perception, we are not narrowing our vision, nor limiting ourselves to things such as stones or water. Instead, we have found a proper way to gaze at the autonomous and primary richness in artistic works, discourse works and cultural works" (Maurice, 2002, p. 87). In other words, when people read poster design works, they will feel what the works are saying and referring to, but this kind of meaning is not always clear and obvious. People always use their own imagination to supplement this meaning and make it a complete meaning. Gadamer believes that "the experience of beauty, especially in the domain of art, is a call for an eternal order that is likely to be restored" (Gadamer, 2004, p. 065). A poster design work achieves its purpose in appreciation and application. Therefore, the theoretical significance of regarding the understanding process of a poster design work as dialogue goes far beyond the understanding itself. In short, the content of poster design works is actually the meaning realized by the viewer in the subject's understanding activities. Therefore, the designer should put his heart and soul into touching, feeling and comprehending the life image and its deep meaning in design practice and experience. In the aesthetic experience of art, only by grasping the unity of meaning and life in poster design can we grasp the comprehensible content in the perceptual whole, find the controlling factors in our reaction structure, and track the purpose and intention of poster design works, so as to experience a kind of perception of realistic significance and the future. | 2021-09-09T20:47:38.789Z | 2021-07-28T00:00:00.000 | {
"year": 2021,
"sha1": "9e25d3492c02e6018d1211ad7a5475e1ddca6799",
"oa_license": null,
"oa_url": "https://doi.org/10.17265/2159-5836/2021.07.010",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "dacc015f8369ffa42c373ceb841e16451de5e6af",
"s2fieldsofstudy": [
"Art",
"Philosophy"
],
"extfieldsofstudy": [
"Psychology"
]
} |
258783661 | pes2o/s2orc | v3-fos-license | Capital Budgeting Analysis in Assess the Feasibility of Vaname Shrimp Cultivation Investment
Vannamei shrimp cultivation is one type of business with the potential to be profitable, considering the high level of demand every year, and of course it also requires a fairly large investment. A sound investment is one made through a feasibility analysis process to determine whether the business activity to be carried out is profitable or detrimental. Good investment decisions will result in a good business even if the financial decisions taken are not good; conversely, wrong investment decisions will be detrimental even with the best financial policies. The purpose of this study is to determine the feasibility of investment in vaname shrimp farming located in South Lampung Regency, whether traditional, semi-intensive or intensive. This study uses primary data and secondary data. Primary data are obtained from field surveys, while secondary data are obtained from related agencies. The analytical method used in this research is the capital budgeting technique with the NPV, IRR, Gross B/C and PI approaches. The results of the study using the four capital budgeting methods above show that investment in vaname shrimp farming, whether traditional, semi-intensive or intensive, is feasible. The most feasible and profitable method of vaname shrimp cultivation is the intensive one, which of course also requires the largest investment.
INTRODUCTION
Indonesia is one of the countries with very large natural resource potential in the field of marine affairs and fisheries, with a sea area reaching 5.8 million km² and potential fishery resources reaching 53.9 million tons per year, consisting of capture fisheries, marine aquaculture, freshwater fisheries and pond aquaculture. One type of aquaculture business that has a great opportunity to grow and develop is vaname shrimp pond cultivation. Vannamei shrimp is also one of the mainstay commodities with high economic value, because it is quite resistant to disease, grows quite fast (usually around 100 days), and is known to have a fairly low feed conversion value. Vannamei shrimp production is not yet optimal, while demand is very high, both locally and internationally. Central Statistics Agency data show that local demand for shrimp is quite high, reaching 463,777 thousand tons (Central Statistics Agency, 2018), while export opportunities are still very large, with average demand increasing by 5.34% per year. Global shrimp demand reached 2.7 million tons in 2018 and is expected to reach more than 3.3 million tons by 2022 (Ministry of Marine Affairs and Fisheries, 2019). This condition shows that investment in vaname shrimp cultivation is quite attractive and offers a large profit opportunity; however, before making an investment decision, a capital budgeting analysis should first be carried out to assess whether the vaname shrimp farming business to be run is feasible or not, so that the invested capital is on target and benefits business actors.
Investment has an important role in determining the progress of a business. Investment can be interpreted as a commitment to use funds for a certain period in order to obtain future payments that will compensate investors for the time during which the funds are committed, the expected inflation rate during the investment period, and the uncertainty of future payments (Reilly and Brown, 2012). The right investment decisions can help improve a company's financial health. A sound investment is one made through a feasibility analysis process to determine whether the business activity to be carried out is profitable or detrimental. Brealey et al. (2015) said that good investment decisions will result in a good business even if the financial decisions taken are not good; conversely, wrong investment decisions will be detrimental even with the best financial policies. Therefore, it is necessary to carry out a capital budgeting analysis to assess whether or not a business is feasible.
Capital budgeting decisions are related to investment decisions in long-term projects. Capital budgeting is often used interchangeably with capital expenditure or capital investment. Any expenditure that generates cash flow benefits for more than one year is a capital expenditure, for example the purchase of new equipment, expansion of production capacity, purchase of another company, research and development, and so on. Capital budgeting involves spending large amounts of cash to generate future returns on the investments that have been made. Once capital budgeting decisions are made, they are often difficult to reverse. Therefore, it is very necessary to carefully analyze and evaluate proposed capital budgeting decisions (Goel, 2015).
Capital budgeting is a very important process, as it can help in making investment decisions. A good capital budgeting decision plays an important role for business actors because it is in line with the main purpose of investment, namely maximizing profits, which of course requires large resources and long-term commitment. The decision-making process cannot be treated carelessly, because a wrong decision will cause losses after it is made (Hall and Millard, 2010). Capital budgeting cannot stand alone but is part of a process called the capital budgeting process. Capital budgeting is a step-by-step process of evaluating and selecting long-term business investments that are consistent with the goal of business actors to maximize wealth (Gitman et al., 2015). The capital budgeting process is a gradual activity designed to assist in selecting feasible and profitable investment project proposals (Mollah, Rouf, and Rana, 2021). Leon et al. (2008) said that capital budgeting is a process of evaluating the cash flows of a proposed project, considering risks and uncertainties, and making decisions on proposed investment projects. Therefore, more careful action is needed in selecting proposed investment projects so that they provide benefits and do not cause losses.
This research was conducted to determine the feasibility of the vaname shrimp farming business, using capital budgeting analysis techniques consisting of the Net Present Value (NPV), Internal Rate of Return (IRR), Gross Benefit-Cost Ratio (Gross B/C) and Profitability Index (PI) approaches. Although there are many approaches to capital budgeting analysis, the four approaches above are considered the most appropriate for conducting a feasibility analysis of investment in vaname shrimp farming and are widely used by experts compared to other approaches (Maroyi and van der Poll, 2012; Ryan and Ryan, 2002; Arnold and Hatzopoulos, 2000; Graham and Harvey, 2002; Dedi and Orsag, 2007; Verma et al., 2009; Batra and Verma, 2017).
METHODS
This research is applied research with a quantitative approach. Applied research is research whose findings are used to solve problems in an organization in a timely manner (Sekaran & Bougie, 2016). A quantitative approach is an approach based on the philosophy of positivism, used to study a particular population or sample, in which data are collected using research instruments and analyzed quantitatively or statistically in order to describe and test established hypotheses. This research was conducted in South Lampung Regency in 2022 using survey methods, interviews, and direct field observations. The data used in this study are primary and secondary. Primary data were obtained by conducting direct observations and interviews with vaname shrimp farming business actors in the Sragi and Ketapang Districts, while secondary data were obtained from the Department of Marine Affairs and Fisheries of South Lampung Regency and the Department of Maritime Affairs and Fisheries of Lampung Province.
For additional information, the researchers also used existing information such as articles, journals, books, and websites. To assess the feasibility of investing in vaname shrimp aquaculture in line with the research objectives, several capital budgeting methods are used, namely NPV, IRR, Gross B/C, and PI. Goel (2015) and Witoko et al. (2018) explain the concepts and criteria of each of these methods: (1) NPV is the difference between benefits and costs calculated at their present value at a certain interest rate. NPV is calculated by discounting future cash flows (both inflows and outflows) using a target cost of capital and taking the difference between the present value of net cash inflows and cash outflows. A positive NPV value indicates that the proposed investment project is profitable and feasible; (2) IRR is the interest rate at which the total net present value (NPV) is zero, i.e., the present value of the benefits equals the total investment cost of the project. IRR is calculated by finding the discount rate that equates the present value of cash outflows and cash inflows. This rate of return is then compared with the required rate of return to determine the feasibility of an investment project. An IRR value greater than the interest rate illustrates that the proposed investment project is profitable and feasible; (3) Gross B/C is the ratio between the receipts or benefits of an investment and the costs that have been incurred, both discounted. A Gross B/C value greater than 1 indicates that the proposed investment project is profitable and feasible; and (4) PI is a method that compares the present value of future net cash flows with the current investment value. A PI value greater than 1 indicates that the proposed investment project is profitable and feasible.
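As a reading aid, the four criteria can be computed directly from a cash-flow series. The short sketch below is illustrative only: the function names, the discount rate, and the cash-flow figures are our own placeholders, not the study's survey data or the authors' code.

```python
# Minimal sketch of the four capital budgeting criteria described above.
# All numbers are hypothetical placeholders, not the study's field data.

def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal Rate of Return by bisection (assumes NPV changes sign on [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def gross_bc(rate, benefits, costs):
    """Gross B/C: discounted gross benefits divided by discounted gross costs."""
    pv = lambda xs: sum(x / (1 + rate) ** t for t, x in enumerate(xs))
    return pv(benefits) / pv(costs)

def profitability_index(rate, cash_flows):
    """PI: present value of future net inflows over the initial investment."""
    pv_inflows = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows) if t > 0)
    return pv_inflows / -cash_flows[0]

# Hypothetical pond: Rp 120 million outlay, five years of Rp 45 million net inflows.
flows = [-120_000_000] + [45_000_000] * 5
r = 0.12  # assumed discount rate
print("NPV      :", round(npv(r, flows)))          # feasible if > 0
print("IRR      :", round(irr(flows), 4))          # feasible if > r
print("Gross B/C:", round(gross_bc(r, [0] + [80_000_000] * 5,
                                   [120_000_000] + [35_000_000] * 5), 3))  # feasible if > 1
print("PI       :", round(profitability_index(r, flows), 3))  # feasible if > 1
```

With these placeholder figures all four criteria point the same way, which mirrors how the study reads its results: a project is accepted only when NPV > 0, IRR exceeds the discount rate, Gross B/C > 1, and PI > 1.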
All of the capital budgeting methods in Table 1 will be used to determine whether investment in vaname shrimp farming in South Lampung Regency is feasible, according to the criteria set for each method.
RESULTS
Before performing a capital budgeting analysis to assess the feasibility of investing in vaname shrimp farming, it is necessary to first identify the investment needs and the income from harvests. Vannamei shrimp farming, the primary commodity in South Lampung Regency, requires various investment costs: fixed costs consisting of pond construction, pond equipment, employees, transportation services, pond maintenance, and land rent; and variable costs consisting of feed, lime, seed, probiotics, vitamin C, molasses, and so on. The vaname shrimp aquaculture business takes three forms, namely traditional, semi-intensive, and intensive. Each system requires different investment costs and, of course, yields different results and benefits. Based on interviews with vaname shrimp farming business actors and managers in South Lampung Regency, as well as assistant officers from the Department of Maritime Affairs and Fisheries of South Lampung Regency, the investment needs of the traditional, semi-intensive, and intensive systems can be summarized as shown in Tables 2-4.

The traditional system relies mainly on nature and requires little labor, so its costs are relatively low compared with the other methods. Each hectare of traditional pond costs approximately Rp. 120 million, with an estimated yield of 5 quintals (500 kg) per harvest period. Harvests can occur 3-4 times per year, with harvested shrimp sizes ranging from 40 to 65 shrimp per kg and market prices of roughly Rp. 75,000-100,000 per kg.

The semi-intensive system requires higher costs than the traditional method, but this is supported by greater yields because it already uses technology such as paddlewheel aerators. Semi-intensive ponds are also about twice as deep as traditional ponds, which allows denser stocking of shrimp seed. Each hectare, at a stocking density of approximately 140,000 seed, can produce 2 tons or more per harvest period, assuming a cultivation period of about 2½ months under normal conditions, that is, without viral infection from the water used. Harvested sizes vary from 60 to 70 shrimp per kg, with selling prices of around Rp. 65,000-85,000 per kg.

The intensive system requires the largest investment, more than Rp. 830 million, because it uses more aerators and deeper ponds than the traditional and semi-intensive methods. According to shrimp farming business actors in South Lampung Regency, yields can reach 7 tons per harvest period at a stocking density of 450,000 seed. The harvested shrimp are also relatively large, around 40-50 per kg, with selling prices of Rp. 80,000-100,000 per kg. The intensive system allows 3 harvests per year, with a cultivation period of up to 3 months per harvest.
The intensive system also requires a long time, usually around one month, to clean and rearrange the ponds before they are completely ready for reuse. Next, an analysis of income from vaname shrimp production is carried out, which is needed for the capital budgeting analysis. Income from vaname shrimp cultivation is largely determined by the cultivation period of the system used (traditional, semi-intensive, or intensive), the seed stocking density, the size of the shrimp harvested, and the market price of vaname shrimp. Based on estimates of vaname shrimp production per hectare per year and price data by shrimp size obtained from various sources, the total income of vaname shrimp farming business actors in South Lampung Regency can be estimated, as shown in Table 4. These income calculations assume that, over the three years, the average yields of the traditional, semi-intensive, and intensive systems remain relatively constant under normal conditions, meaning that no viruses attack the business actors' ponds for an extended period; harvest figures for each method follow the explanations above. Using the data on investment needs and production income described above, together with the average interest rate on fishery loans set by commercial banks as reported in Indonesian banking statistics, a capital budgeting analysis can be carried out using the Net Present Value (NPV), Gross Benefit Cost Ratio (Gross B/C), Internal Rate of Return (IRR), and Profitability Index (PI) approaches to assess whether the vaname shrimp farming business in South Lampung Regency is feasible. Taking the calculation results into account, the most profitable vaname shrimp cultivation system is the intensive method; although it requires a fairly large investment, it also provides much greater returns, so it is very feasible for vaname shrimp farming business actors in South Lampung Regency. Vaname shrimp is one of the most popular shrimp species in Indonesia, as almost all shrimp farming business actors cultivate it, including in South Lampung Regency. The focus of aquaculture development in South Lampung Regency is vaname shrimp, especially in Ketapang and Sragi sub-districts, which have large areas of land for cultivation. Vaname shrimp is a mainstay commodity with high economic value: it is quite resistant to disease and grows quickly, yet its production is still not optimal even though demand is high, both locally and internationally. The vaname shrimp cultivation business therefore has great potential for development and offers profitable business opportunities. To ensure that investment in vaname shrimp farming is feasible, a capital budgeting analysis is carried out using the NPV, Gross B/C, IRR, and PI approaches. This analysis is important given the large investment required for vaname shrimp cultivation, especially for the semi-intensive and intensive systems.
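As a rough cross-check on the production figures quoted above, annual gross revenue per hectare can be bracketed from the stated yields, harvest frequencies, and price ranges. The sketch below is a back-of-envelope illustration only; the number of semi-intensive harvests per year is an assumption inferred from the ~2½-month cycle, and none of these outputs are the study's income figures in Table 4.

```python
# Back-of-envelope annual gross revenue per hectare, using only the yield,
# harvest-frequency, and price ranges quoted in the text.
systems = {
    #                kg/harvest, harvests/yr, (min, max) price in Rp/kg
    "traditional":    (500,   (3, 4), (75_000, 100_000)),  # 5 quintals/harvest
    "semi-intensive": (2_000, (3, 4), (65_000, 85_000)),   # assumed 3-4 cycles
    "intensive":      (7_000, (3, 3), (80_000, 100_000)),  # 3 harvests/yr
}

for name, (kg, (h_lo, h_hi), (p_lo, p_hi)) in systems.items():
    lo = kg * h_lo * p_lo
    hi = kg * h_hi * p_hi
    print(f"{name}: Rp {lo:,.0f} - Rp {hi:,.0f} per ha per year")
```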
The results of the capital budgeting analysis using the NPV, Gross B/C, IRR, and PI approaches indicate that investment in vaname shrimp farming in South Lampung Regency is feasible, whether traditional, semi-intensive, or intensive. The most profitable method is the intensive system; although it requires a large investment compared with the semi-intensive and traditional systems, its potential profit is also very large. Overall, the results of this analysis indicate that investment in vaname shrimp farming provides great benefits if carried out properly by business actors, especially for cultivation with the intensive and semi-intensive systems. Vannamei shrimp has great potential for development given the increasing demand for this commodity both domestically and abroad. The results of this analysis can serve as information for investment decision-making by business people interested in vaname shrimp farming.
CONCLUSIONS
Cultivation fishery has great potential to grow and develop, considering that Indonesia is an archipelagic country with extensive marine waters. One mainstay aquaculture commodity with high economic value is vaname shrimp. Vannamei shrimp is the most popular pond-cultured shrimp species in Indonesia, as almost all shrimp farmers in Indonesia cultivate it, including in South Lampung Regency. Although relatively small compared with other shrimp species, its taste is no less delicious and market demand is high, both domestically and internationally. In addition, vaname shrimp can live across a wide range of salinities, can adapt to low-temperature environments, has a high survival rate, and has good disease resistance, making it easy to cultivate. Vannamei shrimp cultivation systems are usually grouped into three: traditional, semi-intensive, and intensive. Each cultivation system requires a different investment and provides different yields. This study aims to determine the feasibility of investing in the vaname shrimp aquaculture business in South Lampung Regency. To achieve the research objectives, a survey was conducted using interviews and direct field observation of vaname shrimp farming business actors in the two sub-districts with the largest shrimp production, namely Ketapang and Sragi. | 2023-05-19T15:02:51.378Z | 2023-03-26T00:00:00.000 | {
"year": 2023,
"sha1": "a32239a98c10dcb324d76bccf40b9944d7eb7c25",
"oa_license": null,
"oa_url": "https://doi.org/10.33087/ekonomis.v7i1.742",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7387042cef2cd8b17f5c9a830dbd9345c864d023",
"s2fieldsofstudy": [
"Business",
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": []
} |
270169350 | pes2o/s2orc | v3-fos-license
Knowledge mapping of AURKA in Oncology: An advanced Bibliometric analysis (1998–2023)
AURKA, also known as Aurora kinase A, is a key molecule involved in the occurrence and progression of cancer. It plays crucial roles in various cellular processes, including cell cycle regulation, mitosis, and chromosome segregation. Dysregulation of AURKA has been implicated in tumorigenesis, promoting cell proliferation, genomic instability, and resistance to apoptosis. In this study, we conducted an extensive bibliometric analysis of research focusing on Aurora-A in the context of cancer by utilizing the Web of Science literature database. Various sophisticated computational tools, such as VOSviewer, CiteSpace, Biblioshiny R, and Cytoscape, were employed for comprehensive literature analysis and big data mining from January 1998 to September 2023. The primary objectives of our study were multi-fold. Firstly, we aimed to explore the chronological development of AURKA research, uncovering the evolution of scientific understanding over time. Secondly, we investigated shifting trends in research topics, elucidating areas of increasing interest and emerging frontiers. Thirdly, we delved into intricate signaling pathways and protein interaction networks associated with AURKA, providing insights into its complex molecular mechanisms. To further enhance the value of our bibliometric analysis, we conducted a meta-analysis on the prognostic value of AURKA in terms of patient survival. The results were visually presented, offering a comprehensive overview and future perspectives on Aurora-A research in the field of oncology. This study not only contributes to the existing body of knowledge but also provides valuable guidance for researchers, clinicians, and pharmaceutical professionals. By harnessing the power of bibliometrics, our findings offer a deeper understanding of the role of AURKA in cancer and pave the way for innovative research directions and clinical applications.
Introduction
Cancer, a significant global public health concern, poses formidable challenges due to its widespread prevalence and high mortality rates [1]. Therefore, it is of utmost importance to prioritize efforts aimed at discovering effective interventions to mitigate its detrimental impact on human health [2]. Aurora Kinase A (AURKA), a pivotal molecule involved in the regulation of cellular processes, has emerged as a key player in cancer research [3].
Abbreviations: AURKA, Aurora Kinase A; WOS, Web of Science; AKIs, AURKA Inhibitors; Chemo, Chemotherapy; Immune, immunotherapy; HDACi, Histone Deacetylase Inhibitors; VEGFi, Vascular Endothelial Growth Factor Inhibitors; ARIs, Androgen Receptor Inhibitors.
Belonging to the Aurora kinase family, which encompasses AURKA, AURKB, and AURKC, AURKA, also known as STK15 or BTAK, is a serine/threonine kinase primarily associated with the precise control of mitosis and has garnered substantial attention in cancer research [4]. This attention stems from its critical role in maintaining genome stability and ensuring accurate cell division. AURKA exerts its biological function by governing chromosome alignment and segregation during mitosis [5]. Through phosphorylation of various mitotic proteins, including histone H3 and TPX2, AURKA orchestrates the progression of mitosis and guarantees the faithful separation of chromosomes into daughter cells [6]. Dysregulation of AURKA can lead to chromosomal instability, aberrant cell division, and aneuploidy, all of which significantly contribute to the initiation and progression of tumors [7-9]. Beyond its fundamental involvement in mitosis, AURKA has been implicated in diverse cellular processes, such as centrosome maturation regulation, spindle assembly checkpoint signaling, DNA damage response, and cell migration [10,11]. The abnormal expression or activity of AURKA has been detected across various types of cancer, including breast, lung, colorectal, pancreatic, hepatocellular, and ovarian cancers, rendering it an attractive target for anticancer therapies [12-15]. Nevertheless, while the transformative potential of AURKA is increasingly recognized, its clinical application spectrum remains relatively limited [16]. Furthermore, despite the extensive research conducted on AURKA, there remains a dearth of systematic curation and quantitative analysis regarding its impact on cancer. Consequently, synthesizing the existing body of literature surrounding AURKA could yield valuable insights into the current state of research on AURKA, its notable contributions, and future directions.
Bibliometrics, as a scientific discipline, assumes a crucial role in analyzing and quantifying scientific publications [17]. It employs mathematical and statistical methods to evaluate various facets of scholarly literature, such as publication trends, citation patterns, collaboration networks, and research focal points [18]. In line with this, our study aims to comprehensively analyze the available literature on AURKA's involvement in cancer from 1998 to 2023 by leveraging bibliometric techniques. Through an exploration of the characteristics of these publications, we seek to elucidate the hotspots and evolutionary trends of AURKA in cancer research utilizing knowledge mapping methodologies. This endeavor not only promises to enhance our understanding of this molecule but also serves as a guiding resource for future investigations in this field.
By conducting a rigorous bibliometric analysis, our study endeavors to bridge the existing knowledge gap and generate novel insights into AURKA's intricate involvement in cancer. Additionally, this comprehensive analysis will equip researchers, clinicians, and policymakers with a panoramic view of the current landscape of AURKA research, facilitating informed decision-making and fostering further advancements in cancer therapeutics.
Data acquisition and selection
Web of Science (WOS), a robust digital repository known for high-quality academic articles and widely recognized as suitable for bibliometric analyses, served as our primary data source in this study. To ensure comprehensive and accurate data retrieval, the indices selected were SCI-EXPANDED and SSCI. Our final search strategy encompassed a set of synonymous terms including AURKA, STK15, STK6, BTAK, and ARK1. These terms were combined with search parameters related to cancer such as "cancer*", "tumor*", "oncology", and various relevant subtypes (e.g., "neoplasm*", "carcinoma*", "lymphoma*", "sarcoma*", "leukemia*") to capture a wide range of publications. The time span covered January 1998 to September 2023, with the cut-off for data retrieval set on September 1, 2023. The literature types were confined to 'Articles' and 'Reviews.' By utilizing the Topic Search (TS) field, we aimed to retrieve literature that contained the specified terms (AURKA) within the titles, abstracts, keywords, or other relevant sections. This approach allowed us to cover various aspects of AURKA molecular research in a broader range of publications. The initial search yielded 2212 journal articles. After rigorous data cleaning and selection processes, a total of 1921 valid papers were identified, comprising 1751 original research articles and 170 review articles. Detailed information regarding these processes is provided in Table 1 and Fig. 1. The final search string was:
(TS=("Aurora Kinase A" OR "Aurora-A kinase" OR "Aurora A kinase" OR "AURKA" OR "STK15" OR "serine/threonine kinase 6 (STK6)" OR "breast tumor amplified kinase (BTAK)" OR "aurora-related kinase 1 (ARK1)") AND TS=("cancer*" OR "tumor*" OR "oncology" OR "neoplasm*" OR "carcinoma*" OR "lymphoma*" OR "sarcoma*" OR "leukemia*"))
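For illustration, the Boolean topic-search string above can be assembled programmatically from the two term lists; this is a plain string-building sketch, not a call to any Web of Science API.

```python
# Hypothetical helper that assembles the WoS topic-search (TS) string from
# the gene synonyms and cancer terms listed in Table 1.
gene_terms = ['"Aurora Kinase A"', '"Aurora-A kinase"', '"Aurora A kinase"',
              '"AURKA"', '"STK15"', '"serine/threonine kinase 6 (STK6)"',
              '"breast tumor amplified kinase (BTAK)"',
              '"aurora-related kinase 1 (ARK1)"']
cancer_terms = ['"cancer*"', '"tumor*"', '"oncology"', '"neoplasm*"',
                '"carcinoma*"', '"lymphoma*"', '"sarcoma*"', '"leukemia*"']

def ts_clause(terms):
    """Join a term list into a single TS=(... OR ...) clause."""
    return "TS=(" + " OR ".join(terms) + ")"

query = f"({ts_clause(gene_terms)} AND {ts_clause(cancer_terms)})"
print(query)
```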
Data analyses and visualization
In this study, we applied a combination of bibliometric analysis, bioinformatics techniques, and meta-analysis to investigate the research landscape surrounding AURKA in the field of oncology. The tools utilized included VOSviewer, CiteSpace, the R package "bibliometrix," and Stata (version 15.1) for data integration and visualization. First, through bibliometric techniques, a comprehensive review and quantitative analysis of existing literature on AURKA in oncology were conducted. Various aspects such as authorship, keywords, publications, countries, institutions, and references were examined to gain insights into the evolution and intrinsic connections of AURKA research within the oncological landscape. To explore the biological significance of co-occurring molecular keywords with AURKA, VOSviewer was employed to extract total keywords from 1921 articles. These keywords were then compared with human genome data to identify biologically relevant associated genes. Subsequently, GO/KEGG clustering analysis was performed using the R package clusterProfiler to further understand the functional implications of the identified keywords and associated genes [19]. Additionally, protein-protein interaction (PPI) networks were constructed using Cytoscape software (version 3.10.1) to elucidate the interactions and relationships among proteins related to AURKA. Furthermore, we conducted a meta-analysis utilizing prognostic data pertaining to AURKA and cancer patients, which is thoroughly outlined in Supplementary Table 1. In order to ensure comprehensive analysis, with a specific focus on prognosis-related data associated with AURKA, we extensively integrated data from PubMed. This integration allowed us to expand our dataset and bridge any potential gaps in article coverage that may exist when relying solely on the Web of Science database. By employing these comprehensive methodologies, we were able to gain profound insights into the multifaceted role of AURKA in cancer research and identify potential avenues for future exploration.

Fig. 1. Publications screening flowchart.
General overview of included studies
This study provides a rigorous analysis of 1921 research articles, with an H-index of 116, focusing on Aurora kinase A (AURKA) in the field of oncology. The annual average publication rate was 77 articles, involving contributions from 12,602 authors, 512 institutions, and 68 countries. These publications were distributed across 614 different journals and accumulated a total of 59,942 citations from 5185 distinct journals. From the trend curve depicting the overall publication trend, it is evident that the volume of publications in this field exhibits a steady growth pattern. Notably, there were two minor peaks in publication activity observed in 2015 (132 studies) and 2021 (183 studies), with a consistent increase in publication output before 2015, as depicted in Fig. 2. This indicates that research related to AURKA in the field of oncology has experienced continuous and robust development.
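For readers unfamiliar with the metric, the H-index quoted above is computed directly from per-article citation counts; a minimal sketch with made-up counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
```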
In 2021, the number of publications reached its highest point, with the United States, China, and Germany emerging as the three most influential countries in terms of publication output. However, it is important to note that the United States and Germany have played pioneering roles in studying the significance of this molecule in cancer from an early stage. On the other hand, China has demonstrated consistent growth in publication volume over the past decade, injecting new research vigor into this field. To surpass the impact of earlier literature, as measured by metrics such as H-index and SOTC, further advancements are necessary. Therefore, we anticipate that more in-depth investigations will unveil profound breakthroughs associated with AURKA in the domain of oncology.

Fig. 2. Annual distribution of literature in AURKA research in cancers.
Distribution of countries and institutions
Analyzing the countries and institutions contributing to the published literature (as seen in Fig. 3 and Table 2), the findings reveal that the United States contributed 661 articles, China contributed 681, and Germany contributed 139. However, it is important to note that the United States and Germany have been involved in research in this field for a longer period, leading to higher citation rates, particularly with a focus on highly cited articles. Examining the top 10 institutions in terms of publication volume, Sun Yat-Sen University (China), University of Texas MD Anderson Cancer Center (USA), and Millennium Pharmaceuticals Inc. (USA) were the top three contributors. Additionally, it can be observed from Fig. 3 that countries and institutions with higher publication output also exhibit stronger collaboration intensity.
Analysis of journals and authors
The field of oncology has experienced a growing research focus on the AURKA molecule, as evident from an increasing number of dedicated high-quality journals. Fig. 4 shows that a total of 614 journals have contributed to disseminating research related to AURKA in oncology. Our analysis further supports adherence to Bradford's Law [20]: a small core of journals accounts for a disproportionately large share of the publications, as demonstrated by the distribution of publications across different zones detailed in Supplementary Table 2. Table 3 showcases the top 10 journals that have made significant contributions in this area. Among these journals, "Oncotarget" stands out as the leading publication with 48 articles, representing approximately 2.5% of the total publication output. This underscores the journal's dedication to publishing cutting-edge research on AURKA. Following closely behind, "Clinical Cancer Research" and "Oncogene" rank second and third, with 45 and 39 articles, respectively. These journals have also played a pivotal role in fostering advancements in our understanding of the AURKA molecule within the field of oncology.
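Bradford's zoning, as referenced above, can be illustrated in a few lines: journals are ranked by output and cut into three zones of roughly equal article volume, so that zone 1 contains the small "core". The journal counts below are invented for illustration, not the study's Supplementary Table 2 data.

```python
def bradford_zones(journal_counts):
    """Split ranked journals into 3 zones of ~equal article volume."""
    ranked = sorted(journal_counts.items(), key=lambda kv: -kv[1])
    total = sum(journal_counts.values())
    zones, zone, cum = [], [], 0
    for journal, n in ranked:
        zone.append(journal)
        cum += n
        # close a zone once its share of the cumulative volume is reached
        if cum >= total * (len(zones) + 1) / 3 and len(zones) < 2:
            zones.append(zone)
            zone = []
    zones.append(zone)
    return zones  # Bradford's Law: zone 1 holds few "core" journals

demo = {"J1": 48, "J2": 45, "J3": 39, "J4": 12, "J5": 10,
        "J6": 8, "J7": 5, "J8": 4, "J9": 3, "J10": 2}
for i, z in enumerate(bradford_zones(demo), 1):
    print(f"zone {i}: {len(z)} journals")
```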
To further explore the representative scholars and core research forces of AURKA in oncology research, we analyzed the author collaboration network shown in Fig. 4. Among the 12,602 authors, the most prolific author has accumulated 37 papers. According to Price's law, authors with more than 5 publications are considered core authors in this field [21]. A total of 114 core authors published 834 papers, accounting for 43.4% of the total publication output, close to the "half" (50%) standard proposed by Price and indicating that a relatively stable author collaboration group has formed in this field. Applying Lotka's inverse square law to core authorship gives a consistent result: the 114 authors with five or more publications represented approximately 1% of the 11,014 authors with a single publication, close to the expected proportion of 1/5² under the law. Consistent with Lotka's Law, our study found that around 87.4% of the authors published only one paper [22]. Interestingly, this scarcity of highly prolific authors in the field of AURKA research within oncology suggests a significant presence of newcomers exploring this specific area. Among the highly productive authors, the first-ranked is Jeffrey A. Ecsedy of the Department of Translational Medicine, Millennium Pharmaceuticals Inc., Cambridge, MA, USA. As of September 2023, he has published a total of 37 papers, with 2631 citations and an average citation count of 71.11 per paper. The second-ranked author is Quentin Liu of the State Key Laboratory of Oncology in South China, Sun Yat-sen University, with 25 publications and 902 citations, averaging 36.08 citations per paper. The third-ranked author is Subrata Sen of the University of Texas M.D. Anderson Cancer Center, with a cumulative publication count of 25 and a citation frequency of 3950 (see Table 3).
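The Lotka-style check described above is easy to reproduce from an author-to-paper-count table; the sketch below uses invented counts and simply compares observed author frequencies with the inverse-square expectation.

```python
from collections import Counter

def lotka_check(pubs_per_author):
    """Compare observed author frequencies with Lotka's inverse-square law:
    authors with n papers ~ (authors with 1 paper) / n**2."""
    freq = Counter(pubs_per_author.values())  # n papers -> number of authors
    singletons = freq[1]
    for n in sorted(freq):
        expected = singletons / n ** 2
        print(f"{n} papers: observed {freq[n]}, Lotka expects ~{expected:.1f}")

# Invented distribution: 100 one-paper authors, 24 two-paper authors, etc.
demo = {f"A{i}": 1 for i in range(100)}
demo.update({f"B{i}": 2 for i in range(24)})
demo.update({"C0": 5, "C1": 5, "C2": 6, "C3": 37})
lotka_check(demo)
```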
Co-cited references and reference burst
In order to gain insights into the highly cited papers in this field, 28 references exceeding a minimum co-citation threshold were identified from the pool of 59,942 cited references. Among these highly cited references, Table 4 presents the top 10 articles, each accumulating more than 137 citations. Notably, the article published by Hongyi Zhou as first author and Subrata Sen as corresponding author in "Nature Genetics" in 1998 (n = 362) emerges as the most frequently co-cited paper among all the references analyzed. Moreover, within the top 10 list, three articles are review papers while six are research articles.
We utilized CiteSpace's reference analysis to partition the entire network map into 8 clusters based on different research themes. Each cluster was assigned a distinct term label, corresponding to the distribution of time periods shown in Fig. 5A. As illustrated in Fig. 5B, the clusters include Cluster #7 (STK15), Cluster #6 (polymorphism), Cluster #3 (alternative splicing), Cluster #4 (centrosome), Cluster #0 (AZD1152), Cluster #1 (alisertib), Cluster #5 (bioinformatics analysis), and Cluster #2 (therapeutic target). The literature within different time periods not only focuses on specific research hotspots during those periods but also lays the foundation for future studies. It is evident that targeting AURKA for therapeutic purposes is a current research hotspot. For citation burst analysis, a total of 398 references were included, representing significant growth in citations during specific time frames. Fig. 5C presents the top 25 entries, with one paper titled "Targeting AURKA in Cancer: molecular mechanisms and opportunities for Cancer therapy," published in Molecular Cancer in 2021 by Ruijuan Du et al., exhibiting the highest burstness (strength = 36.69) from 2021 to 2023 [3].
Investigation of keywords and trend topics
To gain further insights into the hot topics and frontiers of AURKA in cancer research, we conducted co-occurrence network analysis using VOSviewer on the keywords that encapsulate the core and essence of the literature. A total of 160 keywords with a frequency exceeding 5 are visualized in the clustering view of Fig. 6A. The size of each circular node corresponds to the frequency of occurrence of the respective keyword, reflecting its significance as a research hotspot in the field. The connecting lines between nodes indicate the strength of association, with thicker lines indicating a higher frequency of co-occurrence in the same publications. To provide a detailed understanding of specific keywords, we compiled high-frequency keywords with frequencies exceeding 19 in Table 5. Interestingly, the distribution of high-frequency keywords in AURKA cancer research was similar to Zipf's Law, with a small number of keywords having high frequencies and the majority having lower frequencies (see Supplementary Fig. 1) [31]. Additionally, to capture the evolution of hot topics and frontiers in this domain over time, we employed the R package bibliometrix to create a trend-topic map (Fig. 6B).
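Conceptually, the co-occurrence map in Fig. 6A is built from keyword pair counts across articles; the sketch below shows the counting step with invented keyword lists (VOSviewer's layout and clustering are not reproduced here).

```python
from collections import Counter
from itertools import combinations

def cooccurrence(articles):
    """Count keyword pairs that appear together in the same article."""
    pairs = Counter()
    for keywords in articles:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs  # edge weights of the co-occurrence network

demo = [["AURKA", "p53", "mitosis"],
        ["AURKA", "TPX2", "mitosis"],
        ["AURKA", "p53", "apoptosis"]]
for edge, weight in cooccurrence(demo).most_common(3):
    print(edge, weight)
```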
Association gene clustering analysis
To explore the biological significance of the molecular keywords co-occurring with AURKA in the literature, we utilized VOSviewer to extract a total of 6513 keywords from the 1921 articles. Among these, 682 keywords had a frequency exceeding 5. By comparing them with human genome data, we identified 316 biologically relevant associated genes. Next, we employed the R package clusterProfiler for GO/KEGG clustering analysis. The GO/KEGG functional enrichment results are presented in Fig. 7A and B, which display bar charts and dotplots of the top 10 enriched biological processes (BP), molecular functions (MF), cellular components (CC), and KEGG pathways related to AURKA in cancer research, based on their P-values.
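At its core, the over-representation testing done by tools such as clusterProfiler reduces to a hypergeometric test per GO term or KEGG pathway. The sketch below shows that test with SciPy; the background size, pathway size, and overlap are invented, while 316 is the associated-gene count reported above.

```python
from scipy.stats import hypergeom

def enrichment_p(total_genes, set_size, query_size, overlap):
    """P(overlap >= observed) under the hypergeometric model, i.e., the
    core of a GO/KEGG over-representation test."""
    # sf(k - 1) gives P(X >= k)
    return hypergeom.sf(overlap - 1, total_genes, set_size, query_size)

# Illustrative numbers: 20,000 background genes, a 200-gene pathway,
# 316 AURKA-associated query genes, 15 of which fall in the pathway.
print(enrichment_p(20_000, 200, 316, 15))
```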
Protein-protein interaction network related to AURKA in cancer research
To investigate the protein-protein interaction (PPI) patterns of AURKA in cancer research and unravel the complex biological processes and regulatory mechanisms within tumor cells, we initially utilized the STRING online database (https://string-db.org/) to construct a PPI network using a molecular list of AURKA-associated keywords. Subsequently, we imported the network data into Cytoscape for further analysis. Within Cytoscape, we employed the cytoNCA plugin (version 2.1.6) to calculate betweenness centrality and ranked the interacting molecules based on this measure. Fig. 8 visualizes the top 10 and top 50 molecules, color-coded accordingly. Notably, TP53, HSP90AA1, MYC, AKT1, STAT3, JUN, CDK1, SRC, GSK3B, and BUB1B emerge as potentially core proteins through which AURKA exerts its role in tumorigenesis.
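The betweenness-centrality ranking performed with cytoNCA can be reproduced on a STRING edge-list export with NetworkX; the toy edge list below is illustrative only, not the study's network.

```python
import networkx as nx

# Toy PPI edge list; a real analysis would load the STRING export instead.
edges = [("AURKA", "TP53"), ("AURKA", "TPX2"), ("TP53", "MYC"),
         ("TP53", "AKT1"), ("MYC", "AKT1"), ("AURKA", "CDK1"),
         ("CDK1", "BUB1B")]
g = nx.Graph(edges)

# Rank proteins by betweenness centrality (fraction of shortest paths
# passing through each node), highest first.
bc = nx.betweenness_centrality(g)
for protein, score in sorted(bc.items(), key=lambda kv: -kv[1])[:5]:
    print(protein, round(score, 3))
```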
Prognostic meta-analysis of AURKA expression in cancer patients
A comprehensive meta-analysis was conducted to assess the prognostic significance of AURKA expression in cancer patients. A total of 33 studies were identified through an extensive search of the PubMed and WOS databases, involving 3973 patients across 18 different tumor types. These studies examined the association between AURKA expression levels, measured by immunohistochemistry or RT-PCR, and overall survival outcomes. Using Stata software (version 15.1), a random-effects model forest plot (Fig. 9) was generated to analyze the heterogeneity among the tumor subgroups.
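The random-effects pooling behind a forest plot such as Fig. 9 typically follows the DerSimonian-Laird estimator on log hazard ratios. A self-contained sketch is below; the three studies' effect sizes and standard errors are invented, not values from Supplementary Table 1.

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Random-effects pooled effect via the DerSimonian-Laird estimator."""
    w = [1 / se ** 2 for se in ses]                    # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Invented example: three studies' log hazard ratios and standard errors.
hr, lo, hi = dersimonian_laird([0.59, 0.41, 0.79], [0.21, 0.25, 0.30])
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```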
General overview
Before 2003, the number of relevant papers was in single digits each year, indicating that the field was in its infancy. Research on AURKA in the field of oncology was relatively new, lacking sufficient literature and research findings. From 2003 to 2015, the average annual publication rate was 63.3 papers, suggesting increased attention and research on AURKA in oncology. This may be attributed to the rising incidence and mortality rates of cancer, leading more researchers to focus on exploring AURKA as a potential therapeutic target or biomarker. During the period from 2016 to 2023, the annual publication rate of AURKA research in oncology exceeded 100 papers, indicating widespread attention and extensive investigation into the gene/protein. This reflects in-depth research on the functions, regulatory mechanisms, and potential roles of AURKA in tumor development and treatment. The high publication rate also suggests a certain degree of accumulation and maturity in the field, with a wide range of research teams and collaborative networks providing a solid foundation for further research.

The leading journals among the top 10 have high prestige and influence in AURKA research in oncology, as they can widely disseminate research findings and attract attention and citations from colleagues. However, other journals with impact factors below 10 still play an important role and make unique contributions in specific research directions or subfields. Nevertheless, we encourage higher-quality papers to be published in top-tier journals in the field of oncology to drive progress and innovation. We also encourage researchers to conduct more in-depth studies and contribute further to the field of oncology.
Looking at the top three authors deeply involved in this field, Jeffrey A. Ecsedy's team is dedicated to studying the pharmacological mechanisms of AURKA molecular targeted drugs and conducting preclinical and translational research [65-67]. Professor Quentin Liu's research focuses on basic and translational studies of cell cycle-related proteins and the mechanisms underlying the development of tumor stem cells; he has made significant contributions to exploring the regulation of the mitotic cell cycle by AURKA and tumor drug resistance [68-71]. Subrata Sen conducts functional characterization of genes regulating mitotic chromosome segregation in mammalian cells, elucidating their role in inducing chromosomal instability in cancer and exploring the functional interactions of critical mitosis-regulating genes in the Aurora kinase family, while also collaborating closely with Professor Quentin Liu [7,26,72-74].
In conclusion, regarding the research on AURKA in the field of oncology, it is evident that by sharing research findings and engaging in collaboration, we can accelerate our understanding of AURKA and its related signaling pathways.This, in turn, can provide more effective strategies and drug targets for cancer treatment and prevention.
Knowledge foundation
Considering that the formation of knowledge often relies on the organic combination of co-cited literature, we employed co-citation analysis to explore the knowledge foundation of AURKA in cancer research [75]. Through an examination of the top 10 co-cited articles, we identified a significant accumulation of knowledge between 1995 and 2005.
In 1995, Huw Parry et al. investigated aurora gene mutations in Drosophila, revealing the failure of centrosome separation and the formation of monopolar spindles [8]. This study laid the groundwork for understanding the functional implications of aurora gene mutations. Furthermore, Gregory D. Plowman et al. (1998) identified two human homologous genes of aurora kinase, with aurora2 being overexpressed and amplified in various tumors, indicating its potential as an oncogene and highlighting centrosome-associated proteins as potential targets for cancer therapy [24]. During the same period, Subrata Sen et al. (1998) demonstrated STK15 gene amplification in multiple cancers, suggesting its critical role in centrosome amplification, chromosomal instability, and cellular transformation [23]. Moving forward to 2003, Ashok R. Venkitaraman found that amplification of the AURORA-A gene in epithelial malignancies leads to overexpression and resistance against paclitaxel, resulting in abnormal mitosis and cell multinucleation [27]. In the same year, Hideyuki Saya et al. discovered the interaction between Aurora-A and Ajuba in regulating mitosis, underscoring its significance in initiating mitotic processes [29]. Additionally, Subrata Sen's team in 2004 revealed that Aurora kinase A phosphorylates p53, promoting its degradation and facilitating carcinogenic transformation by downregulating cell cycle regulation and checkpoint responses [26]. Karen M. Miller et al. (2004) identified VX-680 as a highly selective small molecule inhibitor of Aurora kinases, demonstrating its effectiveness in inhibiting tumor growth and inducing regression of various tumors at tolerable doses [28]. The remaining three co-cited articles, published in 2003 [5], 2005 [25], and 2007 [30], are comprehensive reviews. William C. Earnshaw et al. (2003) provided a comprehensive overview of the regulatory mechanisms of Aurora kinase family members during different stages of cell division, highlighting their association with cancer. Hideyuki Saya (2005) focused on the regulation of key events by Aurora-A and Aurora-B during the G2 to M phase transition in the cell cycle, summarizing the aberrant molecular mechanisms of Aurora-A in tumorigenesis. Chuanmao Zhang et al. (2007) summarized the latest research progress on the Aurora kinase family in cell division, tumor occurrence, and targeted therapy.
In summary, these 10 co-cited articles offer profound insights into the normal and abnormal biological functions of AURKA in cell division, the molecular mechanisms of its expression and functional perturbations in tumorigenesis, and potential molecular targeted therapeutic strategies for cancer treatment. Undoubtedly, these studies serve as essential clues and theoretical foundations for further investigations into AURKA in the field of cancer research.
Research front and hotspot
The utilization of co-citation clustering and burst detection algorithms provides valuable tools for unraveling the frontiers of AURKA research [76]. The emergence of biological informatics analysis and targeted therapies centered around AURKA signifies significant advancements within the field of oncology. Furthermore, by utilizing omics studies to investigate the interplay between AURKA and other molecules, as well as pathway crosstalk, researchers gain critical insights into the biological mechanisms driving tumor development. These findings contribute to the translational research efforts aiming to bridge the gap between preclinical studies and clinical trials, highlighting the primary research focus on AURKA within the field of oncology.
Additionally, keyword-based text mining serves as a recognized approach for unveiling emerging trends and hot topics in scientific research [77]. From the analysis of Fig. 6 and Table 5, it is evident that the keywords primarily fall into five categories, namely molecules, diseases, phenotypes, drugs, and methodologies. Among the high-frequency keywords associated with AURKA, the top three molecules mentioned are p53, TPX2, and AURKB. The most commonly mentioned cancer types are breast cancer, liver cancer, lung cancer, prostate cancer, and colorectal cancer. The phenotypes most closely related to AURKA are apoptosis, cell cycle, cell proliferation, mitosis, and centrosome. The most frequently mentioned drug is MLN8237 (also known as alisertib), a selective inhibitor of AURKA. In recent years, there has been a significant emphasis on various methodologies, including bioinformatics, molecular docking, and network pharmacology.
Through bioinformatics analysis, we can delve into the multidimensional data and complex networks associated with the molecular keywords related to AURKA. Consistent with the aforementioned phenotype keywords, AURKA is involved in regulating processes related to cell cycle progression, mitotic kinase activity regulation, and protein-protein interaction complex formation. Fig. 7B shows the top 10 enriched pathways obtained from the KEGG enrichment analysis. These pathways include the Hippo, PI3K-AKT, FOXO, MAPK, and other signaling pathways closely associated with tumor occurrence and development. Overall, this gene clustering analysis provides insights into the biological context of the molecular keywords co-occurring with AURKA, highlighting their involvement in essential cellular processes and signaling pathways implicated in tumorigenesis. PPI analysis reveals that p53 acts as a hub gene in the molecular interactome of AURKA, which is consistent with the findings reported by Subrata Sen's team [78]. The functional interactions between Aurora kinases and p53 family proteins modulate the activity and subcellular localization of each other and their downstream effector proteins, thereby coordinating diverse cellular pathways [26]. Dysregulation of these interactions in cells undergoing tumorigenic transformation has significant functional consequences, inducing chromosome instability and various tumor-associated phenotypes, including therapy resistance [79-81].
Based on the rigorous meta-analysis of 33 original articles investigating the relationship between AURKA expression and overall survival prognosis in various cancer types, our findings highlight the promising potential of AURKA as a prognostic marker. Despite considerable heterogeneity among different tumor types, the consistent association between AURKA overexpression and unfavorable prognosis in cancer patients is evident. This meta-analysis emphasizes the importance of AURKA as a potential indicator for assessing patient outcomes and targeted therapy in various types of cancer.
In summary, these findings have significant implications for advancing personalized medicine and improving patient outcomes in oncology, highlighting the diverse aspects and research focus related to AURKA in molecular, disease, phenotype, drug, and methodological studies.
Promising role of AURKA inhibitors (AKIs) in cancer treatment
AURKA, an oncogene that is commonly overexpressed in various cancers, exerts its effects by modulating multiple molecular targets and signaling pathways, leading to genomic instability and promoting various hallmarks of cancer such as increased cell proliferation, survival, migration, invasion, and stemness [82-87]. As a result, AURKA represents a promising therapeutic target. Numerous AURKA kinase inhibitors have been identified, including specific inhibitors targeting Aurora-A kinase, pan-Aurora kinase inhibitors, and naturally derived AKIs [5]. Many of these inhibitors are currently undergoing preclinical and clinical evaluation, with some displaying encouraging findings (for review, see Refs. [3,88]). Among these inhibitors, MLN8237, also referred to as alisertib, stands out as a second-generation highly selective small molecule inhibitor of Aurora-A kinase developed by Millennium Pharmaceuticals. MLN8237, which evolved from its predecessor MLN8054 [89], attenuates the activity of Aurora-A kinase by binding to it within cells, thereby impeding spindle assembly and chromosome segregation during mitosis. Notably, MLN8237 has demonstrated significant anti-tumor activity in diverse xenograft tumor models [90-93]. Alisertib, the only AKI that has progressed into Phase III clinical trials, holds promise for the treatment of cancers. Detailed information regarding the clinical trials involving MLN8237 can be found in Table 6. The data presented in Table 6 indicate that, beyond the use of alisertib as a standalone therapy, combination approaches involving AKIs show potential as effective anti-tumor strategies. These combinations encompass AKIs combined with radiotherapy or chemotherapy, targeted therapy, and immunotherapy. Additionally, research efforts are focused on developing even more selective AKIs and multi-target inhibitors, posing both opportunities and challenges in the field of AURKA molecular targeted drugs [88]. In summary, AURKA inhibitors provide a potent tool for investigating the intricate connections between molecular mechanisms, clinical manifestations, and disease treatments.
Strengths and limitations
Bibliometrics is a research methodology that plays a crucial role in the evaluation and understanding of scholarly literature. By quantitatively analyzing bibliographic data, such as citation counts, co-authorship patterns, and journal rankings, bibliometrics provides insights into the impact and significance of academic publications. This methodology enables researchers to identify influential works, track trends in research fields, and assess the overall scholarly contribution of individuals or institutions.
In our study, we have taken bibliometrics to new heights by integrating advanced bioinformatics techniques. These techniques, including functional clustering and network analysis, allow us to uncover deeper layers of meaning behind key keywords. By exploring the interrelationships between different concepts, we gain a more comprehensive understanding of how ideas and knowledge are interconnected within the scientific literature. It is worth mentioning that we have qualitatively and quantitatively evaluated our research findings using the major laws of bibliometrics (see Supplementary Table 3). Furthermore, we enhance the clinical relevance of our findings by incorporating data from the PubMed database for quantitative meta-analysis related to prognosis. Specifically, our research focuses on investigating the prognostic value of AURKA overexpression across various tumor types. This integration of bibliometrics with bioinformatics and clinical data adds significant value to the field by providing valuable insights that can inform both researchers and clinicians. It also has implications for pharmaceutical development professionals who seek evidence-based guidance in their work.
However, it is important to acknowledge the limitations of our research. One potential limitation is that our search strategy may inadvertently omit relevant literature. Meanwhile, the use of multiple Boolean operators (such as OR) in our search strategy may introduce bias by including irrelevant or tangentially related publications, despite our efforts to develop comprehensive search strategies and conduct rigorous screening. Additionally, our bibliometric analysis relies heavily on data obtained solely from the Web of Science (WoS) database, which may limit the generalizability of our findings to other databases or sources.
Conclusion
AURKA plays a critical role in the development and progression of cancer. This study presents an objective analysis using bibliometric methods to comprehensively evaluate the research on AURKA in the context of cancer, encompassing both hematological malignancies and solid tumors. The increasing number of publications reflects the growing interest among researchers worldwide regarding the involvement of AURKA in cancer biology. Leading countries in this field include China, the United States, and Germany, emphasizing the importance of collaboration and knowledge sharing between nations and institutions. Understanding the mechanisms through which AURKA contributes to cancer development provides valuable insights into the underlying etiology and facilitates the identification of molecular markers for early diagnosis and prognostic prediction. Additionally, the translation of selective AURKA inhibitors from preclinical studies to clinical research offers promising strategies for personalized cancer treatment. Whether utilized as monotherapy or in combination with conventional anti-cancer modalities, AURKA-targeted inhibitors hold potential advantages in improving therapeutic outcomes for various types of cancer. Moreover, leveraging bioinformatics tools to conduct comprehensive molecular exploration of AURKA within the framework of big data enhances our understanding of its biological significance. Therefore, based on current knowledge and cutting-edge advancements in AURKA research within the field of oncology, undertaking further investigations in basic biology and clinical translation will significantly contribute to refining our comprehension of AURKA's role in the pathogenesis and therapy of cancer.
Fig. 3. The geographical distribution and cooperation relationships of countries (A) and institutions (B) in research on AURKA in cancers.
Fig. 4. The top journals (A) and cooperation relationships of authors (B) in the AURKA in cancers research field.
Fig. 5. The reference co-citation analysis maps (A), cluster view (B), and visualization map of the top 25 references related to AURKA in cancer with the strongest citation bursts (C).
Fig. 7. Enrichment analysis of biofunctions and pathways associated with genes co-occurring with AURKA. (A) Gene Ontology (GO) analysis, and (B) Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis.
Fig. 9. Forest plot for overall survival in different subgroups of cancers. Source: Web of Science and PubMed.
Table 1. Summary of data source and selection.
Table 2. Top 10 countries and organizations in the AURKA in cancers research field. Source: Web of Science. C/P, average number of citations per article.
Table 3. Top 10 journals and authors in the AURKA in cancers research field.
Table 4. Top 10 co-cited references related to AURKA in cancer.
Table 5. High-frequency keywords in the AURKA in cancers research field.
Table 6. Alisertib in clinical trials. | 2024-06-01T15:06:41.659Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "cfb7aa84c010e367befa3412fe66d40259aa4d9a",
"oa_license": "CCBYNC",
"oa_url": "http://www.cell.com/article/S2405844024079763/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "15911d39de939f950ebadf7e32ed9864292bf09e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
8266552 | pes2o/s2orc | v3-fos-license | Public preferences for vaccination and antiviral medicines under different pandemic flu outbreak scenarios
Background During the 2009-2010 A(H1N1) pandemic, many people did not seek care quickly enough, failed to take a full course of antivirals despite being authorised to receive them, and were not vaccinated. Understanding facilitators and barriers to the uptake of vaccination and antiviral medicines will help inform campaigns in future pandemic influenza outbreaks. Increasing uptake of vaccines and antiviral medicines may need to address a range of drivers of behaviour. The aim was to identify facilitators of and barriers to being vaccinated and taking antiviral medicines in uncertain and severe pandemic influenza scenarios using a theoretical model of behaviour change, COM-B. Methods Focus groups and interviews with 71 members of the public in England who varied in their at-risk status. Participants responded to uncertain and severe scenarios, and to messages giving advice on vaccination and antiviral medicines. Data were thematically analysed using the theoretical framework provided by the COM-B model. Results Influences on uptake of vaccines and antiviral medicines - capabilities, motivations and opportunities - are part of an inter-related behavioural system and different components influenced each other. An identity of being healthy and immune from infection was invoked to explain feelings of invulnerability and hence a reduced need to be vaccinated, especially during an uncertain scenario. The identity of being a ‘healthy person’ also included beliefs about avoiding medicine and allowing the body to fight disease ‘naturally’. This was given as a reason for using alternative precautionary behaviours to vaccination. This identity could be held by those not at-risk and by those who were clinically at-risk. Conclusions Promoters and barriers to being vaccinated and taking antiviral medicines are multi-dimensional and communications to promote uptake are likely to be most effective if they address several components of behaviour. The benefit of using the COM-B model is that it is at the core of an approach that can identify effective strategies for behaviour change and communications for the future. Identity beliefs were salient for decisions about vaccination. Communications should confront identity beliefs about being a ‘healthy person’ who is immune from infection by addressing how vaccination can boost wellbeing and immunity. Electronic supplementary material The online version of this article (doi:10.1186/s12889-015-1541-8) contains supplementary material, which is available to authorized users.
Background
The 2009 A(H1N1) influenza pandemic was markedly less severe than previous strains such as the H3N2 virus in 1968 [1]. The groups that were most at-risk from infection were those aged below 19 years [2], pregnant women, and individuals with underlying illnesses such as diabetes, asthma, respiratory diseases, immune suppression and renal disease [3]. One dose of pandemic vaccine conferred good protection against the infection in approximately 70% of cases [4]. However, despite the effectiveness of the vaccine, the public demand for vaccination was low and many people were not vaccinated. For example, in the UK uptake of vaccination among clinically at-risk groups was 37.6% [5]. For those who contracted pandemic influenza, antiviral medicines were recommended as a treatment, and the provision of antiviral medicines (also as a preventive measure) was a major component of emergency plans in many countries [6]. Data from the UK National Pandemic Flu Service (NPFS) indicated that of the 1.8 million courses of antiviral medicines that were authorised, only 1.16 million were collected and many patients failed to complete a full course [7]. This suggests that there is a need to develop effective communications to improve uptake and to consider how best to advise the public on the nature of the disease, why they should seek prevention (vaccination) or treatment (antiviral medication), who should seek it and when.
Evidence shows that the factors found to promote uptake of vaccination included being vaccinated for seasonal flu [8-10], perceiving that the outbreak was severe and resulting in high morbidity and mortality [11,12], high levels of worry and anxiety [13], being in a priority group [14] and believing that the vaccine was effective and safe [14,15]. In addition, social influences were important; for example knowing someone who had the disease and knowing that others had a favourable view of the vaccine [11], as well as trust in the source of information [11,15-17]. Factors found to act as barriers to uptake of pandemic influenza vaccination were: believing that the outbreak was not serious [16,17], and not identifying oneself as being at-risk [17]. Fears about the safety and side effects of the vaccine were also a barrier to H1N1 vaccine uptake [8,14,18-20]. It appeared that the public preferred to take the risk of harm posed by the disease over any harm that might be caused by being vaccinated [21,22]. The scant research in the UK and elsewhere on the public's response to antiviral medicines in the last pandemic suggests that the public knew relatively little about antiviral medicines and had limited experience of their use [23]. Frequent travellers had more positive perceptions of antiviral medication as a result of prior usage [24,25] and research with pregnant women found a tension between women's desire to protect the foetus from harm and worry about the safety of taking medicines when pregnant [26].
Research conducted with the public in advance of an outbreak can inform the type of messages that are likely to be effective in promoting acceptance of these recommended behaviours [27,28]. Such past research has investigated hypothetical scenarios of varying degrees of severity and advice on a variety of precautionary behaviours including hand-washing, covering the mouth, vaccination and seeking medical attention [29-32]. Results showed that the public was largely unfamiliar with the term 'pandemic' and tended to believe that pandemic influenza was similar to seasonal influenza [29,31]. Most people do not know whether the symptoms of pandemic influenza differ from those of seasonal influenza and are unsure how to recognise the signs [29,32,33].
This body of research suggests that, in a future pandemic, the public would benefit from more knowledge about the health threat and about who will be at-risk from infection, how the infection spreads, how to self-diagnose, short and long term consequences of the illness if precautionary measures are not taken, and the potential side effects of vaccination and antiviral drug treatments [30-32,34,35], including safety and efficacy tests for a new vaccine that would be rapidly deployed [30]. In some instances, trust was found to be an important component in acceptance of and compliance with recommended behaviours; however, trust in public officials has been found to be weak compared with trust in medical professionals [31,34,36-38]. Although the research described above has identified a range of factors promoting pandemic vaccination, there is less about those factors influencing uptake of antiviral medicines.
While research has often focused on the public's response to advice during severe or moderate pandemic outbreaks, little is known about how the public would respond to advice in an explicitly uncertain situation where the risk is less clear-cut. For example, Teasdale & Yardley [32] studied the public's response to advice in scenarios where the consequences were described as moderate or severe; Elledge and colleagues [31] investigated mild and severe scenarios for avian flu; and McGlone et al. [39] studied responses to a severe scenario. Understanding how the public responds when the progress and impact of a pandemic are uncertain will be important because it is during the emergent, uncertain stages of a pandemic that the public will be asked to consider the potential risk of contracting pandemic influenza and to take precautionary measures to reduce the likelihood of personal infection and spread.
Studies investigating how the public responds to precautionary advice have rarely been informed by a theoretical understanding of behaviour change. Using a theoretical framework helps to integrate empirical findings and elucidate processes of change and mechanisms of action of effective communication and other intervention strategies. A useful framework for this purpose is the COM-B model, which summarises the factors necessary for behaviour to change across behavioural domains [40] (Figure 1). The initials stand for 'capability', 'opportunity', 'motivation' and 'behaviour', and the model recognises that behaviour is part of an interacting system involving all these components. Changing behaviour will involve changing one or more of them in such a way as to put the behavioural system into a new configuration and minimise the risk of it reverting. Because of the interacting nature of these components, one may increase, for example, motivation by increasing capability (e.g. knowledge and skills) and opportunity (e.g. access to resources and social influence).
We adopted the COM-B model in our approach to the uptake of pandemic flu vaccination and antiviral medicines because changing the incidence of any behaviour in a group or population is likely to involve changing more than one driver of behaviour.
By specifying the factors that need to change for a behaviour to occur, the model can identify the kinds of interventions that are likely to be effective. The model postulates that for any behaviour to occur a person must have the psychological and physical capability to perform the behaviour and the physical and social opportunity to engage in it, and must be motivated to do so at the relevant moment in preference to some other behaviour. Psychological and physical capability refers to a range of capacities such as knowledge, physical and mental skills, and faculties such as strength and stamina. Opportunity can be physical or social and refers to environmental factors that permit the behaviour, including access, availability, time and financial resources, and social factors such as the cultural milieu we operate in. Motivation reflects the brain processes that direct behaviour, which may be reflective (evaluations and plans) or automatic (emotions and impulses arising from associative learning). COM-B has been elaborated into 14 theoretical domains, the Theoretical Domains Framework (TDF) [41].
The study aimed to systematically identify facilitators of and barriers to being vaccinated and taking antiviral medicines in uncertain and severe pandemic influenza scenarios using the COM-B framework. An uncertain scenario was used in addition to a severe scenario because in the early stages of a pandemic there is often uncertainty about how the situation will unfold, how rapidly the infection will spread, or what impact this could have on the population. Hence it is important to understand how people respond to precautionary advice in these conditions of uncertainty, how they make sense of the risk, and what types of precautionary measures they express preference for.
Design and recruitment
Semi-structured focus groups and interviews were conducted with a diverse sample of the general public. To ensure that participants were from a range of social and ethnic backgrounds we recruited from a variety of organisations in London and Southampton including children's centres, AgeUK lunch clubs, community centres, students from a university, voluntary organisations and support groups for those with underlying conditions such as diabetes, COPD (Chronic Obstructive Pulmonary Disease) and PSC (Primary Sclerosing Cholangitis). Advertisements were placed in these centres explaining the purpose of the study, who was eligible and how to participate, and offering a small monetary compensation for participation. The managers of the centres where interviews were held advertised the study and made rooms available for the focus groups to take place.
Ethical approval for the study was granted by University College London (Reference: 5081/001) and the University of Southampton (Reference: 7387) ethics committees.
Sample
Sampling was purposeful and individuals who varied in their risk status were recruited. Of the 71 participants, 23 were men and 48 were women; details of the demographic profile are shown in Table 1. Thirty-five were from designated at-risk groups, of whom 10 had an underlying condition and six were pregnant. Of the 36 participants not designated as being at-risk, nine were specifically recruited because they were mothers with young children. Thirty-eight of the participants were vaccinated for seasonal influenza regularly (of whom 20 were from clinical at-risk groups) and two had been vaccinated for seasonal influenza for the first time this year. Eighteen people who did not consider themselves to be at risk had been vaccinated at least once before for seasonal influenza. Reasons for being vaccinated among those who were designated as not being at high risk included recommendation by a GP, and being offered the vaccine at work. Twelve participants had received the monovalent H1N1 vaccine and three had taken antiviral medicines during the 2009-2010 pandemic.
It should be noted that groups were not always mutually exclusive. For example, some individuals who had been recruited as 'elderly' (over 65 years of age) also reported that they had other underlying conditions that would put them in another at-risk category.
Materials
Two scenarios were developed: an uncertain and a severe scenario. The severe scenario was based on that used by Teasdale and Yardley (2011) which described a severe level of risk, severe health consequences and the national impact of the pandemic. The uncertain scenario was developed to reflect the early conditions that occurred during the 2009/10 pandemic. This described an uncertain situation, uncertain health consequences and uncertain public impact of the pandemic (see Table 2).
Short messages promoting the uptake of vaccinations and antiviral medicines for pandemic influenza were developed to reflect evidence from prior research that identified barriers to uptake but also to reflect the key drivers of behaviour as defined in the COM-B framework. These were presented as advice from official sources (see Table 3).
Procedure
Data collection took place in London and Southampton from November 2013 to March 2014. Nine focus groups, three paired interviews and six individual interviews were conducted by the first two authors at the centres from which participants were recruited. Written informed consent was obtained from all participants who received a small monetary compensation for their involvement. Interviews lasted between 20 and 65 minutes and were audio recorded with the participants' consent.
An interview schedule structured into two sections was used to guide the discussion. The first section was to establish what participants knew about pandemic influenza, vaccinations, and antiviral medicines for pandemic influenza, and personal experiences of pandemic influenza. The second section focused on responses to two scenarios and advice concerning vaccinations and antiviral medicines. Participants were asked to imagine that they were in a given situation and to consider what they would think, feel and do if this were to occur. The Uncertain scenario (Table 2) was always shown first, followed by the advice about antiviral medicines ( Table 3). The Severe scenario ( Table 2) was shown second followed by the advice on vaccinations (Table 3), and then antiviral medicines. All participants were debriefed in full at the end of the interview and reassured that these were fictional scenarios.
Data analysis
Audio recordings were transcribed verbatim and NVivo 10 was used to code and to maintain a trail of memo and theme development. Analysis was iterative and each transcript was read and re-read numerous times by the first two authors independently. Transcripts were coded line by line and analysed comparatively to identify similarities and differences [42]. A data audit was conducted by the first two authors to clarify meanings, remove duplicated codes and identify data that did not match the coding scheme [43]. Inductive analysis was used to identify responses to the uncertain and severe scenarios. Deductive analysis was used to identify facilitators and barriers to following recommended advice to be vaccinated and take antiviral medicines. In addition, code names were assigned to the six COM-B components: physical and psychological capabilities; automatic and reflective motivations; and social and physical opportunities (see Additional files 1 and 2: code frames). For the purposes of analysis, the Theoretical Domains Framework [41] was used. This is a variant of the COM-B which subdivides the themes into 14 detailed components that map directly onto COM-B. These are: 'knowledge'; 'skills'; 'memory, attention and decision processes'; 'behavioural regulation'; 'social/professional role and identity'; 'beliefs about capabilities'; 'optimism'; 'beliefs about consequences'; 'intentions'; 'goals'; 'reinforcement'; 'emotion'; 'environmental context and resources'; and 'social influences' a .
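To make the deductive coding step concrete, the domain-to-component roll-up can be written out as a simple lookup. The sketch below is illustrative only: it follows the published TDF-to-COM-B correspondence as we understand it, not the study's own code frames from the additional files, and the function name is ours.

```python
# Illustrative roll-up of the 14 TDF domains onto the six COM-B
# components (assignments follow the published TDF/COM-B
# correspondence; this is not the study's own code frame).
TDF_TO_COMB = {
    "knowledge": "psychological capability",
    "skills": "physical/psychological capability",
    "memory, attention and decision processes": "psychological capability",
    "behavioural regulation": "psychological capability",
    "social/professional role and identity": "reflective motivation",
    "beliefs about capabilities": "reflective motivation",
    "optimism": "reflective motivation",
    "beliefs about consequences": "reflective motivation",
    "intentions": "reflective motivation",
    "goals": "reflective motivation",
    "reinforcement": "automatic motivation",
    "emotion": "automatic motivation",
    "environmental context and resources": "physical opportunity",
    "social influences": "social opportunity",
}

def comb_component(tdf_domain: str) -> str:
    """Return the COM-B component that a coded TDF domain rolls up to."""
    return TDF_TO_COMB[tdf_domain.strip().lower()]

# Example: a quote coded under 'beliefs about consequences' counts
# towards reflective motivation in the COM-B summary.
print(comb_component("beliefs about consequences"))
```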
The facilitators and barriers to being vaccinated and taking antiviral medicines were reviewed separately. Responses to accepting advice were also investigated according to two broad categories: those designated as being in a priority group (35 people: men and women over 65 years, pregnant women, and those with underlying illnesses) and those not designated as being in a priority group (36 people: men and women under 65 years, including mothers with young children).
Results
Responses to the uncertain and severe scenarios differed: in the uncertain scenario participants were hesitant and ambivalent about following advice because the risk was unclear whereas in the severe scenario the need to act seemed more obvious and almost all claimed they would comply with the official advice.
The focus of this paper is on facilitators and barriers to uptake of pandemic influenza vaccination, because participants knew relatively little about antiviral medicines and were less able to discuss them. Responses to advice about antiviral medicine were more limited, as the participants were largely unfamiliar with these medicines, but were broadly similar to responses to advice about being vaccinated; any differences are highlighted after the responses in common are presented.
Responses to the scenarios: procrastination vs. call to action
The most common response to the uncertain scenario was to 'wait and see' or 'do nothing yet'. There were two reasons given for this: the situation was likened to the swine flu outbreak, which was not considered to be serious, and it was thought to be distant - both emotionally and physically - and hence, less worrying. Personal risk was perceived to be low, even amongst those in a designated priority group. Although there was some evidence to suggest that the uncertainty was experienced as disconcerting, the majority did not see the need for vaccination or antiviral medicines. Rather, participants suggested that they would do more of the behaviours they already practiced, such as following good hand and respiratory hygiene and taking more vitamin C: You'd step up your vitamin C etc. and your cod liver oil. (Male, over 65 years) I would be watching more people touching - for me personally, washing my hands er you know being aware if someone sneezes I'd probably ask them to cover their face. (Mother with young children) In this situation, it was thought to be important to 'keep an eye on the media' to find out what general advice was being given.
Table 2 The uncertain and severe scenarios used in the research
Opening shown to participants: Flu virus has spread to where you live; 1 in 2 of those coming into close contact with an infected person catch flu.
Uncertain scenario: Scientists do not yet know how badly the flu virus will affect people in the UK - doctors are trying to learn about the virus as fast as they can, but do not know if it will be mild or serious. When the virus reaches the UK, we don't know whether life will carry on much as usual or whether there will be serious problems with services such as the NHS, schools or vital supplies.
Severe scenario: Most people who catch flu feel very ill for around a week. Almost 1 in every 10 people who catch flu need hospital care, and 1 in every 50 healthy people who catch flu die. Life cannot continue as usual. Most schools close, there is very high sickness absence at work and so there are problems with essential supplies, and health care services are not coping and have to be prioritised for the most seriously ill.
Table 3 Advice to take antiviral medicines and to be vaccinated used in the research
Antiviral medicines: PEOPLE WITH PANDEMIC FLU are advised to take antiviral medicines to reduce their symptoms, and the length of time they are ill. PEOPLE IN A PRIORITY GROUP will be provided with antiviral medicines to prevent them from catching flu.
Vaccinations: You are advised by your GP to get vaccinated at once to protect you and your family from getting pandemic flu. Vaccines for pandemic flu have been through the same careful tests as vaccines for seasonal flu and are safe to use.
In contrast, the most common response to the severe scenario was to take action. The 'call to action' occurred because this situation was thought to be serious. 'Serious' was often interpreted in terms of the disease being emotionally and physically close rather than in terms of the absolute number of people who were ill, hospitalised or had died.
If it is your neighbour - it is really - being really ill with flu, if they got it and their baby got it it's near to you and you know people and I would feel influenced I think. Well I've got a baby at home and my elderly mum lives next door I should get it because I don't want to put them at-risk by me getting [it] or vice versa but if it is on the news and they are telling you in China - you know whatever I am thinking 'whatever, am I at-risk? Is my family at-risk? It's on the TV. I don't know - am I going to get this? (Female, not at-risk) In severe scenarios, there was a high level of anxiety and an awareness of personal susceptibility. As one woman with young children commented 'this is normal people and they are dying'. The need to take novel precautionary measures, of any kind, was less likely to be disputed: I think people follow any advice [in this scenario] that is given from an authority figure anyway, even if it was poison… (Female, not at-risk)
Barriers and facilitators to vaccination uptake
Five of the six components in the COM-B model accounted for participants' responses (Table 4).
Capability
Knowledge
The majority of participants knew little about pandemic influenza and many were unsure of the meaning of the word 'pandemic'. Overall, few people linked 'pandemic influenza' to the A/H1N1 pandemic influenza outbreak of 2009-2010. They tried to make sense of it by likening it to other more familiar phrases such as 'epidemic', inferring that it was probably a more widespread and more serious form of influenza: I just thought pandemic flu was all kinds of flu, I didn't…oh well I actually thought maybe pandemic sounds like a flu that is outbreaking and very dangerous and they want to keep it under control.
(Pregnant woman)
Only two people in the study spontaneously referred to the fact that pandemic influenza is a novel strain of virus. When this information was presented, people found the notion of it being a novel strain helpful in explaining the threat it posed beyond seasonal influenza: It's just the word they use when it is worldwide and it is spreading from chickens in China or something, but other than that I didn't know what it meant, that it was new, why don't they just say new? I mean they want new, it's the new one for which there isn't any vaccine yet; that should be said. (Female, over 65 years) In the absence of this new information, some thought that pandemic influenza could be like seasonal influenza.
What are the symptoms? Are there different symptoms from swine flu and ordinary flu? What would you look out for? How would you know you had one from the other? They could be the same. (Male, not at-risk)
Memory
Some participants spontaneously linked the word 'pandemic' to bird flu or swine flu but many did not. Recall of the swine flu pandemic was low, partly because only four participants in our sample had contracted it, and partly because few knew anyone who had. A prevalent comment was that the media had exaggerated the risk of swine flu: It's almost like you get kind of a mixed picture of what it actually is, and then, it will be reported in a way that people will think it's…that they're not going to be able to avoid catching it or something, and then like… but then, the next day, it will be like, oh, actually, there's only one person in Yorkshire that's got it… (Female, not at-risk)
Automatic motivations
In the uncertain scenario, the participants expressed little concern about the pandemic outbreak. Most participants were not worried and so many could not see a need to be vaccinated or take antiviral medicine: …there's nothing to do yet. I feel like this is worrying about nothing (Male, not at-risk) …it's a good first step, I guess, you know, to try and get the word out there that this could potentially be a problem, but this wouldn't be the deciding factor [to be vaccinated]. (Pregnant female) Having been offered the seasonal influenza vaccine previously was put forward as a reason for considering pandemic influenza vaccination -'it would never stop me because I have been having them [seasonal flu jab] for years and years' (Female, underlying illness).
Reflective motivations
In the uncertain scenario, participants tended to make a 'risk assessment' (e.g. male, not at-risk) and 'weigh up the risk in my mind, the side effects of the vaccine versus am I going to lose my life or be significantly impacted by it' (e.g. female, underlying condition). Participants deliberated about the consequences of being ill with influenza as opposed to the consequences of being ill with side-effects from the vaccine. In doing so, they drew on their current status as a healthy person who would not need to be vaccinated; on their role in society as a responsible person who should be vaccinated to prevent family members (especially children) from becoming ill; and on feelings of anticipated regret if the virus became worse and they had failed to be vaccinated.
(Table 4 fragments recovered here: 'Tending to the view that they will not be infected or will make an easy recovery from pandemic influenza'; Social role: 'Responsibility for other family members, including unborn [children]'; Anticipated regret: 'Concern that the outbreak could be more serious than expected and have not been vaccinated'.)
Beliefs about consequences
Participants tended to believe that pandemic influenza was similar to seasonal influenza, which was not considered to be a serious illness. If participants thought that the consequences of being ill with pandemic influenza were minimal there was little incentive to take precautionary measures.
A week being ill [with flu] isn't the end of the world. I think if I thought it was going to be much worse than that, you know, I would be more concerned and more likely to have the vaccine. (Female, underlying illness) There was a view that the consequences of being vaccinated were potentially worse than becoming ill from influenza. In many cases this was related to concerns about side effects or a belief that it was possible to contract influenza from the vaccine itself. These views were not shaped by personal experience.
What I feel about vaccines is that you actually get a virus or not - what you get is a small amount so you are not supposed to get an illness. I am not sure that is true. I have heard that many people do get ill after having the vaccine… (Mother with young children) Only a minority of participants were openly critical of vaccine safety or efficacy but where such concerns were expressed they were given as reasons not to be vaccinated. In expressing scepticism about the safety of a newly developed vaccine the participants drew on beliefs or representations of how drugs are developed and made available to the public, and argued that a pandemic flu vaccine cannot meet the standard safety criteria due to its 'sudden' production: Every other drug has been tested for years and years before it can go on the shelf. How can they suddenly produce something in six months and put it on the shelf? I'd be very suspicious of that. (Male, not at-risk) By contrast, a facilitator of vaccine uptake was the belief that a vaccine would be protective. This was of particular relevance to those who were aware that they could have complications as a result of becoming ill, for example, pregnant women who were concerned to protect their babies: 'It's only because I'm pregnant that I'm more worried, because otherwise I wouldn't [be]'.
A further facilitator of vaccine uptake was anticipated regret: a tendency to consider that the situation could become worse and that there could be negative consequences from not being vaccinated early enough. As this young man who was not at-risk said: 'It would be a brave man to say no, I'm not taking anything at all when everyone around you is dropping'.
Social identity
Those who were accepting of vaccination and antiviral medicines tended to view themselves as less healthy and acknowledged that they could be at risk of infection from pandemic influenza. They were frequently in contact with medical professionals and followed their advice and routinely took medication and the seasonal influenza vaccination. Many were from a seasonal influenza priority group and regarded the decision to get vaccinated or take medicines as 'normal': I think if you are already in a group such as us, who are already taking loads of medications, constant checks and tests, you tend to be a bit more accepting. Whereas if you don't take medications, you're normally quite healthy and you are suddenly being told 'we want you to have this, we recommend you take it'. (Female, underlying illness) Pregnant women considered themselves to be temporarily in the at-risk category, although most commented that they would prefer not to take medicines in case of harm to the foetus but would do so if a medical professional recommended it.
By comparison, those participants who were less accepting of vaccination advice tended to perceive themselves as 'fit and healthy' and had less frequent contact with medical professionals. Notions of being 'fit and healthy', rarely becoming ill and having a strong immune system were invoked to deny the need for vaccination on the grounds that they were unlikely to be at-risk. A range of behaviours, such as eating healthily and exercising, were believed to confer this immunity.
…look after yourself, eat healthier and do a bit of exercise and try and keep away from people with viruses and that sort of thing and um I do that without sort of getting neurotic about it. (Male, underlying illness) Three types of behaviours were commonly cited as a way to stave off infection: social distancing, lifestyle related activities, and improving basic hygiene. More than half of participants spontaneously mentioned distancing behaviours as a means to reduce the risk of being infected, e.g. avoiding crowds, not travelling on public transport, and staying at home: I think people will stay indoors, and people will not congregate -meetings or anything like that, supermarkets, trains… (Male, not at risk) About one third cited lifestyle behaviours as a means of staving off infection such as eating properly, drinking more water, exercising and supplementing their diet with vitamin C, cod liver oil or orange juice. Finally, improved hygiene behaviour was often mentioned such as using hand gels, washing hands more frequently, cleaning surfaces and covering one's face when sneezing or coughing.
Using alternative behaviours to vaccination related to the view that medicine should be avoided where possible and that it was better to allow the body to fight off diseases 'naturally'. Arguably, some people preferred these precautionary behaviours to vaccination because they seemed to be free of side-effects and more within their direct control. People who held these views could be from either an at-risk or not-at-risk group: I would be very happy for my own body to make an attempt to try and fight it because what I know about vaccines is that they break the immune system. (Female, mother with young children) I'm not a great fan of taking medicine for medicines sake really. I think that's probably the criteria that I applied and I'm just reluctant I think to take something which at the end of the day um I don't really see the benefit of really. (Male, not at-risk) Beliefs about being fit and healthy and being able to naturally fight disease contributed to a sense of optimism: the belief that they were less vulnerable than others to being infected with pandemic influenza.
Social role
Pregnant women were aware of their social role to protect their unborn child, but others also commented that their social role as a protector of their family or as a role model to family members would influence them in the direction of being vaccinated: If you are a family person and you have got children that are under sixteen, for example, it's up to you to decide whether they would have this vaccination, and if you say no, I'm not going to let them have it and they die, that's a big responsibility on you. (Male, not at-risk) …this is a collective thing (Female, not at-risk) …it's not just about you is it, it's about everyone else (Pregnant woman) However, only a minority of participants believed that they had a social responsibility to be vaccinated in order to prevent the circulation of the virus within the wider society. Virtually no participant referred to the notion of herd immunity and to the duty of every citizen to vaccinate to reduce others' risk of infection. Thus, it could be argued that the risk of pandemic influenza was primarily understood as a personal rather than a social issue, with little attention being paid to the social aspects of a pandemic outbreak.
Physical opportunities
The main physical opportunity that appeared to promote uptake of vaccination was access to advice and treatment. Participants anticipated that vaccination would be readily available at GP surgeries or at pharmacies. However, surgeries were considered to be a 'hub for infection' which should be avoided: You are going into an environment where you are prone to get flu because there is different people, so I'd be scared. I think I'd be like can't you just post it through the door, like send it, I don't know, I wouldn't go to the centre. Would you? (Mother with young children) The anxiety about attending a surgery prompted one participant to suggest that mobile dispensaries should come to local neighbourhoods 'to bring the medication to you' (Male, not at-risk). In addition, there was concern about the difficulty of booking an appointment in a timely fashion because of pressures on the health service.
Social opportunities
Social influences included recommendations from trusted sources, especially health professionals, taking account of the behaviour of respected others, and the influence of the media.
Participants believed that they would actively seek advice from their GP in a pandemic situation and would put faith in the recommendations made by them because 'I am not a medic and therefore I follow his advice' (Male, not at-risk). However, in an uncertain scenario some participants commented that they would seek additional supporting evidence on the internet. Nevertheless, if a GP made a strong recommendation to be vaccinated, most participants would follow their advice: If it is very, very strongly recommended [in uncertain scenario], well then I would go and beat the surgery door down and get a vaccine, but um if the advice isn't that strong well then I'd leave it for a bit and see how I get on. (Male, underlying illness) Participants were also likely to respond to sources of informal advice, for example close friends and family, an authority in the workplace or a local community leader. This was particularly evident among a group of elderly Somali women and a group of men in a close-knit area of Central London who said that they would actively seek the advice of community leaders.
Participants acknowledged that the media will play a role during a pandemic outbreak and they expected that they would get information 'from reliable newspapers not the Sun or Metro' (Pregnant woman). A common expectation was that the media would exaggerate the situation because 'you hear it on the news and you obviously have to take it with a pinch of salt because the news media are always out for a story' (Male, not at-risk).
Group identity
Identifying as being part of a group was a factor in decision-making about vaccination. This was because several people with underlying conditions belonged to support groups either in person or on-line. These groups would sometimes discuss the need for vaccination: …the people in the online forum talk about flu vaccination…. I know from reading online that it covers people like me (Female, underlying illness) However, despite being aware that one was part of an at-risk group, some people who were in the at-risk groups distanced themselves emotionally from the need to be vaccinated. One female participant who had Primary Sclerosing Cholangitis c argued that she would only think of herself as being vulnerable if the people who were infected were from the same country and demographic as herself: I think does the risk of getting the vaccine outweigh the risk of the impact on my life. I guess when it is a million miles away and very few people are getting it and it's a different age demographic to me, I probably think actually I am not going to take that risk [of being vaccinated]….. (Female, underlying illness)
Additional factors that may influence uptake of antiviral medicines
Beliefs about antiviral medicines tended to be ill-informed, for example, considering that they were antibiotics and that they would be delivered in injection form.
Many were unsure whether they would recognise the signs of pandemic influenza, for example, 'What are the symptoms? Are there different symptoms from swine flu and ordinary flu? What would you look out for? How would you know you had one from the other?' (Male, not at-risk) Most of the participants commented that the advice to take antiviral medicines seemed 'sensible' and compared with vaccination fewer concerns were raised. Overall there was less resistance to uptake because 'if you were feeling ill and feeling like death, you would take anything' (Male, not at risk).
Discussion and conclusions
The aim of this study was to systematically identify facilitators of and barriers to being vaccinated and taking antiviral medicines in uncertain and severe pandemic influenza scenarios using the COM-B framework. The influences on vaccination and antiviral uptake were wide-ranging, including various aspects of capability, motivation and social opportunity, with some evidence that addressing one aspect could impact on others in the system. For example, social opportunity in the form of recommendations from respected others influenced reflective motivations in the form of beliefs about vaccine efficacy. This suggests that the influences on vaccine and antiviral uptake are multidimensional and that communications to promote uptake are likely to be most effective if they address several components. Identity as a healthy or at-risk individual influenced whether or not people thought they were vulnerable to contracting pandemic influenza and whether they believed that practicing alternative protective behaviours could be as effective as vaccination. Feelings of vulnerability were engendered by being labelled as being in a clinically at-risk group (having an underlying illness, being older or pregnant), and by the severity of the scenario, because if it was perceived to be very severe all people would be susceptible to pandemic influenza.
In contrast, those who felt invulnerable to pandemic influenza cited the rarity of being ill with flu and believed that they were young, healthy or fit and hence had a strong immune system. Those who had constructed an identity as 'a healthy person' were less willing to follow advice to be vaccinated and did not view using biomedicine as 'normalised'. The use of alternative behaviours, especially eating well, exercising and using vitamin supplements, was thought to boost immunity and hence reduce the risk of being infected and the need for vaccination.
Beliefs about being able to boost one's natural immunity were held by those who were clinically at-risk, as well as by those who were not at-risk. This study suggests that many people in priority groups do not self-identify as being vulnerable and may, therefore, not make the connection with messages aimed at them. Such a disconnection could explain why only 37.65% of those in priority groups in the UK were vaccinated during the last pandemic [5]. More may need to be done to ensure that those in a priority group are able to identify themselves as being more susceptible to the effects of pandemic influenza than others.
Promoters of and barriers to uptake cannot be considered separately from the context of the scenario: in a high risk scenario intentions to follow advice to be vaccinated or to take antiviral medicines were high whereas in the uncertain scenario there was hesitancy and ambivalence and it was in this situation that the full range of doubts, concerns and misperceptions emerged.
COM-B as a framework for analysis was a useful starting point for identifying the range of factors associated with uptake of vaccination and antiviral medicines. The barriers and facilitators of uptake could be classified within the framework, which allowed an explanation of behaviour across several components. Many of the factors discussed have been identified in previous studies; for example, this study supports previous research showing that one of the most consistent predictors of vaccine uptake is the habit of being vaccinated for seasonal influenza [8-10,14,19], that the role of emotion (automatic motivations) is highly relevant [11] and that a barrier to vaccine uptake is negative beliefs about the vaccine, such that the consequences of being vaccinated are perceived to be as or more problematic than the consequences of becoming ill with pandemic influenza [18,19,21,33,44,45].
However, comparison between studies is made difficult because different researchers select a small sub-set of predictor variables to examine; only a minority make use of a model of behaviour to explain why these variables were selected (exceptions are Teasdale & Yardley 2011 [27], Myers & Goodwin 2012 [46], and Kok et al. 2010 [12]) or accommodate different levels of severity.
COM-B is a theoretical starting point for understanding behaviour within specific contexts and for making a 'behavioural diagnosis' of what needs to change to alter behaviour. It is at the centre of the Behaviour Change Wheel [40] - a tool to guide intervention design by identifying which intervention functions are likely to be most effective. It is beyond the scope of this paper to enumerate the range of potential interventions but a few examples are described below. The study indicated that identity as a 'healthy person' was a barrier to being vaccinated. Messages that address these beliefs - for example, explaining that no-one is immune to a new strain of flu and that being vaccinated can enhance health by boosting immunity - may be effective in increasing uptake. A further barrier to uptake was a belief that lifestyle behaviours such as eating healthily and exercising could confer immunity and make people less vulnerable to contracting pandemic influenza. Communications that address these beliefs might include information about why people are vulnerable to a new strain of influenza and about the effectiveness of vaccines in reducing the risk of infection or in boosting immunity.
Although the participants were purposively sampled to represent a range of risk profiles, a limitation of this research was that the sample may not reflect the views of the wider population because it was not representative and focus groups may attract people who are particularly interested in the topic area. Furthermore, it is not clear whether being in the habit of being vaccinated or not being vaccinated conditioned responses to different scenarios. This could be explored in future research.
Future research needs to take account of the extent to which messages about vaccination can be transparent in addressing concerns about the vaccine, for example by being more open about how the vaccine is developed. In addition, we should investigate whether messages that address identity are effective in promoting uptake of vaccination - in particular, whether positively framed health messages that focus on wellbeing are more effective than messages about risk reduction for individuals who do not self-identify as being vulnerable to infection.
The promoters and barriers to being vaccinated and taking antiviral medicines are multi-dimensional, and communications to promote uptake are likely to be most effective if they address several components of behaviour. The benefit of using the COM-B model is that it is at the core of an approach that can identify effective strategies for behaviour change or communications for the future. People from at-risk groups do not always perceive themselves to be at-risk because they have constructed an identity as a healthy person who is immune from infection because they follow a healthy lifestyle. Communications should confront these identity beliefs by addressing how vaccination can boost wellbeing and immunity.
Endnotes
a The original TDF was developed by an international panel of 32 experts in behaviour change who identified 128 constructs from 33 behaviour change theories and simplified them into domains. Its usability was then developed with an international team of implementation scientists. The TDF has been validated and refined by an international panel of 36 experts in behaviour change.
b Participants are referred to by gender and by whether they are in an at-risk group (over 65 years, pregnant, underlying illness) or not in an at-risk group (including mothers with young children). c PSC is a disease of the liver and people with this condition are recommended to have the influenza vaccine because they have lowered immunity as a result of the treatments they receive.
Effect of Initial Temper on the Warm Forming Characteristics of a High Strength 7000-series Al-Zn-Mg-Cu Alloy
In this work, the formability of a developmental 7000-series copper containing aluminium alloy was assessed at room temperature (RT), 150°C, 175°C and 200°C in pre-aged (PA), peak-aged (T6) and overaged (T76) tempers using Nakazima tests with stereoscopic digital image correlation (DIC) strain measurement. The limit strains were identified using a novel curvature-based approach to detect the formation of an acute neck. The tensile mechanical properties in these warm forming processing routes were characterized with and without a paint bake cycle. Finally, a thermo-mechanical tensile simulator was used to evaluate the constitutive response of the PA and T76 tempers as a function of strain-rate and time at 175°C. Formability results found the selected PA temper to have a good room temperature formability and a mild positive response to the selected warm-forming cycles. The T6 and T76 tempers both exhibited increases in formability in response to warm forming. The PA temper had a significant positive response to short-duration warm forming and subsequent paint baking, with the yield strength increasing from 420 MPa to 512 MPa following this thermal cycle. For the T6 temper, the warm-forming cycle showed a trend characteristic of retrogression and re-aging, with the warm-forming cycle dropping the yield strength from 566 MPa to 534 MPa and the subsequent paint-bake re-aging to 554 MPa. The effect of aging during pre-heating prior to warm forming on the warm constitutive response of the PA and T76 tempers was also investigated. Both tempers exhibited rather different aging responses to short-duration thermal cycles. In the PA temper, this manifested as an increase in at-temperature yield strength and loss of hardening rate. In contrast, the T76 temper exhibited a drop in strength since this temper is already over-aged prior to warm forming. Both the PA and T76 tempers showed comparable at-temperature strain-rate sensitivity.
Introduction
While the strength-to-weight ratio of 7000-series aluminium alloys is attractive for automotive lightweighting applications, the manufacturing of structural automotive components is challenging due to their limited room temperature formability. The 7000-series alloys are also susceptible to stress corrosion cracking (SCC), which must be accounted for via process route design prior to use in automotive applications, e.g. by over-aging as a means to reduce SCC [1], [2], [3]. To address formability limitations, there has been a keen interest in warm forming (WF) [4], [5], [6], [7], [8], [9], as well as hot stamping [10], [11], [12], [13], of high strength 7000-series alloys in recent years. In hot stamping, the sheets are first solutionized by heating above approximately 470°C to dissolve the precipitates, after which they are formed non-isothermally in a cooled die system. In warm forming, sheets are formed isothermally or non-isothermally at temperatures above approximately 150°C, but below the alloy recrystallization temperature. An advantage of hot forming is the enhanced formability and reduced springback, but the parts require a secondary ageing heat treatment to obtain a peak strength T6 condition. An advantage of warm forming is that secondary ageing treatments can be avoided since the material does not require solutionization prior to forming. The starting temper can be selected such that the secondary ageing that occurs during warm forming and the subsequent paint bake cycle produces a final temper near the peak strength, or intentionally over-aged to increase resistance to SCC.
Sotirov et al. [14] identified formability limits for peak-aged AA7075-T6 sheet and found considerable benefit for warm forming in the 200°C-230°C temperature range. Kumar et al. [4] demonstrate that the limiting draw ratio of AW7020-T6 benefits from warm forming up to 250°C, but with significant degradation in the as-formed yield and ultimate tensile strengths above 200°C. In the study of the warm formability of AA7075-T6 by Wang et al. [7], a decrease in mechanical properties was observed when warm forming above 230°C, which was attributed to precipitate coarsening (over-aging); however, they note some strength loss is recovered during a paint-bake cycle as compared to the as-received material, which saw a mild decrease in strength after paint-baking.
At present, there is not a clear indication of which initial temper should be used for 7000-series warm forming for automotive applications. This research investigates the (i) warm formability and (ii) final property characteristics, such as strength and ductility, of a developmental 7000-series alloy, designated herein as AA7xxx, for three different initial tempers: (i) a pre-aged (PA) temper, (ii) a peak-aged (T6) temper and (iii) an over-aged (T76) temper. The isothermal formability of each temper was characterized at room temperature (~23°C), 150°C, 175°C and 200°C under near plane-strain loading. Tensile samples were extracted from the flanges of the PA and T6 samples after warm forming to understand the effect of the forming process on the subsequent mechanical properties, with and without a 30 minute, 177°C simulated paint-bake. In parallel with the formability experiments, the warm constitutive response of the PA and T76 tempers is characterized at 175°C, as a function of time at temperature and strain-rates relevant to the forming experiments, to provide insight into the observed formability trends. Based on the culmination of these tests, final recommendations are made for the warm forming of this developmental AA7xxx alloy.
Material
The starting sheet material used throughout this study was a 2 mm thick, developmental 7000-series aluminium alloy, designated AA7xxx, supplied by Arconic in the T76 condition. The T76 is an overaged temper which imparts improved stress corrosion cracking (SCC) resistance [2], [3]. The nominal composition of the developmental AA7xxx alloy is given in Table 1. To produce the pre-aged (PA) and peak-aged (T6) tempers, a batch of the AA7xxx alloy was solutionized at approximately 473°C for 10 minutes in a fluidized sand furnace (FB-08C Fluidized Bath Furnace, Techne®), water quenched, and subsequently aged. The aging process for the PA temper [10], [15] was a two-step process of 48 hours of natural aging at room temperature followed by 4 hours of artificial aging at 100°C in the fluidized sand furnace. The T6 (peak aged) treatment was taken as a single-step 24 hour aging process at 120°C. Between any lulls in heat-treating, samples were held in dry ice (-78.5°C) to prevent unintentional aging.
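For reference, the heat-treatment routes above can be restated as a simple data structure. This is purely a restatement of the schedules given in the text; the field names are our own.

```python
# Temper processing routes restated from the text (field names are
# illustrative, not from the original work).
TEMPER_ROUTES = {
    "PA": {   # pre-aged: solutionize, quench, then two-step aging
        "solutionize": {"temp_C": 473, "time_min": 10},
        "aging_steps": [
            {"temp_C": 23, "time_h": 48},   # natural aging
            {"temp_C": 100, "time_h": 4},   # artificial aging
        ],
    },
    "T6": {   # peak-aged: solutionize, quench, single-step aging
        "solutionize": {"temp_C": 473, "time_min": 10},
        "aging_steps": [{"temp_C": 120, "time_h": 24}],
    },
    "T76": {  # over-aged, used in the as-received condition
        "solutionize": None,
        "aging_steps": None,  # supplier schedule not given in the text
    },
}
```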
Representative stress-strain curves for the PA, T6 and as-received T76 tempers are included in Figure 1; note that the T76 response exhibits a nearly flat region immediately after yielding. Fribourg et al. [16] have explored this behaviour in detail for AA7449, an Al-Zn-Mg-Cu class alloy, and showed that this flat region after yielding emerges and becomes more prominent as the extent of overaging increases. This trait is thought to be associated with the transition from shearable to non-shearable precipitates with overaging.
Limiting Dome Height Testing and Analysis
Formability characterization of the PA, T6 and T76 tempers was undertaken at room temperature (23°C), 150°C, 175°C and 200°C. Testing was accomplished using a double-acting hydraulic press with tooling heated by closed-loop controlled embedded heater cartridges, as described in [9] and [17]. The specimens were heated in-situ, with typical heating curves measured at the specimen flanges plotted in Figure 2c). The initial heating rate corresponds to the placement of the specimen within the tooling. At approximately 60 seconds, the tooling is closed, and the clamping load is applied, giving rise to an increase in heating rate, followed by asymptotic heating to the target test temperature. A minimum of three repeated LDH tests were completed for each test condition. The warm-forming cycle entailed a 3 minute heating time of the blank to the target temperature after clamping, with the blank within 20°C of the targeted temperature during approximately the final 60 seconds of heating. The forming duration at elevated temperature varied as a function of dome height. To minimize variation, all samples were kept at elevated temperatures for 60 seconds from the time testing began prior to water quenching after testing was completed.
A 101.6 mm diameter Nakazima punch was used in the formability testing per the ISO12004-2:2008 standard [18], as shown in Figure 2a), with a corresponding annular Nakazima die set with an inner diameter of 106 mm and a die entry radius of 6.35 mm. The formability was evaluated using a near plane-strain dog-bone type geometry with a gauge width of 76.2 mm and entry radius of 20 mm [18] under limiting dome height (LDH) testing. Samples were tested in the transverse sheet direction (TD), i.e. the sheet rolling direction was perpendicular to the sample major axis and major strain direction. A clamping force of 330 kN was applied to prevent material draw-in. The punch speed during testing was …
In the formability testing, the Nakazima samples were tested to failure and stereoscopic digital image correlation (DIC) techniques were used to capture the deformation and corresponding strain fields on the sample surface not in contact with the punch over the duration of each test. A sample strain profile prior to cracking is included in Figure 2b). Deformation of speckled samples was tracked at 140 frames per second using two 4 megapixel FLIR Gazelle Camera Link cameras. The camera, lens and test setup produced a pixel density of approximately 14 pixels/mm. Image analysis was completed using the Correlated Solutions Vic3d© version 8 software package. The following key analysis parameters were used: a subset of 25-35 pixels, a step size of 3 pixels, and a Gaussian strain filter of 5 pixels. These analysis parameters influence the minimum strain resolution and control the range over which averaging occurs [11], [19], and thus their effect is more significant in the presence of large strain gradients. Necking onset, corresponding to reaching the material forming limit, was determined using the acquired DIC data in conjunction with a necking detection scheme referred to herein as the "curvature approach". Necking is assumed to have initiated once changes in the surface curvature occur due to initiation of an acute neck. The curvature approach applied in this work is termed the Enhanced Curvature Method (ECM) and was built upon the method developed by DiCecco et al. [9], [17]. For comparison purposes, limit strains are also computed using the ISO12004-2:2008 methodology.
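Although the details of the ECM are given in DiCecco et al. [9], [17], the general logic of a curvature-based necking criterion can be sketched in a few lines. The snippet below is a simplified illustration of the idea (flagging the DIC frame at which a localized, out-of-plane curvature change appears along a section through the neck); it is not the ECM implementation itself, and the function name and threshold value are placeholders.

```python
import numpy as np

def detect_necking_frame(z_profiles, curvature_jump=5.0):
    """Flag the first DIC frame showing an acute neck, based on a sudden
    localized increase in out-of-plane surface curvature.

    z_profiles     : (n_frames, n_points) out-of-plane coordinates sampled
                     along a section crossing the eventual neck.
    curvature_jump : threshold on peak curvature as a multiple of its
                     early-test baseline (placeholder value).
    """
    # Approximate the profile curvature with a second finite difference;
    # for shallow profiles kappa ~ d2z/dx2.
    peaks = np.array([np.abs(np.gradient(np.gradient(z))).max()
                      for z in z_profiles])

    # Baseline curvature from the early, pre-necking portion of the test.
    baseline = np.median(peaks[: max(3, len(peaks) // 4)])

    exceeds = peaks > curvature_jump * baseline
    return int(np.argmax(exceeds)) if exceeds.any() else None
```

In such a scheme, limit strains would then be read from the frames immediately preceding the detected necking frame.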
Aging Response of PA and T6 Tempers to Warm Forming Process
Tensile testing was performed on as-thermally-processed samples to characterize their response to the various aging treatments. ASTM-E8 sub-sized tensile coupons [20] were machined from the flanges of the warm formed LDH samples in the PA and T6 tempers. The gauge length and width of the sub-sized specimens were 25 mm and 6 mm, respectively. The samples were tested under two conditions: (i) as-extracted from the flanges following warm forming, and (ii) as-extracted with a subsequent simulated paint-bake of 177°C for 30 minutes in the fluidized sand bath furnace. Three repeated tensile tests were completed for each condition.
Tensile testing was completed in the sheet rolling direction on a 100 kN load frame (MTS Criterion 45) and DIC techniques were used to extract nominal strain data, using 25 mm virtual extensometers, from each tensile sample. The crosshead speed for all tests was 0.25 mm/s.
Warm Constitutive Response of PA and T76 Tempers
Constitutive characterization of the AA7xxx-PA and -T76 tempers was completed at 175°C for nominal strain rates of 0.01/s and 0.1/s. The PA and T76 tempers were chosen since they were expected to exhibit the strongest and weakest responses to elevated temperature deformation, respectively.
Testing was performed using a closed-loop thermo-mechanical simulator (Gleeble-3500, Dynamic Systems Incorporated) with the ability to resistively heat samples at rates up to 10,000°C/s. A modified ASTM-E8 sub-sized tensile sample was used to minimize thermal gradients along the gauge length, in which the grip length was increased to 60 mm and the gauge length was reduced to 20 mm. The sample width was nominally 6 mm. Further sample details are given in [21]. Virtual extensometers using DIC techniques were used to measure strain histories over each test.
The heating profile entailed a linear ramp from room temperature to 175°C over 10 seconds, followed by a hold time at the prescribed temperature, after which isothermal tensile testing was performed. In this work, testing was done with hold times of 5 s and 180 s. The intent of the two hold times was to assess any changes in the elevated temperature constitutive response that may arise from aging during heating to the forming temperature of 175°C, prior to forming.
Formability Results
Major true limit strains as a function of temper and forming temperature are plotted in Figure 3, as well as dome heights measured at the time of failure using DIC analysis, with the PA (red) results in a), T6 (blue) results in b), and T76 (black) results in c). The minor limit strains ranged from 0.016 to 0.030 across all conditions.
The limit strains calculated using the enhanced curvature method (ECM) are represented by the solid bars and the limit strains determined using the ISO12004-2:2008 method (hereafter referred to as the ISO standard) by the dashed bars. For each temper, some disagreement is observed between the quantitative values using the ECM and ISO methods; however, both approaches generally showed the same trends. As such, the following discussion focuses primarily on the ECM limit strains. In Figure 3, the RT limit strain of the PA temper is approximately 0.21, while the T6 and T76 tempers are lower at 0.18 and 0.12, respectively. These rankings are consistent with the RT work-hardenability trends in Figure 1, in that the tempers with the higher degree of work hardening exhibit the highest RT formability. For the PA temper, the maximum limit strains are achieved between 150°C and 175°C, depending on the limit detection scheme applied. Nevertheless, at 200°C, both limit strain measures show a reduction of formability of the PA temper relative to RT as well as a marked increase in measurement scatter. The trends in the T6 formability (Figure 3b) roughly mirror those of the PA temper, with the T6 formability improving up to 0.22 major true strain at 175°C before plateauing or degrading at 200°C based on the ECM data. The T76 temper exhibits an improvement in formability from RT to 150°C, from 0.12 to 0.17 major true strain; however, the formability of the T76 temper lies below that of the PA and T6 tempers for all temperatures.
Although the forming limit strains were improved with warm forming for all tempers, the measured dome heights at failure decreased with increases in temperature for all tempers. Comparison of the data in Figure 3 reveals that the dome heights for the PA temper are slightly higher than those for the T6 temper, while the T76 samples had inferior dome heights. The PA temper also offers excellent RT formability which increased by 10% with warm forming at 150°C.
Secondary Aged Tensile Properties of PA and T6 Tempers Following Warm Forming
Engineering stress-strain curves from tensile samples extracted from the warm-formed (WF) PA and T6 LDH specimens are plotted in Figure 4 for each forming temperature (open symbols). Corresponding data from samples that were paint-baked after forming are also plotted and are distinguished by solid symbols. The room-temperature curves from Figure 1, corresponding to the material condition prior to forming, are replotted for comparison purposes (triangular symbols). For the PA condition, a decrease of 16 MPa in YS relative to RT is observed in the stress-strain response after exposure to the lowest WF temperature of 150°C (Figure 4a)), for which the flanges were at the target temperature for under 120 seconds. After the paint-bake, a significant aging response is observed and the strength of the warm formed PA samples approaches that of the RT T6 condition (seen in Figure 4 d-f). The observed drop and subsequent recovery in strength may be indicative of retrogression and re-aging [22] for the 150°C WF+PB cycle.
After WF at the higher temperatures of 175°C and 200°C, the PA temper shows a positive aging response to both WF and WF+PB, in that the strength increases relative to the RT PA condition. The trends in YS, UTS, UE and TE (total elongation) are summarized in Figure 5 a)-d). After a 200°C forming cycle and paint bake, the ductility in UE and TE of the PA and T6 tempers approximates that of the as-received T76. The yield and tensile strengths also appear to converge towards the overaged T76 properties, although they are higher after paint-bake.
From an SCC susceptibility standpoint, the trend towards overaging in the PA temper following the 200°C+PB route may be optimal [2], [3], while offering improved strength, albeit with no formability improvements relative to the lower warm-forming temperatures. The RT and 150°C forming routes, by contrast, offer excellent formability for a 7000-series alloy; however, corrosion studies are required to assess the SCC resistance following this processing route. A softening in the stress-strain response of the T6 temper relative to the RT baseline is observed after warm forming (Figure 4 d-f), which is attributed to a retrogression response [16]. This effect is most significant at 175°C, with a drop in YS and UTS of 31 MPa (6%) and 32 MPa (6%), respectively. A positive re-aging response is noted following PB of the T6 samples for all WF cycles, with the most significant re-aging again occurring at 175°C.
With consideration to formability and final strength characteristics, the T6 temper formed at 175°C offers improved formability without a significant degradation in strength from over-aging [3], while offering improved strength relative to the as-received AA7xxx-T76 alloy. At 200°C, the formability benefits are more muted, but signs of overaging are more prominent.
Warm Constitutive Behaviour of the PA and T76 Tempers
To further understand the effects of the various starting tempers during warm forming, the engineering stress-strain curves for the PA and T76 tempers from thermo-mechanical testing at 175°C are plotted in Figure 6 a) and b), respectively. The tensile properties, comprising YS, UTS, and UE, are summarized in Table 2. Material rate sensitivity was examined by performing these warm stress-strain tests at strain rates of 0.01 and 0.1/s. In addition, the effect of aging during heating prior to forming was examined by introducing a hold time at elevated temperature prior to isothermal tensile testing. The warm constitutive behaviour of the two starting tempers is dramatically different: the T76 temper begins to diffusely neck immediately after yielding, whereas the PA temper exhibits conventional work hardening at 175°C for all hold times and strain rates.
Figure 6 - Engineering stress-strain response of the PA (a)) and T76 (b)) tempers at 175°C. Note that the T76 temper begins to diffusely neck immediately after yielding in figure b), whereas the PA temper shows conventional positive work hardening.
Given the lack of work hardening in the elevated-temperature T76 data, there is effectively no uniform plastic elongation, and yet the T76 temper responded positively to warm forming. In Figure 6 b), a positive rate sensitivity is observed for each hold time, with an approximately 7% increase in yield strength from 0.01/s to 0.1/s. This positive rate sensitivity is the driving force promoting stability during diffuse necking, leading to the formability improvements in the T76 temper with warm forming. For the PA condition, a positive rate sensitivity (increase in strength) of approximately 6% between the 0.01/s and 0.1/s conditions was observed, independent of hold time, and the shapes of all tensile curves were relatively consistent, per Figure 6 a). At temperature, the PA warm formability (ECM-based data) saw a maximum improvement of only 10% at 150°C and almost no difference relative to RT at 175°C. In contrast to the T76 temper, the PA temper had strong RT work-hardenability and showed correspondingly good formability. In addition, the PA temper exhibited consistently higher formability than the T76 temper at all temperatures, which is largely attributed to the prominent work hardening in the PA condition.
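As a quick check, the strain-rate sensitivity exponent m implied by the quoted strength increase can be estimated from m = ln(sigma2/sigma1) / ln(rate2/rate1). The short Python calculation below uses the ~7% figure from the text; the resulting m of roughly 0.03 is a back-of-envelope estimate, not a value reported by the authors.

```python
# Back-of-envelope strain-rate sensitivity exponent from the two test rates.
# The ~7% strength increase from 0.01/s to 0.1/s quoted above implies m ~ 0.03.
import math

rate1, rate2 = 0.01, 0.1   # strain rates [1/s]
sigma_ratio = 1.07         # ~7% higher yield strength at the faster rate

m = math.log(sigma_ratio) / math.log(rate2 / rate1)
print(f"rate sensitivity exponent m = {m:.3f}")  # ~0.030
```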
For the PA temper, the effect of the longer (180 s) hold time at 175°C prior to tensile testing is an increase in yield strength of about 24 MPa compared to the shorter (5 s) hold data. This increase is attributed to precipitation strengthening (aging) prior to commencing the tensile test. In contrast, the overaged T76 temper experiences additional over-aging with increasing hold time and exhibits an at-temperature decrease in yield strength of 9 MPa from a 5 s to a 180 s hold at a nominal rate of 0.01/s.
Conclusions
The warm formability and final strength characteristics of a developmental AA7xxx alloy were assessed in the PA and T6 tempers, as well as a baseline T76 temper, as a function of temperature. The PA temper offered the highest room-temperature limit strain of 0.21, compared to 0.18 for the T6 temper at RT. The PA temper saw a 10% improvement in formability up to 150°C to reach a limit strain of 0.23, while the highest formability of the T6 temper was 0.22 at 175°C. The overaged temper saw some benefit from warm forming but was, in general, the least formable temper under all conditions.
Tensile testing of samples exposed to the warm-forming time-temperature cycle was completed with and without a paint bake. It was observed that the PA temper showed a strong positive response in strength to the combination of WF and PB at 150°C and 175°C. With the 200°C WF+PB cycle, the PA temper had tensile properties approaching the baseline T76 properties, indicating this cycle may lead to overaging of the initial PA temper. The T6 temper showed a softening response with warm forming, most notably at 175°C, consistent with retrogression, and exhibited recovery in strength following the simulated paint-bake.
Warm tensile testing of the PA and T76 tempers at 175°C and the corresponding formability results suggest that the positive strain-rate sensitivity of the alloy at elevated temperatures may promote stability during diffuse necking, hence the positive WF response of the T76 temper despite it diffusely necking immediately after yielding in uniaxial tension at 175°C. Time at 175°C had little influence on the strain-rate sensitivity but did lead to in-situ aging (strengthening) of the PA temper and coarsening of the T76 temper. | 2022-06-01T20:07:59.150Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "fe1e49ab31ca4ea9190d73c30932bf7e06c6098c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1238/1/012087",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fe1e49ab31ca4ea9190d73c30932bf7e06c6098c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
235412632 | pes2o/s2orc | v3-fos-license | Delayed elimination communication on the prevalence of children's bladder and bowel dysfunction
To determine the prevalence of bladder and bowel dysfunction (BBD) and its relationship with delayed elimination communication (EC) in children, a cross-sectional study was carried out in kindergartens and primary schools in mainland China. A total of 10,166 children ranging from 4 to 10 years old were included, and 10,166 valid questionnaires were collected; 409 children were diagnosed with BBD. The overall prevalence was 4.02% (409/10,166) and decreased with age, from 6.19% at age 4 to 1.96% at age 10. With prolonged use of disposable diapers (DDs), the commencement of EC was significantly delayed by parents, and the prevalence of BBD amongst these children increased (P < 0.001). The prevalence of BBD among children who stopped using DDs within the first 12 months and after more than 24 months was 2.79% and 4.38%, respectively. Additionally, the prevalence among children who started EC within 12 months after birth and those who never engaged in EC was 1.36% and 15.71%, respectively. Early introduction of EC and weaning off DD usage are associated with a lower prevalence of BBD in children in China.
A functional relationship between the bladder and bowel has been postulated due to their close anatomical proximity, and both share innervation of the parasympathetic S2-S4 and sympathetic L1-L3 nerve roots [2]. However, no neurological basis for the combined dysfunction of the two organs has been clearly recognized.
Elimination communication (EC) is also known as 'natural infant hygiene' and is sometimes referred to as 'baby-led potty training' or 'assisted infant toilet training'. Elimination refers to the act of defecation or urination. Elimination communication is a two-way process in toilet training. When the child shows cues of elimination, such as crying, squirming, straining, wriggling, grimacing, fussing, and vocalizing, the caregiver can coordinate the elimination process with audio cues (a soft whistle or hum) while holding or sitting the child with thighs apart over the toilet, rather than allowing the child to eliminate in their DDs [5].
The timing of toilet training may vary somewhat between countries and cultures. It has been suggested that children less than two years old are not considered ready for toilet training because of immature bladder and bowel control, as well as a lack of the physical skills needed to go to a toilet and remove their own clothes. The American Academy of Pediatrics (AAP) had suggested that children should use DDs until they are considered ready for toilet training [6]. However, whether DDs should be used before and until the maturation of voiding and defecation skills remains debatable. Furthermore, whether early active defecation guidance and discontinued DD usage, combined with the commencement of EC and potty training, can effectively decrease the occurrence of BBD in children remains to be answered.
Traditionally in China, many families simply use a cloth or split-crotch pants (Fig. 1) instead of DDs, or after weaning off DDs. They also practice "Baniao" (a Chinese term), which describes lifting the child into a semi-squatting position with their thighs apart over the toilet or potty (Fig. 2). Some of these intrinsic connections should be discussed in more detail, particularly the relationship between DD use and delayed EC or delayed toilet training, as well as the relationship between BBD prevalence and the start and end times of EC. DD use cessation was defined as the point at which the caregiver proactively stopped diaper use, transitioning gradually from assisted infant toilet training to the start of independent toilet training. Bladder control was considered achieved when the child is aware of the need to void, able to express the need with verbal and/or non-verbal communication, manages to stay dry, and has no urinary retention [7].
Although the physiological mechanisms of BBD have been examined in a number of previous studies, no definite conclusions have been drawn. It has also been reported that excessive dependence on DDs and the consequent delayed EC may be associated with BBD symptoms, such as voiding frequency and/or urgency, an unstable bladder, incontinence, and other defecation problems [8]. It is therefore worth exploring whether voiding and defecation problems in young children may be related to prolonged DD usage and delayed EC, particularly before the age of 2 years. Epidemiological surveys of BBD prevalence and its risk factors can help elucidate the pathogenesis.
Figure 1. With split-crotch pants, it is more convenient for parents to wean children off diapers in a timely manner and to allow the children to control their own urination and defecation. (Photo taken and provided by the first author and colleagues.)
Methods
General information and methods. From March 2018 to June 2019, an epidemiological survey was performed in 12 cities with high population densities located throughout mainland China, including Shenzhen (south), Xiamen (southeast), Zhengzhou (central), Xi'an (west), and Harbin (north). Nineteen kindergartens and 18 primary schools in total were selected by means of systematic sampling; classes were randomly chosen according to the children's age distributions in the kindergartens/primary schools.
The investigation was approved by the Research Ethics Committee of the First Affiliated Hospital of Zhengzhou University, and informed consent forms were obtained from all patients or legal guardians for participation in this study, including agreement to the publication of identifying information or images in an online open-access publication without name, gender, age, or other personal private information. Before the survey, the medical staff engaged in questionnaire collection and guidance were trained, to avoid respondents' misunderstanding and uncertain recall of the timing of EC and DD use. As derived from parents or caregivers filling in the validated BBD questionnaire, the number of children surveyed per school was more than 200. The cross-sectional paper survey consisted of a self-administered anonymous questionnaire completed by the parents and/or caregivers. If parents were unsure about some questions, they were allowed to take the questionnaires home to complete and return them within one week. The procedure for filling out the questionnaires and the BBD diagnostic criteria were in accordance with the ICCS guidelines [3].
The main contents of the questionnaire included the following: ① General information: sex, age, date of birth, height, weight, details of primary caregivers (including parents, grandparents and babysitters); ② DD usage (whether DDs were used; age when DD use was stopped; whether DDs were used during the day, at night, or all day; and adverse reactions); ③ the start time of EC and the onset of toilet independence; ④ the current voiding and defecation behaviours; ⑤ start time of urination training/bowel training, time of urination/defecation independence. The information and contents of the questionnaires were kept confidential and only known by medical staff participating in this study.
Inclusion, exclusion and diagnostic criteria. The inclusion criteria were as follows: children aged 4 to 10 years old with normal urinary anatomy who had not undergone surgery of the urinary system, pelvic organs, or nervous system. BBD was defined according to the 2013 ICCS guidelines, which describe a combination of functional bladder and bowel disturbances. Accordingly, any case meeting this definition according to the information collected by the questionnaires was classified into the BBD group. The specific types of voiding and defecation abnormalities associated with BBD were diagnosed according to the International Classification of Diseases 10th Revision (ICD-10) and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), and the functional defecation disorder part of the BBD spectrum was based on the Rome III criteria [2,3]. In the kindergartens and primary schools we surveyed, all children have a routine physical examination once or twice a year, including routine urine tests and abdominal and urinary ultrasonography. Based on the results of these prior medical examinations, the exclusion criteria were as follows: children with organic diseases of, medication for, or prior surgery of the urinary, gastrointestinal, and/or nervous systems. Children with current UTIs were also excluded from the study, because concurrent symptoms such as urgency, voiding frequency, and urge incontinence may be related to the UTI, resulting in interference or false positives in our diagnosis of BBD.
Statistical analysis and processing. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 11.0 for Windows (IBM Corp., Armonk, NY). Quantitative data with a normal distribution are expressed as the mean ± standard deviation (x̄ ± s). Fisher's exact tests and χ2 tests were used for group distribution comparisons, and multigroup rates were compared using χ2 tests, trend χ2 tests, and logistic regression.
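As an illustration of the kind of group comparison described above, the minimal Python sketch below runs a chi-square test on a 2x2 contingency table with SciPy. The subgroup sizes are hypothetical; only the prevalences (2.79% vs. 4.38%) are taken from the text.

```python
# Chi-square comparison of BBD prevalence between two DD-usage subgroups.
# Subgroup sizes are hypothetical; prevalences are taken from the paper.
import numpy as np
from scipy.stats import chi2_contingency

n_early, n_late = 3000, 3000                  # hypothetical subgroup sizes
bbd_early = round(n_early * 0.0279)           # stopped DDs within 12 months
bbd_late = round(n_late * 0.0438)             # used DDs beyond 24 months

table = np.array([[bbd_early, n_early - bbd_early],
                  [bbd_late,  n_late - bbd_late]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```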
Results
Questionnaire collection. A total of 11,050 questionnaires were distributed, and 10,166 valid questionnaires were included in the analysis, comprising 5,118 (50.34%) males and 5,048 (49.66%) females. The response rate for the questionnaires was 92% (10,166/11,050).
The prevalence of BBD and its associations with gender and age are shown in Table 1 and Fig. 3. As shown in Fig. 3, the prevalence of BBD decreased with age (P < 0.001). The average BBD prevalence was 3.73% (191/5,118) in boys and 4.32% (218/5,048) in girls. No significant difference in the prevalence of BBD between boys and girls was noted (P > 0.05).
The prevalence of BBD was related to the length of DD usage and the EC starting time (Table 2). The results showed: ① according to the Spearman correlation coefficient analysis (r = 0.236, P < 0.001), prolonged usage of DDs was associated with a significant delay in starting EC and a significantly increased prevalence of BBD (P < 0.001); ② the prevalence of BBD was significantly lower amongst children who never used DDs (P < 0.001) compared to children who used DDs until 12 or 24 months of age, and BBD prevalence was also significantly lower amongst children who stopped DD usage before 12 months compared to those who used DDs until after 24 months (P < 0.001); BBD prevalence was also higher amongst children who used DDs during the daytime or for the entire day compared with those who never used DDs (P < 0.001), and children using DDs during the day were more likely to have BBD (P < 0.001) than children who used DDs only at night; ③ with regard to the introduction of EC, the prevalence of BBD was lower amongst children who started EC within 12 months of age compared with those who started EC after 12 or 24 months (P < 0.001).
Risk and protective factors for BBD prevalence.
To screen out the main influencing factors affecting the prevalence of BBD, the following factors were simultaneously introduced into the logistic regression model for analysis (Table 3).
The results of the logistic regression analysis (Table 3) showed: ① the continued prolonged use of DDs (for over 12 and 24 months) and daytime usage of DDs are risk factors for BBD in children (OR > 1, P < 0.05); ② DD usage only at night appeared to be a protective factor against BBD (OR < 1, P < 0.05); and ③ starting EC within 12 or 24 months of age was a protective factor against BBD (OR < 1, P < 0.05).
Table 2. Relationship between the prevalence of BBD and DD usage and EC. *P < 0.05 was considered statistically significant for each pair of subgroups. To verify whether there were differences among the subgroups, we further conducted pairwise comparisons. The letter 'a' represents the first subgroup, including those who never used DDs or never engaged in EC; 'b' represents the second subgroup, including those who used DDs for less than 12 months, used DDs only in the daytime, or started EC within 12 months after birth; 'c' represents the third subgroup, including those who used DDs for 13-24 months, used DDs only at night, or started EC at 13-24 months after birth; and 'd' represents the fourth subgroup, including those who used DDs for over 25 months, used DDs for the entire day, or started EC after 24 months of age.
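As a rough illustration of how odds ratios in this kind of analysis are obtained, the sketch below fits a logistic regression on synthetic data with statsmodels and exponentiates the coefficients. The predictor names mirror the study's factors, but the data and effect sizes are invented for demonstration, not taken from Table 3.

```python
# Deriving odds ratios from a logistic regression on synthetic data.
# Predictors mimic the study's factors; values are simulated, not real.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
dd_over_24m = rng.integers(0, 2, n)            # 1 = used DDs beyond 24 months
ec_within_12m = rng.integers(0, 2, n)          # 1 = started EC within 12 months
logit = -3.2 + 0.5 * dd_over_24m - 0.8 * ec_within_12m
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)  # simulated BBD outcome

X = sm.add_constant(np.column_stack([dd_over_24m, ec_within_12m]))
res = sm.Logit(y, X).fit(disp=0)
print("odds ratios:", np.exp(res.params[1:]))  # OR > 1 risk, OR < 1 protective
```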
Discussion
Disposable diapers are undoubtedly one of the greatest inventions of the twentieth century and are widely used around the world. In China, they have become increasingly popular over the past few decades. With their strong water absorption ability, children using DDs can sometimes go a whole day without needing to use the toilet. Owing to this strong absorption and their disposability, DDs are now also used by some Chinese medical workers who need to work long hours to combat COVID-19, and sometimes by astronauts. It is well known that the prevalence of BBD gradually decreases with increasing age due to the natural maturation of the body. However, with the popularisation of DD usage, it is not known whether this has had any effect on the prevalence of BBD. To our knowledge, there is very limited literature addressing this, and our study is one of very few involving such a large study population to address DD usage, EC, and/or the prevalence of BBD [7,9]. This kind of study is made possible in China because DD usage only became popularised in recent decades, and there are still many families who have not yet resorted to DDs. Therefore, although this is not a multicentre study, the population base provides a good comparison between DD and non-DD usage. Also, as most families in China had culturally been using the technique of early EC in toilet training before the introduction of DDs, and there seems to have been a shift towards postponing the introduction of EC in children using DDs, it is also interesting to see whether this has any effect on toilet training in children and, in this study, on the prevalence of BBD. Our results, using the Spearman correlation coefficient analysis (r = 0.236, P < 0.001), clearly demonstrated the interaction between prolonged DD use and delayed EC. Logistic regression showed that prolonged use of DDs during the daytime is a risk factor for BBD (OR > 1, P < 0.05). As children usually only engage in EC training during the daytime, DD usage during the day means less time practicing EC and may have a negative impact. This is also supported by the finding that DD usage only at night, and not during the day, was a protective factor against BBD. Furthermore, starting EC within 24 months of age was a protective factor against BBD, and even more so when started before 12 months of age.
Elimination communication is not actually a new concept. It is a means of assisted infant toilet training, especially useful for young children who cannot walk to the toilet by themselves, and is described as the process in which caregivers assist and enable children to meet their basic cleanliness and health needs for toileting from early infancy via verbal and nonverbal communication, or the initiation of potty use regardless of its frequency [7]. The specific implementation of EC can involve placing the child in a certain posture to help urination or defecation, such as thighs up and apart, buttocks down, leaning against the adult's abdomen, and then allowing the baby to urinate into the toilet or urinal (as in Fig. 2). While doing this, parents or caregivers may issue a verbal cue, such as a "hush" sound, until the baby urinates or defecates. The practice of EC has been documented for centuries and continues, even in recent years, in most resource-limited regions where DDs are not widely available or affordable. However, where DDs are readily available, they have quickly become popular and widely used due to their strong water absorption capability and disposable convenience, and they have become a necessity for solving voiding and defecation problems in childcare [10,11]. The popularisation of DD usage is not without downsides, such as its impact on the environment and other social and family issues. In China, children are often left in the care of caregivers such as grandparents while parents are at work during the day. Due to the convenience of DDs, caregivers (as well as parents) have a tendency towards prolonged DD use, thus delaying the introduction of EC and toilet training. It is postulated that overlooking early EC and/or assisted infant toilet training may decrease opportunities for increasing the bladder's normal urine storage capacity and urethral and anal sphincter tension through everyday behaviours, as well as remove a training method by which children can gradually develop self-controlled voiding and defecation behaviours [11,12]. However, opinions regarding the EC start time are not consistent between countries and regions. In China, children who do not use DDs can sometimes start EC as early as 1 to 2 months of age. Caregivers are able to observe whether a baby needs to defecate based on the baby's facial, vocal, and physical features or expressions. The aim is to minimize contamination of the child's body by urine or faeces and to reduce the cleaning of cloth diapers.
In the 2003 edition of the AAP guidelines for children's toilet training, it is recommended that training be started when the infant's nerves, muscles, language, and bladder sphincter can be controlled, usually between 18 months and 4 years of age [6]. More recent articles have challenged this, suggesting that an effective period for cultivating spontaneous voiding and defecation, from a physiological point of view, is around 6 months of age [5,13]. At 6 months of age or older, if the caregiver responds to the baby's excretion cues, communication between the caregiver and the infant can be established; this is referred to as 'auxiliary baby stool training' or EC [14,15]. Although children cannot fully learn to void and defecate spontaneously before age 2 years, the nerve reflex or biological rhythm formed in the drainage tract can cultivate the habit of spontaneous voiding and defecation. The normal functions of urine storage, voiding, and defecation are governed by sympathetic, parasympathetic, and somatic nerves and are ultimately coordinated by the spinal cord, brainstem, midbrain, and higher cortical pathways [16]. Beginning in the neonatal period, the functions of the bladder and bowel are regulated by nerve pathways connected to the cerebral cortex, while voiding and defecation control in infancy is gradually cultivated through acquired learning [17], including EC.
It is known that the internal and external urethral sphincters (EUS) are vital for urinary control. The internal urethral sphincter functions as a unit with the trigone and bladder base to store urine and is controlled by the sympathetic nervous system via the hypogastric nerve (T10-L2), while the EUS and skeletal muscles are controlled by parasympathetic and somatic motor neurons via the pudendal nerve (S2-S4) [18,19]. The EUS comprises inner smooth muscle surrounded by outer skeletal muscle, which contains both slow- and fast-twitch fibres, with the slow-twitch fibres being more important than the fast-twitch fibres for maintaining tonic force in the urethra. Contraction of the EUS, coaptation of the mucosa, and engorgement of blood vessels in the lamina propria contribute to voiding continence [20]. Studies have shown that more than 17% of children have long-term urinary tract symptoms at school age, and 0.7-29.6% of children have constipation and/or FI [21,22], which is defined as the excretion of stools in places inappropriate in the social context at least once per month in children with a developmental age of 4 years or older. Healthy children accommodate a rectal balloon with a volume of only 20 ml before they feel a sudden urge to defecate, while children with chronic constipation can accommodate a rectal balloon with a volume of up to 120 ml before they feel the sensation to defecate, indicating that many of these children retain stool in the rectal vault. One way to explain faecal retention is Siggaard's 'iceberg phenomenon', whereby only a small portion of the iceberg is visible floating above the surface while the majority is submerged [23]. In approximately 95% of children with FI, no organic cause can be identified, resulting in a diagnosis of functional defecation disorder [24]. Eighty percent of children with functional FI have symptoms associated with functional constipation (FC), with faecal impaction causing overflow incontinence, which is characterized by the involuntary loss of soft stools passing around an obstructing faecal mass [25]. In the remaining 20% of children with functional FI, there are no signs of faecal retention, and the condition is classified as FNRFI [26].
In addition to the proven influential factor of delayed EC, other potential causes of BBD may include functional voiding and defecation disorders, arousal disorders during sleep, mental and psychological issues, hereditary tendency, endocrine dyscrasia and hormonal instability, different child-rearing methods, and different living and growing environments [8,28]. To facilitate the completion of the questionnaires and avoid recall bias as much as possible, the schools of the respondents were mainly distributed in cities and towns; therefore, we were unable to compare results between urban and rural schools in this study. Meanwhile, limitations such as incorrect recall of the duration of DD use and the age at which children started EC may have occurred in the present study, although we tried our best to avoid recall bias during the survey and the subsequent data analysis; we will strive to improve this in future research. From our current study, EC can have a preventive role against BBD and seems to reduce the progression of BBD even when introduced somewhat later in toilet training; it may thus even have a therapeutic role in BBD management. The bladder and rectum are closely related in anatomy and function. Approximately 20-50% of children with bowel dysfunction have combined bladder symptoms, demonstrating that voiding and defecation function affect each other. One study by Luise Borch analysed 73 children in Denmark and demonstrated that a standardized BBD treatment regimen led to the resolution of functional defecation disorder in 70 of the 73 children (96%). Of the children with DUI, 68% had at least a 50% reduction in the number of daytime incontinence episodes attributable to the successful relief of bowel dysfunction, and 27% achieved complete daytime continence [2,27]. Another study found that while actively treating intestinal problems, 95% of the children experienced improved bowel function, and 68% experienced simultaneously reduced daytime urinary incontinence, with some children's enuresis symptoms even disappearing. At the same time, the treatment of voiding dysfunction can also reduce the prevalence of constipation and FI among children [13,29].
Conclusions
The prevalence of BBD has increased significantly during recent decades in mainland China, where DD utilization has risen markedly in recent years. The present study showed that prolonged DD usage (particularly during the daytime) and the resulting delayed introduction of EC in toilet training are significant risk factors for BBD in our population of children. Early introduction of EC and weaning off DD usage are associated with a lower prevalence of BBD in children. | 2021-06-13T06:16:30.806Z | 2021-06-11T00:00:00.000 | {
"year": 2021,
"sha1": "be36c406d1bb3ab7e50926323bdf39a2b5ed9710",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-91704-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d4ebc8020d4687b42c6587afff89cc83d85ef54d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266083798 | pes2o/s2orc | v3-fos-license | DISCRIMINATORY TREATMENT OF FULFILLMENT OF PATIENT RIGHTS IN SERVICES AT FACILITIES BY THE HEALTHCARE SOCIAL SECURITY AGENCY IN INDONESIA
Objective: The research aims to explore the fulfillment and protection of BPJS Health patients' rights against discriminatory actions in health services. This research uses a normative juridical approach with descriptive analysis. Theoretical framework: This study revolves around discriminatory treatment in the fulfillment of patient rights within healthcare services provided by the Healthcare Social Security Agency in Indonesia. The framework is constructed upon three pivotal pillars: the concept of patient rights, the role of the Healthcare Social Security Agency, and the overarching principles of social justice. Method: This research is descriptive and falls within the normative legal research category. Normative legal research is a form of legal research methodology that bases its analysis on relevant laws and regulations applicable to the legal issues that are the primary focus of the research. The approaches used in this research are the conceptual approach, the statute approach, and the case approach. This study utilizes secondary data derived from both primary and secondary sources. Result and conclusion: The results illustrate that health is a basic need of every human being. BPJS guarantees health services for all users by helping to cover the costs of health services. BPJS Health patients have the right to protection and the fulfillment of their rights in health services, although in practice the fulfillment and protection of these rights are sometimes ignored by medical personnel and health facilities. The forms of discriminatory treatment experienced by BPJS Health patients are unprofessional treatment by health or medical personnel, long queues at service units, the provision of substandard drugs, and the limiting of room quotas for BPJS patients. Originality/Value: As a recommendation of the study, health facilities and medical personnel should act professionally, without any discrimination, in providing health services to BPJS Health participants.
INTRODUCTION
Health is one of the basic human needs; without a healthy life, humans experience pain and cannot carry out their daily activities properly. People who are sick will ask for help from health workers, and health workers will provide health services.4
4 Yulius Don Pratama, Sangking, and Thea Farina, "Perlindungan Hukum Terhadap Pasien BPJS Kesehatan Dalam Mendapatkan Pelayanan Kesehatan Di RSUD Dr. Doris Sylvanus Palangka Raya," Journal of Environment and Management 2, no. 2 (2021): 191-99, https://doi.org/10.37304/jem.v2i2.2948.
Healthcare Social Security Agency (BPJS) is a legal entity that is directly responsible to the President and organizes national health insurance for all Indonesian people, especially for civil servants, pensioners, other business entities, and ordinary people. The legal basis for the implementation of BPJS is Law of the Republic of Indonesia Number 24 of 2011 regarding BPJS, which was enacted on November 25, 2011.5 The Healthcare Social Security Agency (BPJS), as one of the organizers of national health coverage, serves to reduce the risk of the community bearing health costs from their own money, in amounts that are difficult to predict and sometimes very large. The community obtains security in the form of monthly premiums for BPJS Health. Thus, all BPJS Health members share the cost of health care, preventing it from becoming an individual burden. The existence of the BPJS Health program is very helpful for the community in reducing medical expenses, so at this time many people use BPJS for health services.6 The role of the government and the duty of institutions in protecting the rights of individuals who have registered as BPJS Health participants are as follows: provide convenience for BPJS Kesehatan patients through the cooperation of BPJS Kesehatan and health facilities in educating patients/customers of BPJS Kesehatan by improving the quality of health services in hospitals; pay attention to the facilities and infrastructure supporting patient rights; and follow up quickly on complaints of patients/customers of BPJS Kesehatan, so that there is no distinction between BPJS Kesehatan patients and/or patients who cannot afford care and general patients.7 Acts of discrimination take forms such as refusal, obstruction, or differentiation of the services provided to BPJS Health patients. Based on a report by the BPJS Watch Advocacy Institute, throughout 2022 there were 109 cases of discrimination experienced by BPJS patients related to drug administration, re-admissions, and deactivated membership. At health centers, the discrimination usually reported to the institute includes the provision of drugs that do not match the ration, so that patients have to buy the shortfall out of their own pockets. Meanwhile, in hospitals, the most complained-about cases are re-admissions, where patients who are under treatment and have not fully recovered are told to go home; the patient is later re-admitted to the hospital for treatment. Another discriminatory treatment is BPJS patients queuing at the general clinic for hours.8 The aforementioned instances demonstrate that health facilities and BPJS Health have not completely enhanced the performance of the best community service. To make matters worse, there are a number of instances in which hospitals have refused BPJS Health patients on the premise that all available beds are occupied, to the point where there have been casualties. Discriminatory behavior like this damages the health care system in Indonesia, and indirectly this behavior does not support reform in the health sector. Therefore, discriminatory treatment between BPJS patients and general patients is not in accordance with the BPJS principles enumerated in Article 2 of Law of the Republic of Indonesia Number 24 of 2011 regarding the Healthcare Social Security Agency (BPJS), namely humanity, benefits, and social justice for all Indonesians.
RESEARCH METHODS
This research is descriptive and falls within the normative legal research category. Normative legal research is a form of legal research methodology that bases its analysis on relevant laws and regulations applicable to the legal issues that are the primary focus of the research. The approaches used in this research are the conceptual approach, the statute approach, and the case approach. This study utilizes secondary data derived from both primary and secondary sources.
PROTECTION AND FULFILLMENT OF BPJS HEALTH PATIENT RIGHTS FROM DISCRIMINATORY ACTIONS
Everyone has the right and responsibility to achieve optimal health. In order to achieve a healthy existence, as stipulated in Article 28H of the Constitution of the Republic of Indonesia, there must be a continuous effort to improve health status. The government has the authority to plan, regulate, organize, nurture, and supervise the implementation of community-wide health initiatives that are equitable and affordable. The government is responsible for upholding the right to health as a basic human right. The state, as an obligation holder, must make several assertions. First, the state must fulfil its domestic and foreign obligations, while individuals and communities are the rights holders. Second, the state does not merely hold authority; it is responsible for fulfilling the rights of its people, both personal and communal, which are guaranteed by international human rights law. Third, if a state does not carry out its responsibilities and obligations, then the state has violated human rights or international law. If such a violation is not addressed by the government of a country, the burden of addressing it will be taken over by the international community.9 The International Covenant on Economic, Social and Cultural Rights (ICESCR) requires state parties to commit to fulfilling the right to health. State parties are required to allocate an adequate budget for health. The Ministry of Health's budget for 2023 is IDR 85.5 trillion, or 47.8 percent of the total health budget of IDR 178.7 trillion. This comprises the budget for the payment of JKN contributions for 96.8 million PBI participants, totalling IDR 46.5 trillion.10 For example, if the largest budget were for military financing, then state parties would be required to do the opposite. This commitment requires the adoption of national instruments or laws and the implementation of the World Health Organization's Primary Health Care (PHC) strategy.11 As an obligation holder, the state has an obligation of conduct and an obligation of result. The obligation of conduct obliges the state to take steps toward the realization of ESCR, while the obligation of result obliges the state to achieve certain results. Thus, the results achieved from the implementation of state obligations are relevant to progressive realization. Given the gradual nature of realization, the fulfilment of state obligations is judged not only by results but also by the steps taken.12 Particularly in health services, doctors, patients, and hospitals are the three legal subjects involved. These three elements form a medical and legal relationship. The relationship formed is generally an object of health maintenance in general and health services in particular. Doctors and hospitals act as providers of health services to patients, while patients act as recipients of health services. The implementation of the relationship between doctors, patients, and hospitals is always regulated by certain rules so that there is harmony in carrying out the relationship between the parties.13
The rights of patients have actually been protected and regulated by several laws, namely the Medical Practice Act, the Health Act, and the Hospital Act. Article 52 of the Medical Practice Act states that patients receiving services in medical practice have the right to obtain a complete explanation of medical actions, request a doctor's opinion, obtain services in accordance with medical needs, refuse medical action, and obtain the contents of their medical records.14 Basically, there are 5 (five) guarantees of patient rights that must be fulfilled by the hospital so that the legal protection of patients as consumers of health services can be realized, namely: (1) a guarantee of information when health services are provided; (2) a guarantee of the security, comfort, and safety of health services; (3) a guarantee of equal rights in health services; (4) a guarantee of freedom of choice over nursing services; and (5) a guarantee of the freedom to claim rights that have been harmed.15 BPJS Health participants who have registered and paid contributions are entitled to these protections. Based on the results of research on BPJS Health patients hospitalized in hospitals, quite a lot of BPJS Health patients did not get their rights when hospitalized or receiving health services. Even though these patients had carried out their obligations as determined by BPJS Kesehatan and the hospital, some of these patients felt disadvantaged in the health service process. Late handling or a lack of information about the patient's condition is often experienced by BPJS Health patients, so it is not uncommon for them to suffer losses that would not occur if the hospital carried out its obligations to patients.16 Regarding the discrimination experienced by BPJS patients in basic health services, a survey conducted by the National Commission of Human Rights in 2020 found that 20.7% of respondents had seen or experienced discrimination in basic health centre services.17 In accordance with these data, the realization and fulfillment of the right to health must fundamentally be based on the principle of non-discrimination. The non-fulfillment of the right to health, which is a state obligation, can be categorized as a form of human rights violation at the level of both commission and omission. For example, problems arise related to the availability of medicines, and the medical treatment and health services provided to patients are not optimal.
The role of BPJS Kesehatan is to realize the right to health services for Indonesian citizens. BPJS Kesehatan covers the costs of health services both at first-level health facilities and at advanced referral health facilities. Cost coverage is carried out on the principle of mutual cooperation, whereby participants with higher incomes pay contributions, while the contributions of participants who cannot afford them are borne by BPJS.16,17
16 Shoraya Yudithia, M. Fakih, and Kasmawati, "Perlindungan Hukum Terhadap Peserta BPJS Kesehatan Dalam Pelayanan Kesehatan Di Rumah Sakit," Pactum Law Journal 1, no. 2 (2018): 164-69.
3.2 DISCRIMINATORY TREATMENT OF BPJS HEALTH PATIENTS IN HEALTH SERVICES
Unprofessional treatment of health or medical personnel
Medical personnel in charge of serving the community do not always carry out their duties properly; patients complain about the poor service provided to those who use BPJS cards for treatment. Some medical personnel treat the service guarantees provided by the government as a barrier, or as a marker of difference, between high-income and low-income communities. In the health care efforts carried out by medical personnel, for example when patients suffering from illness are hospitalized, there is discrimination in handling and services between non-BPJS inpatients and BPJS patients.
BPJS patients complain about a lack of sympathy, poor and unfriendly service during handling, failure to follow agreed scheduling, slow admission processes when choosing a room, inappropriate words spoken to patients, and medical personnel who are slow in handling cases and ignore patients who should receive emergency help. By contrast, the attitude of medical personnel towards non-BPJS patients is very swift and friendly; this difference in attitude blatantly displays the distinction made between patients using the BPJS program and non-BPJS patients.
Furthermore, examinations are sometimes carried out by doctors who are still in training ("koas" doctors), the procedures are difficult, and there is no hospitality for patients. Examinations by koas doctors usually occur because the examining doctor arrives late or cannot come to examine the patient. Difficult procedures leave patients unaware of the steps they have to follow in the administrative process.20 Unprofessionalism in health services basically stems from the capitation tariff system that is applied, which pays claims in advance every month to health facilities based on the number of patients registered at the facility, without calculating the type and number of health services provided. The INA-CBGs tariff system, by contrast, is a payment system with claims paid in packages grouped by disease diagnoses and procedures. This tariff system favours BPJS Health in controlling claim costs. The system is effective but pressures health facilities to serve with claims below the basic cost of health services, so that health facilities are resistant and provide services that are minimal and tend to be poor.21
20 M. Yusuf Sidang Amin Amin, Baharuddin Badaru, and Djanggih Hardianto, "Perlindungan Hukum Terhadap Pasien Pengguna BPJS Terhadap Pelayanan Kesehatan Di Rumah Sakit Wisata UIT Makassar," Journal of Lex Generalis (JLS) 3, no. 3 (2022): 404-17, http://pascaumi.ac.id/index.php/jlg/article/view/1116/1267.
Long waiting times in service unit queues
Based on the North Sumatra Ombudsman Report, dissatisfaction is quite high among patient complaints, namely long waiting times for services of more than 60 minutes, with some patients waiting up to two hours to be served. Queues for examination, hospitalization, and surgery at health facilities are still frequent, and BPJS patients are sometimes numbered separately in hospitals because they are served at separate counters with services different from those for general patients. In general, patients who use BPJS are typically required to wait longer than other patients. Hospitals sometimes reject patients enrolled in BPJS Health due to the lengthy duration of claims processing, because hospitals also require cash flow. Numerous administrative and other requirements must be fulfilled by BPJS patients, and after completing the administrative procedure, they must also undergo a lengthy verification.22,23
Unqualified drug administration
One form of discrimination that is often encountered is discrimination in health services. Discrimination in health care can lead to limited access and low-quality services, such as in the services of doctors and nurses and in the provision of substandard drugs, which slows the healing process of patients. If traced, such drug stock errors affect the patient's recovery, because the patient does not receive medicine as they should; by contrast, patients who do not use BPJS are free to obtain medicine, and it is undeniable that the supply of non-BPJS drugs is always available and rarely runs out for long. Meanwhile, primary clinics, general hospitals, and specialized hospitals are advanced-level facilities.24 The government subsidizes the implementation of healthcare services for the impoverished in hospitals. On the basis of numerous studies examining the break-even cost of government hospitals, only 14.7% of hospitals were able to achieve cost recovery, while 85.3% were unable to break even on costs. Government subsidies reportedly cover only 5% of the deficit, so it is natural that the majority of hospitals cannot break even. The most expensive elements of claim expenses were drug costs (11-31%), accommodation costs (7-26%), room procedures (8-32%), and laboratory tests (6-19%). A timely payment system for health facility claims to BPJS is therefore needed so that health facilities do not engage in discriminatory practices.
8 Anonim, "Tindakan Nakes 'bedakan Pasien BPJS' Dikecam Publik, 'Sangat Tidak Pantas' - Pegiat: 'Itu Bentuk Kecurangan Dan Paling Banyak Terjadi Di Rumah Sakit,'" BBC News Indonesia, 2023, https://www.bbc.com/indonesia/articles/cn06g268n6vo.
On this basis, the government is obligated to provide legal protection through a variety of regulations, such as the 1945 Constitution of the Republic of Indonesia, Law Number 40 of 2004 regarding the National Social Security System, Law Number 24 of 2011 regarding the Social Security Organising Agency, and Law Number 36 of 2009 regarding Health.
Anonim, "Survei Pandangan Masyarakat Terhadap Hak Atas Kesehatan Dalam Sistem Jaminan Kesehatan Nasional Di Indonesia," Komisi Nasional Hak Asasi Manusia, 2020, http://https//www.komnasham.go.id/files/20211007-surveipandangan-masyarakat-terhadap-$IO0X4.pdf.contributions, while participants who cannot afford it are borne by BPJS, 18 and implemented the INA-CBGs payment system, which is a health service bundle tariff covering all hospital cost components, from non-medical to medical services.In the INA-CBGs system, patients are categorized into episodes with associated service costs.BPJS guarantees health services for all users with the promised facilities, which help users in handling costs for health services.The facilities guaranteed by BPJS include first-level health services, namely non-specialistic health services which include service administration, promotive and preventive services, examination, treatment and medical consultation, non-specialistic medical actions, drug services and consumable medical materials, blood transfusions in accordance with medical needs, first-level laboratory diagnostic support examinations, and first-level hospitalization according to indications.BPJS Health also guarantees advanced referral health services, which include outpatient treatment including service administration, examination, treatment, and specialized consultation by specialist and subspecialist doctors; specialized medical actions according to medical indications; drug services and consumable medical materials; implantable medical device services; advanced diagnostic support services according to medical indications; medical rehabilitation; blood services; forensic medicine services; and corpse services.Furthermore, there are also inpatient services, which include non-intensive inpatient care in intensive care, as well as other health services that have been determined by the Indonesian Ministry of Health.Legal protection efforts that can be carried out by BPJS patients are such as Patients can make complaints directly or indirectly.Direct complaints can be in the form of direct face-to-face with related parties by coming to the hospital complaints section or the nearest BPJS Health Office, or through the media telephone service centre or hotline service.And indirect complaints through correspondence, SMS gateway, email, website and social media on behalf of BPJS Health. 1918 Endang Kusuma Astuti, "Peran BPJS Kesehatan Dalam Mewujudkan Hak Atas Pelayanan Kesehatan Bagi Warga Negara Indonesia," J-PeHI: Jurnal Penelitian Hukum Indonesia 01, no.01 (2020): 55-65.19Finensia Aulia Kusumastuti and Mukti Fajar ND, "Upaya Perlindungan Hukum Bagi Pasien BPJS Terkait Sistem Rujukan Rumah Sakit Di Kota Yogyakarta," Media of Law and Sharia 1, no. 3 (2020): 162-75, https://doi.org/10.18196/mls.v1i3.9495.
3.3 RESTRICTION OF ROOM QUOTA FOR BPJS PATIENTS
Quality of service through facilities and infrastructure includes fulfilling the complete needs of patients, patients' families, and service providers to support comfort and excellent service, which includes: (1) rooms and beds according to medical indications (the patient's illness); (2) rooms and beds that are clean, comfortable, and safe; (3) a comfortable patient waiting room; (4) the medical devices needed by patients, available and complete; and (5) complete patient examination tools.26 In several health service places, there are still rooms for BPJS patients that are inappropriate and uncomfortable, and the environment is still dirty. Not only are the room quotas different; even the length of stay differs, with BPJS patients rushed regarding discharge. This is also evident from the attitude shown by medical personnel towards hospitalized BPJS patients: even at first admission, the response of medical personnel is very slow in handling the patient.
24 Sandra Aulia et al., "Cost Recovery Rate Program Jaminan Kesehatan Nasional Bpjs Kesehatan," Akuntabilitas 8, no. 2 (2016): 111-20, https://doi.org/10.15408/akt.v8i2.2767.
25 Aulia et al.
26 Afghan Nanda, Aminah, and Sonhaji, "Perlindungan Hukum Terhadap Paisen BPJS Kesehatan Di RSUP Dr. Soeradji Titronegoro Klaten," Diponegoro Law Journal 6, no. 4 (2016): 1-13, https://doi.org/10.14710/dlj.2016.13739. | 2023-12-08T16:13:25.606Z | 2023-12-05T00:00:00.000 | {
"year": 2023,
"sha1": "0c9f8e8fd21e5de876ab756fb8584cd66df9643a",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.journalsdg.org/jlss/article/download/2053/1021",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "aa2913ff5c4b11cf616f04f2427166e7e92c76dc",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": []
} |
270675580 | pes2o/s2orc | v3-fos-license | MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer
The fusion of multi-modal medical images has great significance for comprehensive diagnosis and treatment. However, the large differences between the various modalities of medical images make multi-modal medical image fusion a great challenge. This paper proposes a novel multi-scale fusion network based on multi-dimensional dynamic convolution and a residual hybrid transformer, which has better capability for feature extraction and context modeling and improves fusion performance. Specifically, the proposed network exploits multi-dimensional dynamic convolution, which introduces four attention mechanisms corresponding to four different dimensions of the convolutional kernel, to extract more detailed information. Meanwhile, a residual hybrid transformer is designed, which activates more pixels to participate in the fusion process through channel attention, window attention, and overlapping cross attention, thereby strengthening the long-range dependence between different modalities and enhancing the connection of global context information. A loss function, including perceptual loss and structural similarity loss, is designed, where the former enhances the visual reality and perceptual details of the fused image and the latter enables the model to learn structural textures. The whole network adopts a multi-scale architecture and uses an unsupervised end-to-end method to realize multi-modal image fusion. Finally, our method is tested qualitatively and quantitatively on mainstream datasets. The fusion results indicate that our method achieves high scores in most quantitative indicators and satisfactory performance in visual qualitative analysis.
Introduction
Since different modalities of medical images have their own advantages and limitations, single-modal medical images usually cannot provide comprehensive diagnostic information for clinical use. Therefore, the fusion of different modalities of medical images, exploiting their respective advantages, has become an important direction of medical image research. In this field, the fusion of structural medical images, e.g., magnetic resonance imaging (MRI), and functional medical images, e.g., positron emission tomography (PET) or single photon emission computed tomography (SPECT), is a research hotspot. MRI, PET, and SPECT are three common medical imaging modes. Specifically, SPECT/PET provides functional and metabolic information through radionuclide uptake [1], while MRI provides high-resolution anatomical and soft-tissue contrast information [2]. The fusion of SPECT/PET and MRI images can provide images with both morphological and functional information and enables more accurate and reliable localization, qualitative analysis, and quantitative analysis of lesions [3]. Therefore, medical image fusion technology is widely used in clinical medicine, where it plays an important role in improving the accuracy of diagnosis, optimizing treatment plans, and enhancing the outcome of operations.
According to the principles of the traditional fusion methods, they can be divided into four categories: sparse representation-based methods [4], spatial domain-based methods [5], frequency domain-based methods [6], and fuzzy set domain-based methods [7]. Most of the above methods use different image transformation or decomposition techniques to extract multi-scale image features and obtain the expression of images in different spatial dimensions. However, due to the manually designed feature extraction or transformation rules, these fusion methods have limited adaptivity and robustness. Additionally, these methods cannot be trained in an end-to-end manner and are not suitable for complex scenes. Moreover, some effective information may be lost in the process of image decomposition and feature extraction, which makes the fusion method fail to balance global consistency and local accuracy.
With the rapid development of artificial intelligence, deep learning technology has been widely used in the field of medical image fusion [8]. The deep learning-based methods do not need a manually designed feature extractor and can extract high-level features from the original image data through the multi-layer structure of the deep neural network to capture the potential information of the image [9][10][11]. However, due to the inherent limitations in its mechanism, the deep learning-based fusion methods still have certain defects. (1) Most of the deep learning-based fusion methods using vanilla convolution have limited feature extraction capability. Compared to vanilla convolution, dynamic convolution has a relatively stronger feature characterization capability by aggregating multiple convolutional kernels. The existing dynamic convolution methods focus only on the dimension of the number of convolutional kernels in the kernel space while ignoring the other dimensions, so they have a limited ability to capture contextual cues. (2) The local connectivity and shared weights in the convolutional layer of the deep learning method make it difficult to model long-range dependencies, resulting in a lack of global consistency in the generated images. (3) Medical images usually contain different forms of noise, including artifacts, uneven brightness, and pseudo structures. However, deep learning-based fusion methods often use pixel-level loss functions that do not consider the correlation between pixels, causing the fusion results to be easily interfered with by noise and reducing the quality of the generated images [12].
To overcome the previously mentioned defects and fully unleash the potential of deep learning methods for medical image fusion, this paper proposes a multi-modal medical image fusion method that uses multi-dimensional dynamic convolution (MDC) and a residual hybrid transformer (RHT), named MDC-RHT. In the proposed method, the convolutional computation is performed using MDC instead of conventional vanilla convolution. To effectively capture both the global features and the local details, an RHT module is designed, which integrates channel attention and window-based self-attention mechanisms. The multi-scale approach used in the overall network design highlights the cross-scale properties. Additionally, a loss function is formulated by combining a content perception loss and a structural similarity loss to improve the resemblance of the fused image to the real image in terms of perception and guarantee that the fused image preserves the structural similarity of the real image. The contributions of this paper are summarized as follows:
1. We propose a novel multi-scale unsupervised fusion network for multi-modal medical images. In order to achieve better fusion, we construct a feature extraction module to acquire richer information.
2. We propose the use of MDC to comprehensively enhance image feature extraction along the four dimensions of the convolutional kernels and learn the complementary attention of convolutional kernels in four dimensions to fully capture rich contextual cues.
3. To solve the problem that some transformer mechanisms cannot perfectly achieve cross-window information interaction, resulting in artifacts in the fused results, the RHT module is designed in this paper. This module can effectively extract global features and enhance the direct interaction of neighboring window features.
The rest of this paper is organized as follows. Section 2 introduces the related work and the motivation, including a brief overview of current medical image fusion methods, a brief introduction to the vision transformer, and an explanation of the motivation of this study. Section 3 provides a comprehensive explanation of the proposed method. The experimental results and analysis are given in Section 4. Section 5 discusses the work of this article. Finally, Section 6 concludes this paper.
Multi-Modal Medical Image Fusion Methods
According to the technological principles of image fusion methods, they can be divided into two categories: traditional methods and deep learning-based methods. The traditional methods include sparse representation methods, spatial domain-based methods, frequency domain-based methods, and fuzzy set-based methods. Specifically, the sparse representation-based methods exploit the sparsity characteristic of the source images to realize image fusion by sparse decomposition and coefficient combination [13,14]. The spatial domain-based methods perform pixel-level operations and fusion directly within the spatial domain of the image. Representative spatial domain-based methods include the pyramid transform [15] and the Gaussian pyramid transform [16]. The frequency domain-based fusion methods transform the source images into the frequency domain and utilize the frequency domain characteristics for fusion. Representative methods include the wavelet transform [17], curvelet transform [18], non-subsampled contourlet transform (NSCT) [19], and non-subsampled shearlet transform (NSST) [20]. The fuzzy set domain-based methods mainly rely on the concept and operations of fuzzy sets to realize image fusion through blurring, rule building, and defuzzification [21].
The deep learning-based methods do not require manually designed algorithms and rules, using data-driven technology to automatically learn complicated feature representations [22]. Zhang et al. proposed a generalized image fusion framework, IFCNN, based on convolutional neural networks (CNNs) [23]. The method exploits an attention mechanism to guide the fusion process and improve fusion performance. However, this method only applies linear element-level fusion rules to combine convolutional features. Cheng et al. proposed a memory cell-based image fusion network called MUFusion, which collaboratively supervises the fused image by introducing a novel memory cell architecture that leverages the intermediate output [24]. Liu et al. proposed a novel MIF framework, which integrates the powerful feature representation ability of a deep learning model and the accurate frequency decomposition characteristics of the discrete wavelet transform (DWT) [25]. Xu et al. proposed an unsupervised enhanced medical image fusion method, EMFusion [26], which evaluates the information content of images by calculating their entropy.
Compared with traditional methods, deep learning-based image fusion methods have made great progress but still have shortcomings. For instance, the convolutional modules used by current training networks overly focus on extracting local features, and they cannot model long-range dependencies of global information. Meanwhile, convolutional networks are constrained by their local receptive fields, which limits their ability to capture a comprehensive panoramic context. Additionally, due to the lack of attention mechanisms, most of the existing methods cannot effectively extract the correlation between regions and channels. With the excellent performance of transformers on computer vision tasks, an increasing number of transformer-based image fusion methods have been proposed.
Application of Transformer in Computer Vision
The applications of transformers in computer vision include image classification, object detection, semantic segmentation, image generation, etc. The vision transformer method proposed by Dosovitskiy et al. applies the transformer to image classification [27]. It abandons the convolution operation in CNNs and fully utilizes the attention mechanism of the transformer. Zheng et al. introduced a novel image segmentation method based on the transformer model [28]. By utilizing the self-attention mechanism of the transformer model, the method effectively overcomes the difficulty of establishing contextual connections across large spans. The swin transformer, as proposed by Liu et al. [29], is a hierarchical visual transformer that is designed to effectively capture both local and global information in images by using a moving window.
In terms of multi-modal image fusion, Wang et al. developed a residual swin transformer fusion network called SwinFuse [30] for image fusion. The design of this network fully exploits the capability of the swin transformer to capture global and local feature relationships in image fusion, thus generating more accurate and clear fusion images. Li et al. proposed the DFENet [31] fusion method, which combines transformer and convolutional feature learning to form a dual-branch feature enhancement network. However, the method may face issues of high computational complexity and high memory consumption when processing large-scale image data. Tang et al. presented a multi-modal medical image fusion method called MATR that uses a multi-scale adaptive transformer [32]. This method introduces adaptive convolution and an adaptive transformer to extract global complementary contextual information to achieve accurate image fusion.
Motivations
Current image fusion methods based on CNNs and transformers have achieved good results, but vanilla convolution or ordinary dynamic convolution still has difficulty in adequately capturing feature context cues. In addition, due to the limitations of its computation principles, the ordinary transformer has high computational complexity. Methods such as the swin transformer, which is developed by optimizing the ordinary transformer, can guarantee the quality of the operation while improving the computational speed. However, the shifted-window mechanism cannot perfectly realize cross-window information interaction, and a block effect will appear in the middle of the extracted features. Reference [33] pointed out that the window partitioning mechanism of the swin transformer causes obvious blocking artifacts in the extracted features, indicating that the shifted-window mechanism is inefficient for establishing cross-window connections. References [34,35] suggested that enhanced connections between windows can improve window-based self-attention methods and reduce artifacts caused by inefficient cross-window connections. Therefore, to solve the above problems, this paper introduces multi-dimensional dynamic convolution into the proposed method. Meanwhile, a novel multi-dimensional attention mechanism and a parallel strategy are employed to learn complementary attentions for convolutional kernels along all four dimensions of the kernel space at any convolutional layer. The MDC fully exploits the potential of dynamic convolutional properties to enhance the efficiency and accuracy of feature extraction. Additionally, the RHT module is designed to combine channel attention and window-based self-attention schemes. It integrates the complementary advantages of global data statistics and powerful local fitting. To overcome the defect of cross-window information interaction, the RHT module also introduces a new overlapping attention module to enhance the interaction between neighboring window features, thereby improving the representation of window self-attention.
Based on the above discussion, this paper proposes the MDC-RHT network, which exploits the advantages of current image fusion methods and solves the problems in these methods. The MDC-RHT network adopts an end-to-end multi-scale network architecture. Its training is performed in an unsupervised manner without manually designing and adjusting the intermediate steps or feature representations, thereby reducing manual intervention and complex parameter tuning. The multi-scale network architecture preserves image details across different scales, which helps to prevent information loss and produce a richer fused image. The loss function employs both a content perception (PERCE) loss and a structural similarity (SSIM) loss. The PERCE loss can capture several aspects of perceptual content such as image texture, color, and structure, allowing the model to focus on high-level semantic information and global features; the SSIM loss considers three aspects of similarity, namely brightness, contrast, and structure, and can measure the degree of structural preservation between the fused image and the real image. The combined use of the two losses constrains model training in all directions, producing high-quality fusion results. Additionally, the construction of the MDC and RHT modules is crucial, where the former effectively captures rich contextual cues, and the latter aggregates cross-window information in addition to channel attention and window-based self-attention. This overcomes the limitations of the shift-window mechanism in certain transformers. Finally, the qualitative and quantitative evaluation results demonstrate that our method achieves excellent results in the fusion of SPECT/PET images and MRI images.
Proposed Method
In this section, the proposed method is introduced in detail. Section 3.1 gives a preliminary description of the overall network architecture of the proposed method. Section 3.2 describes each submodule of the network in detail. The proposed loss function is introduced in Section 3.3.
Overall Network Framework
As illustrated in Figure 1, the SPECT/PET images contain prominent functional information of tissues and organs. During the fusion process, the functional information, i.e., the color information, should be preserved in the fusion result. However, the MRI image is only a single-channel grayscale image. To implement image fusion, the SPECT/PET image should be transferred from the RGB space to the YCbCr space by using Equation (1). The overall fusion framework is composed of multi-scale modes. Different scale features are extracted through three branches to better preserve contextual interaction information and high-resolution and low-resolution information. The top branch is composed of a dynamic convolutional block (DCB) module and three RHT modules to extract surface features. In the middle branch and the bottom branch, DCB modules are added layer by layer to extract deep features step by step. Finally, the features of different scales are added and fed into the convolution activation module to obtain the preliminary fusion results. After the fusion process is completed, the chrominance channels Cb and Cr are synthesized into the fused image. The final fusion results can be restored by Equation (2). Through chromaticity separation and restoration, the color information of the original image can be preserved, and the color fidelity is high.
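To make this workflow concrete, the following is a minimal sketch of the chromaticity separation and restoration described above. The standard BT.601 RGB/YCbCr transform is assumed (the exact coefficients of Equations (1) and (2) were not recoverable from the text), and `fuse_net` is a hypothetical stand-in for the fusion network:

```python
import numpy as np

def fuse_color(pet_rgb, mri_gray, fuse_net):
    """Chromaticity separation/restoration around the fusion network.

    pet_rgb: (h, w, 3) SPECT/PET image in [0, 255]; mri_gray: (h, w) MRI image;
    fuse_net: callable fusing two gray images into one (stands in for MDC-RHT).
    """
    r, g, b = pet_rgb[..., 0], pet_rgb[..., 1], pet_rgb[..., 2]
    # forward transform: Y carries structure, Cb/Cr carry the functional color
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    y_f = fuse_net(y, mri_gray)            # fuse the Y component with the MRI
    # inverse transform restores the color of the fused result
    r_f = y_f + 1.402 * (cr - 128.0)
    g_f = y_f - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b_f = y_f + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r_f, g_f, b_f], axis=-1), 0, 255)
```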
Description of DCB and RHT
The DCB module is an important module in the proposed method, which is composed of MDC, instance normalization (IN), and Gaussian error linear units (GELUs). In the image fusion process, the fusion performance depends on whether high-quality image features can be extracted. The fusion of SPECT/PET and MRI images integrates the functional information of SPECT/PET images and the structural information of MRI images into a fused image. Therefore, the initial convolutional module needs a higher capability for feature extraction to extract more detailed information. Inspired by [36], the MDC is introduced to replace common vanilla convolution and common dynamic convolution.
In a traditional CNN, each convolutional layer usually learns only one static convolutional kernel. Dynamic convolution can enhance the accuracy of lightweight CNNs by learning a set of convolutional kernels and their corresponding attention weights while maintaining an efficient inference process. Nevertheless, the existing dynamic convolution methods, as shown in Figure 2a, only have dynamic characteristics in the dimension of the number of convolutional kernels while ignoring the dimensions of spatial size, the number of input channels, and the number of output channels. Multi-dimensional attention is introduced in this paper as shown in Figure 2b. Four types of attention mechanisms, i.e., α_si, α_ci, α_fi, and α_wi, are introduced, corresponding to the spatial dimension of the convolutional kernel, the dimension of the input channel, the dimension of the output channel, and the number of convolutional kernels, respectively. So, the convolutional operation has different responses to different samples. Specifically, α_si gives different attention values to the spatial positions of each convolutional kernel, α_ci gives different attention to the input channels of each convolutional kernel, α_fi gives different attention to the output channels of each convolutional kernel, and α_wi is used to adjust the weights between different convolutional kernels. These four types of attention mechanisms support parallel computing and gradually act on the convolutional kernel so that the convolutional kernel can make a more dynamic and personalized response to the input samples. The operation of DCB is shown in Equation (3):

$$F_{out} = \mathrm{GELU}(\mathrm{IN}(\mathrm{MDC}(F_{in}))) \tag{3}$$

where MDC represents the multi-dimensional dynamic convolution, IN represents instance normalization, and GELU is an activation function.
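To illustrate the idea behind the MDC, the following is a minimal PyTorch sketch of a multi-dimensional dynamic convolution in the spirit of [36]; the module name, reduction ratio, activation choices, and initialization are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDConv(nn.Module):
    """Multi-dimensional dynamic convolution (sketch).

    A bank of `num_kernels` kernels is modulated by four attentions computed
    from the globally pooled input: spatial (alpha_s), input-channel (alpha_c),
    output-channel (alpha_f), and kernel-number (alpha_w).
    """
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4, reduction=4):
        super().__init__()
        self.k, self.num_kernels, self.out_ch = k, num_kernels, out_ch
        hidden = max(in_ch // reduction, 4)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Conv2d(in_ch, hidden, 1), nn.ReLU(inplace=True))
        self.att_s = nn.Conv2d(hidden, k * k, 1)        # spatial attention
        self.att_c = nn.Conv2d(hidden, in_ch, 1)        # input-channel attention
        self.att_f = nn.Conv2d(hidden, out_ch, 1)       # output-channel attention
        self.att_w = nn.Conv2d(hidden, num_kernels, 1)  # kernel-number attention
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)

    def forward(self, x):
        b, c, h, w = x.shape
        ctx = self.fc(self.gap(x))
        a_s = torch.sigmoid(self.att_s(ctx)).reshape(b, 1, 1, 1, self.k, self.k)
        a_c = torch.sigmoid(self.att_c(ctx)).reshape(b, 1, 1, c, 1, 1)
        a_f = torch.sigmoid(self.att_f(ctx)).reshape(b, 1, self.out_ch, 1, 1, 1)
        a_w = torch.softmax(self.att_w(ctx).reshape(b, self.num_kernels), dim=1)
        a_w = a_w.reshape(b, self.num_kernels, 1, 1, 1, 1)
        # the four attentions act on the kernel bank, then kernels are aggregated
        wgt = (a_w * a_s * a_c * a_f * self.weight.unsqueeze(0)).sum(dim=1)
        # grouped-conv trick: per-sample kernels by folding the batch into groups
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       wgt.reshape(b * self.out_ch, c, self.k, self.k),
                       padding=self.k // 2, groups=b)
        return out.reshape(b, self.out_ch, h, w)
```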
To collect multi-resolution feature maps, the overall network architecture uses the multi-scale mode [32]. The research in the literature [33] suggested that a module with channel attention, window attention, and overlapping cross attention can be designed to activate more pixels to participate in the fusion process and strengthen the connection of global context information. Specifically, this paper uses three branches to collect the resolution information of different depths. The top branch consists of one DCB module and three RHT modules, the middle branch consists of two DCB modules and three RHT modules, and the bottom branch consists of three DCB modules and three RHT modules. As demonstrated in Figure 3, the RHT module consists of the RHTA and RHTB sub-blocks connected by residuals. The schematic diagram of RHTA and RHTB is presented below. The RHTA consists of two addition operations. The first part is a residual module that is composed of Layernorm (LN), window-based multi-head self-attention (W-MSA), and a channel attention module (CAM). The second part is a residual module connected by LN and a multi-layer perceptron (MLP). The operation of RHTA is expressed in Equations (4) and (5):

$$R_{A1}^{out} = R_{in} + \text{W-MSA}(\mathrm{LN}(R_{in})) + \mathrm{CAM}(\mathrm{LN}(R_{in})) \tag{4}$$

where CAM represents the channel attention module, and W-MSA represents window-based multi-head self-attention:

$$R_{A}^{out} = R_{A1}^{out} + \mathrm{MLP}(\mathrm{LN}(R_{A1}^{out})) \tag{5}$$

where MLP represents the multi-layer perceptron, and R_A1^out represents the output of Equation (4). The RHTB module consists of two addition operations. The first part is a residual module connected by LN and an overlapping attention module (OAM). The second part is a residual module connected by LN and an MLP. The operation of RHTB can be expressed by Equations (6) and (7):

$$R_{B1}^{out} = R_{in} + \mathrm{OAM}(\mathrm{LN}(R_{in})) \tag{6}$$

where OAM represents the overlapping attention module.
$$R_{B}^{out} = R_{B1}^{out} + \mathrm{MLP}(\mathrm{LN}(R_{B1}^{out})) \tag{7}$$

where MLP represents the multi-layer perceptron, and R_B1^out represents the output of Equation (6).
Therefore, the entire RHT module is represented in Equation (8):

$$R_{out} = R_{in} + R_{B}^{out}(R_{A}^{out}(R_{in})) \tag{8}$$

where R_A^out and R_B^out represent the operations of Equations (5) and (7), respectively. Finally, the preliminary fusion features are obtained by summing the results of these three branches. After feeding the features into the MDC and finally applying the GELU activation, the final gray fusion image is obtained.
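For illustration, a minimal PyTorch sketch of the RHTA sub-block, as reconstructed from Equations (4) and (5), is given below. The window size, head count, MLP ratio, and the squeeze-and-excitation form of the channel attention are assumptions, and the channel attention here acts on the unnormalized input for brevity:

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    # (b, h, w, c) -> (num_windows*b, ws*ws, c)
    b, h, w, c = x.shape
    x = x.view(b, h // ws, ws, w // ws, ws, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, c)

def window_reverse(windows, ws, h, w):
    b = windows.shape[0] // ((h // ws) * (w // ws))
    x = windows.view(b, h // ws, w // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(b, h, w, -1)

class RHTABlock(nn.Module):
    """Sketch of RHTA: LN -> (W-MSA + channel attention) residual, then
    LN -> MLP residual. Hyperparameters are illustrative."""
    def __init__(self, dim, ws=8, heads=4, mlp_ratio=2):
        super().__init__()
        self.ws = ws
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.wmsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cam = nn.Sequential(  # squeeze-and-excitation style channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim // 4, 1), nn.GELU(),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid())
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):                    # x: (b, c, h, w), h and w % ws == 0
        b, c, h, w = x.shape
        t = self.ln1(x.permute(0, 2, 3, 1))  # channel-last for LayerNorm
        win = window_partition(t, self.ws)
        attn, _ = self.wmsa(win, win, win)   # window-based self-attention
        attn = window_reverse(attn, self.ws, h, w).permute(0, 3, 1, 2)
        cam = x * self.cam(x)                # channel-attention path
        x = x + attn + cam                   # Eq. (4)
        t = x.permute(0, 2, 3, 1)
        x = x + self.mlp(self.ln2(t)).permute(0, 3, 1, 2)  # Eq. (5)
        return x
```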
Loss Function
The pixel-level difference loss function focuses only on the numerical differences between pixels, without considering variations in the human perception of images. Therefore, even if there are minor discrepancies in the noise between the generated image and the real image, the pixel-level loss function may overemphasize these discrepancies. This will reduce the quality of the fused image and limit the robustness and anti-jamming ability of the model to noise. Compared with traditional pixel-level difference loss functions, e.g., mean square error, the perceptual loss and the SSIM loss can better preserve the structural and perceptual features of the image, thereby generating a more real and natural fusion image. The PERCE and SSIM loss functions are more consistent with the characteristics of human perception because they can simulate the high sensitivity of human eyes to the structure and content of the image, leading to fusion images with higher quality. Since the PERCE loss can capture details such as image content perception, and the SSIM loss measures the degree of structure preservation between the fused image and the real image, the total loss function in this paper is designed as follows:

$$L_{total} = L_{PERCE} + L_{SSIM}$$

where L_PERCE represents the perceptual loss, and L_SSIM represents the SSIM loss. The perceptual similarity measure involves the following steps. Firstly, the feature vector is extracted by a neural network model, and then the Euclidean distance is calculated by mapping into the feature space. The perceptual similarity loss is defined as follows:

$$L_{PERCE} = \left\| \phi(P_{Y}^{F}) - \phi(P_{MRI}) \right\|_{2}^{2} + \left\| \phi(P_{Y}^{F}) - \phi(P_{Y}^{PET}) \right\|_{2}^{2}$$

where P_Y^F represents the fused gray-scale image, P_MRI represents the MRI image, and P_Y^PET represents the Y component of the SPECT/PET image. For the feature map φ, the popular VGG-16 network is used for pre-training. The perceptual loss will drive the network to generate an image with similar characteristics to the reference image. The algorithm combines stacked convolution and hierarchical pooling to gradually reduce the spatial dimension and extract higher-level features.
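A minimal PyTorch sketch of such a VGG-16 perceptual loss is given below; the chosen feature layer (relu3_3) and the equal weighting of the two source terms are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """VGG-16 perceptual loss between the fused Y channel and each source.

    Features are taken up to relu3_3 (layer index 16); inputs are expected as
    (b, 1, h, w) tensors in [0, 1] (ImageNet normalization omitted for brevity).
    """
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, fused_y, mri, pet_y):
        feat = lambda t: self.features(t.repeat(1, 3, 1, 1))  # gray -> 3 channels
        f = feat(fused_y)
        return torch.mean((f - feat(mri)) ** 2) + torch.mean((f - feat(pet_y)) ** 2)
```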
The PERCE loss can make the model pay attention to high-level semantic information and content perception, but it still has some defects in constraining structural attributes, so the SSIM loss function is introduced to deepen the learning of structural information such as scene details and structural texture. According to the design principle of the SSIM loss function proposed by [37], the improved loss function is represented as follows:

$$L_{SSIM} = \frac{1}{2}\left[\left(1 - SSIM(P_{Y}^{F}, P_{MRI})\right) + \left(1 - SSIM(P_{Y}^{F}, P_{Y}^{PET})\right)\right]$$

where P_Y^F represents the fused gray-scale image, P_MRI represents the input MRI image, and P_Y^PET represents the Y component of the input color image. SSIM(.) is expressed as follows:

$$SSIM(A, B) = \frac{(2\mu_{A}\mu_{B} + C_{1})(2\sigma_{AB} + C_{2})}{(\mu_{A}^{2} + \mu_{B}^{2} + C_{1})(\sigma_{A}^{2} + \sigma_{B}^{2} + C_{2})}$$

where μ_A and μ_B represent the mean values of A and B, respectively; C_1 and C_2 are constants; σ_AB represents the covariance of A and B; and σ²_A and σ²_B represent the variances of A and B, respectively.
Experiments and Analyses
In this section, the proposed method is compared with some state-of-the-art medical image fusion methods, both quantitatively and qualitatively. Section 4.1 introduces the medical image datasets and the experimental settings in detail. The comparison methods and the quantitative evaluation methods are described in Section 4.2. Section 4.3 demonstrates the experimental results of the proposed method and the comparison methods and analyzes the experimental results in detail.
Datasets and Experimental Settings
The Harvard Medical Image Database (https://www.med.harvard.edu/aanlib/home.html, accessed on 8 February 2024) is a database that contains a large number of registered multi-modal medical images. It is used as the training set and test set in the experiment. Specifically, 310 pairs and 40 pairs of MRI and SPECT/PET images are randomly collected from this common dataset as the training set and test set, respectively. Then, the common data enhancement method of random clipping is adopted, and the 310 pairs of medical images are clipped to 19,000 patch pairs. In the training process, the learning rate is fixed at 1 × 10⁻⁴, the batch size is set to 16, the number of epochs is set to 100, and the optimization algorithm is Adam. To facilitate understanding, the MDC in the DCB block in front of the multi-scale network is referred to as MDconv1, the MDC in the multi-scale network is referred to as MDconv2, and the MDC after the multi-scale network is referred to as MDconv3. The detailed parameters of all convolutional operations in the network architecture are listed in Table 1. The code of the proposed method is available at https://github.com/XUTauto/MDC_RHT (accessed on 11 May 2024). The experimental platform is a computer equipped with a 64-bit Windows 11 operating system, an Intel Core i5-12400F CPU, an NVIDIA GeForce RTX 3060 Ti GPU, and 16 GB of RAM.
Comparison Methods and Evaluation Methods
To demonstrate the advanced performance of the proposed method, it is compared with seven state-of-the-art medical image fusion methods, including MATR [32], DDCGAN [38], EMFusion [26], U2Fusion [39], IFCNN [23], MSDRA [40], and SwinFuse [30]. These methods cover generative adversarial networks, non-end-to-end fusion networks, end-to-end fusion networks, and transformer networks. The codes of these methods are publicly available or provided by their authors. To ensure the accuracy of the experimental results, all the fixed parameters are set according to the values given in the corresponding papers.
To quantitatively analyze the experimental results, nine mainstream evaluation indicators are applied, including mutual information (MI), non-linear correlation information entropy (NCIE), sum of the correlations of differences (SCD), multi-scale structural similarity (MS-SSIM), entropy (EN), visual information fidelity (VIF), Tsallis entropy (Q_TE), edge-based similarity measure (Q^{AB/F}), and gradient-based metric (Q_G). A detailed description of these indicators is given below.
(1) MI: It describes the dependence of image content by calculating the amount of information shared between two images. The greater the MI value, the stronger the correlation of the information contained in the two source images, and the more information they share [41].
(2) NCIE: It is an information entropy measure to reflect the performance of image fusion, which considers the influence of non-linear correlation on the fused results [42].
(3) SCD: It computes the correlation differences between the source image and the fused image in different frequency bands [43]. The higher the SCD value, the better the fused result.
(4) MS-SSIM: It is a measure of multi-scale structural similarity, which examines the structural similarity between the source image and the fused image at different scales and reflects the characteristics of the actual visual system [44]. The ideal value of MS-SSIM is 1.
(5) EN: It is a concept used in information theory to measure the uncertainty of data or the amount of information. In the field of image processing, entropy is often used to measure complexity or information content [45]. The higher the entropy value, the more information of the source image is preserved in image fusion, and the better the effect.
(6) VIF: It describes the statistical feature distribution of the fused image through a dispersion model, which evaluates the fidelity of the visual information between the test image and the reference image according to the similarity and mutual information in the feature space [46]. The higher the VIF value, the better the performance of the fusion method.
(7) Q_TE: It is an image evaluation index based on information theory, which evaluates complexity and quality by measuring non-uniformity and the amount of information [47]. It adjusts the value of the parameter q to adapt to non-Gaussian distributions, thereby measuring the complexity and the amount of information and highlighting its sensitivity and adaptability to non-Gaussian distributions.
(8) Q^{AB/F}: It provides a detailed assessment of image fusion quality by quantifying the degree to which edge information and important textural details are maintained in the fused image [48]. The ideal value of Q^{AB/F} is 1.
(9) Q_G: It focuses on the retention consistency and saturation of image edges and details and evaluates fusion performance by comparing the gradient information between the fused image and the source images [49]. The ideal value of Q_G is 1.
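As a concrete illustration, the following NumPy sketch computes two of these indicators, EN and MI, from gray-level histograms of 8-bit images; the bin count and range are assumptions:

```python
import numpy as np

def entropy(img, bins=256):
    """EN: Shannon entropy of an 8-bit image's gray-level histogram."""
    p, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(src, fused, bins=256):
    """MI: information shared by a source image and the fused image."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins,
                                 range=[[0, 255], [0, 255]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```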
MRI and SPECT
In this section, the proposed method is compared with the seven comparison methods on the SPECT and MRI images in terms of visual effect and quantitative analysis. Figure 4 illustrates the fused results of six representative images. To better show the fusion results, two detailed regions are extracted and framed by the red and green boxes, and these magnified regions are placed at the left and right bottom of each image. It can be observed from Figure 4 that the DDCGAN and SwinFuse methods can better preserve the functional information from the SPECT images, but they have a poor ability to preserve structural texture information and produce unnecessary artifacts in the fused images. The EMFusion method produces oversharpened fused images due to retaining too much detailed information. The MSDRA method can effectively preserve the color information in the fused images, but the magnified regions show that the fused results of this method contain unnecessary noise and artifacts. The U2Fusion and IFCNN methods have a good ability to preserve detailed information but produce slight distortion in the chromaticity information. The fused images generated by the MATR method show good colors and completely preserve the overall organizational structures, but it can be seen from the local regions that the textural information is still defective. Compared with the above methods, the proposed method performs better in preserving the color information, overall organizational structure, and local texture. It fully retains the functional information of the SPECT images and the structural information of the MRI images, indicating excellent fusion performance. Then, nine evaluation indicators are employed to quantitatively evaluate the fusion images generated by the proposed method and the comparison methods. Figure 5 shows the trend curves of the nine indicators on 20 randomly selected test images, where the red line represents the proposed method. It can be observed that the proposed method outperforms the other methods in terms of MI, Q_G, VIF, and Q_TE. For NCIE, Q^{AB/F}, and MS-SSIM, the proposed method has the best values on some test images. For the indicators EN and SCD, the proposed method has relatively poor values compared to some methods. Table 2 lists the average values of each indicator on the 20 test images, where the best values are marked in bold and the second-best values are underlined. It can be found that the proposed method has the best average values in terms of MI, VIF, NCIE, Q_TE, Q_G, and MS-SSIM, but it has inferior performance in comparison to some methods in terms of EN, SCD, and Q^{AB/F}. Generally, the quantitative results demonstrate that the proposed method has better performance in preserving structural information and color information, thereby providing high-quality results for subsequent visual tasks.
Table 2. The average values of the nine evaluation indicators of different fusion methods on 20 pairs of SPECT and MRI images. The best value is marked in bold, and the second-best value is underlined.
MRI and PET
To verify the generalization ability and robustness of our proposed method, experiments are conducted on MRI and PET images. The PET image refers to the image of metabolism, function, and molecular information in the human body obtained by PET. The MRI image provides high-resolution structural information. The PET functional signals can be located more accurately based on the high-resolution structural image of MRI. Therefore, the fusion of MRI and PET images can provide more accurate focus localization and anatomical location information. In the experiment, seven comparison methods are used, including MATR, DDCGAN, EMFusion, U2Fusion, IFCNN, MSDRA, and SwinFuse.
Figure 6 shows six pairs of MRI and PET images and the fused images of the eight methods. Two local regions are magnified to better show the fusion performance of the different methods. It can be observed that the DDCGAN method can effectively preserve the texture information in the fused image, but the color of the fused images is overexposed. The SwinFuse method generates fused images with good structural information, but the overall color of the fused images is relatively dark. The MSDRA method preserves the visual features well but produces unnecessary noise. U2Fusion retains chromaticity information well, but the fused images lose some structural information. Compared with the U2Fusion method, the EMFusion method avoids the problem of losing organizational information, but the color of its fused images is relatively dark. From the perspective of the overall visual effect, the fusion images generated by the IFCNN and MATR methods have no significant distortion. From the magnified regions, it can be seen that the local regions of these two methods have artifacts. By fully integrating the structural information of MRI images and the functional information of PET images, the proposed method retains the color information of the PET image to the greatest extent and ensures the clarity of detailed textures. The nine evaluation indicators are utilized to make a quantitative analysis of the eight fusion methods, and ten pairs of images are selected as the test images. Figure 7 presents a line chart to demonstrate the values and trends of the indicators on each image. Table 3 lists the average value of each indicator for the different fusion methods. As shown in Figure 7, our proposed method achieves the best values for all the images in terms of MI and VIF, and it obtains the best average values of MI and VIF. It can be seen from Table 3 that the proposed method achieves the best values of Q_G, NCIE, Q_TE, and SCD on some fused images, and it obtains the best average values of these indicators. For the indicators MS-SSIM, EN, and Q^{AB/F}, the proposed method has lower values than some of the other fusion methods. Generally, by effectively integrating the color information of PET images and the structural information of MRI images, the proposed method performs better than the other methods from the perspectives of visual quality and quantitative evaluation.
Green Fluorescent Protein and Phase Contrast Image
Green fluorescent protein (GFP) is a protein originally derived from the jellyfish Aequorea victoria, which has fluorescence characteristics and is widely used in the biomedical field. Phase contrast (PC) imaging is a microscopic technique that uses the optical phase contrast effect to produce light and dark contrast among different tissues or structures of visual cells. The fusion of GFP and PC images can show the cell structure and fluorescence expression signal in the same image so that researchers can observe cell morphology and gene expression simultaneously [50]. The dataset of GFP and PC images comes from http://data.jic.ac.uk/Gfp (accessed on 3 February 2024). In the experiment, six advanced methods, including MATR, MSDRA, U2Fusion, IFCNN, SwinFuse, and DDCGAN, are taken as the comparison methods. Twenty pairs of GFP and PC images are selected as test images. Figure 8 presents the fused images of the different methods. Specifically, the fused image generated by the DDCGAN method has a slightly dark color. Also, it can be observed from the magnified regions that the detailed information is not clear enough. The fused images generated by the SwinFuse, U2Fusion, and MATR methods preserve the color information of the GFP images well and the cell structure information of the PC images to a certain extent. The MSDRA and IFCNN methods produce fused images with better edge detail information but lower contrast, and the fused images of IFCNN show a blurring effect. Compared with the above methods, our proposed method not only preserves the chromaticity information as much as possible but also fully preserves the details of the tissue structure in the fused images. To objectively evaluate the performance of the different fusion methods, nine classical indicators are used. Figure 9 shows the line charts of the nine indicators for the different fusion methods on the 20 test images. Table 4 lists the average values of the nine indicators for the different methods, where the best values are marked in bold, and the second-best values are underlined. It can be seen from Figure 9 that the proposed method has the best values on some test images in terms of MI, SCD, Q^{AB/F}, VIF, and NCIE. As listed in Table 4, the proposed method obtains the best average values in terms of MI, VIF, NCIE, SCD, and Q^{AB/F}, and it obtains the second-best average values in terms of the EN and MS-SSIM indicators. Generally, the proposed method performs well in the fusion of GFP and PC images and achieves better results in terms of both quantitative analysis and qualitative analysis.
Ablation Experiments
4.4.1. Ablation Experiment of Loss Function
To verify the effectiveness of the loss functions, an ablation experiment is conducted for the two loss functions, i.e., L_PERCE and L_SSIM. L_PERCE pays attention to perceptual similarity to enable the model to generate images with more visual realism and perceptual details. Meanwhile, L_SSIM pays attention to structural similarity and enables the model to generate fused images that are closer to the real images. In the ablation experiment, the performance of three methods is compared, i.e., the proposed method with only L_PERCE, the proposed method with only L_SSIM, and the proposed method with both L_PERCE and L_SSIM. Figure 10 shows the fused images of the proposed method with the different loss functions. It can be found that the fused images generated by the proposed method with only L_PERCE have significant differences from the two source images. Although the fused images may preserve some perceptual details, their overall structures contain much noise and many artifacts. The fused images generated by the proposed method with only L_SSIM have insufficient visual realism and perceptual details because the model is no longer guided by perceptual features. Compared with the above methods, the proposed method with both L_PERCE and L_SSIM can generate fused images with better visual features. Figure 11 shows a bar chart of the values of the nine indicators for the loss functions, where the green bar represents the proposed method with both loss functions. It can be observed that the proposed method with both loss functions has higher values than the proposed method with a single loss function. The experimental results indicate that both L_PERCE and L_SSIM contribute to higher fusion quality.
Ablation Experiment of Network Architecture
In this section, an ablation experiment is conducted to verify the effectiveness of different network architectures, including (1) replacing the MDC convolution with ordinary convolution, (2) removing the RHT module, and (3) a single-scale architecture, as shown in Figure 12a-c. In the first experiment, the MDC convolution is replaced with the ordinary convolution that is most commonly used in traditional deep learning algorithms. In the second experiment, the RHT module is removed from the proposed network architecture to verify the effectiveness of our proposed RHT module. The third experiment is conducted to compare the performance of the single-scale network architecture and the multi-scale network architecture. Figure 13 shows the fused images of the different network architectures. It can be seen that after using the ordinary convolution instead of the MDC convolution, the fused images tend to be blurred, there are noise and unnecessary artifacts in the fused images, and the local details are not as clear as those of the proposed method. Therefore, it can be concluded that the ordinary convolution is not as capable as the MDC convolution in feature extraction. The proposed method adopts the principle of MDC to fully extract the feature information of the original image from multiple dimensions and channels, which lays a foundation for subsequent operations. It can be observed from the fused images that the proposed method without the RHT module pays too much attention to local features and fails to establish an effective connection to the global context, so it cannot fully capture the spatial relationship between the images and has a poor ability to model the spatial relationship. Hence, the fused images have incoherent local features, lack hierarchy in the chromaticity information, and have large areas with sticking colors and distorted texture details. The fused images generated by the single-scale architecture lose much chromaticity information and texture information. The overall resolution of these images is reduced, and there are obvious boundary artifacts in the heterogeneous regions of the fused images. Compared with the single-scale network architecture, the multi-scale network architecture enables the network to handle the differences of different spatial resolutions of the images and learn richer semantic information at different scales. Figure 14 presents the quantitative indicator values of the different network ablation experiments. It can be seen that the proposed method obtains better values than the other variants in most evaluation indicators. Through the above experiments and analysis, it can be concluded that the MDC convolution, the RHT module, and the multi-scale network architecture can effectively improve the performance of the proposed method.
Discussion
The multi-modal medical image fusion method is a key technology for assisting clinical medical diagnosis, which can provide doctors with high-quality medical images. Deep learning-based fusion methods have gradually replaced the traditional fusion methods with their excellent performance and have become a research hotspot. In order to obtain fused images with richer information, this paper proposes the MDC-RHT fusion method. This method integrates multi-dimensional dynamic convolution and a residual hybrid transformer, which collectively address several critical challenges in multi-modal image fusion.
In Section 4, we verify the performance of the proposed method through three sets of experiments. Better visual quality and higher scores on quantitative indexes indicate the superiority of the fusion algorithm. Figures 4, 6 and 8 show the fused images of different fusion methods on the three datasets, respectively. From a subjective perspective, it can be seen that the fusion results of our method present better visual quality than those of the other methods, especially in terms of color, edge, texture, contour, etc. Figures 5, 7 and 9 and Tables 2-4 show the quantitative results of the different methods on the three datasets. It can be observed that the proposed method achieves the best results in terms of most indicators compared to the other fusion methods. Moreover, Section 4.4 gives the ablation experiments on the loss function and network architecture. The experimental results demonstrate that the designed loss function and network architecture perform best. The remarkable performance of our method observed in the quantitative and qualitative experiments can be attributed to several key innovations in our method. The multi-scale architecture enables the network to fully leverage characteristics across various scales and resolutions, thereby preserving multi-level image details more effectively. As the MDC module with the multi-dimensional attention mechanism allows the convolutional kernels to focus on different dimensions, the DCB module has an excellent ability to capture local details, thereby efficiently preserving nuanced information from the source images. The RHT module facilitates efficient feature interaction between different windows, which activates more pixels to participate in the fusion process and ensures better preservation of edge and long-range context information. This is crucial for maintaining the structural integrity of the fused images. Additionally, the well-designed loss function ensures that the fused image preserves the complementary information from the source images. Consequently, the excellent performance of the proposed network enhances the quality of image fusion, thereby providing high-quality data support for advanced subsequent medical tasks.
Although the proposed method has achieved good results, there are still some issues that need to be addressed. The high computational complexity caused by the attention mechanisms poses challenges for deploying the proposed method in real-time clinical medical treatment systems. Additionally, considering the stringent requirements for patient privacy and data security in medical applications, enhancing the interpretability and transparency of our model is essential for gaining the acceptance and trust of doctors [51,52].
To address these challenges, future research will focus on the following aspects: (1) reducing the computational burden through model pruning and the use of more lightweight networks so that the model is more suitable for clinical applications; (2) employing heat maps and other techniques to enhance the interpretability of neural networks, thereby improving the transparency and understandability of the model; and (3) developing a more general fusion network to enhance the generalization capabilities of the model, adapting to more complex fusion scenarios across different fields [53,54].
Conclusions
This paper proposes a multi-scale fusion network combining multi-dimensional dynamic convolution and a residual hybrid transformer for the fusion of multi-modal medical images. Comprehensive experiments are conducted on three representative datasets. Compared with the state-of-the-art methods, our proposed method achieves higher quantitative scores and better visual quality, as indicated by the experimental results and analysis. This study provides important theoretical support for multi-modal medical image fusion based on deep learning.
Figure 1. The main framework of MDC-RHT. The SPECT/PET image is decomposed into three components: P_Y^PET, P_Cb^PET, and P_Cr^PET. P_Y^PET and the MRI image P_MRI are sent into the fusion network together to obtain P_Y^F; then, the components P_Cb^PET, P_Cr^PET, and P_Y^F are used to obtain the final fusion image P_Fusion through the inverse transform.
Figure 2. The schematic diagram of dynamic convolution. The standard dynamic convolution is shown in (a), while the multi-dimensional dynamic convolution is shown in (b). GAP represents global average pooling, and FC represents fully connected. * represents convolution operations.
Figure 3. The structure of the RHT module, with the RHTA and RHTB sub-blocks connected by residuals.
Figure 5. Quantitative comparison of different fusion methods on the SPECT and MRI images in terms of nine objective evaluation indicators.
Figure 6. Qualitative evaluation of the proposed MDC-RHT and seven typical and state-of-the-art methods on six representative PET and MRI image pairs. From left to right: PET, MRI, MATR, DDCGAN, EMFusion, U2Fusion, IFCNN, MSDRA, SwinFuse, and MDC-RHT.
Figure 7. Quantitative comparison of different fusion methods on the PET and MRI images in terms of nine objective evaluation indicators.
Figure 8. Qualitative evaluation of the proposed MDC-RHT and six typical and state-of-the-art methods on six representative GFP and PC image pairs. From left to right: GFP, PC, MATR, MSDRA, U2Fusion, IFCNN, SwinFuse, DDCGAN, and MDC-RHT.
Figure 9. Quantitative comparison of different fusion methods on the GFP and PC images in terms of nine objective evaluation indicators.
Figure 10. Fusion results under different loss constraints. From left to right: PET and MRI images, only perceptual loss, only SSIM loss, and both perceptual loss and SSIM loss.
Figure 11. The scores of the nine evaluation indicators for fusion results under different loss constraints.
Figure 12. Comparison of the effectiveness of different network architectures. (a) Ordinary convolution instead of MDC convolution; (b) removal of the RHT module; (c) single-scale architecture.
Figure 13. Fusion results under different network architectures. From left to right: color and MRI images, ordinary convolution instead of MDC convolution, without the RHT module, single-scale architecture, and MDC-RHT.
Figure 14. Quantitative evaluation results of the proposed method with different network architectures.
Table 1. The detailed parameters of convolutional operations in the proposed network architecture.
Table 3. The average values of nine evaluation indicators of different fusion methods on 10 pairs of PET and MRI images. The best value is marked in bold, and the second-best value is underlined.
Table 4. The average values of nine evaluation indicators of different fusion methods on 20 pairs of GFP and PC images. The best value is marked in bold, and the second-best value is underlined. | 2024-06-23T15:11:28.698Z | 2024-06-21T00:00:00.000 | {
"year": 2024,
"sha1": "40880b40234f30d91ce42765b2cf8563f0121704",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/13/4056/pdf?version=1718979099",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3049537f6edf719a8b2e4ea252ff99bad66e391",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
96462316 | pes2o/s2orc | v3-fos-license | Development of Future Rule Curves for Multipurpose Reservoir Operation Using Conditional Genetic and Tabu Search Algorithms
Optimal rule curves are necessary guidelines in reservoir operation that have been used to assess the performance of any reservoir to satisfy water supply, irrigation, industrial, hydropower, and environmental conservation requirements. This study applied the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA) technique to connect with the reservoir simulation model in order to search for optimal reservoir rule curves. The Ubolrat Reservoir located in the northeast region of Thailand was an illustrative application, including historic monthly inflow, future inflow generated by the SWAT hydrological model using 50-year future climate data from the PRECIS regional climate model in the case of the B2 emission scenario by IPCC SRES, water demand, hydrologic data, and physical reservoir data. The future and synthetic inflow data of the reservoir were used to simulate the reservoir system for evaluating the water situation. The situations of water shortage and excess water were shown in terms of frequency, magnitude, and duration. The results have shown that the optimal rule curves from the CGA and CTSA connected with the simulation model can mitigate drought and flood situations better than the existing rule curves. The optimal future rule curves were more suitable for future situations than the other rule curves.
Introduction
Nowadays, water resource issues have become more complex, related to global climate change and land-use change due to population and economic growth, which are increasing rapidly. For water resource management, both the demand management side and the supply management side are often required to solve the problems. Improving reservoir operation for increased efficiency is another option on the supply management side, which does not require physical development of the reservoir. Normally, reservoir operation uses upper and lower rule curves to consider the release of water from the reservoir responding to downstream demands in long-term operation.
The purpose of the rule curves for reservoir operation was divided into two main areas: (i) variation of hydrological conditions [1], such as precipitation and inflow into the reservoir affected by climate change, and (ii) water allocation for social, economic, and engineering purposes in downstream areas, which has changed due to population growth and land-use demand. The reservoir management agency (such as the Royal Irrigation Department of Thailand) needs to plan in advance the appropriate volume of water in the reservoir (at each time interval) for storage and release of water for various purposes [2], including the implementation of the plan as long as the relevant factors in the future have not changed from the original. However, if future conditions differ from those anticipated in the planning phase, performance may differ from what was planned to minimize water shortage or overflow. The rule curve assumptions are based on maintaining the water level (or volume of water) in the reservoir appropriate to the changing hydrological situation [3] and downstream water allocation (according to the time period, which is generally one year). The main purpose is to avoid the risk of water shortages and floods in the reservoir and downstream areas. During the dry season, reservoirs need to maintain water volume to reduce the risk of water levels falling below minimum storage. During the rainy season, the reservoir must release water to reduce the storage volume so that it can accommodate the precipitation and inflow that flow into the reservoir.
This also includes prevention of overflow situations in the reservoir [4]. The reservoir rule curves have been improved to provide the optimal solution for long-term operation. Typically, the reservoir operating system is large and complex, especially in watershed areas having both drought and flood situations [5].
Searching for the optimal rule curves of a reservoir is a nonlinear optimization problem. There are many optimization techniques that are applied to connect with the reservoir simulation model for searching optimal rule curves, such as dynamic programming (DP), genetic algorithm (GA), and simulated annealing algorithm (SA) [6][7][8][9][10]. Those obtained rule curves are effectively applied for each area. However, newer optimization techniques such as the tabu search technique have not been applied to find the optimal rule curves.
In the last decades, many alternative algorithms have been proposed to solve complex computational problems. Tabu search is a heuristic procedure designed for solving optimization problems. It has been successfully applied to many engineering fields such as industrial engineering, electrical engineering, civil engineering, and water resources engineering [11,12]. Tabu search is a very aggressive heuristic for overcoming local optima and searching for global optimality by exploring other regions of the solution space. Its efficiency depends on the fine-tuning of some parameters [13][14][15].
This study proposed a conditional tabu search algorithm (CTSA) to connect with the simulation model for searching the optimal reservoir rule curves. The minimum average water shortage was used as the objective function for the searching procedure. The proposed model has been applied to determine the optimal rule curves of the Ubolrat Reservoir in the northeast region of Thailand with the historic monthly inflow, future inflow under scenario B2, water demand, hydrologic data, and physical reservoir data. A comparison of the conditional genetic algorithm (CGA) and the CTSA was made to demonstrate the effectiveness of the proposed CTSA model.
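To illustrate the search mechanics, the following is a generic tabu search skeleton of the kind that could be connected to the reservoir simulation model; the move size, iteration count, tabu-list length, and decision-vector layout (12 upper plus 12 lower monthly rule-curve values) are assumptions, not the CTSA's exact conditional operators:

```python
import numpy as np

def tabu_search(objective, x0, step=5.0, n_iter=200, tabu_len=15):
    """Generic tabu search over a decision vector (here: 24 rule-curve values).

    objective: maps a candidate vector to, e.g., the average water shortage
    returned by the reservoir simulation model. Moves perturb one value by
    +/- step; recently moved indices are tabu unless the move beats the best
    solution found so far (aspiration criterion).
    """
    cur = np.asarray(x0, dtype=float).copy()
    best, best_f = cur.copy(), objective(cur)
    tabu = []
    for _ in range(n_iter):
        cand, cand_f, cand_i = None, np.inf, None
        for i in range(len(cur)):              # explore the full neighborhood
            for d in (-step, step):
                trial = cur.copy()
                trial[i] += d
                f = objective(trial)
                if (i not in tabu or f < best_f) and f < cand_f:
                    cand, cand_f, cand_i = trial, f, i
        if cand is None:
            break                              # every move is tabu and non-improving
        cur = cand
        tabu.append(cand_i)
        if len(tabu) > tabu_len:
            tabu.pop(0)                        # forget the oldest tabu move
        if cand_f < best_f:
            best, best_f = cur.copy(), cand_f
    return best, best_f
```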
Future Inflow into the Ubolrat Reservoir.
The development of the optimal future rule curves will use data on the future inflow flowing into the Ubolrat Reservoir, considering the effects of climate change using the PRECIS model. Thus, the future inflow will be produced using the SWAT hydrological model. For the future climate data in the study area, PRECIS is a regional climate model, based on the development of the ECHAM4 model, displaying the data as a grid with a high resolution of 22 × 22 km² [16].
The data recorded during 1997-2014 were used as a baseline to predict those for 2015 to 2064. These data comprise precipitation and maximum and minimum temperatures.
Because the Ubolrat Reservoir and the study area are located in northeastern Thailand, most of the economic output is generated by the sale of major agricultural products, such as rice and sugarcane, which require water for cultivation during the rainy season as the primary source.
The expansion of most urban areas in the region is slow. Therefore, this study has chosen the appropriate greenhouse gas emission projection based on the model of socioeconomic development, population growth, and technology of the study area according to the IPCC SRES, with emphasis on regional development for the emission scenario B2: prediction of lower population growth than A2, moderate-level economic development, and orientation toward environmental protection [17].
SWAT (Soil and Water Assessment Tool) [18] is a semidistributed hydrological model developed for the simulation of inflow, sediment, and water quality under climate and land-use changes [19]. SWAT can continuously simulate daily inflow and can extend the simulation over a long future period. It can also connect to and import spatial data from a Geographic Information System (GIS) in order to evaluate the inflow. The spatial data and the SWAT performance evaluation are presented in Table 1.
The accuracy of the SWAT results can be evaluated by comparing the simulated data with the recorded data from the observation station (i.e., the Ubolrat Reservoir). Three indices, R² (coefficient of determination), RE (relative error), and E_ns (Nash-Sutcliffe simulation efficiency), were considered as the key indicators of accuracy. In general, SWAT requires adjusting the values of hydrological parameters for model calibration and validation [20]. In this study, 8 parameters were used: Alpha_BF, Gwqmn, Gw_Revap, Sol_Awc, Epco, Esco, Ch_N2, and Gw_delay.
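The three indices are not written out in the text; their standard definitions, with $Q^{obs}_i$ and $Q^{sim}_i$ denoting the observed and simulated inflows and overbars denoting their means, are:

$$R^2=\left[\frac{\sum_i\left(Q^{obs}_i-\bar Q^{obs}\right)\left(Q^{sim}_i-\bar Q^{sim}\right)}{\sqrt{\sum_i\left(Q^{obs}_i-\bar Q^{obs}\right)^2}\,\sqrt{\sum_i\left(Q^{sim}_i-\bar Q^{sim}\right)^2}}\right]^2,\qquad RE=\frac{\bar Q^{sim}-\bar Q^{obs}}{\bar Q^{obs}}\times 100\%,$$

$$E_{ns}=1-\frac{\sum_i\left(Q^{obs}_i-Q^{sim}_i\right)^2}{\sum_i\left(Q^{obs}_i-\bar Q^{obs}\right)^2}.$$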
Then, SWAT with the adjusted sensitivity parameters was optimized (so that the calculated results were close to the observed data) and was considered suitable for calculating the future inflow. Subsequently, the downscaled daily future climate data from PRECIS were organized into the 50 future years; the model setup process is shown in Figure 1.
Reservoir Operation Model.
The reservoir system comprises the available water that flows from upstream into the reservoir and the multipurpose downstream demand. The reservoir is operated using water release criteria, operating policies, and reservoir rule curves on a monthly basis. The existing standard operating policy used for reservoir rule curve operation is presented in Figure 2. The y-axis defines the total water released, and the x-axis defines the available water. The available water is the total of the inflow and the starting storage in the reservoir during the period. In the case where demand is constant, water is not conserved for future demand, and if the available water is less than the water demand, all of the water in the reservoir is released and the reservoir is emptied [21]. At the point P1, the available water equals the water demand, and at the point P2, the available water equals the total of the reservoir capacity and the water demand. If the available storage is between P1 and P2, the water release equals the water demand. The line between P0 and P1 has a ratio of 1:1 (45°) [22]. On the other hand, if the available water exceeds the sum of the active storage and the water demand, the reservoir releases the excess as spill [23]. In addition, the standard operating policy can be written as

$$R_{\nu,\tau}=\begin{cases}W_{\nu,\tau}-y_\tau, & W_{\nu,\tau}-D_\tau>y_\tau,\\ D_\tau, & x_\tau\le W_{\nu,\tau}-D_\tau\le y_\tau,\\ \max\left(W_{\nu,\tau}-x_\tau,\,0\right), & W_{\nu,\tau}-D_\tau<x_\tau,\end{cases}\tag{1}$$

where $R_{\nu,\tau}$ is the released water from the reservoir during year $\nu$ and period $\tau$ ($\tau=1$ to 12, representing January to December), $D_\tau$ is the water demand of month $\tau$, $x_\tau$ is the lower rule curve of month $\tau$, $y_\tau$ is the upper rule curve of month $\tau$, and $W_{\nu,\tau}$ is the available water calculated by a simple water balance,

$$W_{\nu,\tau}=S_{\nu,\tau-1}+Q_{\nu,\tau}-E_\tau-DS,\tag{2}$$

where $S_{\nu,\tau}$ is the stored water at the end of month $\tau$, $Q_{\nu,\tau}$ is the monthly inflow to the reservoir, $E_\tau$ is the average evaporation loss, and $DS$ is the minimum reservoir storage capacity (the dead storage). In equations (1)-(2) and Figure 2, if the available water is within the range of the upper and lower rule levels, then demands are satisfied in full. If the available water is above the upper rule level, then water is spilled from the reservoir to the downstream river in order to maintain the water level at the upper rule level. If the available water is below the lower rule level, the water release is reduced.
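A minimal sketch of one month of this operating rule in Python may make the policy concrete; the three-case form follows equations (1)-(2) above, while the function and variable names are illustrative and not from the paper:

```python
def monthly_release(S_prev, Q, E, DS, D, x_lower, y_upper):
    """Simulate one month of reservoir operation under the rule curves.

    S_prev : storage at the end of the previous month (MCM)
    Q, E   : monthly inflow and average evaporation loss (MCM)
    DS     : dead storage (minimum storage capacity, MCM)
    D      : water demand of the month (MCM)
    x_lower, y_upper : lower and upper rule levels of the month (MCM)
    Returns (release, end-of-month storage).
    """
    W = S_prev + Q - E - DS            # available water, equation (2)
    if W - D > y_upper:                # above the upper rule level: spill
        R = W - y_upper
    elif W - D < x_lower:              # below the lower rule level: cut back
        R = max(W - x_lower, 0.0)
    else:                              # between the rule levels: full supply
        R = D
    return R, DS + W - R               # storage carried into the next month
```

Iterating this function over the months of each simulated year, and comparing the releases against the demands, gives the shortage and excess-release statistics used below as the objective function.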
Under long-term operation, the operating policy usually reserves the available water ($W_{\nu,\tau}$) to reduce the risk of future water shortage when $0\le W_{\nu,\tau}<x_\tau-D_\tau$.
The released water from the reservoir was used to evaluate the water shortage and excess water release situations, which can be expressed as the frequency of failure years, the number of excess water releases, and the average annual shortage (the objective function for searching the optimal rule curves in this study). The results were recorded and used to develop the CTSA model.
Application of CGA with the Reservoir Simulation Model for Searching Rule Curves.
The connection of the CGA to the reservoir simulation model was as follows.
The CGA requires an encoding format to transform the decision variables into chromosomes. The CGA, which consists of selection, crossover, and mutation, is then executed. After this stage, the genetic operations create new chromosomes. In this study, each decision variable represents the average monthly water storage of the rule curves in the reservoir, defined by the upper bound and the lower bound.
After the first set of chromosomes in the initial population has been evaluated (24 decision variables, consisting of 12 values for the upper curve and 12 values for the lower curve), the released water is recalculated by the reservoir simulation model using these rule curves. Next, the released water is used to determine the objective function, which serves as the fitness of the GA. After that, the reproduction process creates new rule curve values in the next generation. This procedure is repeated until the 24 rule curve values are appropriate. The CGA and reservoir simulation model for searching the rule curves are described in Figure 3.
In this study, the objective function for searching the optimal reservoir rule curves is the minimum average water shortage in million cubic meters (MCM) per year, as shown in equation (3):

$$\min\;\bar{Z}=\frac{1}{n}\sum_{\nu=1}^{n}Sh_\nu,\tag{3}$$

where $n$ is the total number of considered years and $Sh_\nu$ is the water deficit during year $\nu$ (a year that does not meet 100% of the target demand).
Applying Conditional Tabu Search Algorithm for Searching Rule Curves.
The developed CTSA for searching rule curves is described as follows. The CTSA begins with an initial population $\{X_1, X_2, \ldots, X_n\}$ created randomly within the feasible space. With the 24 decision variables (rule curve variables for both the upper and lower curves), the feasible solution at iteration $i$ is represented as $X_i=(x_1,\ldots,x_{12},\,y_1,\ldots,y_{12})$. Then, this set of rule curves is used in the reservoir simulation, and the released water is calculated by the simulation model using these rule curves. Next, the released water is used to calculate the fitness function to evaluate the feasible solution.
The fitness function is the minimum of the average water shortage ($\bar{Z}$) subject to the constraints of the simulation model, as described in (3).
Then, the process is continued until the termination criterion is satisfied, as described in Figure 4. The termination criterion indicates near-optimality; it is expressed as a slight change in the fitness value (less than 0.10 MCM).
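For concreteness, a compact sketch of such a search loop in Python; the neighborhood move, tabu-list handling, and population size are illustrative assumptions, and `simulate_shortage` stands in for the reservoir simulation model returning the average annual shortage of equation (3):

```python
import random

def ctsa(simulate_shortage, bounds, n_pop=30, tabu_len=20, tol=0.10):
    """Conditional tabu search over 24 rule-curve variables (12 lower + 12 upper).

    simulate_shortage(sol) -> average annual water shortage (MCM/year)
    bounds: list of (lo, hi) feasible ranges, one per decision variable
    tol: stop when the best fitness changes by less than 0.10 MCM
    """
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pop)]
    best = min(pop, key=simulate_shortage)
    best_fit = simulate_shortage(best)
    tabu = []                                     # short-term memory of visited solutions
    while True:
        neighbors = []
        for sol in pop:                           # perturb one variable per solution
            cand = sol[:]
            j = random.randrange(len(bounds))
            lo, hi = bounds[j]
            cand[j] = min(hi, max(lo, cand[j] + random.gauss(0, 0.05 * (hi - lo))))
            if cand not in tabu:                  # tabu moves are skipped
                neighbors.append(cand)
        pop = sorted(neighbors, key=simulate_shortage)[:n_pop]
        tabu = (tabu + pop[:1])[-tabu_len:]       # remember the current leader
        new_fit = simulate_shortage(pop[0])
        if abs(best_fit - new_fit) < tol:         # < 0.10 MCM change: terminate
            return pop[0] if new_fit < best_fit else best
        if new_fit < best_fit:
            best, best_fit = pop[0], new_fit
```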
Illustrative Application.
The Ubolrat Basin is a branch of the Chi Basin located in northeastern Thailand (Figure 5). It has an area of about 3,282 km².
The average annual rainfall is 1,411 mm, and the mean annual temperature is 27 °C. The Phong River lies in the middle of the basin. The Ubolrat Dam was developed by building an earth-core rockfill dam across the Phong River with a height of 33 m and a crest length of 7,800 m. The normal storage capacity and the average annual inflow are 2,263 MCM and 2,478.591 MCM, respectively. The objectives of the Ubolrat Dam are irrigation, flood control, and industrial and domestic water supply. A schematic diagram of the Ubolrat Basin is shown in Figure 6.
The study used the CTSA in connection with a reservoir operation model to find optimal rule curves through the MATLAB toolbox.
The optimal rule curves can then be applied to an actual scenario, depending on whether they can cover every case or event that might occur. Thus, the HEC-4 model was used to create synthetic monthly inflow data as a set of 500 events, based on the actual historic monthly inflow record of the Ubolrat Reservoir (one event represents a 50-year period). Therefore, the monthly inflow data comprise 300,000 values (50 years × 12 months × 500 events). The synthetic inflow data were then used to assess the efficiency of the new rule curves, to compare them with the existing rule curves, and to compare the CTSA and CGA models under the same conditions. Furthermore, the new rule curves obtained from the CTSA and CGA models were evaluated for the future situation of the B2 scenario [24]. The future inflows to the reservoir were created by the SWAT model considering climate change [25-28].
Results and Discussion
The inflow calculated by SWAT was compared with the data from the observed stations during the periods of model calibration and validation; R², RE, and E_ns were satisfactory, with acceptable deviation, as presented in Table 3; the goodness of fit is depicted in Figure 7.
The inflow at the Ubolrat Reservoir station simulated by SWAT was divided into 2 phases: (1) the baseline inflow, based on the climate and spatial data recorded during 1997-2014, and (2) the future inflow, using the climate data from the PRECIS model for 2015-2064. The inflow analysis indicates that the average volume of the baseline inflow was 2,736 MCM and the average volume of the future inflow was 4,580.5 MCM. Comparing these two volumes, the future inflow increases (by 1,844.5 MCM, or 40.3%, over the 50 future years). Figure 8 illustrates the annual inflow simulated by SWAT during 2015-2064, and Figure 9 depicts the comparison between the average baseline inflow and the average 10-year future inflows, which show an increasing trend in the future period, mostly above 4,000 MCM except in the third period. This result shows that the future inflow into the Ubolrat Reservoir increases in volume.
Optimal Historic Rule Curves.
The historic data of inflow, evaporation, water requirement, and monthly rainfall
were imported into the CGA connected to the simulation model and into the CTSA model, and the optimal rule curves were obtained. These rule curves are plotted for comparison with the existing rule curves, as shown in Figure 10, which indicates the optimal upper and lower rule curves of the CTSA (RC4) compared with the existing rule curves (RC1) and the rule curves obtained using the CGA (RC2). The results show that the patterns of the existing rule curves and the new rule curves obtained from the CTSA and CGA are similar. The obtained rule curves also indicate that the water storage levels of the CTSA and CGA lower rule curves are lower than those of the existing rule curves during the dry season (February-June) in order to release more water and reduce water scarcity. In the middle of the rainy season (August-October), the CTSA and CGA upper curves are higher than the existing rule curves in order to increase the water storage for the next dry season. This will help alleviate water shortages in the next year.
These patterns of the obtained curves are similar to the patterns of other reservoirs in Thailand reported in other studies [7,10] because of the seasonal effect.
Optimal Future Rule Curves.
To find the future rule curves, the average monthly inflow for the future period 2015-2064 under the B2 scenario [27] was imported into the CGA and CTSA models, and the optimal future rule curves were obtained. Figure 7 shows the optimal upper and lower rule curves of the CGA (RC3) and CTSA (RC5) compared with the existing rule curves (RC1). The results show that the patterns of the existing rule curves and the new future rule curves obtained from the CGA and CTSA are similar. The storage capacities of the upper rule curves of the CGA (RC3) and CTSA (RC5) were higher than those of the existing upper rule curves, to reduce the spilled water and to keep the storage capacity full at the end of the rainy season.
This will help prevent water shortages in the following year. In contrast, during the dry season (February to June), the storage capacities of RC3 and RC5 were lower than those of the existing rule curves in order to release more water and alleviate water shortages. These rule curve patterns are similar to those of previous studies on reservoir rule curves in Thailand, which were affected by the seasons [7,10]. The evaluation of the new historic rule curves and future rule curves generated by the CGA and CTSA models aimed to determine the performance of the rule curves with the synthetic historic inflow of 500 samples and the future inflows (B2 scenario), as shown in Tables 4-7. Table 4 shows the water shortage and excess release situations of the systems when considering the historic inflow. It indicates that the magnitudes of water shortage and excess release when using the CGA and CTSA rule curves are less than those when using the existing rule curves (402.633 and 455.776 million cubic meters (MCM)/year for the average water shortage of the CGA and CTSA, respectively). However, the frequency and duration of water shortage and excess release when using the CGA and CTSA rule curves are higher than those when using the existing rule curves.
Tables 5 and 6 show the efficiency of the five rule curves for the water shortage and excess release situations, considering the synthetic historic inflow of 500 samples. They indicate that the water shortage and excess release situations when using the historic rule curves (RC2 and RC4) are less severe than when using the existing rule curves (RC1) and the future rule curves (RC3 and RC5).
In the case of the future situation (Table 7), the future rule curves (RC3 and RC5) showed the best performance, as indicated by the frequency of water shortage and the average and maximum magnitudes of the water shortages. The future rule curves are more suitable for future situations than the existing rule curves and the historic rule curves. It can be concluded that rule curves created using the specific inflow periods will be the most suitable. The proposed CTSA model is another optimal search technique, so its results are near-optimal and close to the results of other search techniques under the same conditions. The efficiency of each technique has been examined in many studies [7,10].
Conclusion
This study proposed an alternative algorithm for searching optimal reservoir rule curves. The conditional tabu search algorithm (CTSA) and a reservoir simulation model were applied to search for the optimal rule curves of the Ubolrat Reservoir under the historic monthly inflow and the future inflow under scenario B2.
The future inflow and synthetic inflow data were used to simulate the reservoir system and evaluate the water shortage and excess release situations. The results show that the new rule curves obtained from the CTSA are more suitable for reservoir operation than the existing rule curves. The frequency and magnitude of water shortage and excess water release when using the new rule curves are lower than those of the existing rule curves. When comparing the new rule curves from the CTSA with the rule curves of the CGA method under the same simulation conditions, it was found that these rule curves are similar. The proposed CTSA model is an effective method for finding optimal reservoir rule curves.
This reveals that the CTSA and GA models with future inflow are effective methods for searching optimal reservoir rule curves suitable for future situations.
Figure 1: Model setup processes for future inflow.
Figure 3: The CGA and reservoir simulation model for searching rule curves.
Figure 4: Applying tabu search and reservoir simulation for searching rule curves.
3.1. Future Inflow into the Ubolrat Reservoir. The evaluation of SWAT accuracy used the data recorded during 1997-2014 (18 years; 1997-2008 for calibration and 2009-2014 for validation) for the Ubolrat Reservoir station. In practice, 8 parameter values were selected and adjusted so that the simulated inflow volume closely matched the data from the observed station, as presented in Table 2.
Figure 7: The comparison between the runoff from the observed stations and the SWAT result.
Table 1: Spatial data and observed inflow data for SWAT performance evaluation.
Figure 2: Existing standard operating policy for reservoir rule curve operation, based on the monthly rule curves of individual reservoirs and the water balance equation of the reservoir simulation model.
Table 3: SWAT performance evaluation index.
Table 4: Situations of water shortage and excess release of the systems using historic inflow.
Table 5: Situations of water shortage of the systems using synthetic inflow from historic data.
Table 6: Situations of excess water release of the systems using synthetic inflow from historic data.
Table 7: Situations of water shortage and excess release of the systems using future inflow. | 2019-01-08T14:15:02.862Z | 2018-02-11T00:00:00.000 | {
"year": 2018,
"sha1": "2c42de3c78135ea33c2520e8de9c7c35970b26f6",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ace/2018/6474870.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2c42de3c78135ea33c2520e8de9c7c35970b26f6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
265986195 | pes2o/s2orc | v3-fos-license | Psychometric Properties of the Turkish Version of the Children’s Saving Inventory in a Clinical Sample
Objective: The Children’s Saving Inventory (CSI) is a measurement tool developed to assess hoarding behavior in children. This study aims to investigate the psychometric properties of the Turkish version of the CSI in a clinical sample of children and adolescents. Materials and Methods: The study sample consisted of 52 children and adolescents diagnosed with obsessive-compulsive disorder in the 8-17 age group and their families. As a structured diagnostic interview, the Development and Well-Being Assessment (DAWBA) was applied to all participants included in the research. Hoarding disorder (HD) diagnosis was made clinically by considering the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) diagnostic criteria. The Children’s Yale-Brown Obsessive–Compulsive Scale Symptom Checklist (CY-BOCS) was administered by an experienced clinician. The parents and children filled out the Obsessive-Compulsive Inventory—Child Version (OCI-CV) and CSI scales independently. Results: The 20-item CSI Turkish version demonstrated good internal consistency. This 4-factor structure of the scale was confirmed by confirmatory factor analysis. Children’s Saving Inventory showed convergent and discriminant validity with the OCI-CV and CY-BOCS subscales, and the higher CSI total scores in children and adolescents diagnosed with HD confirmed the construct validity. Conclusion: These findings support the use of the CSI Turkish version as a valid and reliable scale to investigate the hoarding behavior of children and adolescents in a clinical sample. In addition, the CSI Turkish version is currently the only validated instrument to evaluate hoarding behavior in children and adolescents, as rated by parents in Türkiye.
Introduction
Hoarding disorder (HD) is a condition in which individuals experience difficulty getting rid of possessions and form strong attachments to them, leading to distress when faced with the prospect of discarding them. 1 While hoarding was previously considered a symptom of obsessive-compulsive disorder (OCD), HD is now recognized as a separate disorder within the OCD and Related Disorders category in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). 2 Research indicates that HD typically emerges between the ages of 11 and 15, 3 although there are limited data available for childhood and adolescence in the scientific literature. However, the number of studies on this age group has been increasing since the inclusion of HD in the DSM-5. The estimated prevalence of HD in children and adolescents is 0.98%. 4 It is emphasized that the early diagnosis and treatment of HD is important in terms of preventing the future recurrence of problems as well as increasing functionality in adult life. 5 Evidence-based assessments encourage the routine use of standard tools for the screening, diagnosis, and follow-up of psychiatric disorders in young people. 6 Self-report questionnaires are widely used as an important source of information in daily practice since they help children and adolescents report their feelings, thoughts, and behaviors. Because hoarding was long considered a subset of OCD, hoarding behavior is assessed using 2 items related to hoarding obsessions/compulsions on the Children's Yale-Brown Obsessive-Compulsive Scale Symptom Checklist (CY-BOCS) 7 and several items on the Obsessive-Compulsive Inventory-Child Version (OCI-CV). 8 The recognition of HD as a separate disorder and the limited ability of OCD scales to measure hoarding behaviors led to the development of a specific scale for measuring hoarding in children and adolescents. Storch et al 9 (2011) created the Children's Saving Inventory (CSI), a parent-rated measure of a child's level of hoarding behavior. The CSI is the first and only tool designed to assess hoarding behavior in children.
The CSI is based on the Saving Inventory-Revised (SI-R), a well-established adult self-report scale with strong psychometric properties. 10 The items of the SI-R were revised to be appropriate for children and to be completed by parents. The psychometric properties of the CSI were tested on American 9 and Canadian 11 children and adolescents diagnosed with OCD. While the scale developers assessed the psychometric characteristics of the original 4-factor, 23-item scale in the American context, the Canadian team assessed the revised 3-factor, 15-item version, which excluded the factor assessing clutter. However, no validity or reliability study has yet been conducted for this tool in any language other than English. The lack of valid and reliable tools in non-English-speaking countries limits the ability to screen, diagnose, treat, and follow up on psychiatric disorders. Even when shown to be sufficient in one culture, these diagnostic instruments are not guaranteed to be valid or reliable in another culture. 12 There are differences between cultures in the demographic characteristics, severity of symptoms, and comorbid psychiatric disorders of hoarding. 13 Therefore, it is critical to translate the CSI parent version into a language other than English and to test it in a non-English-speaking culture.
This study aims to investigate the psychometric characteristics of the Turkish CSI rated by parents in a clinical sample of children and adolescents in the 8-17 age group.We hypothesized that the results of the study would support the validity and reliability of the original 4-factor CSI in Turkish society.
Procedure and Participants
Necessary written permission and ethical approval for the study were obtained from the Ethics Committee of Atatürk University (B.30.2.0.01.00/50-01/29). In addition, permission was obtained electronically from the authors of the scale. After obtaining the necessary permissions, the items of the CSI were translated into Turkish separately by the present authors (fluent English speakers); the differences in the translated questionnaire were then checked at a meeting. The inconsistencies were then examined by another researcher (a native English speaker) who was blind to the original items. The final version was accepted by all team members. The Turkish translation was translated back into English by a clinician who was fluent in both languages, and the back-translation was submitted to the author who granted permission to use the original CSI. The participants comprised 52 children and adolescents aged 8-17 and their families. The children and adolescents had been admitted to our outpatient clinic, where they were diagnosed with OCD for the first time. Those who read the informed consent form, volunteered to participate in the research, and signed the written consent form were included in the study. As a structured diagnostic interview, the Development and Well-Being Assessment (DAWBA) was applied to all participants included in the research. Obsessive-compulsive disorder diagnoses and comorbid psychiatric diagnoses were made using the DAWBA. Children and adolescents with a primary diagnosis of OCD were included in the study. The primary diagnosis was determined by the clinicians based on the psychiatric interviews and the tools used, according to the clinical manifestation that caused the most distress and deterioration in functionality. In addition, clinical interviews were conducted to assess the DSM-5 diagnostic criteria, and the DSM-5 diagnostic criteria were endorsed. The findings and materials obtained from the psychiatric interview were reviewed by an experienced child and adolescent psychiatrist. Each criterion and specifier of HD was endorsed. Whether it is appropriate to use diagnostic criterion C in children and adolescents, and whether it should be adapted because of its strictness, remain controversial issues. Therefore, we neither waived this criterion nor disregarded parental intervention when confirming criterion C; criterion C was validated based on the verbal statement of the parent. At the end of all procedures, HD in children and adolescents was diagnosed based on the DSM-5 diagnostic criteria. Participants who had received or were already receiving treatment were excluded from the research because of the potential effects of treatment on the diagnosis and clinical presentation of both HD and OCD. The CY-BOCS was applied by an experienced clinician. The parents, children, and adolescents completed the OCI-CV and CSI scales independently.
Sociodemographic Data Form
The form prepared within the scope of the study was used to collect demographic information (e.g., age, gender, grade, duration of maternal and paternal education, income status, etc.) about the children and their parents.
Development and Well-Being Assessment
This is a diagnostic tool used to assess psychiatric disorders in children and adolescents aged 2-17 years. It uses both the 10th edition of the International Classification of Diseases (ICD-10) and the DSM IV-V classifications. The DAWBA is composed of 3 parts: a structured interview for parents, a structured interview for young people aged 11-17, and a questionnaire for teachers. The interviews can be conducted through a written interview text or a computer application and can also be completed by parents, young people, and teachers themselves without the need for an interviewer. Unlike other interviewer-based formats, the DAWBA includes open-ended questions in each section. This allows for a more accurate evaluation of symptoms and loss of functionality. 14 The Turkish version of the DAWBA was developed by Dursun et al 15 in 2013.
Children's Saving Inventory
The CSI is a parent-rated tool developed by Storch et al 9 to measure the frequency and severity of hoarding symptoms in children aged 8-17 years with OCD. The scale items range from 0 to 4, with higher scores indicating more severe hoarding symptoms. The original scale consisted of 23 items, which were tested on 123 children and adolescents with OCD and their parents. Three items were removed from the analyses due to low correlations and factor loadings. The 23-item scale was found to be valid and reliable in the United States, with internal consistency coefficients of r = .84-.96. In this study, the original 23-item version of the scale was used to assess the suitability of the scale for use in the Turkish population.
Children's Yale-Brown Obsessive-Compulsive Scale
This is a semi-structured interview form used to assess the reported severity of obsessive-compulsive symptoms. 7
Main Points
• This is the first and only study to test the validity and reliability of the Children's Saving Inventory (CSI) in another culture.
• The CSI Turkish version demonstrated good internal consistency.
• This four-factor structure of the scale was confirmed by confirmatory factor analysis.
• The CSI Turkish version was found to be a valid and reliable scale for use in a clinical sample.
Obsessions and compulsions are scored in 5 subscales (scored from 0 to 4 points per item) to calculate the CY-BOCS obsession score (0-20 points), the CY-BOCS compulsion score (0-20 points), and the CY-BOCS total score (0-40 points). The Turkish reliability study of the scale was conducted in 2006 by Yücelen et al. 16
Obsessive-Compulsive Inventory-Child Version
This is a self-reported Likert-type instrument used to measure the level of obsessive-compulsive symptoms in children and adolescents. 8 The scale consists of 6 subscales: doubt/checking, obsessions, neutralizing, washing, ordering, and hoarding. Each item of the 21-item scale is scored from 0 to 4 points, with increasing scores in each subscale indicating more prominent OCD symptoms. The Turkish validity and reliability study of the scale was carried out in 2014 by Seçer. 17
Statistical Analysis
The Statistical Package for the Social Sciences version 24.0 (IBM SPSS Corp.; Armonk, NY, USA) program was used to analyze the data obtained in the study.The Kolmogorov-Smirnov test was used to test the normality of the distribution of continuous variables.An independent sample t-test was used for between-gender and between-group (HD/non-HD) comparisons.Continuous variables were presented using means and standard deviations (mean ± SD), and the categorical variables were presented as numbers and percentages (n, %).
The internal consistency of the CSI total and subscale scores was evaluated by examining Cronbach's alpha coefficients, item-total correlations, and Cronbach's alpha if the item was deleted. For acceptable reliability, alpha values should be in the range of 0.70-0.95. 18,19 One method to assess whether items need to be removed from a scale to improve its alpha coefficient is to calculate the adjusted item-total correlation and remove items with a low (≤0.30) correlation. 19,20 To achieve adequate item-total correlation values for the Turkish CSI scale, we used a cut-off value of 0.30, which is the generally accepted cut-off value for the item removal criterion.
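As an illustration of these reliability criteria, a short sketch in Python (the helper functions are illustrative, not part of the study's analysis pipeline; `X` is a respondents-by-items score matrix):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(X):
    """Corrected item-total correlation: each item vs. the sum of the others."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Items with corrected item-total correlation <= 0.30 are candidates for removal.
```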
Confirmatory factor analysis (CFA) was performed using AMOS 24.0 software to test the model fit of the original 23-item CSI and of the 20-item Turkish CSI, which was developed by removing items that did not meet the criteria.
To assess fit, various fit indices were investigated, including the model χ², df, and P values, the root mean square error of approximation (RMSEA; 0.05-0.08 = adequate fit, values <0.05 = good fit), 21 the Comparative Fit Index (CFI; 0.90-0.95 = adequate fit, ≥0.95 = good fit), and the Tucker-Lewis Index (TLI; 0.90-0.95 = adequate fit, ≥0.95 = good fit). 22 In addition to examining model fit indices to obtain the most appropriate model, item factor loadings were also examined to identify items with poor fit. A threshold value of 0.40 was used to include items on the scale. 23 As a result, items with low factor loadings and low item-total correlations were removed.
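The fit indices cited above are not defined in the text; their standard formulas, in terms of the model (M) and baseline (B) chi-square statistics, their degrees of freedom, and the sample size N, are:

$$\mathrm{RMSEA}=\sqrt{\frac{\max(\chi^2_M-df_M,\,0)}{df_M\,(N-1)}},\qquad \mathrm{CFI}=1-\frac{\max(\chi^2_M-df_M,\,0)}{\max(\chi^2_B-df_B,\;\chi^2_M-df_M,\;0)},$$

$$\mathrm{TLI}=\frac{\chi^2_B/df_B-\chi^2_M/df_M}{\chi^2_B/df_B-1}.$$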
The validity of the Turkish version of the CSI was evaluated using Pearson's correlation analysis for continuous variables. The correlations between the CSI total and its subscales (discarding, clutter, acquisition, and distress), and between the CSI total, the CY-BOCS (obsession, compulsion, and total score), and the OCI-CV (doubt/checking, obsessions, neutralizing, washing, ordering, hoarding, and total score), were examined. In addition, the differences between the CSI total and subscale scores of the participants with and without HD diagnoses in the clinical interview were compared to test the construct validity of the CSI. A significance level of P < .05 was used to indicate statistical significance.
Confirmatory Factor Analysis
The model fit of the original 23-item CSI was moderately good (RMSEA = 0.077, TLI = 0.87, and CFI = 0.89). All factor loadings, except for 3 items, were positive and significant (P < .001). The factor loadings of Item 2 (regression weight = 0.05), Item 3 (regression weight = 0.27), and Item 4 (regression weight = 0.14) were not significant (P = .735, P = .094, and P = .315, respectively). These 3 items were removed from the scale due to low item-total correlations in the reliability analysis and low factor loadings in the CFA. Once they were removed, the analysis was repeated. The 20-item Turkish CSI was found to have an acceptable goodness of fit (RMSEA = 0.072, TLI = 0.91, CFI = 0.92). The chi-square test results and fit statistics are shown in Table 2.
Face Validity
Descriptive statistics and correlations for the CSI total and subscale scores are presented in Table 3.All CSI scores (total, discarding, clutter, acquisition, and distress/impairment) had moderate to high correlations with each other (all P < .001).
Convergent Validity
Correlations between CSI scores and the hoarding subscale scores on the other scales used suggest acceptable convergent validity.The CSI total score was strongly correlated with the hoarding subscale of the OCI-CV (r = 0.648, P < .001).
Discriminant Validity
Low and moderate correlations were found between the CSI total score and the CY-BOCS total score (r = 0.287, P < .05), and between the CSI total score and the OCI-CV total score (r = 0.486, P < .001). In addition, there was no significant relationship between the CSI total score and the CY-BOCS and OCI-CV subscales (CY-BOCS: obsession and compulsion; OCI-CV: doubt/checking, obsessions, washing, ordering, and neutralizing; P > .05). The correlations between the CSI total score and the OCD scales are shown in Table 4.
Construct Validity
To test construct validity, the CSI total and subscale scores of the participants with (n = 7) and without (n = 45) HD diagnoses, according to the clinical interviews, were compared. A statistically significant difference was found between the groups with and without an HD diagnosis in CSI total scores (46.43 vs. 20.89, P < .001). Statistically significant differences were also found between the groups with and without an HD diagnosis in the CSI discarding (17.29 vs. 6.22), CSI clutter (7.29 vs. 2.16), CSI acquisition (11.71 vs. 6.33), and CSI distress/impairment (10.14 vs. 6.18) subscales (P < .05).
Discussion
To the best of our knowledge, this is the first and only study to test the validity and reliability of the CSI, the first and only measurement tool developed to evaluate hoarding symptoms in children, in another culture. In general, the Turkish version of the CSI was found to be a valid and reliable scale for use with children and adolescents in a clinical sample.
The scale developers excluded 2 items (Item 2 and Item 4) of the original 23-item CSI from their analysis due to low item-total correlations, and 1 item (Item 11) due to insufficient factor loading in the explanatory factor analysis. 9 In the analysis performed in our study, Item 2 (How much control does your child have over his/her urges to acquire possessions that s/he does not need?) and Item 4 (How much control does your child have over his/her urges to save possessions that s/he does not need?) were removed from the scale due to low item-total correlations and low factor loadings in the CFA. However, Item 11 (To what extent does attachment to things interfere with your child's functioning at school, at home, or with friends?), which did not fit well in the US population, was retained on the scale since it showed good fit in the Turkish population. On the contrary, Item 3 (How much time do you spend dealing with your child's possessions (e.g., organizing, discarding, arranging)?), which fit well in the US population, was removed from the scale due to insufficient fit in the Turkish population. Thus, the 20-item Turkish CSI was obtained. The 4-factor Turkish CSI-20 was found to have an acceptable goodness of fit, and this result confirmed the 4-factor structure in the Turkish population. Although we do not argue that the original 23-item CSI does not fit, the Turkish CSI-20 fits Turkish society better. Considered together, these findings offer a good example of how scale items translated into other languages and tested in different cultures may not show similar fit.
Determining the internal consistency of a scale is one of the most important steps in determining whether the scale is reliable, by evaluating item-total correlations. 24 One method of measuring a scale's internal consistency is the calculation of Cronbach's alpha coefficient. 18 The Cronbach's alpha value in this study was found to be 0.93. In addition to the reliability analyses, the validity analyses show that the Turkish CSI-20 has adequate validity. First, the mutual correlations of the CSI total and subscale scores were examined for face validity. All CSI scores (total, discarding, clutter, acquisition, and distress/impairment) had moderate to high correlations with each other. Second, the strong correlation between the CSI total score and the OCI-CV hoarding subscale supports acceptable convergent validity. Third, low and medium correlations were found between the CSI total score and the CY-BOCS and OCI-CV total scores. Since the study sample included children with OCD, this finding is expected. No significant correlations were found between the CSI total score and the other CY-BOCS subscales (obsession and compulsion) or the OCI-CV subscales (doubt/checking, obsessions, washing, ordering, and neutralizing). This, in turn, indicates the discriminant validity of the CSI-20. Finally, to test construct validity, the CSI total and subscale scores of participants with and without HD diagnoses were compared. In this study, the HD diagnostic evaluation was based on a combined evaluation of data collected through clinical interviews with both the children and their parents. A statistically significant difference in the CSI total and 4 subscale scores (discarding, clutter, acquisition, and distress/impairment) was found between the groups with and without an HD diagnosis. This result provides evidence that the CSI can be used to distinguish between those with and without hoarding behavior.
This study has some limitations despite its strengths. One limitation is that, similar to the original CSI, the generalizability of the results may be restricted because the study only included children and adolescents with OCD. Therefore, testing the applicability in community samples and in a clinical sample of young people diagnosed with HD would be useful for future studies. Second, the relatively small sample size (n = 52) might be considered a limitation. Third, since the CSI is a parent-rated scale, there is a need for a child version to compare parent-child self-reports. It is unclear how effective a child version of the questionnaire would be for this age group, because children and adolescents have limited control and resources in their home environment, and parental involvement in their living spaces could affect the nature, features, and consequences of hoarding behavior. These potential effects will be explored in future studies. Fourth, the authors did not conduct test-retest analyses by repeating the questionnaire administration a few weeks after the first administration, which might also be listed among the limitations of this study.
In conclusion, this study investigated the psychometric properties of the Turkish version of a parent-rated scale for the hoarding behavior of children and adolescents in a clinical sample. The 20-item CSI Turkish version demonstrated good internal consistency for both the total score and the factor scores. This 4-factor structure of the scale was confirmed by CFA. The Children's Saving Inventory showed convergent and discriminant validity with the OCI-CV and CY-BOCS subscales, and the higher CSI total scores in children and adolescents diagnosed with HD confirmed the construct validity. As a result, our findings support the idea that the Turkish version of the CSI can be used as a valid and reliable measurement tool to assess the hoarding symptoms of children and adolescents in a clinical sample. We also believe that this study will lead to further validity and reliability studies of the CSI in cultures and languages other than English.
Table 1.
Reliability. The Cronbach's alpha reliability of the 23-item original CSI was 0.91. Three items showed low item-total correlations (0.12, 0.21, and 0.10, respectively); they also showed low factor loadings in the CFA (see CFA). After removing these items, the analysis was repeated. The internal consistency of the Turkish 20-item CSI was excellent for the total score (Cronbach's α = 0.93) and the subscales of discarding (α = 0.89) and clutter (α = 0.78),
Table 2. Fit Statistics for the Confirmatory Factor Analytic Models
Table 3. Children's Saving Inventory Total and Subscale Scores: Correlations and Descriptive Statistics. **P < .001.
Table 4. Correlations Between CSI Total Score and OCD Scales | 2023-10-28T15:19:09.937Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "e0341e02294e78afbb3cce887620c1e7c7cf0c22",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "518ae08a42ded90e3a4479062f50ef649358448e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16573131 | pes2o/s2orc | v3-fos-license | Breakdown of Conventional Factorization for Isolated Photon Cross Sections
Using $e^+e^-\rightarrow\gamma + X$ as an example, we show that the conventional factorization theorem in perturbative quantum chromodynamics breaks down for isolated photon cross sections in a well defined part of phase space. Implications and physical consequences are discussed.
High energy photons have long been considered an excellent probe of short-distance physics in strong interactions. They couple directly to pointlike quark constituents and do not interact much once produced. [1] Photons can also result from long-distance fragmentation of quarks and gluons, themselves produced in short-distance hard collisions. Consequently, the inclusive photon cross section at high energy includes both short-distance direct and long-distance fragmentation contributions, and the cross section is not completely perturbative. Nevertheless, in accord with the factorization theorem of perturbative quantum chromodynamics (QCD) [2], all long-distance physics associated with parton-to-photon fragmentation can be represented by non-perturbative, but well-defined and universal photon fragmentation functions, and the remainder of the theoretical expression for the cross section, calculable in QCD perturbation theory, is insensitive to the infrared region of the theory.
However, for observational reasons the inclusive cross section may not be measurable at high energy. Owing to backgrounds from, e.g., $\pi^0\to\gamma\gamma$, a single high energy photon is observed and the cross section is measured only when the photon is relatively isolated. In the experimental definition of isolation, a cone of half-angle $\delta$ is drawn about the direction of the photon's momentum, and the cross section is measured for photons accompanied by less than a specified amount of hadronic energy in the cone, e.g., $E^{\rm cone}_{h}\le E_{\rm max}$. Because of isolation, the experimental cross section for isolated photons depends explicitly on the isolation parameters $\delta$ and $E_{\rm max}$.
A proper theoretical treatment of the cross section for isolated photons requires careful consideration of the origins and cancellation of both infrared and collinear singularities in QCD perturbation theory. In a theoretical calculation, isolation of the photon restricts the final-state phase space accessible to accompanying quarks and gluons. In this Letter, using e + e − → γX as an example, we demonstrate that this phase space restriction inevitably breaks the perfect cancellation of infrared singularities between real gluon emission and virtual gluon exchange diagrams that is required to yield finite cross sections in each perturbative order.
Breakdown of the cancellation of infrared singularities appears first at next-to-leading order in the fragmentation contributions. The associated physics can be summarized as follows. In the fragmentation contribution, sketched in Fig. 1, hadronic energy in the isolation cone has two sources: a) energy from parton fragmentation, $E_{\rm frag}$, and b) energy from nonfragmenting final-state partons, $E^{\rm cone}_{\rm partons}$, that enter the cone. When the maximum hadronic energy allowed in the isolation cone is saturated by the fragmentation energy, $E_{\rm max}=E_{\rm frag}$, there is no allowance for energy in the cone from other final-state partons. In particular, if there is a gluon in the final state, the phase space for this gluon becomes restricted. By contrast, isolation does not affect the virtual gluon exchange contribution. Therefore, in the isolated case, there is a possibility that the infrared singularity from the virtual contribution may not be cancelled completely by the restricted real contribution. In the remainder of this Letter, we show that this is indeed the case, and we explore the implications.
The cross section for the inclusive yield of high energy photons in hadronic final states of e + e − annihilation is well-defined, in accord with the factorization theorem of perturbative QCD. [3] For isolated photons, we first assume that the factorization theorem holds; we then follow standard procedures to calculate the short-distance partonic hard parts perturbatively.
Finally, we demonstrate that some of the hard parts have uncanceled infrared singularities.
If conventional factorization were true, the cross section for isolated photons would be expressed in the factorized form of Eq. (1). In Eq. (1), the sum extends over $c=\gamma, q, \bar q$ and $g$; $D_{c\to\gamma}(z,\delta)$ is the nonperturbative function that describes fragmentation of parton $c$ into a photon. Fragmentation is assumed theoretically to be a collinear process. The lower limit of the $z$-integration results from the isolation requirement, with the assumption that all fragmentation energy is in the isolation cone. [4] For the short-distance partonic hard parts, $E_c\,d\sigma^{\rm iso}_{e^+e^-\to cX}/d^3p_c$, the isolation requirement is that the energy carried into the cone by non-fragmenting partons (i.e., partons other than $c$) should satisfy $E^{\rm cone}_{\rm partons}\le E_{\rm max}-E_{\rm frag}$.
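The displayed equation itself did not survive extraction; a reconstruction consistent with the definitions given in the text (a sum over fragmenting partons $c$, a convolution with $D_{c\to\gamma}(z,\delta)$, and a lower limit fixed by requiring the fragmentation energy $E_{\rm frag}=(1/z-1)E_\gamma$ not to exceed $E_{\rm max}=\epsilon_h E_\gamma$) reads:

$$E_\gamma\frac{d\sigma^{\,\rm iso}_{e^+e^-\to\gamma X}}{d^3p_\gamma}
=\sum_{c}\int_{1/(1+\epsilon_h)}^{1}\frac{dz}{z^2}\,D_{c\to\gamma}(z,\delta)\,
E_c\frac{d\sigma^{\,\rm iso}_{e^+e^-\to cX}}{d^3p_c}\Bigg|_{E_c=E_\gamma/z}.\tag{1}$$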
The derivation of all other contributions is found in Ref. [4].
To calculate the one-loop quark fragmentation contribution, we consider $e^+e^-\to qX$ and apply Eq. (1) perturbatively to first order in $\alpha_s$. We derive

$$\hat\sigma^{(1)\,\rm iso}_{e^+e^-\to qX}
=\sigma^{(1)\,\rm iso}_{e^+e^-\to qX}
-\hat\sigma^{(0)}_{e^+e^-\to qX}\otimes D^{(1)}_{q\to q}.\tag{2}$$

The convolution over $z'$ in Eq. (2) is the same as that in Eq. (1) but with $z$ replaced by $z'$, and the lower limit replaced by $\max[\dots]$. Here $\hat\sigma^{(0)}_{e^+e^-\to qX}$ is the zeroth order hard part obtained from the lowest order Feynman diagram for $e^+e^-\to q\bar q$, and $D^{(1)}_{q\to q}$ is the singular first order quark-to-quark fragmentation function. [3] On the right side of Eq. (2), the first order isolated partonic cross section, $\sigma^{(1)\,\rm iso}_{e^+e^-\to qX}$, has both real gluon emission and virtual gluon exchange contributions. The real contributions are obtained from the three-body final-state tree diagrams $e^+e^-\to q\bar q g$, and the virtual contributions from the one-loop interference diagrams $e^+e^-\to q\bar q$. Integration over the phase space of the gluon from $e^+e^-\to q\bar q g$ yields both infrared (when $E_g\to 0$) and collinear (when $g$ is parallel to the observed $q$) divergences. If factorization is valid, the infrared divergence will be cancelled by the infrared divergence from the virtual diagrams, and the collinear divergence will be cancelled by the singular second term on the right side of Eq. (2).
Our calculation of $\sigma^{(1)\,\rm iso}_{e^+e^-\to qX}$ proceeds as follows. For the real gluon emission ("R") contribution we write the cross section as an integral of the squared matrix element over the three-particle phase space [4]. The constant $C$ is an overall color factor, $e$ is the electric charge, and the normalization factor $(2/s)F^{PC}_q(s)$ includes contributions from both $\gamma^*$ and $Z^0$ intermediate states. We use dimensional regularization with $n=4-2\epsilon$. Letting $p_1$, $p_2$ and $p_3$ label the momenta of the $q$, $\bar q$ and $g$, respectively, we derive the squared matrix element $H$.
Here $x_1$ is as in Eq. (2), $y_{13}=2p_1\cdot p_3/s$, and $\theta_1$ is the angle between the "observed" quark and the $e^+e^-$ beam axis. An overall coupling constant $(e\mu^\epsilon)^2(g\mu^\epsilon)^2$ is omitted in Eq. (4). The three-particle phase space element follows in Eq. (5). In the isolated photon situation, isolation splits the integration over $\hat y_{13}$ into three regions, Eq. (6). The quantities $\bar y_c$ and $\bar y_m$ are given in Eq. (7), with $z=x_\gamma/x_1$. For the first interval in Eq. (6), the condition $0\le\hat y_{13}\le\bar y_c$ ensures that a gluon is in the isolation cone of the fragmenting quark, and $\hat y_{13}\le\min[\bar y_c,\bar y_m]$ ensures that the total hadronic energy in the isolation cone is less than $E_{\rm max}=\epsilon_h E_\gamma$. Similarly, the condition $\max[(1-\bar y_c),(1-\bar y_m)]\le\hat y_{13}\le 1$ for the third interval ensures that the antiquark is in the isolation cone of the fragmenting quark, and that the total hadronic energy in the isolation cone is less than $E_{\rm max}$. The second interval represents the situation when neither the gluon nor the antiquark is in the isolation cone.
Equations (6) and (7) show that the isolated cross section is identical to the inclusive cross section if $\delta=0$ or $\bar y_c\le\bar y_m$. However, if $\bar y_c>\bar y_m$, the phase space of the final state gluon (and/or antiquark) is smaller.
We now examine in turn two separate situations, $x_\gamma\le 1/(1+\epsilon_h)$ and $x_\gamma>1/(1+\epsilon_h)$; both yield isolated photon partonic cross sections that are infrared sensitive. When $x_\gamma\le 1/(1+\epsilon_h)$, $\bar y_m\le 0$, and only the second interval in Eq. (6) survives. We reexpress it as Eq. (8), where $\bar y_c$ is expanded to order $\delta^2$. The first term on the right side of Eq. (8) is, by definition, the complete real contribution to the inclusive partonic cross section. When it is combined with the virtual contribution, as in the inclusive case, all pole terms cancel, except for one $1/\epsilon$ term due to the collinear singularity, discussed above. The second and third terms diminish the real contribution in the isolated case. The third term is innocuous in that it does not generate a $1/\epsilon$ singularity; it yields only terms which vanish as $\delta^2\to 0$. However, the second term generates a number of pole terms in $1/\epsilon$.
Neglecting all terms of $O(\delta^2)$ or higher, we obtain the isolated partonic cross section after including the real gluon contributions from all three terms in Eq. (8) and the virtual gluon contribution.
As explained above, the uncanceled poles in Eq. (9) come from the interval specified by the second term in Eq. (8). The singularities corresponding to these poles are infrared in nature and, as expected, are proportional to $\delta(1-x_1)$. These poles would be irrelevant if $x_1=1$.
Combining the real contribution from the first term in Eq. (10), the virtual contribution, and the subtraction term in Eq. (2), we obtain the complete partonic inclusive cross section.

First, we caution that the breakdown of perturbative factorization for the cross section of a physical process does not render the process useless. Factorization of the cross section for massive lepton-pair production, the Drell-Yan process, fails at order $1/Q^4$ [5,6]. Breakdown means that one may not factorize contributions proportional to $1/Q^4$ into infrared-safe short-distance hard parts times well-defined long-distance matrix elements. The terms proportional to $1/Q^4$ are still finite, and the Drell-Yan process remains a fine process for tests of QCD dynamics, as long as $Q^2$ is large enough.
In isolated photon production, when $x_\gamma\sim 1/(1+\epsilon_h)$, the breakdown of factorization demonstrated in this paper means that the cross section cannot be factored into a sum of terms, each having the form of an infrared-safe partonic hard part times a corresponding parton-to-photon fragmentation function. Nevertheless, the measured cross section is still well-behaved and finite. It becomes a task to show that one can still extract meaningful information from the data on isolated photons, even if factorization does not hold throughout phase space.
These are logarithmically divergent when $x_\gamma\to 1/(1+\epsilon_h)$, but if $x_\gamma$ is kept much smaller than $1/(1+\epsilon_h)$, the cross section for isolated photons in $e^+e^-$ annihilation is well-behaved.
The opportunity to extract good information on photon fragmentation functions is one of the reasons for the study of isolated photons in $e^+e^-$ annihilation. [7,8] Equation (1) indicates that a large value of $\epsilon_h$ allows a large range of values of $z$. On the other hand, a large value of $\epsilon_h$ leaves a small range of $x_\gamma$ in which a fixed-order analytical calculation can be trusted. Therefore, special care must be taken when data from $e^+e^-$ annihilation are used to extract photon fragmentation functions.
For production of isolated photons at hadron-hadron colliders, the physical cross section is obtained after an integration over the momentum fractions of the incoming partons. One is not free to impose a selection on $x_\gamma$ analogous to that in $e^+e^-$ annihilation, and the integration is done throughout the part of phase space where the breakdown of factorization discussed in this Letter takes place. It is therefore not altogether straightforward to specify the precise form and magnitude of the fragmentation contribution to isolated prompt photon production in hadron-hadron collisions. More discussion of this question will be found in Ref. [4].
Figure 1: The fragmenting parton deposits hadronic energy $E_{\rm frag}$ in the cone; in addition, a gluon enters the cone and fragments, giving hadronic energy $E_{\rm parton}$.
"year": 1995,
"sha1": "36c10e6ff2d213802e75926e8eb1557e7a1a8ff9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9512281",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "36c10e6ff2d213802e75926e8eb1557e7a1a8ff9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
254206035 | pes2o/s2orc | v3-fos-license | Carbon-Supported PdCu Alloy as Extraordinary Electrocatalysts for Methanol Electrooxidation in Alkaline Direct Methanol Fuel Cells
Palladium (Pd) nanostructures are highly active non-platinum anodic electrocatalysts in alkaline direct methanol fuel cells (DMFCs), and their electrocatalytic performance relies highly on their morphology and composition. This study reports the preparation, characterizations, and electrocatalytic properties of palladium-copper alloys loaded on the carbon support. XC-72 was used as a support, and hydrazine hydrate served as a reducing agent. PdxCuy/XC-72 nanoalloy catalysts were prepared in a one-step chemical reduction process with different ratios of Pd and Cu. A range of analytical techniques was used to characterize the microstructure and electronic properties of the catalysts, including transmission electron microscopy (TEM), X-ray diffractometry (XRD), X-ray photoelectron spectroscopy (XPS), and inductively coupled plasma emission spectroscopy (ICP-OES). Benefiting from excellent electronic structure, Pd3Cu2/XC-72 achieves higher mass activity enhancement and improves durability for MOR. Considering the simple synthesis, excellent activity, and long-term stability, PdxCuy/XC-72 anodic electrocatalysts will be highly promising in alkaline DMFCs.
Introduction
With the rapid development of human society, the demand for energy in all countries is increasing, and the accompanying environmental problems are becoming more serious. It has become an imperative development strategy to develop innovative, sustainable, and environmentally friendly energy conversion technologies and devices. Fortunately, Direct Methanol Fuel Cells (DMFCs), a new class of energy conversion device with a simple structure, high energy density, fast refueling, and environmental advantages, can directly convert the chemical energy stored in methanol into electricity, and they have attracted much attention in recent years [1-5]. At the same time, methanol is a relatively abundant product of the chemical industry, and the development of DMFCs is also considered to be key to achieving carbon neutrality [6-10]. However, developing an anode catalyst with excellent performance and low cost remains a challenge for the commercial use of DMFCs. Therefore, it is essential to design and prepare the anode catalyst of DMFCs scientifically [11].
Pt has been verified to be the best catalyst for DMFCs; however, the active sites of Pt are easily poisoned by the CO_ads produced during methanol oxidation, and the methanol oxidation activity on the Pt electrode surface consequently decreases. In order to improve the CO poisoning resistance of catalysts, research on DMFC anode catalysts has mainly focused on Pt and PtM alloys (M = Pd [12], Ru [13], Fe [14], Co [15], Ni [16], Cu [17], Sn [18], and Ag [19]). Some studies have shown that two mechanisms, the bifunctional mechanism and the electronic effect, account for the improved CO poisoning resistance of Pt-based alloy catalysts. According to the bifunctional mechanism, methanol undergoes adsorption and dehydrogenation on the Pt surface, and the introduced second metal or metal oxide promotes the dissociation of the electrolyte to form more OH_ads, which facilitates faster oxidative removal of the CO_ads adsorbed on Pt [20,21]. On the other hand, the second metal reduces the adsorption energy of CO_ads on the Pt surface by modifying the electronic properties of Pt, thus inhibiting the adsorption of CO_ads on the Pt surface.
In recent years, palladium-copper alloy nanoparticles have attracted much attention due to their excellent activity for the oxidation of small alcohol molecules and their ability to resist CO ads poisoning. The reason is that the d-band center of Pd shifts after alloy formation between Cu and Pd: the addition of copper reduces the binding energy of Pd n+ , while Pd increases the binding energy of Cu n+ , which weakens the adsorption of CO ads on the catalyst surface.
Recently, Ye et al. [28] reported a PdCu catalyst with a unique core-shell structure. In an alkaline medium, the catalyst has excellent catalytic performance for the methanol oxidation reaction (MOR), and the enhanced activity is ascribed to the adsorption strength of OH ads , which promotes the removal of adsorbed CO ads and increases the catalyst's resistance to CO ads poisoning. Shih et al. [23] studied porous PdCu NPs for the electrocatalytic oxidation of methanol, with excellent electrochemical activity, greater stability, and lower cost. They also showed that the copper content has a significant effect on the morphology and the catalytic activity toward the MOR in alkaline media, an important finding because it can help to optimize the performance of catalysts for various applications. Saleem et al. [29] prepared two-dimensional (2D) core-shell nanoplates exhibiting element segregation, which showed superior electrocatalytic activity and stability toward the MOR compared to the commercial Pt/C catalyst. Unfortunately, the complex synthesis process impedes their practical application in DMFCs. Therefore, it is urgent to design and synthesize efficient MOR catalysts using convenient, rapid, and eco-friendly synthetic methods.
Based on the above considerations, in this work we synthesized PdCu nanoalloys on a Vulcan XC-72 support by means of the liquid reduction method, which can produce alloy catalysts in a short time under mild conditions and meets the requirements of green chemistry. In addition, we synthesized a series of PdCu nanoalloys with different atomic ratios to study the influence of Cu addition on the electronic environment and lattice strain of the catalysts.
Preparation of Catalysts
The supported Pd x Cu y /XC-72 (x + y = 5) nanoalloy catalysts with different compositions were prepared by one-step chemical reduction, with hydrazine hydrate as the reducing agent, keeping the total metal loading at 30%. Typically, taking Pd 1 Cu 1 /XC-72 as an example, 50.0 mg XC-72 was accurately weighed and dispersed in 200 mL ethylene glycol by ultrasound for 30 min, and 2 mL hydrazine hydrate was added to the mixture under stirring. Then, a mixed precursor solution of 11.41 mL H 2 PdCl 4 (1 mg/mL) and 8.011 mg Cu(NO 3 ) 2 ·3H 2 O was added dropwise, and the mixture was reduced under stirring for two hours, followed by filtration, washing, and drying at 70 °C overnight to obtain Pd 1 Cu 1 /XC-72.
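For the other compositions, the precursor amounts follow from the target Pd:Cu atomic ratio and the fixed 30% total metal loading. The helper below is an illustrative sketch of this bookkeeping; it assumes the H 2 PdCl 4 stock concentration is expressed as mg of Pd per mL, and all function and variable names are ours rather than the authors':

```python
# A minimal sketch (not the authors' worksheet) of the precursor arithmetic
# for a Pd_xCu_y/XC-72 batch at fixed total metal loading.

M_PD = 106.42          # g/mol, Pd
M_CU = 63.55           # g/mol, Cu
M_CU_NITRATE = 241.60  # g/mol, Cu(NO3)2.3H2O

def precursor_amounts(x, y, support_mg=50.0, loading=0.30, pd_stock_mg_per_ml=1.0):
    """Return (mL of H2PdCl4 stock, mg of Cu(NO3)2.3H2O) for a Pd_xCu_y target."""
    # Total metal mass fixed by the loading: m / (m + support) = loading
    total_metal_mg = loading * support_mg / (1.0 - loading)
    # Split the metal mass according to the x:y atomic ratio
    mean_molar_mass = (x * M_PD + y * M_CU) / (x + y)
    total_mmol = total_metal_mg / mean_molar_mass
    pd_mg = total_mmol * x / (x + y) * M_PD
    cu_mg = total_mmol * y / (x + y) * M_CU
    pd_stock_ml = pd_mg / pd_stock_mg_per_ml
    cu_salt_mg = cu_mg * M_CU_NITRATE / M_CU  # scale Cu mass up to the hydrate salt
    return pd_stock_ml, cu_salt_mg

print(precursor_amounts(1, 1))  # e.g. Pd1Cu1 on 50 mg XC-72 at 30 wt% metal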
Physical Property Characterization
X-ray diffraction (XRD) analysis was conducted on a Bruker D8 Advance diffractometer (Germany) equipped with Cu Kα radiation as the X-ray source (λ = 0.15404 nm). Transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HRTEM) were performed on an FEI Talos F200x microscope (USA) at an accelerating voltage of 200 kV. Inductively coupled plasma optical emission spectroscopy (ICP-OES, Agilent 5110, USA) was used to measure the concentrations of the metal elements and thereby determine the chemical composition of the samples. To this end, a certain amount of sample was digested by microwave with aqua regia at 200 °C for 1 h; this process was repeated five times, until the solution was clear and transparent, to ensure that the solids were completely digested. The as-obtained solution was then brought to a constant volume of 25 mL and diluted 100 times. Moreover, in order to obtain electronic structure information of the catalyst surface, Pd x Cu y /XC-72 was analyzed by X-ray photoelectron spectroscopy (XPS, Thermo Scientific K-Alpha, USA). The as-synthesized materials were evaluated electrochemically using cyclic voltammetry (CV) and chronoamperometry (CA).
Electrochemical and Physical Characterization
The electrochemical performance of the electrocatalysts was evaluated on a CHI 660E workstation with a three-electrode system. A platinum electrode was the counter electrode, a saturated calomel electrode (SCE) was the reference electrode, and a glassy carbon electrode (GCE, 4.0 mm in diameter, 0.1256 cm²) coated with catalyst was the working electrode. The working electrode was prepared as follows: 2.0 mg of catalyst was accurately weighed, and 450 µL absolute ethanol, 50 µL 5% Nafion solution, and 500 µL ultrapure water were successively added to the catalyst. The mixture was then dispersed by ultrasound for 0.5 h to form a uniform ink. Next, 5 µL of the ink was dropped onto the surface of the glassy carbon electrode and allowed to dry naturally before testing. Cyclic voltammetry (CV) was used to test the electrocatalytic methanol oxidation activity and the electrochemically active surface area (ECSA) in an alkaline environment. The MOR activity was tested in 1.0 M KOH + 1.0 M CH 3 OH solution saturated with N 2 , and the ECSA was measured in 1.0 M KOH solution saturated with N 2 . The scan rate was 50 mV/s, and the potential range was −0.9 to 0.3 V (vs. SCE). Chronoamperometry was used to evaluate the stability of the catalysts: the test was performed at a constant potential of −0.2 V, and rapid scanning was performed at a scan rate of 0.1 V/s. All of the above tests were performed at room temperature.

Results and Discussion

Figure 1 shows the XRD profile of the as-synthesized Pd x Cu y /XC-72. The XRD profile reveals that the PdCu NPs are well crystallized, as observed from the three prominent diffraction peaks at 2θ = 40.6°, 46.7°, and 68.2°, indexed to the (111), (200), and (220) crystal planes of face-centered cubic (fcc) Pd NPs. After Cu incorporation, the diffraction peak at 40.6°, corresponding to the Pd (111) plane, shifted slightly with composition (namely, 40.61°, 40.26°, 40.28°, 40.20°, and 40.00°). This shift may be due to Pd lattice contraction, owing to the smaller radius of Cu (0.128 nm) compared to that of Pd (0.137 nm), indicating the successful formation of PdCu alloy NPs [30].

The dispersion of the catalysts with different PdCu ratios is reflected in the TEM images of Pd x Cu y /XC-72 in Figure 2. The synthesized catalysts are well dispersed, with a uniform size distribution, and the particle size gradually rises with increasing Pd ratio. Figure 3 shows the HRTEM profile of Pd 3 Cu 2 /XC-72. The metal is dispersed evenly on the carbon support, and in Figure 3c the lattice fringes of the Pd (111), Pd (200), and Cu (111) planes can be clearly seen, which further verifies the formation of the alloy [31].
Moreover, we used the microwave digestion technique combined with inductively coupled plasma optical emission spectrometry (ICP-OES) analysis. Table 1 shows that the Pd metal content of Pd 1 Cu 4 /XC-72, Pd 2 Cu 3 /XC-72, PdCu/XC-72, Pd 3 Cu 2 /XC-72, and Pd 4 Cu 1 /XC-72 is 7.6, 12.4, 18.2, 19.7, and 25.8 wt%, respectively, and the corresponding Cu metal content is 15.4, 11.1, 10.2, 6.9, and 3.5 wt%. This generally agrees with the PdCu ratios in our experimental design.

The composites were further examined by XPS in order to learn more about the surface electronic state of Pd x Cu y /XC-72 (Figure 4). To ascertain the valence state of Pd in the hybrids, the XPS spectra of Pd 3d were gathered. Metallic Pd 0 is responsible for the Pd 3d5/2 peak at 335.2 eV and the 3d3/2 peak at 340.5 eV, while the 3d3/2 peak at 342.4 eV and the 3d5/2 peak at 337.2 eV are assigned to Pd 2+ . Comparing the quantities of Pd 0 and Pd 2+ shows that the majority of the metal precursors were successfully reduced and loaded onto the support. According to the Cu 2p spectra, the Pd x Cu y /XC-72 peaks at 952.56 and 932.7 eV correspond to Cu 0 2p1/2 and 2p3/2, respectively; the peaks at 953.7 and 933.6 eV correspond to Cu 2+ 2p1/2 and 2p3/2, respectively; and the peaks at 944.8 and 941.9 eV correspond to Cu 2+ 2p3/2 satellite peaks. The overall intensity of the Pd peaks decreases with the addition of Cu, indicating less Pd exposure in the bimetallic catalysts.

Area ratios for the various Pd x Cu y /XC-72 catalysts are shown in Figure 5. Since Pd n+ is reduced to Pd 0 and Cu n+ to Cu 0 during alloy formation, the two metals redistribute charge to reach a relatively stable state. In this process, low-valence Cu 2p electrons are transferred to Pd, reducing the positive state of Pd n+ while increasing the valence state of low-valence Cu, resulting in a gradual rise in the concentration of Cu n+ . However, Pd 3 Cu 2 /XC-72 has the highest content of Pd 0 and Cu n+ . There is also a change in the binding energy of a fraction of the Pd, reflected in the broadening of the peaks.
The electronic structure of the surface Pd atoms is modified by alloying with the Cu atoms: as the Cu content increases, the binding energy shifts to a higher value. These results further confirm that Pd x Cu y alloy nanoparticles supported on XC-72 have been successfully synthesized by the chemical reduction method. In addition, the presence of Cu 2+ may be caused by surface oxidation or chemisorption of environmental oxygen during the synthesis process, as indicated by the shake-up satellite (sat.) peaks on the high-binding-energy side of the Cu 2p3/2 and Cu 2p1/2 peaks. The XPS spectra of the other catalysts are shown in Figure 4; their characteristic peaks are the same as those of Pd x Cu y /XC-72 described above.
Electrochemical Tests for MOR (Three-Electrode Cell)
Figure 6 summarizes the electrochemical performance of the five electrocatalysts. Cyclic voltammetry (CV) was performed on each of the Pd 1 Cu 4 /XC-72, Pd 2 Cu 3 /XC-72, PdCu/XC-72, Pd 3 Cu 2 /XC-72, and Pd 4 Cu 1 /XC-72 electrodes in N 2 -saturated 1.0 M KOH + 1.0 M CH 3 OH. There are obvious differences in electrocatalytic activity with different amounts of Cu: the maximum mass activity was obtained for Pd 3 Cu 2 /XC-72 (1719 mA·mg −1 Pd). Therefore, combined with the XPS (Figure 4) and XRD (Figure 1) data, we conclude that Pd 3 Cu 2 /XC-72 exhibits excellent catalytic performance due to its favorable electronic structure. In addition, the performance of the catalyst synthesized in this work was much higher than that of catalysts synthesized in other works (Table 2).
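As a rough consistency check on the reported mass activity, the Pd mass on the electrode can be estimated from the ink recipe given above and the ICP-OES Pd content. The sketch below is our own back-of-envelope estimate, not a calculation from the paper:

```python
# Back-of-envelope check of the electrode loading implied by the ink recipe;
# an illustrative sketch, not the authors' calculation.

catalyst_mg = 2.0
ink_ul = 450 + 50 + 500        # ethanol + 5% Nafion + ultrapure water
drop_ul = 5.0
gce_area_cm2 = 0.1256          # 4.0 mm diameter glassy carbon disk

catalyst_ug = catalyst_mg * 1000 * drop_ul / ink_ul  # catalyst mass on the GCE
loading_ug_cm2 = catalyst_ug / gce_area_cm2

pd_wt = 0.197                  # Pd3Cu2/XC-72 Pd content from the ICP-OES table
pd_ug = catalyst_ug * pd_wt
peak_ma = 1719 * pd_ug / 1000  # mass activity (mA/mg_Pd) x Pd mass (mg)

print(f"{catalyst_ug:.1f} ug catalyst ({loading_ug_cm2:.0f} ug/cm^2), "
      f"{pd_ug:.2f} ug Pd -> expected MOR peak current ~{peak_ma:.2f} mA")
```

With these numbers, the 5 µL drop deposits about 10 µg of catalyst (roughly 80 µg/cm²), i.e. about 2 µg of Pd, so the quoted mass activity corresponds to a peak current of a few mA on this electrode.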
Generally, the ratio between the peak currents in the forward (I f ) and backward (I b ) scans is a measure of the tolerance of the catalyst against CO poisoning, with a higher I f :I b indicating higher tolerance. In our experiments, Pd 3 Cu 2 /XC-72 displays the highest I f :I b , which reflects its higher CO tolerance (Table 3). In other words, I f :I b is positively correlated with the area ratio of Pd 0 and Cu n+ , which indicates that higher Pd 0 and Cu n+ content in the catalyst results in higher anti-CO performance.
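For readers reproducing this metric, the ratio can be extracted from a recorded CV trace in a few lines. This is a minimal sketch assuming `potential` and `current` are NumPy arrays holding one full cycle sampled at a constant scan rate; the names are illustrative:

```python
import numpy as np

def if_ib_ratio(potential, current):
    """Estimate the CO-tolerance ratio I_f/I_b from one CV cycle."""
    vertex = int(np.argmax(potential))    # positive turning point of the scan
    i_f = current[:vertex + 1].max()      # forward (anodic) peak current
    i_b = current[vertex + 1:].max()      # backward-scan peak current
    return i_f / i_b
```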
Figure 7 summarizes the CV behavior of the five electrocatalysts, recorded for each of the Pd 1 Cu 4 /XC-72, Pd 2 Cu 3 /XC-72, PdCu/XC-72, Pd 3 Cu 2 /XC-72, and Pd 4 Cu 1 /XC-72 electrodes in N 2 -saturated 1.0 M KOH. The PdO reduction peaks of the Cu-containing catalysts occur at earlier potentials, as seen in Figure 6, meaning that less energy was needed to reduce PdO in the presence of Cu. This is probably due to the interaction of Pd with metal oxides such as CuO, which provides more electrons to the Pd and PdO active sites; conversely, PdO oxides would be more difficult to reduce if CuO withdrew electrons from Pd.

The electrochemically active surface area (ECSA) is also an important index for evaluating the activity of electrocatalysts. In this work, the ECSA of the different Pd x Cu y /XC-72 compositions was estimated by integrating the Pd reduction peaks to find the Coulombic charge of each catalyst, and then following the method proposed by Rand and Wood (Equation (1)). These results reveal that Cu acts as a promoter that boosts the electrochemical activity of Pd.
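A sketch of this ECSA estimate is given below. It assumes the PdO reduction peak has already been baseline-corrected and windowed, and it uses the commonly quoted monolayer reduction charge of 0.405 mC/cm²; since Equation (1) is not reproduced here, treat that constant and all names as assumptions:

```python
import numpy as np

def ecsa_m2_per_g(potential_v, current_ma, scan_rate_v_s, pd_mass_mg,
                  q_monolayer_mc_cm2=0.405):
    """ECSA from the integrated PdO reduction peak (Rand-and-Wood-type method).

    potential_v, current_ma: arrays restricted to the baseline-corrected
    cathodic PdO reduction peak of one CV scan."""
    # Q = integral of I dE divided by the scan rate -> charge in mC
    q_mc = abs(np.trapz(current_ma, potential_v)) / scan_rate_v_s
    area_cm2 = q_mc / q_monolayer_mc_cm2   # exposed Pd area
    return area_cm2 / pd_mass_mg * 0.1     # cm^2/mg -> m^2/g
```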
The chronoamperometric stability tests are shown in Figure 8. The high initial current density is caused by double-layer charging and abundant active sites for methanol activation. The current density dropped quickly within the first tens of seconds due to the adsorption of CO produced by the MOR on the catalytic surface. After 500 s of testing, Pd 3 Cu 2 /XC-72 still had the highest ultimate current density (122.3 mA mg −1 Pd), further confirming that the Pd 3 Cu 2 /XC-72 nanocatalyst has the best electrocatalytic activity and long-term electrochemical durability for the MOR. The synthesized Pd 3 Cu 2 /XC-72 nanoparticles have the advantages of a large specific surface area, a high degree of alloying, strong electronic interactions between Pd and Cu atoms, a clean surface, and many active sites, which significantly improve the electrocatalytic performance for the MOR.
Conclusions
In summary, we report a highly efficient palladium-copper alloy catalyst with a coordination structure, successfully synthesized by the liquid-phase reduction method.
During the synthesis, the ratio between Pd and Cu determined the alloying degree. The particular electronic characteristics endowed Pd 3 Cu 2 /XC-72 with a large ECSA, fast mass transfer, and high self-stability, which contributed to the high electrocatalytic activity and excellent stability of Pd 3 Cu 2 /XC-72 for the MOR in alkaline media. Additionally, Pd 3 Cu 2 /XC-72 exhibited enhanced MOR activity compared to Pd/C, indicating that the PdCu interface improved the MOR activity of Pd nanostructures. Experimental results demonstrated that the synergistic effect between Pd and Cu accelerated the oxidation of CO ads , which is also responsible for the MOR activity/stability enhancement of Pd 3 Cu 2 /XC-72. This study provides new ideas for the design of highly active Pd electrocatalysts and demonstrates the great potential of Pd 3 Cu 2 /XC-72 as a DMFC anode electrocatalyst. | 2022-12-04T19:07:09.104Z | 2022-11-26T00:00:00.000 | {
"year": 2022,
"sha1": "e8630a24a2a0afddcf9dac7e31f30a53d04ff778",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/12/23/4210/pdf?version=1669605487",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ed5b8341fc5abcfe90a8aba26ade4786b99b8ba",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244773632 | pes2o/s2orc | v3-fos-license | Enlarged Kuramoto Model: Secondary Instability and Transition to Collective Chaos
The emergence of collective synchrony from an incoherent state is a phenomenon essentially described by the Kuramoto model. This canonical model was derived perturbatively, by applying phase reduction to an ensemble of heterogeneous, globally coupled Stuart-Landau oscillators. This derivation neglects nonlinearities in the coupling constant. We show here that a comprehensive analysis requires extending the Kuramoto model up to quadratic order. This "enlarged Kuramoto model" comprises three-body (nonpairwise) interactions, which induce strikingly complex phenomenology at certain parameter values. As the coupling is increased, a secondary instability renders the synchronized state unstable, and subsequent bifurcations lead to collective chaos. An efficient numerical study of the thermodynamic limit, valid for Gaussian heterogeneity, is carried out by means of a Fourier-Hermite decomposition of the oscillator density.
Collective synchronization is a phenomenon in which an ensemble of heterogeneous, self-sustained oscillatory units (commonly known as oscillators) spontaneously entrain their rhythms. This is a pervasive phenomenon observed in natural systems and man-made devices, covering a wide range of spatio-temporal scales, from cell aggregates to swarms of fireflies [1,2].
Seeking to understand the onset of collective synchronization, Winfree invented a model consisting of globally coupled oscillatory units with one degree of freedom (phase oscillators) [3,4]. Following this scheme, Kuramoto found an analytically tractable model, which captures the onset of collective synchronization from an incoherent state [5,6]. Due to its simplicity, the Kuramoto model and its generalization with phase-lagged coupling -the so-called Kuramoto-Sakaguchi model after Ref. [7]-, have been intensely studied, with a vast number of extensions and applications in several fields [8,9].
The Kuramoto(-Sakaguchi) model is often introduced as above, i.e. as a mere mathematical refinement of the Winfree model. However, this is only partly true, since Kuramoto rigorously derived the model bearing his name. In particular, he applied phase reduction to an ensemble of weakly coupled Stuart-Landau oscillators [5,6]. The Stuart-Landau oscillator is a relevant natural choice, as it represents a generic limitcycle attractor close to a Hopf bifurcation.
Kuramoto's perturbative phase-reduction approach is valid for weak coupling. Specifically, oscillator heterogeneity and interactions appear at zeroth and linear orders in the coupling constant, respectively. These considerations explain why the quadratic order was neglected in the original Kuramoto model. Nevertheless, in certain circumstances, going beyond the first (or linear) order may be required. Indeed, the description of some experiments with lattices of optomechanical [10] and nanoelectromechanical [11] oscillators relies on second-order phase reductions. The analysis of the corresponding second-order phase-reduced models has remained, however, rather incomplete. The reason for this is the nonpairwise interactions appearing at quadratic order. From this perspective, the original setup with heterogeneous, diffusively coupled Stuart-Landau oscillators appears to be the ideal testbed model for investigating second-order phase reduction to the fullest extent possible. So far, only the case of identical oscillators has been analyzed [12].
In this Letter we extend the Kuramoto model up to second order in the coupling constant ε. In this "enlarged" Kuramoto model the new terms of order ε² comprise two different three-body (nonpairwise) interactions. Strikingly, their combined action triggers a secondary instability in which standard collective synchronization destabilizes. This is the precursor of a sequence of instabilities giving rise to a state of collective chaos. We efficiently investigate the thermodynamic limit of the model by means of a Fourier-Hermite decomposition of the oscillator density. This scheme appeared some years ago in a theoretical study [23], but it is numerically implemented here for the first time (adopting an appropriate closure).
The starting point of our work is a heterogeneous population of N ≫ 1 Stuart-Landau oscillators with global diffusive coupling:

\dot{A}_j = (1 + i\sigma\omega_j) A_j - (1 + ic_2)|A_j|^2 A_j + \epsilon(1 + ic_1)(\bar{A} - A_j). (1)

Here A_j ≡ r_j e^{iφ_j} is a complex variable, and index j runs from 1 to N. The ω_j's are drawn from a unit-variance normal distribution g(ω). The mean of g(ω) is selected to be 0, by going to a rotating frame if necessary. Therefore, each individual Stuart-Landau oscillator possesses a natural frequency equal to σω_j − c_2, where c_2 is the nonisochronicity parameter. Parameter σ > 0 is included to account for the frequency dispersion. Concerning the coupling, it is diffusive through the mean field Ā = N^{-1} Σ_{k=1}^{N} A_k of the model. In this work we select σ = 10^{-3} and c_2 = 3 (a standard value in the literature, see e.g. [24]), leaving c_1 and ε as control parameters. The effect of varying c_2 and σ is discussed at the end of this Letter. System (1) displays a plethora of complex states. In particular, collective chaos already emerges at moderate and large coupling under simplifying assumptions such as homogeneity (σ = 0) [24,25] and vanishing reactivity and shear (c_1 = c_2 = 0) [26]. We focus here on the weak coupling regime, in which the oscillators remain close to their original limit cycles at r_j = 1 and a phase description becomes possible. Two states are generically expected for small ε. On the one hand, there is the uniform incoherent state (UIS), corresponding to a vanishing mean field Ā (in the thermodynamic limit), with the oscillator angles φ_j uniformly scattered; see Figs. 1(a) and 1(b) for particular parameter values and ε = 0.07. On the other hand, typically, as ε exceeds a certain threshold, the UIS becomes unstable and a state of collective partial synchrony (PS) emerges. In this configuration, a macroscopic proportion of the oscillators becomes entrained to a common frequency, \dot{φ}_{j∈S} = Ω, and the mean field rotates uniformly with constant amplitude: |Ā| = const. In a finite population, as in Fig. 1(c), entrained oscillators may not be observed, since they belong to one of the tails of g(ω). Drifting oscillators alone cause Ā to depart from zero. Surprisingly, our numerical simulations indicate that the dynamics may become of a different kind as the coupling is further increased, while still remaining small. As shown in Fig. 1(a) for ε = 0.09, the collective dynamics incorporates a new frequency, and |Ā(t)| oscillates periodically, i.e. the attractor is a two-dimensional torus or T² (disregarding finite-size fluctuations). Figure 1(d) shows the corresponding snapshot of the angles φ_j for ε = 0.09. We may see that part of the population forms a two-cluster state that evolves in time such that the phase differences are time-dependent but bounded. As far as we know, this unsteady configuration with time-dependent clusters has not been observed before in Eq. (1). It is very much alike the Bellerophon state coined in [27] for ensembles of phase oscillators. For still larger ε, |Ā| exhibits even more complex oscillations, as can be seen setting ε = 0.115 in Fig. 1(a). In Fig. 1(f) we represent the local maxima and minima of |Ā(t)| as a function of ε. The low-frequency modulation sets in at ε ≈ 0.109. As a result of the instability, a three-frequency quasiperiodic collective motion is, in principle, expected. Still, an additional transition to weak collective chaos cannot be ruled out. At some parameter values (e.g.
ε = 0.14, c_1 = −0.415; see the Supplemental Material), the largest Lyapunov exponent does not decay to zero with the system size, which is a clear indication of collective chaos. (For the value ε = 0.115 taken in Fig. 1 the result is inconclusive.)
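These regimes can be reproduced with a direct simulation of Eq. (1). Below is a minimal sketch using a plain Euler step (the integrator, step size, and ensemble size are our own choices, not the authors'); a constant tail of |Ā(t)| indicates PS, while periodic or irregular oscillations correspond to the unsteady states described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, c1, c2, eps = 2000, 1e-3, -0.4, 3.0, 0.09
w = rng.standard_normal(N)              # unit-variance Gaussian frequencies
A = np.exp(2j * np.pi * rng.random(N))  # random phases on the unit circle

dt, steps = 0.01, 100_000
modA = np.empty(steps)
for t in range(steps):
    Abar = A.mean()                     # global mean field
    dA = ((1 + 1j * sigma * w) * A
          - (1 + 1j * c2) * np.abs(A) ** 2 * A
          + eps * (1 + 1j * c1) * (Abar - A))
    A += dt * dA                        # explicit Euler step
    modA[t] = abs(Abar)
# Inspect modA: constant -> PS; periodic -> T^2; irregular -> complex/unsteady.
```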
To put the previous observations in a wider framework we numerically determined where the unsteady behavior occurs in the c_1-ε plane. The phase diagram in Fig. 2(a) shows where qualitatively different dynamics are observed. The stability boundary of UIS was analytically computed following the approach in [28], see the Supplemental Material. Remarkably, numerical simulations of Eq. (1) reveal that PS is unstable inside the dark shaded region in Fig. 2(a), i.e. an unsteady |Ā(t)| spontaneously sets in. In addition, numerical continuation discloses an adjacent narrow band of coexistence between unsteady dynamics and PS. The orange line in Fig. 2(a) divides the unsteady region into two parts: the lower one with T² collective motion, and the upper one with more complex oscillations. We emphasize that determining the exact nature of the complex unsteady states is an arduous task, which hinders a more detailed phase diagram.
At this point, we resort to phase reduction in order to better understand the nature and organization of the unsteady collective states. For weak coupling, phase reduction allows us to describe the system solely in terms of phase variables θ_j = φ_j − c_2 ln r_j [2,6]. Following [12] we write down the second-order phase reduction [29] of (1), or "enlarged Kuramoto model", Eq. (2), where three new constants, depending on c_1 and c_2, are defined: η ≡ \sqrt{(1 + c_2^2)(1 + c_1^2)}, and the phase lags α ≡ arg[1 + c_1 c_2 + (c_1 − c_2)i] and β ≡ arg(1 − c_1^2 + 2c_1 i). For simplicity, we have chosen a reference frame with vanishing central frequency. Interactions involve two mean fields, Z_1 ≡ R e^{iΨ} and Z_2 ≡ Q e^{iΦ}, which are the first two elements of an infinite set of Kuramoto-Daido order parameters [30], Z_k ≡ N^{-1} Σ_j e^{ikθ_j}. Equation (2) includes nonpairwise interactions, which are inherent to higher-order phase reduction, even if the coupling in the original system (1) is pairwise and linear [12,31,32]. In particular, three-body interactions are conveyed by the last two terms [33] and are comparatively weak (of order ε²), as usual in physics [34]. This is not the case of most previous studies on coupled phase oscillators [15-17, 19, 20, 35, 36], but see [11,12,32,37].
We start the analysis of Eq. (2) by noticing that if we neglect the O(ε²) terms, then we recover the Kuramoto-Sakaguchi model with coupling constant εη. For N → ∞, the phase diagram resulting from this O(ε) approximation is shown in Fig. 2(b). The only attracting configurations are UIS and PS. The boundary of UIS can be calculated following [7]. It diverges at c_1 = −c_2^{-1} = −1/3, corresponding to α = −π/2. When comparing Figs. 2(a) and 2(b), it is manifest that first-order phase reduction does not provide a faithful description of system (1) in the left part of the phase diagram.
We now consider Eq. (2) in full. Concerning the linear stability of UIS (R = Q = 0), only the first term of order ε² is relevant. It may be added to the linear term to recalculate the stability boundary [7], see the Supplemental Material. The result is shown as a solid black line in Fig. 2(c). Now the boundary of UIS exhibits a knee at c_1 ≈ −1/3, in qualitative agreement with Fig. 2(a). Analyzing the stability of PS is a much harder problem. Through a numerical self-consistent approach [7] we tracked the branch of PS emanating from incoherence. However, this does not allow us to determine its stability. Moreover, the direct numerical integration of Eq. (2) is no more efficient than simulating Eq. (1): the number of degrees of freedom is reduced by a factor of 2, but at the cost of including computationally expensive trigonometric functions.
In order to exploit the dimensionality reduction achieved in Eq. (2), an alternative strategy is required. We resort to a moments system introduced almost a decade ago by Chiba in his theoretical study of the Kuramoto model [23]. Crucially, working with a set of moments avoids finite-size fluctuations and the concomitant microscopic (phase) chaos [38]. We start defining the density ρ(θ|ω, t), such that ρ(θ|ω, t)dθ is the fraction of oscillators with phases between θ and θ + dθ and frequency ω at time t. Now, we write the Fourier-Hermite decomposition of ρ,

ρ(θ|ω, t) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \sum_{m=0}^{\infty} P_k^m(t) h_m(ω) e^{-ikθ}, (3)

where h_m(x) = He_m(x)/\sqrt{m!} are normalized (probabilist's) Hermite polynomials. The Fourier-Hermite coefficients P_k^m are obtained inverting Eq. (3):

P_k^m(t) = \int dω \, g(ω) h_m(ω) \int_0^{2\pi} dθ \, ρ(θ|ω, t) e^{ikθ}. (4)

These Fourier-Hermite modes extend the Kuramoto-Daido order parameters to the space of the natural frequencies. Specifically, P_k^0 = Z_k (in the N → ∞ limit). The density ρ obeys the continuity equation ∂_t ρ = −∂_θ(ρ\dot{θ}). Inserting the expansion (3), using the recurrence relation ω h_m = \sqrt{m} h_{m−1} + \sqrt{m+1} h_{m+1} [39], and redefining P_k^m → (−i)^m P_k^m for convenience, we get an infinite set of ordinary differential equations, Eq. (5), where the asterisk denotes complex conjugation. System (5) is equivalent to Eq. (2) with N → ∞.
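As a concrete finite-N counterpart of Eq. (4), the modes can be estimated directly from sampled phases and Gaussian frequencies. The sketch below uses the convention P_k^m = ⟨h_m(ω_j) e^{ikθ_j}⟩, so that P_k^0 reduces to the Kuramoto-Daido order parameter Z_k; the function name and array layout are illustrative:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # probabilist's He_m

def fourier_hermite_modes(theta, w, k_max, m_max):
    """Finite-N estimate of P^m_k = <h_m(w_j) exp(i k theta_j)>."""
    P = np.zeros((k_max + 1, m_max + 1), dtype=complex)
    for m in range(m_max + 1):
        # normalized Hermite polynomial h_m(w) = He_m(w) / sqrt(m!)
        h_m = hermeval(w, [0.0] * m + [1.0]) / np.sqrt(factorial(m))
        for k in range(k_max + 1):
            P[k, m] = np.mean(h_m * np.exp(1j * k * theta))
    return P  # P[k, 0] is the Kuramoto-Daido order parameter Z_k
```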
The numerical integration of Eq. (5) requires implementing a truncation at finite k_max and m_max, with an adequate closure. Note first that, in the UIS, P_0^0 = 1 is the only nonzero coefficient, whereas in the PS state the modes decay with k and m roughly as |P_k^m| ∼ e^{−ak} e^{−b√m}. We imposed the boundary conditions P_{k_max+1}^m = 0 and P_k^{m_max+1} = 0. We tested the performance of different system sizes, finding that k_max = m_max = 40 already yields an excellent convergence, even for strongly unsteady states. Therefore, our analysis below relies on Eq. (5) with n_f = k_max × (m_max + 1) × 2 = 3280 degrees of freedom. In comparison, simulating Eq. (2) with n_f oscillators is unproductive because of unavoidable finite-size fluctuations.
One can now see that the PS state corresponds to a solid rotation P_k^m(t) = p_k^m e^{ikΩt}. After inserting this solution into Eq. (5), the unknowns p_k^m and Ω are found via a Newton-Raphson algorithm (imposing p_1^1 ∈ R). The result completely agrees with the one obtained from the self-consistent numerical calculation mentioned above. Now, however, we can determine linear stability. Moving to a rotating frame with angular velocity Ω, we linearize the system around the fixed point. The locus of a secondary (Hopf) instability is accurately located by requiring the eigenvalues of the Jacobian matrix with the largest real part to be ±iΩ_H (with an extra zero eigenvalue due to the rotational invariance P_k^m → e^{ikγ} P_k^m). The Hopf line is shown in blue in Fig. 2(c). The transition is supercritical (subcritical) at the solid (dashed) line. The emerging oscillatory mode yields a torus attractor (T²), in which, due to the rotational symmetry, no lockings on its surface are expected, see e.g. [40,41]. Recalling Eq. (2) we infer that, at the level of the individual oscillators, the superimposed oscillation induces entrainment at frequencies Ω + (n/2)Ω_H (n ∈ Z). The half-integer frequency plateaus stem from the term accompanying R² in Eq. (2). In particular, the two clusters in Fig. 1(d) correspond to a frequency plateau at frequency Ω + Ω_H/2. In Fig. 1(a) the signature of a doubled torus T²_d can be discerned. However, it is very hard to determine the bifurcation point due to the long transients involved and unavoidable finite-size fluctuations, see Fig. 1(f).
As occurs with the ensemble of Stuart-Landau oscillators, the torus attractor undergoes a Hopf bifurcation, see the orange line in Fig. 2(c). Thereby, three-frequency quasiperiodic dynamics (a T³ attractor) emerges, consistent with three vanishing Lyapunov exponents.
Adjacent to the T³ domain in Fig. 2(c), there exists a region with chaotic dynamics, in conformity with the Ruelle-Takens-Newhouse scenario. As occurred with system (1) (Fig. 2(a)), PS and unsteady states coexist. In Fig. 2(c) the bistability region is bounded by a purple line denoting either a saddle-node bifurcation, emanating from a (codimension-2) Bautin point at the bottom of the Hopf line, or an attractor crisis. The phase diagram in Fig. 2(c) reveals which are the unsteady collective states of (1), and their expected arrangement. Indeed, obtaining a phase diagram with the degree of detail of Fig. 2(c) is virtually unattainable by simulating the original system, Eq. (1).
To better characterize the chaotic region, a detailed exploration along the horizontal line ε = 0.14 is shown in Fig. 3. In Figs. 3(a) and 3(b), the five largest Lyapunov exponents and the local maxima and minima of |P_1^0(t)| = R(t) are, respectively, depicted for the same c_1 range. In the T³ interval there may be some additional bifurcations (lockings or torus doublings), which we did not attempt to resolve. Interestingly, in the chaotic domain an increasing number of Lyapunov exponents become positive as c_1 increases, i.e. collective chaos transforms into collective hyperchaos.
In this Letter we have introduced the "enlarged Kuramoto model": a population of phase oscillators in which three-body interactions enter in a perturbative way. Remarkably, this makes a world of difference, drastically reshaping the traditional Kuramoto scenario. The "enlarged Kuramoto model" exhibits a variety of unsteady states, including collective chaos and hyperchaos. To our knowledge, these states have not been previously reported in a population of globally coupled phase oscillators with a unimodal distribution of the natural frequencies. We have considered a particular frequency dispersion, σ = 10⁻³, in Fig. 2(c). If σ is lowered, the bottom of the Hopf bifurcation line approaches the c_1 axis at c_1 = −c_2⁻¹. This is expected to occur for any nonzero c_2 value, in consistency with the σ = 0 case [12] (to be shown elsewhere). Nonetheless, only heterogeneity, in contradistinction to weak noise [12,42], is able to trigger unsteady collective dynamics (absent for σ = 0). As a final remark, we stress that reducing the population of Stuart-Landau oscillators (1) to the phase model (2) is both illuminating and convenient, as it enables an efficient investigation of the thermodynamic limit by virtue of the Fourier-Hermite expansion. The application of this scheme to other populations of phase oscillators with Gaussian heterogeneity is straightforward. For other forms of g(ω) a suitable set of orthogonal polynomials must be adopted: e.g. the Fourier-Legendre mode decomposition is appropriate for uniform g(ω).
We thank Ernest Montbrió and Juan M. López for their critical reading of the manuscript. We acknowledge support by Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional under Project No. FIS2016-74957-P (AEI/FEDER, EU). IL acknowledges support by Universidad de Cantabria and Government of Cantabria under the Concepción Arenal programme.

I. COLLECTIVE CHAOS IN EQ. (1)

The ensemble of heterogeneous Stuart-Landau oscillators (Eq. (1) in the main text) exhibits a wide variety of dynamical states. In this section we show the existence of collective chaos. Collective chaos is characterized by the existence of at least one positive Lyapunov exponent (LE) in the thermodynamic limit (N → ∞).
Verifying collective chaos in Eq. (1) is a hard task due to the unavoidable presence of microscopic (phase) chaos [1]. Microscopic chaos yields n mic ≫ 1 positive LEs. Crucially, although n mic is proportional to the system size N , the magnitude of the corresponding LEs decays to zero as N → ∞. This means that for large enough N , the largest LE associated with collective chaos will prevail. Otherwise, if the largest LE decays to zero as N grows, then collective chaos can be ruled out.
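The decay of λ with N can be checked with a standard Benettin-type two-trajectory estimate. The following generic sketch assumes a user-supplied one-step integrator step(state, dt); all names and default values are illustrative rather than the authors' implementation:

```python
import numpy as np

def largest_le(step, x0, dt=1e-2, n_steps=200_000, d0=1e-8, renorm_every=10):
    """Benettin estimate of the largest Lyapunov exponent of a flow."""
    rng = np.random.default_rng(1)
    x = np.asarray(x0, dtype=complex)
    v = rng.standard_normal(x.shape)
    y = x + d0 * v / np.linalg.norm(v)    # perturbed companion trajectory
    log_sum, n = 0.0, 0
    for t in range(n_steps):
        x, y = step(x, dt), step(y, dt)
        if (t + 1) % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / d0)
            y = x + (y - x) * (d0 / d)    # rescale separation back to d0
            n += 1
    return log_sum / (n * renorm_every * dt)
```

Repeating this for increasing N and checking whether the estimate saturates at a positive value, rather than decaying as 1/N, distinguishes collective chaos from microscopic chaos.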
In Fig. S1 we depict the largest LE λ versus the number of oscillators N for different parameter values. We choose c_2 = 3, σ = 10⁻³, c_1 = −0.4, and ǫ values as in Fig. 1 of the main text. Moreover, data for c_1 = −0.415 and ǫ = 0.14 are also depicted. In the figure it can be observed that λ rapidly decays to zero for the UIS (orange) and PS (blue). This decay is consistent with the 1/N law reported in [1]. The exponent also decays to zero (although more slowly) for the parameters where the T² is stable (gray). The LE computed for the largest ǫ value in Fig. 1, ǫ = 0.115, exhibits an even slower decay. This behavior neither confirms nor refutes whether (weak) collective chaos is already present. For the larger value ǫ = 0.14 (c_1 = −0.415), collective chaos is more evident, with λ ≃ 3.5 × 10⁻⁴. These values of λ are of the same order of magnitude as those found in the enlarged Kuramoto model with collective chaos, see Fig. 3(a).
II. BOUNDARY OF INCOHERENCE FOR EQ. (1)
We derive the conditions satisfied at the marginal stability of incoherence for an infinite ensemble of Stuart-Landau oscillators, Eq. (1) in the main text. This derivation follows [2].
The outline of the derivation is as follows. First we consider an infinitesimal perturbation around the uniform incoherent state (UIS), written in terms of the density of oscillators. This perturbed density evolves according to the continuity equation, but in addition it should verify the self-consistency condition; that is, the distribution has to be consistent with the mean field it produces. At the stability boundary of incoherence the perturbation neither decays nor grows.
For simplicity, it is convenient to transform the equation of the oscillators so that in the UIS the amplitude of the oscillations is 1. This can be achieved by the change of coordinates B_j = A_j exp[iǫc_1 t + ic_2(1 − ǫ)t]/√(1 − ǫ), rescaling time τ = (1 − ǫ)t, frequency ω̃ = σω/(1 − ǫ), and the coupling (denoted κ below). In the new coordinates, Eq. (1) writes as Eq. (S2). It is important to notice that now the ω̃ are distributed following a Gaussian distribution g(ω̃) with standard deviation σ̃ = σ/(1 − ǫ) = σ(1 + κ)/(1 − κ). This implies that the frequency distribution depends on κ. For convenience we write (S2) in polar coordinates, where the mean field is B̄ = N⁻¹ Σ_k B_k. At the UIS, B̄ = 0 and r_j = 1.
Thermodynamic limit: Perturbed Density
As we are interested in the behavior in the thermodynamic limit N → ∞, we can define the density distribution of oscillators ρ(r, φ|ω̃, τ), such that ρ(r, φ|ω̃, τ) r dφ dr is the fraction of oscillators between (r, r + dr) and (φ, φ + dφ) with frequency ω̃ at time τ. In the vicinity of the UIS the density is written as a perturbation expansion, where the small constant ε makes explicit the smallness of the deviations from the unit circle and from the uniform angular density, through r_1 and f_1 respectively. Note that the normalization condition requires the average of f_1 over φ to vanish.
Close to the UIS, the amplitude of the mean field |B̄| is a small quantity of order ε, |B̄| ≡ εB_0 (Eq. (S6)).
Perturbation dynamics
The evolution of the radial perturbation r_1 is obtained using the chain rule. Replacing (S3) and (S4), with r = 1 + εr_1 and |B̄| ≡ εB_0, and neglecting terms of order ε², we obtain Eq. (S8). We assume an exponential growth rate λ of the perturbation. In general, λ is a pure imaginary number at threshold; however, we assume λ is real provided the frequency distribution is shifted by a certain frequency shift Ω_c, which is later obtained self-consistently. In this reference frame we can choose Υ = 0, but we keep it for clarity. We replace ∂r_1/∂τ by λr_1 in (S8). The solution, Eq. (S9), has a trigonometric form. Next, we calculate f_1. First of all, notice that the density ρ satisfies the continuity equation (S10), where v is the velocity vector. The divergence in polar coordinates is ∇·F = (1/r) ∂(rF_r)/∂r + (1/r) ∂F_φ/∂φ, with F = F_r u_r + F_φ u_φ. In our case F_r = ρ\dot{r} and F_φ = ρr\dot{φ}. We integrate Eq. (S10) over r. Now, we replace ∂f_1/∂τ by λf_1, keep only the order-ε terms, and use r_1 from Eq. (S9); this yields the solutions for f_1.
Self-consistency condition
Our solutions for f 1 and r 1 must be consistent with Eq. (S6) (self-consistency condition). This implies: This yields two equations for the imaginary and real parts, respectively: (S16)
Final result
For the stability threshold, we take λ → 0. To obtain the boundary of incoherence we need the κ_c- and Ω_c-dependent integrals I_1, ..., I_4; in particular, I_3 = ∫ dω g(ω) ω/(ω² + 4) and I_4 = ∫ dω g(ω) δ(ω) = g(0). (S21) This yields Eqs. (S22) and (S23), which determine the boundary of the UIS and the frequency of the instability Ω_c. The most efficient way to compute the stability boundary is to fix κ_c and Ω_c, compute the I_i, and use (S22) and (S23) to obtain c_1 and c_2. Finally, one refers to Eq. (S1) to obtain ǫ_c.

III. DERIVATION OF EQ. (2)

In this section we apply second-order phase reduction to the ensemble of globally coupled Stuart-Landau oscillators, Eq. (1). Neglecting terms of order O(ǫ³) and O(ǫ²σ) we obtain Eq. (2) in the main text.

The first step in phase reduction is performing a change of variables in Eq. (1) to phase and amplitude, A_j = r_j e^{i(θ_j + c_2 ln r_j)}. Additionally, we go to a rotating reference frame such that the frequency of each oscillator is σω_j; this yields Eq. (S24a), which involves terms of the form r_k sin(θ_k − θ_j + γ + c_2(ln r_k − ln r_j)), where, for simplicity, we have defined Γ = 1 + c_1², tan γ = −1/c_1, and η and α as in the main text. Assuming the smallness of the coupling parameter ǫ, we can expand the radial variable in this parameter, r_j = 1 + ǫ r_j^{(1)} + ǫ² r_j^{(2)} + ..., where the zeroth order is 1 since the limit cycle has r = 1. To obtain the second-order phase reduction we expand equation (S24b) up to order ǫ², where tan δ = c_2. Now, to close this equation we only need the radial correction up to order ǫ, i.e. r_j^{(1)}. In order to obtain r_j^{(1)} we assume the radii depend on the phases, r_j = r_j(θ), and apply the chain rule to (S24a), \dot{r}_j = Σ_k \dot{θ}_k ∂_{θ_k} r_j. Expanding r_j in powers of ǫ, we obtain the equation for r_j^{(1)}, whose solution contains terms of the form [sin(θ_k − θ_j + γ) − σ(ω_k − ω_j) cos(θ_k − θ_j + γ)] / [4 + σ²(ω_k − ω_j)²]. (S28) Neglecting terms of order σ we obtain (S29). Replacing (S29) into (S26) and using the relations α + δ + γ = β − π/2 and −α − δ + γ = −π/2, we obtain Eq. (2) in the main text. Notice that preserving the O(σ) term of (S28) translates into an O(ǫ²σ) term in Eq. (S26). This term is safely neglected provided that σ ≪ 1. The relative importance of the O(ǫ²σ) and O(ǫ³) terms depends on the ratio ǫ/σ. In conclusion, Eq. (2) in the main text is the phase reduction of Eq. (1) after neglecting terms O(ǫ³) and O(ǫ²σ).
The first step in phase reduction is performing a change of variable in Eq. (1) to phase and amplitude, A j = r j e i(θj+c2 ln rj ) . Additionally, we go to a rotating reference frame such that the frequency of the oscillator is σω j : r k sin θ k − θ j + γ + c 2 (ln r k − ln r j ) (S24a) where, for simplicity, we have defined Γ = 1 + c 2 1 , tan γ = −1/c 1 , and η and α as in the main text. Assuming the smallness of the coupling parameter ǫ we can expand the radial variable in this parameter r j = 1 + ǫr (1) j + ǫ 2 r (2) j + . . . where the zeroth order is 1 since the limit cycle has r = 1. To obtain the second-order phase reduction we expand the equation (S24b) up to order ǫ 2 : where tan δ = c 2 . Now, to close this equation we only need the radial correction up the order ǫ, i.e. r (1) j . In order to obtain r (1) j we assume the radii depend on the phases, r j = r j (θ), and apply the chain rule to (S24a),ṙ j = kθ k ∂ θk r j . Expanding r j in powers of ǫ, we obtain the equation for r sin θ k − θ j + γ − σ ω k − ω j 4 + σ 2 (ω k − ω j ) 2 cos θ k − θ j + γ . (S28) Neglecting terms of order σ: Replacing (S29) into the (S26) and using the relations α + δ + γ = β − π/2 and −α − δ + γ = −π/2 we obtain Eq. (2) in the main text. Notice that preserving the O(σ) term of (S28) translates into an O(ǫ 2 σ) term in Eq. (S26). This term is safely neglected provided that σ ≪ 1. The relative importance of terms O(ǫ 2 σ) and O(ǫ 3 ) depends on the ratio ǫ/σ. In conclusion, Eq. (2) in the main text is the phase reduction of Eq. (1) after neglecting terms O(ǫ 3 ) and O(ǫ 2 σ). | 2021-12-02T02:16:30.512Z | 2021-11-30T00:00:00.000 | {
"year": 2021,
"sha1": "579a2abaa117a11f4c3dee1c98ba1fbcda02142a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2112.00176",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a6a20c651fef13fc7271722beb1090f4f7ed34c2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
233862480 | pes2o/s2orc | v3-fos-license | Bilateral Panophthalmia as a Late Sequel of Leishmaniasis in Dogs
Received: October 21, 2019; Revised: October 12, 2020; Accepted: December 13, 2020; Published online: January 17, 2021.
Fifteen dogs were presented with complete blindness that progressed over 2-4 months. The diagnosis of leishmaniasis was confirmed through direct observation of the amastigotes within the blood cells, PCR testing, and phylogenetic analysis. Gross pathologic and histopathologic examinations were performed for two dogs that were severely debilitated and humanely euthanized. Systemic involvement included decreased appetite (n=8), generalized weight loss (n=4), generalized lymphadenopathy (n=3), icterus (n=3), polyuria and polydipsia (n=2), and lethargy (n=5); four dogs were presented without any systemic involvement. All dogs had bilateral panophthalmia (n=30 eyes) manifested by cataract, anterior uveitis, posterior uveitis, retinal detachment, peri-ocular alopecia, conjunctivitis, blepharitis, keratoconjunctivitis and glaucoma. Detailed ultrasonographic ocular lesions are described, and histopathological examination confirmed the ongoing changes within the eye. Leishmaniasis should be considered in the differential diagnosis of dogs with bilateral ocular involvement, especially those not responding to symptomatic medicinal therapy.
INTRODUCTION
Leishmaniasis is a zoonotic disease caused by obligate intracellular protozoa of the genus Leishmania and transmitted by phlebotomine sand flies (Dantas-Torres, 2007; Jarallah, 2015; Aslan et al., 2016). It is of great medical and veterinary significance, with diverse epidemiological and clinical presentations. Based on World Health Organization (WHO) records, leishmaniasis is reported in approximately 12 million people worldwide, with 0.9-1.3 million new cases and 20 to 30 thousand deaths annually (Kimutai et al., 2009; Postigo, 2010). The geographical distribution of the disease is mostly in tropical and sub-tropical Africa, the Middle East, South and Central America, Southern Europe and Asia (Myler and Fasel, 2008; Bessat and El Shanat, 2013). Clinically, three major forms of the disease have been described: cutaneous (the most common form), mucocutaneous, and visceral leishmaniasis (the most serious form) (Bessat and El Shanat, 2013).
Dogs can naturally be infected with Leishmania and may act as a reservoir for disease transmission, although they are more likely to be victims than reservoirs (Dantas-Torres, 2007; Jarallah, 2015; Pennisi and Persichetti, 2018). The disease seems to be under-reported, especially because infected dogs may remain asymptomatic for a long time (Moreno and Alvar, 2002; Karakuş et al., 2015). In addition, the diverse clinical manifestations of the disease, along with non-specific clinical signs including but not limited to change in appetite, generalized weight loss, facial alopecia, muscle wasting, painful joints, lymphadenopathy, splenomegaly, hepatomegaly, polyuria, polydipsia, polyphagia, epistaxis, melena, diarrhea, anterior uveitis, blepharedema, blepharitis, keratoconjunctivitis, or panophthalmitis, make the diagnosis difficult, with a long differential list (Peña et al., 2000, 2008; Baneth et al., 2016; Abbehusen et al., 2017). Canine leishmaniasis may present with ocular manifestations, which may occur concurrently with other systemic signs or may be the sole clinical complaint (Peña et al., 2000; Baneth et al., 2016).
The present study aimed to document the clinical, ultrasonographic and histopathologic characteristics of confirmed ocular leishmaniasis in 15 dogs presented with bilateral panophthalmia.
MATERIALS AND METHODS
The present study was performed on 15 dogs admitted to the clinic of the Department of Surgery, Anesthesiology and Radiology, Faculty of Veterinary Medicine, Cairo University with bilateral panophthalmia. The dogs were referred by private veterinarians with complete blindness that had progressed over 2-4 months and remained non-responsive to treatment (topical antibiotics, anti-inflammatories and anti-glaucoma therapy). All study procedures were approved by the Cairo University Institutional Animal Care and Use Committee (CU-IACUC). All dogs' owners were aware that their dogs would be included in research and signed a written consent form indicating their approval. Dogs included in the study were of both sexes (8 males and 7 females), of different breeds (9 Mongrel, 3 Labrador Retriever and 3 German Shepherd), and aged 3.5±1.1 years.
Historical data were recorded for each dog, including the owner's complaint, onset and progression of clinical signs, previous medications, and housing. A complete clinical examination, including evaluation of the vital health parameters, was performed for all dogs. Ophthalmic examination included inspection of the eyelids and globe, slit-lamp biomicroscopy, and indirect ophthalmoscopy. Trans-eyelid ocular ultrasonography was performed using an 8-10 MHz microconvex transducer. Hematological and biochemical examinations were also performed for all dogs.
All dogs were diagnosed with leishmaniasis through direct observation of the amastigotes of the parasite within the white blood cells using Giemsa Wright's stain.
The diagnosis was confirmed by PCR testing performed on peripheral blood samples. DNA was extracted from ethylenediaminetetraacetic acid (EDTA) blood using a DNeasy Blood and Tissue Kit (Qiagen, Germany) according to the manufacturer's instructions. PCR amplification targeted a region of the small subunit ribosomal RNA gene of Leishmania spp. The forward primer was 174F (5'-GGTTCCTTTCCTGATTTACG-3') and the reverse primer was 798R (5'-GGCCGGTAAAGGCCGAATAG-3'). These primers generate amplicons of 560 bp, and the amplification conditions were as previously described (Osman et al., 1997).
PCR product of positive samples was purified using Qiaquick purification kit (Qiagen, Germany) following the manufacturer's specifications. Sequencing was done using Big Dye Terminator V3.1 sequencing kit (Applied Biosystems, Waltham, USA) with the forward and reverse primer for 18S ribosomal RNA. The obtained nucleotide sequences were aligned with the sequences in GenBank using the NCBI BLAST server to confirm the identity with Leishmania spp.
The sequence (591 bp) of Leishmania spp. 18S rRNA was deposited in the GenBank database under accession number MH916554. The submitted gene sequence was compared with the aligned sequences available in the NCBI GenBank database, and phylogenetic analysis revealed that the obtained nucleotide sequence was comparable to those available in public domains in GenBank.
Publicly available 18S rRNA gene sequences of Leishmania spp. were downloaded from NCBI GenBank and imported into BioEdit version 7.0.1.4 for multiple alignment using the Clustal W program of BioEdit. Phylogenetic analysis was done using MEGA version 7 with the maximum likelihood method. The bootstrap consensus tree was constructed from 50 replicates. A similarity matrix was computed using the DNASTAR program (Lasergene, version 8.0). The genetic distance values for species variation of Leishmania spp. were analyzed with the MegAlign project of the DNASTAR software.
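For reference, the percent identity behind such a similarity matrix can be computed from any pair of aligned sequences as sketched below; this is a simplified illustration, and DNASTAR's exact scoring (e.g., gap handling) may differ:

```python
def percent_identity(seq_a, seq_b):
    """Percent identity of two aligned sequences (gap positions '-' excluded)."""
    assert len(seq_a) == len(seq_b), "sequences must come from one alignment"
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# e.g. percent_identity("ATG-CGT", "ATGACGA") -> 83.3
```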
Two dogs were severely debilitated and did not respond to medications, these dogs were humanely euthanized according to their owners' written request. Euthanasia was performed using an over dosage of pentobarbital (Beuthanasia-D ® , Intervet/Schering-Plough Animal Health Corp, Kenilworth, NJ; 1mL/5kg) injected into the cephalic vein.
Gross pathologic examination was performed, and tissue samples were collected from both eyes, fixed in neutral formalin, and processed routinely for histopathological examination.
Clinical examination:
The main clinical manifestation of the presented dogs was panophthalmia with subsequent bilateral disturbance of vision that progressed to complete blindness over 2-4 months. All presented dogs were outdoor dogs, and 4 of them were housed within the same animal shelter. None of the dogs had received any antiparasitic medications during the preceding year.
Ocular Ultrasonography:
The cornea lost its characteristic concave appearance in 12 eyes, where it appeared as a straight hyperechoic line; in 18 eyes, the cornea maintained its thick echogenic curvilinear appearance. The anterior chamber was of mixed echogenicity (n=24 eyes), where multiple hypoechoic areas were seen within the normal anechoic pattern. The iris leaflets were visualized as thick echogenic bands attached to the thickened echogenic ciliary body (n=10 eyes). The anterior and posterior lens capsules were visualized as thick hyperechoic structures enclosing the hypoechoic nucleus (n=24 eyes). The lens dimensions were markedly increased in 21 eyes (6 immature and 15 mature cataract) and markedly decreased in 3 eyes (hyper-mature cataract). In 23.3% of eyes, diffuse corneal edema with neovascularization, corneal epithelial dystrophy, and reddish granulation tissue occupying either the periphery or the center of the cornea were recorded; corneal opacity was detected in 2 of these eyes. Vitreous opacities were visualized in association with cataract (n=16 eyes). In 6 eyes, vitreal hemorrhage was visualized as hyperechoic dots within the anechoic vitreous. A vitreal membrane was visualized as a thick hyperechoic band in 3 eyes. The choroid was differentiated from the retina and sclera as a hypoechoic thickened structure (n=8 eyes). Complete retinal detachment was seen in 10 eyes, where the retina was identified as a thick hyperechoic band between the ocular fundus and the ora ciliaris retinae, forming the characteristic seagull wings. Incomplete retinal detachment (n=3) was visualized as a thick hyperechoic band separated from the fundus at the level of the optic nerve. Ultrasonographic ocular changes of dogs presented with canine leishmaniasis are demonstrated in Fig. 3.

Fig. 2: Photograph demonstrating the ocular lesions of leishmaniasis in dogs. a: Uveitis with miosis, corneal edema, partial third eyelid prolapse and conjunctival injection. b: Endophthalmitis with corneal edema and superficial vascularization. c: Endophthalmitis with chronic vascular (pannus) keratitis. d: Endophthalmitis with chronic glaucoma and chronic vascular keratitis with early signs of granulation tissue formation. e: Endophthalmitis with secondary glaucoma, anterior lens luxation, corneal edema and intense vascular response (vascular fringe). f: Uveitis with corneal edema and corneal vascularization. g: Endophthalmitis, corneal opacity with inflammatory cell infiltrate within the corneal stroma, and blepharitis with crust formation and mucopurulent discharge. h: Endophthalmitis with keratoconjunctivitis and conjunctival chemosis. i: Endophthalmitis with corneal perforation and granulation tissue formation. Note the marked corneal vascularization and ciliary injection.
Hematologic examination: Leukocytosis with absolute lymphocytosis was recorded in all dogs. Direct observation of Leishmania amastigotes was a consistent finding; the amastigotes appeared as round to oval parasites with a round basophilic nucleus and a small rod-like kinetoplast, seen within macrophages or free from ruptured cells (Fig. 4.a). A marked increase in serum aspartate aminotransferase (AST), alanine aminotransferase (ALT), total protein, blood urea nitrogen, and creatinine was recorded in 7 dogs compared to breed-specific normal reference ranges (Peavy et al., 2003). The polymerase chain reaction (PCR) products of blood samples were positive, with prominent bands on 2% agarose gel electrophoresis at an amplicon molecular weight of 591 base pairs (bp) (Fig. 4.b). Sequence analyses of purified PCR products were found to be 98% identical to Leishmania species (L. spp.) in the GenBank database. The phylogenetic analysis revealed a close relationship between the Leishmania spp. detected in Egypt and L. infantum, L. donovani, L. chagasi, L. major, L. tropica, Leishmania spp., and L. braziliensis in other countries (Fig. 5.A).
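As a rough illustration of the percent-identity comparison reported above, the following Python sketch counts matching positions in a pairwise alignment. The sequences shown are short hypothetical placeholders, not the study's 591 bp amplicon, and a real analysis would use BLAST or a dedicated alignment tool rather than pre-aligned strings.

```python
# Minimal sketch: percent identity between two aligned DNA sequences.
# The sequences below are hypothetical placeholders, not the actual
# 591 bp Leishmania amplicon analyzed in the study.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of matching positions over aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

query   = "ATGCGTACGTTAGC"   # placeholder query fragment
subject = "ATGCGTACGTAAGC"   # placeholder GenBank hit
print(f"identity: {percent_identity(query, subject):.1f}%")  # ~92.9%
```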
For comparison of inter- and intra-species genetic distances, 16 Leishmania spp. isolates available in the public GenBank domain were used, together with the Egyptian Leishmania spp., to build a maximum-likelihood tree. The Leishmania spp. isolated from dogs in Egypt showed high sequence homology (97.9-95.2% similarity) with Leishmania spp. Ghana (EF524072), which was isolated from patients living in the Eastern Ghanaian community of Taviefe, and with Leishmania spp. (KU500888), which was isolated from the blood of a female wolf in Brazil, respectively (Fig. 5.b). Interspecies analysis based on the genetic distance values indicated a genetic divergence (GD) of 1.7 from L. infantum JPCMS (XR001203206) isolated from a human in the USA. However, it was genetically more distant (GD 2.9) from L. braziliensis isolated from a human in Brazil (JX030135) (Fig. 5.b).
Gross pathologic examination:
Gross pathologic examination of the two euthanized dogs revealed marked corneal opacity with thickened cornea and sclera. Focal hemorrhagic areas were seen within the lens close to the ciliary body. The vitreous appeared whitish and turbid, with a coagulated gelatinous mass adhered to the lens (Fig. 6).
Histopathologic examination: Numerous Leishmania amastigotes were seen within macrophages and histiocytes, as confirmed by Giemsa and toluidine blue stains. The amastigotes appeared as multiple small intracellular parasites surrounded by a hollow zone. Marked inflammatory cell infiltration and edema were seen throughout the cornea, sclera, iris, vitreous, choroid, and retina. Diffuse hyperplastic corneal epithelial proliferation was seen, together with stromal edema and dispersion of collagen fibers within the stromal tissue. The iris and ciliary body showed marked inflammatory cell infiltration, mainly mononuclear cells (lymphocytes and macrophages), together with edema at the pars plicata of the ciliary body. Retinal detachment was manifested by retinal vasculitis (hemorrhage and multiple neutrophils) with separation between the retinal pigmented epithelium, tapetum, and choroid. Marked necrosis was seen at the optic nerve. Histopathological changes of the ocular lesions of leishmaniasis are represented in Figs. 6 and 7.
DISCUSSION
The current study presented 15 dogs with bilateral panophthalmia as a late sequel of canine leishmaniasis. The diagnosis was made through direct observation of Leishmania amastigotes within blood cells and confirmed by PCR amplification of Leishmania DNA obtained from peripheral blood samples. The high sequence homology (97.9%) between the Leishmania spp. isolated from dogs in Egypt and the human isolate Leishmania spp. Ghana (EF524072) supports the view that dogs are a reservoir for Leishmania spp. (Osman et al., 1997; Silva and Gontijo, 2005; Baneth et al., 2008).
Ultrasonographic examination provided a rapid, noninvasive diagnostic tool that allowed visualization of the ongoing pathologic changes within the globe; these changes were also confirmed by the gross pathologic and histopathologic examinations.
The diverse clinical and pathological presentation of canine leishmaniasis reflects the difficulty of its diagnosis. Ocular leishmaniasis may be the only or the main clinical manifestation in 3.7 to 25% of Leishmania-infected dogs (Ciaramella et al., 1997; Peña et al., 2008; Koutinas and Koutinas, 2014). Retrospective clinical studies have concluded that ocular leishmaniasis occurs in 16-25% of dogs naturally infected with Leishmania (Ciaramella et al., 1997; Peña et al., 2008). This variation could be attributed to the pathogenicity of the Leishmania strain involved, the duration of illness, or the type of immune response developed by the patient (Koutinas and Koutinas, 2014).
Similar to previous reports, the ocular lesions of dogs naturally infected with leishmaniasis were predominantly bilateral, reflecting the systemic involvement of the disease (Brito et al., 2006); however, in earlier stages of the disease only one eye may be affected (Peña et al., 2000). Ocular involvement in dogs infected with leishmaniasis may be either a sequel of leukocytic infiltration secondary to the presence of Leishmania amastigotes or the result of an immune-mediated process with deposition of immune complexes at the blood-aqueous barrier. This lymphoplasmacytic and granulomatous inflammatory infiltration involves (in order of frequency) the conjunctiva, limbus, ciliary body, iris, cornea, sclera, iridocorneal angle, choroid, and the optic nerve sheath (Peña et al., 2000; Brito et al., 2006), which could explain the multiple ocular manifestations recorded in the present study and the progression of these ocular lesions to panophthalmia and complete blindness over 2-4 months.
Similar to previous reports (Marcondes et al., 2000; Brito et al., 2006; Pietro et al., 2016), anterior uveitis was the most common manifestation of ocular leishmaniasis in the dogs included in this study. Uveitis may have an immunologic or allergic basis, similar to post-kala-azar leishmaniasis of humans, and may result in secondary glaucoma and panophthalmitis with permanent loss of vision (García-Alonso et al., 1996; Ciaramella et al., 1997). Uveitis, regardless of its chronicity, is characterized by uveal and corneal edema, miosis, fibrin formation in the anterior chamber, and multiple nodules within the iris stroma (García-Alonso et al., 1996; Peña et al., 2008; Pietro et al., 2016). Posterior uveitis is less commonly reported and usually accompanies anterior uveitis (Koutinas and Koutinas, 2014), explaining why all eyes with posterior uveitis in the present study were also associated with anterior uveitis. | 2021-05-07T00:03:43.705Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "994ea520e140700d7006e06737ce740a3e500b5b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.29261/pakvetj/2021.006",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b963e3f65245e5a07f2c9310e0a14d670dfe4642",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7885026 | pes2o/s2orc | v3-fos-license | Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes
Consider communication over a binary-input memoryless output-symmetric channel with low density parity check (LDPC) codes and maximum a posteriori (MAP) decoding. The replica method of spin glass theory allows one to conjecture an analytic formula for the average input-output conditional entropy per bit in the infinite block length limit. Montanari proved a lower bound for this entropy, in the case of LDPC ensembles with convex check degree polynomial, which matches the replica formula. Here we extend this lower bound to any irregular LDPC ensemble. The new feature of our work is an analysis of the second derivative of the conditional input-output entropy with respect to noise. A close relation arises between this second derivative and the correlations or mutual information of codebits. This allows us to extend the realm of the interpolation method; in particular, we show how channel symmetry allows one to control the fluctuations of the overlap parameters.
Introduction and Main Results
Linear codes based on sparse random graphs have emerged as a major chapter of coding theory [1]. While the belief propagation (BP) decoding algorithm and density evolution method have been explored in detail because of their low algorithmic complexity and good performance, much remains to be understood about the optimal (MAP) performance bounds of sparse graph codes. Recent theoretical progress on the binary erasure channel (BEC) has convincingly shown that BP and MAP decoding have intimate relationships (see [1] and in particular [4]), but understanding this relationship for other channels is still a largely open problem. In fact, the replica and/or cavity methods of statistical mechanics of dilute spin glass models allow one to conjecture an analytic formula for $H_n(X|Y)$, the entropy of the transmitted message $X = (X_1, \ldots, X_n)$ conditional on the received message $Y = (Y_1, \ldots, Y_n)$, in the large block length limit $n \to +\infty$. The replica formula expresses the conditional entropy as the solution of a variational problem whose critical points are given by the density evolution fixed point equation (see [2], [3]). If one is to solve the fixed point equation iteratively, the choice of initial conditions is not necessarily the one given by channel outputs (as in standard density evolution) but the one which yields the maximum conditional entropy. Note that a byproduct of the replica formula is the determination of the maximum a posteriori (MAP) noise threshold, above which reliable communication is not possible whatever the decoding algorithm.
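To make the density evolution fixed point equation concrete on the simplest channel, here is a minimal Python sketch for the BEC, where density evolution reduces to a scalar recursion on the erasure probability of variable-to-check messages. The $(d_v, d_c)$-regular degrees and the erasure values below are illustrative choices, not parameters taken from this paper.

```python
# Minimal sketch, assuming a (d_v, d_c)-regular LDPC ensemble on the BEC.
# Density evolution here is the scalar recursion
#   x_{t+1} = eps * (1 - (1 - x_t)**(d_c - 1))**(d_v - 1),
# whose fixed points are the critical points of the replica functional.

def de_fixed_point(eps: float, dv: int = 3, dc: int = 6,
                   x0: float = 1.0, iters: int = 2000) -> float:
    """Iterate BEC density evolution from x0 and return the (approximate) fixed point."""
    x = x0
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Starting from x0 = 1 (all erased) probes the largest fixed point, which is
# the initial condition relevant for maximizing the conditional entropy.
for eps in (0.40, 0.43, 0.45):     # BP threshold for (3,6) is ~0.4294
    print(eps, de_fixed_point(eps))
```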
The proof of the replica formulas is, in general, an open problem 1 . In the context of communication they have been proven for a class of low density parity check codes (LDPC) codes on the BEC [11], [12] (see also [13] for recent work going beyond the BEC) and for low density generator codes (LDGM) on a class of channels [14].
A promising approach towards a general proof of the replica formulas seems to be the use of the so-called interpolation method, first developed in the context of the SK model [15], [16], [17]. Consider an LDPC$(n, \Lambda, P)$ ensemble where $\Lambda(x) = \sum_d \Lambda_d x^d$ and $P(x) = \sum_k P_k x^k$ are the variable and check degree distributions from the node perspective. We will always assume that the maximal degrees are finite. Montanari [7] (see also the related work of Franz-Leone [8] and Talagrand-Panchenko [9]) has developed the interpolation method for such a system and has derived a lower bound for the conditional entropy for ensembles with any polynomial $\Lambda(x)$ but $P(x)$ restricted to be convex for $-e \le x \le e$ (in particular, if the check degree is constant this means it has to be even). An important fact is that these lower bounds match the replica solution, and are thus believed to be tight. Since Fano's inequality tells us that the block error probability for a code of length $n$ and rate $r$ is lower bounded by $\frac{1}{rn} H_n(X|Y)$, an immediate application of the lower bound is the numerical computation of a rigorous upper bound on the MAP threshold.
In the present paper we drop the convexity requirement for $P(x)$ in the cases of the BEC and BIAWGNC with any noise level, and in the case of general binary memoryless symmetric (BMS) channels in a high noise regime. In other words, we prove the lower bound for any standard regular (so odd degrees are allowed) or irregular code ensemble.
Besides the main result itself, we introduce a new tool in the form of a relationship between the second derivative of the conditional entropy with respect to the noise and correlation functions of codebits. These correlation functions are shown to be intimately related to the mutual information between two codebits. The formulas are somewhat similar to those for GEXIT functions [1], which relate the first derivative of the conditional entropy to soft bit estimates. By combining these relations with the interpolation method we are able to control the fluctuations of the so-called overlap parameters. This part of our analysis is crucial for proving the general lower bound on the conditional entropy and relies heavily on channel symmetry.
A preliminary summary of the present work has appeared in [20].
Variational bound on the conditional entropy
Let $p_{Y|X}(y|x)$ be the transition probability of a BMS(ε) channel, where ε is the noise parameter (understood to vary in the appropriate range). We will work in terms of both the likelihood variable $l$ and the difference variable $t = p_{Y|X}(y|0) - p_{Y|X}(y|1) = \tanh\frac{l}{2}$. It will be convenient to use the notation $c_L(l)$ and $c_D(t)$ for the distributions of $l$ and $t$, assuming that the all-zero codeword is transmitted (that is to say, $c_L(l)\,dl = c_D(t)\,dt = p_{Y|X}(y|0)\,dy$). Let $V$ be some random variable with an arbitrary density $d_V(v)$ satisfying the symmetry condition $d_V(-v) = e^{-v} d_V(v)$, and define $U$ through
$\tanh\frac{U}{2} = \prod_{i=1}^{k-1} \tanh\frac{V_i}{2}$,   (1)
where $V_i$ are i.i.d. copies of $V$ and $k$ is the (random) degree of a check node. We denote by $U^c$, $c = 1, \ldots, d$, i.i.d. copies of $U$, where $d$ is the (random) degree of a variable node. Notice that in the belief propagation (BP) decoding algorithm, $U$ appears as the check-to-variable node message and $V$ appears as the variable-to-check node message. Define the replica symmetric functional $h_{RS}[d_V; \Lambda, P]$ (we view it as a functional of the probability distribution $d_V$). Our main result is about the conditional entropy per bit, averaged over the code ensemble $\mathcal{C} = \mathrm{LDPC}(n, \Lambda, P)$.
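A minimal sketch of how the extra observation $U$ can be sampled in practice, assuming the standard check-node combination rule of equation (1) and a Gaussian symmetric trial density for $V$. Both choices are illustrative assumptions; the variational bound below holds for any admissible trial density $d_V$.

```python
# Population-dynamics sketch: sample U from k-1 i.i.d. copies of V via
# tanh(U/2) = prod_{i=1}^{k-1} tanh(V_i/2) (the check-node rule (1)).
# The Gaussian trial density is an illustrative assumption; symmetric
# Gaussian LLR densities satisfy variance = 2 * mean.
import math
import random

def sample_U(sample_V, k: int) -> float:
    """Draw one U given a sampler for V and a check degree k."""
    prod = 1.0
    for _ in range(k - 1):
        prod *= math.tanh(sample_V() / 2.0)
    # clamp to avoid atanh(+-1) from floating-point round-off
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(prod)

def gaussian_V(mean: float = 2.0) -> float:
    return random.gauss(mean, math.sqrt(2.0 * mean))

us = [sample_U(gaussian_V, k=6) for _ in range(10_000)]
print(sum(us) / len(us))   # empirical mean of U under this trial density
```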
Definition H. We define the channel parameters $m_1^{(2p)} = \mathbb{E}[t^{2p}]$ ($p$ an integer)   (2) and say that a BMS(ε) channel is in the high noise regime if the series expansions of section 3 are convergent, which holds under a suitable summability condition on the $m_1^{(2p)}$ (see Appendix B). For example, the BSC(ε) certainly satisfies H if the crossover noise parameter is close enough to $\frac12$, because $\mathbb{E}[t^{2p}] = (1 - 2\epsilon)^{2p}$. More generally, any channel with bounded likelihood variables satisfies H in a regime of sufficiently high noise. For channels with unbounded likelihoods the condition will be satisfied if $c_L(l)$ has sufficiently good decay properties. But note that the BEC(ε), which has mass at $l = +\infty$, does not satisfy this condition since $\mathbb{E}[t^{2p}] = 1 - \epsilon$. However, as we will see, for the BEC(ε) and the BIAWGNC(ε) we do not need condition H. For these two channels our analysis can be made fully non-perturbative, and holds for all noise levels.
Theorem 1 (Variational Bound). Assume communication using a standard irregular $\mathcal{C} = \mathrm{LDPC}(n, \Lambda, P)$ code ensemble, through a BEC(ε) or BIAWGNC(ε) with any noise level, or a BMS(ε) channel satisfying H. For all ε in the above ranges we have
$\liminf_{n \to +\infty} \mathbb{E}_\mathcal{C}[h_n] \ge \max_{d_V} h_{RS}[d_V; \Lambda, P]$.
Let us note that this theorem already appears in [18] for the special case of the BIAWGNC with a Poissonian $\Lambda(x)$. We stress again that a formal calculation using the replica method yields equality, $\lim_{n \to +\infty} \mathbb{E}_\mathcal{C}[h_n] = \max_{d_V} h_{RS}[d_V; \Lambda, P]$. For this reason it is strongly suspected that the converse inequality holds as well, but so far no progress has been made except in the limited number of situations alluded to before.
Derivatives of the conditional entropy
Our proof of the variational bound uses integral formulas for the first and second derivatives of $\mathbb{E}_\mathcal{C}[h_n]$ with respect to the noise parameter. The ensemble formulas follow from slightly more general ones that are valid for any fixed linear code. To give the formulation for a fixed linear code it is convenient to introduce a noise vector $\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ and a BMS($\epsilon$) channel with noise level $\epsilon_i$ when bit $x_i$ is sent. When all noise levels are set to the same value ε, the channel is denoted BMS(ε). The distributions of the likelihood $l_i$ or difference domain $t_i$ representations of the channel outputs now depend on $\epsilon_i$. In order to keep the notation simpler we do not explicitly indicate the $\epsilon_i$ dependence and still denote them as $c_L(l_i)$ and $c_D(t_i)$, respectively.
We introduce the soft MAP estimate of bit $X_i$, $T_i = p_{X_i|Y}(0|y) - p_{X_i|Y}(1|y)$, and the soft estimate for the modulo-2 sum $X_i \oplus X_j$, $T_{ij} = p_{X_i \oplus X_j|Y}(0|y) - p_{X_i \oplus X_j|Y}(1|y)$. In the sequel the notation $v_{\sim i}$ (resp. $v_{\sim ij}$) means that the component $v_i$ (resp. the components $v_i$ and $v_j$) is (are) omitted from the vector $v$. The following is known [1] but we state it for completeness. A derivation in the spirit of the present paper can also be found in [19].
Proposition 1 (GEXIT formula). For any BMS(ε) channel and any fixed linear code, the first derivative of the conditional entropy with respect to $\epsilon_i$ is given by an integral formula involving the soft estimate $T_i$. This formula will be used for an ensemble that is symmetric under permutation of bits and a BMS(ε) channel: averaging over the code ensemble $\mathcal{C}$, we get the corresponding formula for the average entropy per bit. There are two channels, the BEC and the BIAWGNC, where these general formulas take a simpler form. We will prove Proposition 2 (Correlation formula). For any BMS(ε) channel and any fixed linear code, the second derivative is given by an analogous integral formula involving $T_i$, $T_j$ and $T_{ij}$. In this case the ratio in the logarithm may take the ambiguous value $\frac{0}{0}$, but the formula is to be interpreted as (4). We will see in section 2 that, in terms of extrinsic soft bit estimates, there is an analogous expression that is unambiguous.
Again, for the case of interest later on, we have a BMS(ε) channel and a linear code ensemble that is symmetric under permutations of bits; the formulas then simplify further for the BEC and for the BIAWGNC. Formulas (9) and (11) involve the "correlation" $(T_{ij} - T_i T_j)$ for bits $X_i$ and $X_j$. The general formula (8) can also be recast in terms of powers of such correlations by expanding the logarithm (see section 3). Loosely speaking, in the infinite block length limit $n \to +\infty$, the second derivative will be well defined only if the correlations decay sufficiently fast with respect to the graph distance (the minimal length among all paths joining $i$ and $j$ on the Tanner graph). Thus we expect good decay properties for all noise levels except at the phase transition thresholds where, in the limit $n \to +\infty$, the first derivative generally has bounded discontinuities, and thus the second derivative cannot be uniformly bounded in $n$.
Relation to mutual information
The correlation $T_{ij} - T_i T_j$ is basically a measure of the dependence between two codebits; thus it is natural to expect that it should be related to the mutual information $I(X_i; X_j \mid Y)$. We do not pursue this issue in all its details because it is not used in the rest of the paper, but we wish to briefly state the main relations, which follow naturally from the previous formulas.
The conditional entropy on the r.h.s. can be written out explicitly; in this expression the three conditional entropies are independent of the channel parameters $\epsilon_i$ and $\epsilon_j$. Summarizing, we have obtained for $i \neq j$ a direct relation between the correlation and $I(X_i; X_j \mid Y)$. The BIAWGNC(ε). Take $i \neq j$. Applying the inequality for the Kullback-Leibler divergence of the two distributions $P = p_{X_i X_j|y}$ and $Q = p_{X_i|y}\, p_{X_j|y}$, we get for $i \neq j$ a bound on the squared correlation in terms of the mutual information; averaging over the outputs yields the corresponding averaged bound. Highly noisy BMS channels. From the high noise expansion (see section 3) and the above remarks, we can derive an inequality like the preceding one, which holds in the high noise regime for general BMS channels. The number 8 gets replaced by some suitable factor which depends on the channel noise.
Organisation of the paper
The statistical mechanics formulation is very convenient for performing many of the necessary calculations; moreover, the interpolation method is best formulated in that framework. Thus we briefly recall it in section 2, together with a few connections to the information theoretic language. Section 3 contains the derivation of the correlation formula (proposition 2) and other useful material. The interpolation method used to prove the variational bound (theorem 1) is presented in section 4. The main new ingredient of the proof is an estimate (see proposition 3 in section 4) on the fluctuations of overlap parameters. The proof of proposition 3 is the object of section 5. The appendices contain technical calculations involved in the proofs.
Statistical Mechanics Formulation
Consider a fixed code belonging to the ensemble $\mathcal{C} = \mathrm{LDPC}(n, \Lambda, P)$. The posterior distribution $p_{X|Y}(x|y)$ used in MAP decoding can be viewed as the Gibbs measure of a particular random spin system. For this it is convenient to use the usual mapping of bits onto spins, $\sigma_i = (-1)^{x_i}$. Given any set $A \subset \{1, \ldots, n\}$, we use the notation $\sigma_A = \prod_{i \in A} \sigma_i$, so that $\sigma_A = (-1)^{\oplus_{i \in A} x_i}$. It will be clear from the context whether the subscript is a set or a single bit. For a uniform prior over the codewords and a BMS channel, Bayes' rule implies
$\mu(\sigma) = \frac{1}{Z} \prod_c \frac{1}{2}\bigl(1 + \sigma_{\partial c}\bigr) \prod_{i=1}^{n} e^{\frac{l_i}{2}\sigma_i}$,
where the first product runs over all check nodes $c$ of the given code, and $\sigma_{\partial c} = \prod_{i \in \partial c} \sigma_i$ is the product of the spins (mod-2 sum of the bits) attached to the variable nodes $i$ that are connected to check $c$. $Z$ is the normalization factor or "partition function", and $\ln Z$ is the "pressure" associated to the Gibbs measure $\mu(\sigma)$; it is related to the conditional entropy by a simple identity. Expectations with respect to $\mu(\sigma)$ for a fixed graph and a fixed channel output are denoted by the bracket $\langle - \rangle$; more precisely, for any $A \subset \{1, \ldots, n\}$, $\langle \sigma_A \rangle = \sum_\sigma \sigma_A\, \mu(\sigma)$. More details on the above formalism can be found for example in [18]. The soft estimate of bit $X_i$ is (in the difference domain) $T_i = \langle \sigma_i \rangle$. We will also need soft estimates for $X_i \oplus X_j$, $i \neq j$; in the statistical mechanics formalism they are simply expressed as $T_{ij} = \langle \sigma_i \sigma_j \rangle$. In particular, the correlation between bits $X_i$ and $X_j$ becomes $T_{ij} - T_i T_j = \langle \sigma_i \sigma_j \rangle - \langle \sigma_i \rangle \langle \sigma_j \rangle$, which is the usual notion of spin-spin correlation in statistical mechanics.
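For intuition about the bracket notation, the following Python sketch evaluates $\langle\sigma_A\rangle$ by brute-force enumeration for a toy single-check code. The check set and log-likelihood values are hypothetical, and the exhaustive sum is of course only feasible for very small $n$.

```python
# Brute-force sketch of the Gibbs bracket <sigma_A> for a toy linear code.
# mu(sigma) is proportional to prod_c (1 + sigma_dc)/2 * prod_i exp(l_i*sigma_i/2),
# i.e., uniform over codewords reweighted by the channel likelihoods.
import itertools
import math

checks = [(0, 1, 2)]          # one parity check on bits 0, 1, 2 (hypothetical)
l = [1.2, -0.3, 0.7]          # hypothetical received log-likelihood ratios
n = len(l)

def weight(sigma):
    for c in checks:
        prod = 1
        for i in c:
            prod *= sigma[i]
        if prod != 1:          # the indicator (1 + sigma_dc)/2 kills non-codewords
            return 0.0
    return math.exp(sum(li * si for li, si in zip(l, sigma)) / 2.0)

configs = list(itertools.product((-1, 1), repeat=n))
Z = sum(weight(s) for s in configs)

def bracket(A):
    """Gibbs average of the spin product over the set A."""
    return sum(weight(s) * math.prod(s[i] for i in A) for s in configs) / Z

print(bracket([0]), bracket([0, 1]))   # T_1 and T_{12} in the notation above
```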
In section 3 (and appendices B, C) the algebraic manipulations are best performed in terms of "extrinsic" soft bit estimates. We will need several variants: the simplest one is the estimate of $X_i$ when the observation $y_i$ is not available; the second is the estimate of $X_i$ when both $y_i$ and $y_j$ are not available; finally, we will also need the extrinsic estimate of the mod-2 sum $X_i \oplus X_j$ when both $y_i$ and $y_j$ are not available. It is practical to work in terms of a modified Gibbs average $\langle \sigma_A \rangle_{\sim i}$, which means that $l_i = 0$, in other words $y_i$ is not available. Similarly we introduce the averages $\langle \sigma_A \rangle_{\sim ij}$, in which both $y_i$ and $y_j$ are unavailable. The extrinsic brackets $\langle - \rangle_{\sim i}$ and $\langle - \rangle_{\sim ij}$ are related to the usual ones $\langle - \rangle$ by the formulas (15)-(17) derived in appendix A.
The Correlation Formula
A derivation of proposition 1 and of (4), (6) within the formalism outlined in section 2 can be found in [19].
Proof of proposition 2
For any BMS(ε) channel and linear code we have, from (12), an expression whose second equality follows by permutation symmetry of code bits. Differentiating once more, we obtain two contributions $S_1$ and $S_2$. First we consider $S_1$. When we replace the relevant expression in the integral (19), we see that the contribution of $\ln Z_{\sim i}$ vanishes because this latter quantity is independent of $l_i$; indeed, the associated weight is a normalized probability distribution. Then, using (15), we arrive at an expression which (because of (13)) coincides with the first term in the correlation formula. Now we consider the term $S_2$. We can rewrite $S_2$ in terms of the partition function $Z_{\sim ij}$ for the Gibbs measure $\langle \cdot \rangle_{\sim ij}$. Using again (20), the contribution of $\ln Z_{\sim ij}$ vanishes as before because it is independent of $l_i, l_j$; similarly we obtain four identities which together lead to the desired expression. To get the formulas in terms of the usual averages we use the relations (16), (17). Because of (13) and (14) this coincides with the second term in the correlation formula. The proposition now follows from (18), (22) and (24).
Expressions in terms of the spin-spin correlation
The BEC. From $c_D(t) = (1 - \epsilon)\,\delta(t - 1) + \epsilon\,\delta(t)$, the second derivative in terms of extrinsic quantities (formulas (21) and (23)) reduces to a simple form. There are various ways to see that for the BEC any Gibbs average $\langle\sigma_A\rangle$ or $\langle\sigma_A\rangle_{\sim ij}$ takes values in $\{0, 1\}$. A heuristic explanation is that bits (or their mod-2 sums) are either perfectly known or erased. A more formal explanation follows from a Nishimori identity combined with the Griffiths-Kelly-Sherman (GKS) correlation inequality [18]: the relevant quantity is a positive random variable with zero expectation and is therefore equal to 0 with probability one. Note that in deriving the last expression we used the fact that $l_i = \infty$ ($l_j = \infty$) implies that $\langle\sigma_i\rangle = +1$ ($\langle\sigma_j\rangle = +1$), which makes the logarithm term equal to zero. The difference of the two logarithms is simplified using four Nishimori identities, and finally we obtain a simple expression. Let us point out that the second GKS inequality (for the BEC) implies that $\langle\sigma_i\sigma_j\rangle - \langle\sigma_i\rangle\langle\sigma_j\rangle \ge 0$; thus the correlation takes values in $\{0, 1\}$. The BIAWGNC. From the explicit form of $c_D(t)$ one can show that the correlation formula reduces to a simpler form. Alternatively, differentiating (12) and using integration by parts also leads to this simpler form; this route is much simpler and the details can be found in [18].
Highly noisy BMS channels. We use the extrinsic form of the correlation formula given by (21) and (23). First we expand the logarithms in $S_1$ and $S_2$ in powers of $t_i$ and $t_j$, and then use various Nishimori identities. After some tedious algebra (see Appendices B and C) we can organize the expansion in powers of the channel parameters (2); in the high noise regime this expansion is absolutely convergent. To lowest order, the second derivative of the conditional entropy is directly related to the average square of the code-bit (spin-spin) correlation.
The Interpolation Method
We use the interpolation method in the form developed by Montanari. As explained in [7], it is difficult to establish the bounds directly for the standard ensembles. Rather, one introduces a "multi-Poisson" ensemble which approximates the standard ensemble. Once the bounds are derived for the multi-Poisson ensemble, they are extended to the standard ensemble by a limiting procedure. The interpolation construction is fairly complicated, so it is helpful to briefly review the simpler pure Poisson case first.
Poisson ensemble
We introduce the ensemble Poisson-LDPC$(n, 1 - r, P) = \mathcal{P}$, where $n$ is the block length, $r$ the rate, and $P(x) = \sum_k P_k x^k$ the check degree distribution.
A bipartite graph from the Poisson ensemble is constructed as follows. The graph has $n$ variable nodes. For each $k$, choose a Poisson number $m_k$ of check nodes with mean $n(1-r)P_k$. The graph thus has a total of $m = \sum_k m_k$ check nodes, which is also a Poisson variable with mean $n(1-r)$. For each check node $c$ of degree $k$, choose $k$ variable nodes uniformly at random and connect them to $c$. One can show that the left degree distribution concentrates around a Poisson distribution $\Lambda_P(x) = e^{P'(1)(1-r)(x-1)}$; in other words, the fraction $\Lambda_l$ of variable nodes with degree $l$ follows the Poisson law with mean $P'(1)(1-r)$.
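A minimal Python sketch of sampling one graph from this Poisson ensemble; the block length, rate, and check degree distribution below are illustrative choices.

```python
# Sketch of sampling one graph from the Poisson-LDPC(n, 1-r, P) ensemble.
# P maps check degree k -> P_k; the (n, r, P) values below are illustrative.
import numpy as np

def sample_poisson_ldpc(n: int, r: float, P: dict, seed: int = 0):
    rng = np.random.default_rng(seed)
    checks = []
    for k, Pk in P.items():
        m_k = rng.poisson(n * (1.0 - r) * Pk)  # Poisson number of degree-k checks
        for _ in range(m_k):
            # each degree-k check picks k variable nodes uniformly at random
            checks.append(tuple(rng.choice(n, size=k, replace=False)))
    return checks

checks = sample_poisson_ldpc(n=1000, r=0.5, P={6: 1.0})
degs = np.bincount(np.concatenate([np.array(c) for c in checks]), minlength=1000)
print(len(checks), degs.mean())  # ~500 checks; variable degrees ~Poisson(3)
```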
The main idea behind the interpolation technique is to recursively remove the check node constraints and compensate them with extra observations $U$ distributed as in (1), where $d_V$ is a trial distribution to be optimized in the final inequality. One can interpret these extra observations as coming from a repetition code whose rate is tuned in such a way that the total design rate $r$ remains fixed. More precisely, let $s \in [0, 1]$ be an interpolating parameter. At "time" $s$ we have a Poisson-LDPC$(n, (1-r)s, P) = \mathcal{P}_s$ code. Besides the usual channel outputs $l_i$, each node $i$ receives $e_i$ extra i.i.d. observations $U^i_a$, $a = 1, \ldots, e_i$, where $e_i$ is Poisson with mean $n(1-r)(1-s)$ (so the total effective rate is fixed to $r$). The interpolating Gibbs measure $\mu_s(\sigma)$ has the same form as before, with the product over the checks of a given graph in the ensemble $\mathcal{P}_s$ and the extra observations included. At $s = 1$ one recovers the original measure, while at $s = 0$ (no checks) we have a simple product measure (corresponding to a repetition code) which is tailored to yield the replica symmetric entropy $h_{RS}[d_V; \Lambda_P, P]$ (up to an extra constant).
The central result of [7] is the sum rule expressing $\mathbb{E}_\mathcal{C}[h_n]$ as the replica symmetric term plus an integrated remainder. Let us explain the notation. The first term on the right-hand side, $h_{RS,P}[d_V; \Lambda_P, P]$, is the replica symmetric functional of section 1 evaluated for the Poisson ensemble. The remainder term $R_n(s)$ involves the differences between $q_{2p} = \mathbb{E}_V[(\tanh V)^{2p}]$ and the overlap parameters $Q_{2p} = \frac{1}{n}\sum_{i=1}^{n} \sigma_i^{(1)} \cdots \sigma_i^{(2p)}$ (28). Here $\sigma_i^{(\alpha)}$, $\alpha = 1, \ldots, 2p$, are $2p$ independent copies (replicas) of the spin $\sigma_i$.
Multi-Poisson ensemble
The multi-Poisson-LDPC$(n, \Lambda, P, \gamma) = \mathcal{MP}$ ensemble is a more elaborate construction which allows one to approximate a target LDPC$(n, \Lambda, P)$ ensemble. Its parameters are the block length $n$, the target variable and check node degree distributions $\Lambda(x)$ and $P(x)$, and the real number $\gamma$ which controls the closeness to the standard ensemble. We recall that variable and check nodes have finite maximum degrees. The construction of a bipartite graph from the multi-Poisson ensemble proceeds in rounds: the process starts with a high rate code, and at each round a very small number of check nodes is added, until one ends up with a code with almost the desired rate and degree distribution. A graph process $G_t$ is defined for discrete times $t = 0, \ldots, t_{max}$, $t_{max} = \lfloor \Lambda'(1)/\gamma \rfloor - 1$, as follows. For $t = 0$, $G_0$ has no check nodes and has $n$ variable nodes. The set of variable nodes is partitioned into subsets $V_l$ of cardinality $n\Lambda_l$ for every $l$, and every node $i \in V_l$ is decorated with $l$ free sockets. The number $d_i(t)$ keeps track of the number of free sockets on node $i$ once round $t$ is completed; so for $t = 0$, each variable node $i \in V_l$ has $d_i(0) = l$ free sockets. At round $t$, $G_t$ is constructed from $G_{t-1}$ as follows. For all $k$, choose a Poisson number $m^t_k$ of check nodes with mean $n\gamma P_k / P'(1)$. Connect each outgoing edge of these new degree-$k$ check nodes (added at time $t$) to variable node $i$ with probability $w_i(t) = \frac{d_i(t-1)}{\sum_i d_i(t-1)}$, the fraction of free sockets at node $i$ after round $t-1$ was completed. Once all new check nodes are connected, update the number of free sockets for each variable node: $d_i(t) = d_i(t-1) - \Delta_i(t)$, where $\Delta_i(t)$ is the number of times variable node $i$ was chosen during round $t$. For $n \to \infty$ this construction yields graphs with variable degree distribution $\Lambda_\gamma(x)$ (the check degree distribution remains $P(x)$). The variational distance between $\Lambda_\gamma(x)$ and $\Lambda(x)$ tends to zero as $\gamma \to 0$. A code sketch of this graph process is given below.
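The sketch below follows the round-based description above under some simplifying assumptions: socket counters are floored at zero and the loop stops early if all sockets are exhausted (the paper does not spell out these edge cases), and the parameter values are illustrative.

```python
# Sketch of the multi-Poisson graph process G_t (illustrative parameters).
import numpy as np

def multi_poisson_graph(n, Lambda, P, gamma, seed=0):
    rng = np.random.default_rng(seed)
    Pp1 = sum(k * Pk for k, Pk in P.items())        # P'(1)
    Lp1 = sum(l * Ll for l, Ll in Lambda.items())   # Lambda'(1)
    # d_i(0): a fraction Lambda_l of the nodes starts with l free sockets
    d = np.concatenate([np.full(round(n * Ll), float(l))
                        for l, Ll in Lambda.items()])
    checks = []
    for t in range(int(Lp1 / gamma)):               # rounds t = 0 .. t_max
        if d.sum() <= 0:                            # assumption: stop if exhausted
            break
        w = d / d.sum()                             # w_i(t), frozen within round t
        delta = np.zeros(n)
        for k, Pk in P.items():
            m_t_k = rng.poisson(n * gamma * Pk / Pp1)   # new degree-k checks
            for _ in range(m_t_k):
                nodes = rng.choice(n, size=k, p=w)      # each edge drawn via w_i(t)
                checks.append(tuple(nodes))
                np.add.at(delta, nodes, 1)              # Delta_i(t) counters
        d = np.maximum(d - delta, 0.0)              # update free sockets
    return checks

checks = multi_poisson_graph(n=1000, Lambda={3: 1.0}, P={6: 1.0}, gamma=0.05)
print(len(checks), 6 * len(checks) / 1000)          # checks; mean variable degree
```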
The interpolating ensemble now uses two parameters $(t^*, s)$, where $t^* \in \{0, \ldots, t_{max}\}$ and $0 \le s \le \gamma$. For rounds $0, \ldots, t^*-1$ one proceeds exactly as before to obtain a graph $G_{t^*-1}$. At the next round $t^*$, one proceeds as before but with $\gamma$ replaced by $s$. The rate loss is compensated by adding $e_i$ extra observations for each node $i$, where $e_i$ is a Poisson integer with mean $n(\gamma - s)\,w_i(t^*)$. The round is ended by updating the number of free sockets. Finally, for rounds $t^*+1, \ldots, t_{max}$, no new check node is added, but for each variable node $i$, $e_i$ external observations are added, where $e_i$ is a Poisson integer with mean $n\gamma\, w_i(t^*)$. Moreover the free socket counter is updated as $d_i(t) = d_i(t-1) - e_i(t)$. Recall that the external observations are i.i.d. copies of the random variable $U$ (see (1)).
The interpolating Gibbs measure $\mu_{t^*,s}(\sigma)$ has the same form as (26), with the appropriate products over checks and extra observations. Let $h_{n,\gamma}$ denote the conditional entropy of the multi-Poisson ensemble $\mathcal{MP}$ (corresponding to $t^* = t_{max}$ and $s = \gamma$). Again, the central result of [7] is the sum rule (29). Explanations on the notation are in order. The first term, $h_{RS,\gamma}[d_V; \Lambda_\gamma, P]$, is the replica symmetric functional of section 1 evaluated for the multi-Poisson ensemble. The remainder term $R_n(t^*, s)$ is given by (30), where $q_{2p} = \mathbb{E}_V[(\tanh V)^{2p}]$ as before and the $Q_{2p}$ are modified overlap parameters. Here, as before, $\sigma_i^{(\alpha)}$, $\alpha = 1, 2, \ldots, 2p$, are $2p$ independent copies (replicas) of the spin $\sigma_i$, and $\langle - \rangle_{2p,t^*,s}$ is the Gibbs bracket associated to the product measure $\prod_{\alpha=1}^{2p} \mu_{t^*,s}(\sigma^{(\alpha)})$. The overlap parameter is now more complicated than in the Poisson case because of the (positive) terms $w_i(t^*)$ and $X_i(t^*)$. Here the $X_i(t^*)$ are new i.i.d. random variables whose precise description is quite technical and can be found in [7]. The reader may think of the terms $w_i(t^*)X_i(t^*)$ as behaving like the $\frac{1}{n}$ factor of the pure Poisson ensemble overlap parameter (28). More precisely, the only properties (see Appendix E in [7]) that we need are the bounds (31)-(33), valid for any finite $k$ with finite positive constants $A$, $B$, $A_k$ independent of $n$ (they may depend on some of the other parameters, but this turns out to be unimportant). Finally, we use the shorthand $\mathbb{E}_s[-]$ for the expectation with respect to all random variables involved in the interpolation measure. The subscript $s$ is there to remind us that this expectation depends on $s$, a fact that is important to keep in mind because the remainder involves an integral over $s$. When we use $\mathbb{E}$ (without the subscript $s$; as in (33), for example) it means that the quantity does not depend on $s$. In the sequel the replicated Gibbs bracket $\langle - \rangle_{2p,t^*,s}$ is simply denoted by $\langle - \rangle_s$; there will be no risk of confusion because the only property that we use is its linearity. In [7] it is shown that (34) holds, where $O(\gamma^b)$ is uniform in $n$ ($b > 0$ a numerical constant) and $o_n(1)$ (which depends on $\gamma$) tends to 0 as $n \to +\infty$.
In the next paragraph we prove the variational bound on the conditional entropy of the multi-Poisson ensemble, namely (35). Note that here $o_n(1)$ again depends on $\gamma$. By combining this bound with (34) and taking limits, one obtains (36). The main theorem 1 then follows by maximizing the right-hand side over $d_V$.
Proof of the Variational Bound (35)
In view of the sum rule (29) it is sufficient to prove that $\liminf_{n \to +\infty} R_n(t^*, s) \ge 0$. In the case of a convex $P$ considered in [7] this is immediate, because convexity is equivalent to the positivity of the relevant combination of overlap terms. Note that $P(x) = \sum_k P_k x^k$ is in any case convex for $x \ge 0$, since all $P_k \ge 0$. So if we do not assume convexity of the check node degree distribution, we have to circumvent the fact that $Q_{2p}$ can be negative. But note that the overlaps are bounded; therefore we are assured that for any $P$ (i.e., not necessarily convex for $x \in \mathbb{R}$) a suitable lower bound holds, and the proof will follow if we can show that, with high probability, the overlap fluctuations are small. The following concentration estimate will suffice and is proven in section 5.
Proposition 3. Fix any $\delta < \frac14$. On the BEC(ε) and BIAWGNC(ε), or on general BMS(ε) channels satisfying H, we have for a.e. ε a concentration estimate on the overlap fluctuations. Here $\mathcal{P}_s(X)$ denotes the probability distribution $\mathbb{E}_s\langle \mathbb{I}_X \rangle_s$.
This proposition can presumably be strengthened in two directions. First, we conjecture that hypothesis H is not needed (this is indeed the case for the BEC and BIAWGNC). Secondly, the statement should hold for all ε except at a finite set of threshold values of ε where the conditional entropy is not differentiable and its first derivative is expected to have jumps (except for cycle codes, where higher order derivatives are singular). Since we are unable to control the locations of these jumps, our proof only works for Lebesgue-almost every ε.
We are now ready to complete the proof of the variational bound (35).
End of Proof of (35). From (31) and (33), combined with $q_{2p} \le 1$, this implies (since the maximal degree of $P$ is finite) a bound with some positive constant $C_1$. The only crucial feature here is that this constant does not depend on $n$ or on the number of replicas $2p$ (a more detailed analysis shows that it depends only on the degree of $P(x)$). Now we split the sum (30) into terms with $1 \le p \le n^\delta$ (call this contribution $R_A$) and terms with $p \ge n^\delta$ (call this contribution $R_B$), where $\delta > 0$ is the constant of proposition 3. For the second contribution, (41) gives the required bound. For the first contribution, the second sum in the resulting expression is positive due to (37). Below we use proposition 3 to show that the remaining term vanishes for almost every ε in the appropriate range. Let us now prove (43). First we use Cauchy-Schwarz and then (40) to obtain a bound with some positive constant $C_2$ independent of $n$ and $p$ (depending only on the degree of $P(x)$). In the second inequality we have permuted the integral with a finite sum and used Cauchy-Schwarz. Finally, we can apply proposition 3 and Lebesgue's dominated convergence theorem to the last sum over $p$ to conclude that (43) holds.
Fluctuations of overlap parameters
In this section we prove proposition 3. The proofs are done directly for the multi-Poisson ensemble. We start by a relation between the overlap fluctuation and the spin-spin correlation.
Lemma 1. For any BMS(ε) channel there exists a finite constant $C_3$, independent of $n$ and $p$ (depending only on the maximal check degree), such that the stated bound holds. Proof. Using the relevant identity and (33), we get a bound in which $x$ is the quantity bounded in (39). Therefore, applying the Chebyshev inequality and the definition of the overlap parameters, substituting in (46), and applying Cauchy-Schwarz to $\sum_{i,j} \mathbb{E}_s[-]$, we get the desired estimate. From (32), (33) it is easy to see that the summand is bounded for any $i, j$ by a constant $C_3$ independent of $n$. In the last equality we have used the symmetry of the ensemble with respect to variable node permutations.
Denote by h n,γ (t * , s) the entropy of the µ t * ,s interpolating measure. Note that this should not be confused with the multi-Poisson ensemble entropy h n,γ (which corresponds to t * = t max and s = γ).
Lemma 2. For the BEC and BIAWGNC with any noise value and for general BMS(ǫ) channels satisfying H we have
where F (ǫ) and G(ǫ) are two finite constants depending only on the channel parameter.
The proof of lemma 2 is based on the correlation formulas of section 1. These are true for any linear code ensemble, so they are in particular true for the interpolating $(t^*, s)$ ensemble (in fact one has to check that the addition of $\sum_{a=1}^{e_i} U^i_a$ to $l_i$ does not change the derivation and the final formulas; for this it suffices to follow the calculation of section 3). For the BEC and BIAWGNC we have already shown the two equalities (9) and (11): thus the inequality (48) is in fact an equality for appropriate values of $F$ and $G$. The case of general (but highly noisy) BMS channels is presented in appendix C. A converse inequality can also be proven by the methods of appendices B and C.
Proof of proposition 3. Note that for all points of the parameter space $(\epsilon, s)$ such that the second derivative of the average conditional entropy is bounded uniformly in $n$, the proof immediately follows from (47), (48) (and the last inequality before that one) by choosing $\delta < \frac14$. However, in the large block length limit $n \to +\infty$, the first derivative of the average conditional entropy generically has jumps at some threshold values of ε (these values depend on the interpolation parameter $s$). This means that for these threshold values the second derivative cannot be bounded uniformly in $n$. Since we cannot control these locations, we introduce a test function $\psi(\epsilon)$: non-negative, infinitely differentiable, and with small enough bounded support included in the range of ε satisfying H. We consider the averaged quantity $Q$ obtained by integrating against $\psi(\epsilon)$; combining this with (47) and (48) gives the required bound. Note that, from the bounds in appendix C, $F(\epsilon)$, $G(\epsilon)$ and $G'(\epsilon)$ are integrable except possibly at the edge of the ε range defined by H. This is not a problem because we can take the support of $\psi(\epsilon)$ away from such points, or alternatively take a $\psi(\epsilon)$ which vanishes sufficiently fast at these points. Moreover, the first derivative of the average conditional entropy is bounded uniformly in $n$ and $s$ (see appendix D) by a constant $k(\epsilon)$ that has at most a power singularity at $\epsilon = 0$, and again this is not a problem. Thus by choosing $0 < \delta < \frac14$ we obtain $\lim_{n \to +\infty} Q = 0$. Applying Lebesgue's dominated convergence theorem to convergent subsequences (of the integrand of $\int d\epsilon\, \psi(\epsilon)$ in (49)), we deduce that the limit under the integral vanishes, which implies that, along any convergent subsequence, the overlap fluctuation vanishes for almost all ε as long as $\delta \le \frac14$. Now we apply this last statement to two subsequences that attain the lim inf and the lim sup (on the intersection of the two measure-one ε sets). This proves that the limit $n \to +\infty$ exists and vanishes.
Conclusion
The main new tools introduced in this paper are relationships between the second derivative of the conditional entropy and correlation functions or mutual information between code bits. These allowed us to estimate the overlap fluctuations in order to get a better handle on the remainder. Some aspects of our analysis bear some similarity to techniques introduced by Talagrand [17], but our treatment is independent. One difference is that we use specific symmetry properties of the communications problem.
We expect that the technique developed here can be extended to remove the restriction to high noise (condition H). Indeed, the only place in the analysis where we need this restriction is lemma 2. For the BEC and BIAWGNC the lemma is trivially satisfied for any noise level (with appropriate constants). Another issue that would be worthwhile investigating is whether the related inequalities of paragraph 1.3 and the converse of lemma 2 can be derived irrespective of the noise level.
The next obvious problem is to prove the converse of the variational bound (theorem 1).
For this one should show that the remainder vanishes when $d_V$ is replaced by the maximizing distribution of $h_{RS}[d_V; \Lambda, P]$. This program has been carried out explicitly in the case of the BEC and the Poisson ensemble [12]. It would be desirable to extend this to more general ensembles and channels, but the problem becomes quite hard. However, a similar program has been successfully carried out for a p-spin model with gauge symmetry (see [10]). A solution of these problems would allow for a rigorous determination of MAP thresholds and would extend our understanding of the intimate relationship between BP and MAP decoding.
A Appendix A
We prove the identities (15), (16), (17). By definition, plugging the identity $e^{\frac{l_i}{2}\sigma_i} = \cosh\frac{l_i}{2}\,\bigl(1 + \sigma_i \tanh\frac{l_i}{2}\bigr)$ into the brackets immediately leads to (15). For the second and third identities we proceed similarly: plugging the analogous identity for the pair $l_i, l_j$ into the brackets leads immediately to (16) and (17).
B Appendix B
We indicate the main steps of the derivation of the full high noise expansion of the second derivative. The expansion for $S_1$ is given by (51) and that for $S_2$ by (54). They are derived in a form that is suitable for proving lemma 2 of section 5 (see appendix C): for that proof we need to extract a square correlation at each order, as in (54). This is achieved here through the use of appropriate, remarkable Nishimori identities, and in order to use these we take the extrinsic forms (21) and (23) of $S_1$ and $S_2$. Let us start with $S_1$, which is simple. Using the power series expansion of $\ln(1+x)$ we obtain an infinite series for $S_1$, which we now simplify: because of the Nishimori identities we can combine odd and even terms. This series is absolutely convergent as long as the corresponding series of channel parameters converges, which is true for channels satisfying H.
In the rest of the appendix we deal with $S_2$, which is considerably more complicated; however, the general idea is the same as above. First we use the expansion of $\ln(1 + x)$ to get (52). We expand the multinomial in I and subtract the terms II and III. Then only terms that have powers of the form $t_i^k t_j^l$ with $k, l \ge 1$ survive in (52). Moreover, because of the identity $T_{00} + T_{01} + T_{10} + T_{11} = 1$, the result can be put in the form (53), where (we abuse notation by not indicating the $(kl)$ and $(ij)$ dependence in the $T$ and $T'$ factors) $T'_{\kappa\lambda}$ is obtained from $T_{\kappa\lambda}$ by exchanging $k, l$ and $\kappa, \lambda$ and $i, j$. The next simplification step occurs by using the Nishimori identity for the expectation in the above formula and using $\sigma_i \in \{\pm 1\}$ to "linearize" the terms $(\sigma_i\sigma_j)^{m_1}\sigma_i^{m_2}\sigma_j^{m_3}$. Tedious but straightforward algebra then yields an explicit expression for $\sum_{\kappa\lambda} T_{\kappa\lambda}$; a similar formula, obtained by exchanging $k, l$ and $i, j$, holds for $\sum_{\kappa\lambda} T'_{\kappa\lambda}$. Replacing these sums in (53) yields a high noise expansion for $S_2$.
However this is not yet practical for us, because we need to extract a general square correlation factor $\bigl(\langle\sigma_i\sigma_j\rangle_{\sim ij} - \langle\sigma_i\rangle_{\sim ij}\langle\sigma_j\rangle_{\sim ij}\bigr)^2$. The fact that this is possible is a "miracle" that comes out of the Nishimori identities that were used. Setting $X = \langle\sigma_i\rangle_{\sim ij}\langle\sigma_j\rangle_{\sim ij}$, $Y = \langle\sigma_i\sigma_j\rangle_{\sim ij}$ and using the change of variables $m = p - 2k + 1$, the last expression can be rewritten (for $k \ge l$) in a form that one can check involves derivatives of $(X - Y)^{2l}$. The latter can be checked by first expanding $(X - Y)^{2l}$ and then differentiating; on the other hand, one can use the Leibniz rule to evaluate the last expectation. We define $A_{011} = \frac12$. We proceed similarly for the terms with $k < l$. Finally one finds (54), and the same with $k, l$ and $i, j$ exchanged. Let us now briefly justify that the series is absolutely convergent for channels satisfying H. We note the following facts: $A_{rlk} \le \binom{2l-2}{r}\, 2^{2k-3}$ and $2^{2k-2}\, 3^{2l-2} \le \bigl(\frac52\bigr)^{2k+2l-4}$ for $k \ge l$, together with the version with $k, l$ exchanged. It easily follows that the series for $S_2$ is absolutely convergent as long as $\sum_{p=1}^{+\infty} \bigl(\frac52\bigr)^{2p}\, |m_1^{(2p)}| < +\infty$. Note that we have not attempted to optimize the above estimates.
C Appendix C
We prove lemma 2 for highly noisy general BMS channels. For this we use the high noise expansion derived in appendix B. There it was derived for a general linear code ensemble, and this is also the framework of the proof below; of course the result applies to the interpolating ensemble of lemma 2. Note that the final constants $F(\epsilon)$ and $G(\epsilon)$ do not depend on the code ensemble but only on the channel. Consider equation (8) for $\frac{d^2}{d\epsilon^2}\mathbb{E}_{\mathcal{C},t}[h_n]$. By the same estimates as those for $S_1$ in appendix B, the first term on the right-hand side can be bounded below. To get a lower bound for the second term we consider the series expansion given by that for $S_2$ in (54). In that series we keep the first term, corresponding to $k = l = 1$, and lower bound the rest of the series ($(k, l) \neq (1, 1)$) using the estimates of appendix B. Putting these three estimates together, and as long as the noise level is high enough (see H), the resulting inequality (56) implies
$\sum_{j \neq 1} \mathbb{E}_{\mathcal{C}, t_{\sim 1j}}\Bigl[\bigl(\langle\sigma_1\sigma_j\rangle_{\sim 1j} - \langle\sigma_1\rangle_{\sim 1j}\langle\sigma_j\rangle_{\sim 1j}\bigr)^2\Bigr] \le \tilde F(\epsilon) + \tilde G(\epsilon)\, \frac{d^2}{d\epsilon^2}\mathbb{E}_{\mathcal{C},t}[h_n] \quad (57)$
for two noise-dependent positive finite constants $\tilde F(\epsilon)$, $\tilde G(\epsilon)$. The final step of the proof consists in passing from the extrinsic average $\langle - \rangle_{\sim 1j}$ in the correlation to the ordinary one $\langle - \rangle_{1j}$. This is achieved as follows. From the formulas (16) and (17) we deduce a relation involving a function that depends on all log-likelihood variables. Taking the expectation $\mathbb{E}_{\mathcal{C},t}$, and since $t_i, t_j$ are independent, we obtain a bound whose right-hand side converges for highly noisy channels satisfying H. The result of the lemma follows by combining (57) and (58). The constants $F(\epsilon)$ and $G(\epsilon)$ are equal to $\tilde F(\epsilon)$ and $\tilde G(\epsilon)$ divided by the expression on the right-hand side of the last inequality.
D Appendix D
We prove the boundedness and positivity of $\frac{d}{d\epsilon}\mathbb{E}_s[h_{n,\gamma}(t^*, s)]$, which is needed in the proof of lemma 2.
Lemma 3. For the BEC and BIAWGNC with any noise level, and any BMS channel satisfying H, there exists a constant $k(\epsilon)$, independent of $n$, $\gamma$, $t^*$ and $s$, such that $0 \le \frac{d}{d\epsilon}\mathbb{E}_s[h_{n,\gamma}(t^*, s)] \le k(\epsilon)$. For the BEC we can take $k(\epsilon) = \frac{\ln 2}{\epsilon}$, and for the BIAWGNC $k(\epsilon) = \frac{2}{\epsilon^3}$. For general BMS channels satisfying H the constant remains bounded as a function of ε (i.e., in the high noise regime).
Here we have stated the lemma for the multi-Poisson interpolating ensemble, which is our specific need. However, as the proof below shows, it is independent of the specific code ensemble, and the bound depends only on the channel.
Proof. We will use the GEXIT formula of proposition 1. Since the proposition applies to any linear code, it also applies to the interpolating ensemble of interest here. In the case of the BEC and BIAWGNC we have the explicit formulas (see (5)). For highly noisy BMS channels we proceed by expansions; for this we have to use the "extrinsic form" of the GEXIT formula (analogous to (21)), which is independent of $n$, $\gamma$, $t^*$ and $s$. | 2008-07-19T04:48:12.000Z | 2008-07-19T00:00:00.000 | {
"year": 2009,
"sha1": "950bac376fe962281cfdf44448744070f3d24dda",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0807.3065",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "adf3d74851458061ab0baae8e30a8b8746d2e455",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
255050536 | pes2o/s2orc | v3-fos-license | Acetylcholine Esterase Gene Expression in Salivary Glands of Albino Rats after Treatment with amitriptyline or/and Ashwagandha
Acetylcholinesterase is the enzyme required to counteract the effects of acetylcholine. The aim of this study was to assess how amitriptyline and Ashwagandha affect the acetylcholinesterase gene in rat salivary glands. Forty healthy albino rats were divided randomly into four equal groups: Group I (control) received distilled water for 30 days; Group II received amitriptyline (10 mg/kg) for 30 days; Group III received Ashwagandha watery root extract (200 mg/kg) orally for 30 days; and Group IV received the combination of amitriptyline orally and Ashwagandha root extract orally for 30 days. Rats in each group were sacrificed after day 30, and the salivary glands were dissected for measurement of the acetylcholinesterase gene using the polymerase chain reaction (PCR) technique. Acetylcholinesterase gene measurements revealed an increase in the group treated with amitriptyline alone (1.55±0.11) and in the group treated with the combination of amitriptyline and Ashwagandha (1.92±0.16), in comparison with the control group. There were no discernible differences between the Ashwagandha-treated group (1.073±0.25) and the control group (0.76±0.19). In conclusion, amitriptyline, alone and combined with Ashwagandha, increases transcription of the acetylcholinesterase gene.
INTRODUCTION
One gene encodes the enzyme acetylcholinesterase (AChE), which is necessary for the termination of acetylcholine's action. Alternative mRNA processing results in the development of three different carboxyl-terminated enzyme forms. These structural variations control the expressed enzyme's cellular location but do not impact its catalytic activity (Taylor, 2011). AChE is encoded by a 6 kb gene with several transcription start sites (Bronicki and Jasmin, 2012). The promoter includes a multitude of regulatory elements, such as Sp1, Egr-1, and AP2 binding sites (Rotundo, 2020). A number of heat shock elements bring about induction of AChE transcription following heat shock (Chen et al., 2010). Separation has also been proposed as a regulator of AChE expression (Layer et al., 2013).
Alternative splicing occurs in up to 90% of human genes, and AChE is no exception. In the brain, muscle, and erythropoietic tissue, alternative splicing of the single AChE gene in the 5' region results in the production of isoforms with tissue-specific expression patterns (Tapial et al., 2017). For instance, it has been demonstrated that the brain isoform uses a more upstream transcriptional start point (Li et al., 1993). At the 3' end, AChE pre-mRNA is also subject to alternative splicing. Three transcripts are produced as a result: read-through (AChE-R), hydrophobic (AChE-H), and synaptic (AChE-S), also called tailed (AChE-T). AChE-H is present in erythrocytes as glycosylphosphatidylinositol (GPI)-anchored dimers. Depending on whether the 5' donor site downstream of E4 splices to the acceptor site upstream of E5 or E6, alternative splicing in neurons results in either AChE-T or AChE-R; the synaptic AChE-T is produced by splicing to the distal E6 splice site and incorporating E6 into the mRNA (Bronicki and Jasmin, 2012). Nevertheless, although AChE-T often predominates, cell stress encourages the upregulation of AChE-R (Shaked et al., 2008). In neuronal cell lines, APP can suppress the transcription of AChE. APP acts with an interacting partner, possibly ITGA5; the signal transduction of this interaction, which may involve FAK, increases the levels of total Akt and phospho-Akt, and this activation of Akt represses the transcription of AChE (Gunn et al., 2022). Since amitriptyline causes anticholinergic side effects, including xerostomia, this study was designed to assess the effect of amitriptyline and/or Ashwagandha on the acetylcholinesterase gene in rat salivary glands and the possibility of overcoming the anticholinergic side effects of amitriptyline with Ashwagandha.
Experimental substances
This research used forty healthy albino rats, 8-10 weeks old and weighing 200-250 g, obtained from the Faculty of Veterinary Animal House at Mosul University, Iraq. They were kept in plastic rodent cages with wire-mesh covers. The animals were kept at a room temperature of 22 ± 2 °C with 12 hours each of light and darkness and unrestricted access to food and water ad libitum. All procedures followed the guidelines of the institutional animal research ethics committee of the College of Dentistry, University of Mosul, Iraq (UOM.Dent/A.L.56/22).
Ashwagandha root extract was available as a powder obtained from the Naturalaya Kimya company, Antalya, Turkey. A fresh aqueous solution of Ashwagandha was prepared and administered orally every day: rats were treated by oral gavage needle with 0.5 ml of aqueous Ashwagandha root extract (5000 mg plant in 100 ml water) at a dose of 200 mg/kg body weight (50 mg/rat) for 30 days (Mahmoud et al., 2022). Amitriptyline was available in the form of tablets from the Accord company, United Kingdom. A fresh solution of amitriptyline was prepared and administered orally every day (Fig. 1).
Experimental design
Forty rats were randomly divided equally into four groups as follows. Group I (control, n=10): rats received distilled water daily (1-2 ml/kg) for the 30-day experiment. Group II (amitriptyline group, n=10): rats were given amitriptyline 10 mg/kg orally using a gavage needle daily, from the first day to the last day of the experiment. Group III (Ashwagandha group, n=10): rats were given Ashwagandha aqueous root extract 200 mg/kg (0.5 ml) orally daily using an oral gavage needle, from the first day to the last day of the experiment. Group IV (combination group, n=10): rats were given a combination of amitriptyline 10 mg/kg and Ashwagandha watery root extract 200 mg/kg orally using a gavage needle from the start of the trial until its conclusion.
Salivary glands tissue preparation
The salivary glands were removed to measure the tissue acetylcholinesterase gene using a polymerase chain reaction (PCR) machine (Fig. 2). Salivary gland samples were placed in a buffered phosphate solution for analysis. After 30 days of administration, two hours following the last treatment, the animals in each group were placed under light ether anesthesia and sacrificed.
Fig.2: Polymerase Chain Reaction PCR device used in the study
Gene expression analysis
Tissue extraction protocol:
1. Place up to 20 mg of tissue, cut into smaller pieces, in a 1.5 ml microcentrifuge tube with 200 µl of Lysis Solution.
2. Add 20 µl of Proteinase K solution (20 mg/ml) to the sample tube, mix thoroughly by vortexing, and incubate at 56 °C until the tissue is lysed. To ensure even distribution during incubation, the sample tube can also be placed in a shaking water bath or on a rocking platform. The lysis time depends on the type of tissue being treated; overnight lysis had no impact on the preparation.
8. Using a 2.0 ml collection tube, carefully transfer the lysate into the upper reservoir of the spin column without wetting the rim.
9. Centrifuge at 13,000 rpm for 1 minute, then remove the flow-through and reattach the 2.0 ml collection tube to the spin column.
10. Add 500 µl of Washing 1 Solution with the collection tube attached to the spin column and centrifuge at 13,000 rpm for 1 minute; drain the flow-through, then reinsert the 2.0 ml collection tube into the spin column.
11. Add 500 µl of Washing 2 Solution; remove the flow-through and put the 2.0 ml collection tube back into the spin column.
12. Dry the spin column by a further 1 minute of centrifugation at 13,000 rpm to remove any residual ethanol.
13. Insert the spin column into a new 1.5 ml microcentrifuge tube.
14. Pour 100 to 200 µl of the elution buffer into the spin column within the microcentrifuge tube and let it stand for at least 1 minute.
15. Centrifuge at 13,000 rpm for 1 minute to elute the genomic RNA.
Primer design for genes (forward and reverse primer sequence)
The design of the primers is one of the most important determinants of the performance and quality of quantitative real-time PCR (qPCR) analyses, since effective primer design is essential for accurate and reliable quantification. To locate suitable primers for specific qPCR assays, primer design should follow several criteria. The GAPDH gene was used as a housekeeping gene. The acetylcholinesterase gene primers were designed using the well-known NCBI primer design software; the primer sequences are given in Table 1.
Table 1: Forward and reverse primer sequences.
Genes | Forward primer sequence | Reverse primer sequence
GAPDH | ACATGCACAGGGTACTTCGA | TTACCCCAGCCTTCTCCATG
Acetylcholinesterase gene | |
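As a quick illustration of such design criteria, the following Python snippet computes the GC content and a rough Wallace-rule melting temperature for the GAPDH primers listed in Table 1 (the acetylcholinesterase primer sequences are not recoverable from the source, so only GAPDH is checked). The Wallace rule 2(A+T) + 4(G+C) is only a coarse estimate; real primer design tools use nearest-neighbor thermodynamic models.

```python
# Sanity checks on the GAPDH primers from Table 1.
# Wallace rule Tm = 2*(A+T) + 4*(G+C): a rough estimate for short primers.

def gc_content(seq: str) -> float:
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

def wallace_tm(seq: str) -> int:
    at = sum(seq.count(b) for b in "AT")
    gc = sum(seq.count(b) for b in "GC")
    return 2 * at + 4 * gc

for name, seq in [("GAPDH-F", "ACATGCACAGGGTACTTCGA"),
                  ("GAPDH-R", "TTACCCCAGCCTTCTCCATG")]:
    print(f"{name}: GC={gc_content(seq):.0f}%  Tm~{wallace_tm(seq)}C")
    # GAPDH-F: GC=50%, Tm~60C; GAPDH-R: GC=55%, Tm~62C
```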
GoTaq ® qPCR Master Mix:
The second kit, used for quantitative detection, is the GoTaq qPCR Master Mix from Promega Corporation, USA. GoTaq® qPCR Master Mix is a reagent system for quantitative PCR (qPCR). It includes a fluorescent DNA-binding dye (BRYT Green® Dye) that binds double-stranded DNA (dsDNA) and exhibits enhanced fluorescence upon binding. GoTaq® qPCR Master Mix is an easy-to-use, stable 2X formulation that includes all components required for qPCR except template, primers, and water. The formulation includes GoTaq® Hot Start Polymerase, MgCl2, dNTPs, a custom reaction buffer, a proprietary dsDNA-binding dye, and a low concentration of carboxy-X-rhodamine (CXR) reference dye (identical to ROX™ dye), and yields optimal results in qPCR assays. For use with instruments that require more reference dye than is contained in the GoTaq® qPCR Master Mix, a separate bottle of CXR Reference Dye is provided.
Quantitative measurement of acetylcholine esterase gene
This assay provides a quantitative measurement of acetylcholinesterase RNA. RNA was extracted as described above from submandibular gland tissues using the AddPrep Genomic RNA Extraction Kit. Acetylcholinesterase expression was determined by qPCR using the GoTaq qPCR Master Mix produced by Promega on a PCRmax Eco machine. Amplification reactions for the target gene and the housekeeping gene were performed for all samples. The glyceraldehyde 3-phosphate dehydrogenase (GAPDH) housekeeping gene was used as the control for calculating ΔCT values. The ΔCT value was calculated for each sample as the difference in CT between the gene of interest and the housekeeping gene, and ΔΔCT as the difference between the ΔCT values of the study sample and the control sample. The acetylcholinesterase gene expression in this study is reported as ΔΔCT (mean ± SD):

ΔCT(sample) = CT(AChE gene) - CT(GAPDH)
ΔCT(control) = CT(control) - CT(GAPDH)
ΔΔCT = ΔCT(sample) - ΔCT(control)
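As a worked illustration of the ΔΔCT arithmetic above, the short Python sketch below computes ΔCT, ΔΔCT, and the conventional 2^-ΔΔCT fold change; the function name and all CT values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the delta-delta-CT calculation described above.
# All CT values are hypothetical placeholders, not study data.

def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_control, ct_ref_control):
    """Return (ddct, fold_change) for one sample versus the control group."""
    dct_sample = ct_target_sample - ct_ref_sample     # CT(AChE) - CT(GAPDH), treated
    dct_control = ct_target_control - ct_ref_control  # CT(AChE) - CT(GAPDH), control
    ddct = dct_sample - dct_control
    fold_change = 2 ** (-ddct)  # relative expression (Livak method)
    return ddct, fold_change

# Example with made-up CT values:
ddct, fold = delta_delta_ct(ct_target_sample=24.1, ct_ref_sample=18.0,
                            ct_target_control=26.0, ct_ref_control=18.2)
print(f"ddCT = {ddct:.2f}, fold change = {fold:.2f}")
```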
Statistical analysis
Statistical analysis was performed with the SPSS program, version 21 for Windows. Descriptive statistics are expressed as mean and standard deviation (SD). One-way analysis of variance (ANOVA) followed by Duncan's post hoc test was used to compare differences among the four study groups (Ali and Bhaskar, 2016). P ≤ 0.05 was taken as the significance level.
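For readers who want to reproduce this style of group comparison outside SPSS, the hedged Python sketch below runs a one-way ANOVA across four groups; the group data are simulated placeholders, and because Duncan's test is not available in SciPy, Tukey's HSD is shown as a stand-in post hoc test.

```python
# Sketch of a four-group comparison on simulated (not study) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Four hypothetical groups of ddCT values (placeholders only).
control = rng.normal(0.76, 0.19, 10)
ashwagandha = rng.normal(1.07, 0.25, 10)
amitriptyline = rng.normal(1.55, 0.11, 10)
combination = rng.normal(1.92, 0.16, 10)

f_stat, p_value = stats.f_oneway(control, ashwagandha, amitriptyline, combination)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Duncan's test is not in SciPy; Tukey's HSD is a common stand-in.
print(stats.tukey_hsd(control, ashwagandha, amitriptyline, combination))
```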
RESULTS
Our results showed a significant increase in acetylcholinesterase gene expression in the group treated with amitriptyline alone (ΔΔCT = 1.55 ± 0.11) and in the group given amitriptyline combined with ashwagandha (ΔΔCT = 1.92 ± 0.16) compared with the control group (ΔΔCT = 0.76 ± 0.19) at p ≤ 0.05, but no significant difference between the group treated with ashwagandha alone (ΔΔCT = 1.073 ± 0.25) and the control group.
In the group treated with ashwagandha alone, acetylcholinesterase gene expression increased, but not to the extent observed with amitriptyline alone or with the amitriptyline-ashwagandha combination, both of which showed a significant increase in acetylcholinesterase gene transcription compared with the control group (Table 2 and Fig. 3). The increase in acetylcholinesterase gene expression at day 30 when amitriptyline was combined with ashwagandha may result from the agonist action of ashwagandha on muscarinic receptors; drugs acting as serotonin, noradrenaline, dopamine, or muscarinic agonists can stimulate Egr1 expression in different cell types, as shown for the M1 AChR with a muscarinic agonist (Konar et al., 2019). These results are consistent with prior research demonstrating that the muscarinic agonist carbachol significantly increases transcription of the Egr-1, Egr-2, Egr-3, and Egr-4 transcription factors. The AChE gene has binding sites for the transcription factors Sp1, Egr-1, and AP2; experiments showed that the Sp1 and Egr-1 sites are necessary for activating AChE gene expression, whereas AP2 suppresses it. According to a prior study, treatment with the cholinesterase inhibitor tacrine for 12 months dramatically raised AChE activity in CSF by 50% compared to baseline (Darreh-Shori et al., 2002). The present study likewise demonstrated increased AChE gene expression following 30 days of dosing.
According to other studies, physostigmine, an AChE inhibitor, enhanced AChE gene expression in cerebrospinal fluid (Vecchio et al., 2021). Analysis of the muscarinic receptor subtypes revealed that, in addition to the m1 AChR, the m2, m3, and m4 AChRs could also induce transcription of the EGR-1 gene, albeit to differing degrees (Nitsch et al., 1998). This agrees with the results of the present study, since amitriptyline administration for 30 days leads to an increase in muscarinic receptor expression, consistent with the inhibition of the acetylcholinesterase enzyme caused by ashwagandha (Gros et al., 2021).
Previous studies have shown that ashwagandha administration for 30 days enhanced cholinergic markers by inhibiting the acetylcholinesterase enzyme and improving muscarinic receptor binding abilities. Accordingly, blocking muscarinic acetylcholine receptors (mAChRs) increased the amount of AChE released by cells. This pathway, which most likely involves the transcription factor Egr-1, results in transcriptional upregulation of AChE by mAChRs (Mashimo et al., 2021). Although Egr-1 appears to be the primary target of mAChR activation through MAPK, Egr-2, -3, and -4 have also been shown to be regulated at both the protein and mRNA levels (Gitenay and Baron, 2009).
Elk-1 has been intimately associated with Egr-1 activation and AChE control. It has been hypothesized that the aforementioned MAPK activation causes the Elk-1 and SAP-1/-2 families of transcription factors to become active (Yang et al., 2013). Elk-1, a member of the E-twenty-six (ETS) family of transcription factors (TFs), activates the Egr-1 promoter by forming a complex with CREB-binding protein (CBP) and SRF (Besnard et al., 2011). Phosphatases such as protein phosphatase 2B inhibit Elk-1 signaling (Zhao et al., 2021). AChE is encoded by a 6 kb gene with several transcription start sites (Bronicki and Jasmin, 2012). AChE transcription is also induced following heat shock, together with other heat shock components (Kim et al., 2021). At the 3' end, AChE pre-mRNA is subject to alternative splicing, which produces three transcripts: read-through (AChE-R), hydrophobic (AChE-H), and synaptic (AChE-T). The synaptic AChE-T is produced by splicing to the distal E6 splice site and integrating E6 into the mRNA (Bronicki and Jasmin, 2012).
Nevertheless, although AChE-T typically predominates, cell stress promotes upregulation of AChE-R (Lin and Zhang, 2018). Later research revealed a mechanistic connection between the regulation of AChE expression and the amyloid precursor protein (APP). AChE mRNA levels decrease in response to overexpression of APP, a protein that plays a role in cell adhesion, frequently mediated through protein interactions (Tang, 2019). This regulatory link between APP and AChE was validated when APP knockdown caused AChE mRNA to increase noticeably (Rump et al., 2022). The extracellular, N-terminal E1 domain of APP, and more specifically the copper-binding region within it, was necessary for the protein's ability to inhibit AChE transcription; once these domains were deleted, APP's capacity to suppress AChE transcription was completely eliminated (Uddin et al., 2020).
CONCLUSION
Amitriptyline, whether given alone or together with ashwagandha, induces acetylcholinesterase gene transcription, with the combination acting as a potentiation interaction. In contrast, giving ashwagandha alone produced a non-significant increase in acetylcholinesterase gene transcription compared with the control group.
ACKNOWLEDGMENT
To all staff in the department of dental basic sciences and College of Dentistry, University of Mosul, Iraq | 2022-12-24T16:38:53.223Z | 2022-12-13T00:00:00.000 | {
"year": 2022,
"sha1": "51c0855e93cb3acc599a8067f132da2fb56e0c53",
"oa_license": "CCBY",
"oa_url": "https://javs.journals.ekb.eg/article_274069_c8577e416c0816634b08eef45c865bf9.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "765d1fe8de3279f2c62d6b1e6b576b906414ba64",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
233866371 | pes2o/s2orc | v3-fos-license | The Moderating Role of Ethical Leadership on Nurses' Green Behavior Intentions and Real Green Behavior
Aim This study is aimed at exploring the relationship between green behavior intentions and green behavior and analyzing the moderating role of ethical leadership in this relationship. Background Nurses' green behavior can directly reduce costs and protect the natural environment and organizational sustainability by saving resources and energy. It is not clear how green behavior intention affects green behavior or how the positive influence of green behavior intention on green behavior can be enhanced. Design and Methods. This is a cross-sectional study, and the surveys are collected from 3 hospitals in China. Of the initial cohort of 489 nurses, 89.6% were female. There were 327 subjects (66.9%) aged 35 or less, 267 subjects (54.6%) with 10 years or less of work experience, and 220 unmarried subjects (44.9%). Data were collected from January to July 2018, using three surveys: green behavior intentions, green behavior, and ethical leadership. Results Green behavior intentions impacted employee green behavior (b = 0.32, t = 5.37, p < 0.01). The interaction term for green behavior intentions and ethical leadership was significant (b = 0.28, t = 2.53, p ≤ 0.01); the conditional direct effect of green behavior intentions was only significant at a high level of ethical leadership (conditional effect = 0.53, SE = 0.16, t = 3.38, p < 0.01, 95% confidence interval of 0.22-0.84). Conclusion The intention to engage in green behavior influences nurses' green behavior positively, and the relationship is stronger when ethical leadership is high in the organization than when ethical leadership is low. The results of this study can help both academics and practitioners to understand the micromechanism of environmentally sustainable development in more detail and to identify the mechanisms and boundary conditions of green behavioral intentions, green behavior, and ethical leadership.
Introduction
Environmentally sustainable development is a serious global problem facing all of humankind. Previous studies have shown that environmental pollution is largely caused by human activities [1]. Voluntary environmental protection action is considered to play the most important role in environmental pollution control and sustainable development; thus, organizational researchers and practitioners care about environmental issues [2], and the business idea of "being green" is now widely accepted. Prior research has found that individual behavior is significantly related to organizational environmental performance [3], cost saving and competitive advantage, and waste reduction [4]. The effectiveness of hospital initiatives for sustainable development relies on nurses' behavior [5].
The energy consumption of hospitals in China is relatively high: a 500-bed general hospital consumes about 1400 TCE/year, energy accounts for 2.09% of total hospital cost, and consumption is rising. In addition, by the end of 2019, 196 large- and medium-sized Chinese cities had produced 843,000 tons of medical waste. In October 2019, the Ministry of Ecology and Environment of China issued specific guidance on three outstanding problems in the environmental management of hazardous waste: weak environmental supervision capacity, unbalanced utilization and disposal capacity, and shortcomings in environmental risk prevention capacity. China has gradually formed a technical landscape in which incineration and non-incineration disposal technologies coexist, but attention is still focused on end-of-pipe control of medical waste, while reduction of medical waste at the source is ignored [6].
Hospitals in China pay attention to environmental protection behaviors related to business activities but ignore nurses' voluntary environmental protection behaviors, such as daily office behaviors (turning off lights after use, not using disposable supplies, sorting garbage, recycling waste, etc.) and green living behaviors (green travel, etc.) that have positive effects on environmental protection. Researchers have called for future research on nurses' green behavior, which is defined as an element of environment-friendly behavior within organizational citizenship behavior and is considered an essential component of organizational sustainable development, reflecting citizens' sense of responsibility at the ethical level [7]. Nurses' green behavior can directly reduce costs and protect the natural environment and organizational sustainability by saving resources and energy. Hence, an exploration of nurse and leader predictors of nurses' green behavior is needed [8].
Some previous research examined green behavior intention instead of actual green behavior [9]. It is not appropriate to equate green behavior intentions with green behavior and conclude that green behavior intention relates to green behavior positively, because intentions are often weakly related to enacted behavior, and interventions designed to change intentions have smaller effects on subsequent behavior than one might imagine. A meta-analysis showed that the relationship between propensity and behavior is quite small [10]. In addition, there have been few studies on the relationship between green behavior intentions and green behavior, and the findings of previous studies were inconsistent. The effect of intention on recycling behavior was found to be very small (β = 0.07) in a hospital [11]; in contrast, Holland et al. found that the relationship between intention and recycling behavior was strong and positive in an office environment (rs between 0.48 and 0.64) [12]. These findings show that green behavior intention cannot simply be equated with green behavior and that the influence of green behavior intention on green behavior can differ among samples. An important gap is that most studies have focused on whether the relationship between green behavior intentions and green behavior is positive or negative rather than on why the relationship is inconsistent. The present study focused on how green behavior intention affects green behavior in Chinese hospitals and how the positive influence of green behavior intention on green behavior can be enhanced.
Nurses' green behavior reflects a sense of responsibility at an ethical level [13], and in organizational life ethics is directly related to leadership. Ethical leadership is a distinct leadership type: an ethical leader influences the behavior of subordinates by serving as a role model [14] and is honest and impartial in decisions [15]. Empirical studies have found that people who think more about morality and ethical issues tend to be more concerned about the well-being of others and engage in more pro-environment behaviors at work [16]. When individuals work with a highly ethical leader, they feel the leader's approval of green behavior and the leader's demand that subordinates take responsibility for green behavior [17]; if they translate their green behavior intention into real action, the leadership recognition they gain will be expanded. However, little has been examined about the role of ethical leadership in facilitating the step from intention to behavior, and some literature justifies the need for further research in this area [18]. We investigate this mechanism because an organization can, through ethical leaders, better implement green behavior practices and thereby expand the promoting role of green behavior.
Ethical leaders influence nurses to engage in ethical behavior in the management process by setting an example (displaying behavior in line with ethical standards) and taking morality as a guide, through a long-term, stable process of behavioral imitation and reinforcement [15]. Although ethical leaders can positively influence employees' green behavior, how ethical leaders shape the relationship between nurses' green intention and behavior is an area that has not gained due attention from researchers [18]. With this background, this study attempts to examine how ethical leaders moderate the relationship between nurses' green intention and behavior.
Green Behavior Intentions and Green Behavior.
Green behavior includes behaviors such as turning off the lights when the nurse leaves, editing a file electronically rather than printing it out, using teleconferencing instead of traveling to a face-to-face meeting, or using waste paper to print a draft [7]. Although studies on the relationship between intention and behavior have not had consistent results, most prior research found a positive relationship between them, even when the correlation was not significant [12]. Individuals who set the goal of protecting the environment and sustainable development should be more likely to show green behavior, because setting a goal increases inner motivation. Ajzen argued that intention toward a behavior is a positive or negative feeling toward an objective object or the implementation of a specific behavior [19]: the more positive the intention toward an action, the stronger the behavior. The occurrence of green behavior among employees depends primarily on their intentions toward green behavior, and the more positive the intention toward green behavior, the stronger the desire to adopt green behavior. Several factors shape how ethical leaders encourage their subordinates to engage in ethical behavior: subjective norms, perceived behavior control, and perceptual action control. Managers' actions speak louder than words [20]. "Subjective norm" refers to the pressure from influential individuals or groups in the process of deciding whether or not to implement a specific behavior; it reflects the influence of others on the individual's behavioral decisions, mainly in the form of whether the subject thinks a certain behavior should be performed. Research shows that managers' commitment to the environment is often superficial and formal [21]; therefore, it is vital to lead by action. Perceived behavior control refers to behavior that is not completely controlled by the will: it is the degree to which an individual perceives the difficulty or ease of performing a specific behavior and reflects the variables that may promote or reduce implementation of the action based on past experience and expectations [22]. The stronger the perceived behavior control, the easier it is to perform the behavior, and the stronger the intention to perform it. Perceptual action control reflects the actual control situation of a behavior and can directly predict the likelihood of actual behavior. Ethical leaders are ethical people characterized by honesty, trustworthiness, and caring for subordinates. An ethical leader is also an ethical manager, who makes fair decisions based on ethical values, publicizes the importance of morality to subordinates, and standardizes subordinates' behaviors with rewards and punishments, so as to require subordinates to be responsible for the ethics of their own behaviors [23]. According to social learning theory, subordinates regard leaders with power and status as role models to learn from. Individuals judge the ethics of their behaviors by summarizing the rewards and punishments received for their own or others' behaviors and adjust and regulate their own behaviors accordingly [24]. By giving employees an example to learn from, ethical leadership realizes its influence on the morality of subordinates' behavior.
Ethical leaders pay due attention to the principle of fairness when choosing how to treat their subordinates, with special attention to procedural justice and interactive justice when allocating their energy and resources [23]. For example, when a member is perceived to have green behavior intention, an ethical leader who pays attention to interactive justice will make corresponding and equal efforts, so that the member is motivated by the leader to a large extent [17]. Under a leader with a high level of ethical leadership, individuals are more likely to translate green behavioral intention into action: feeling high ethical leadership, they sense the leader's approval of green behavior and the demand that subordinates be responsible for that behavior [17]. If they do not translate their green behavior tendency into action, the leadership recognition they gain will be reduced. Therefore, a high level of ethical leadership enhances the positive impact of green behavior intention on green behavior. Conversely, when the level of ethical leadership is low, individuals find the gap between themselves and leadership requirements through upward comparison [23]. Combined with the perception of the leader's unfairness, individuals realize that it is difficult to gain the leader's approval through their own green behavior; instead, they feel more unfairness and envy. Therefore, a low level of ethical leadership weakens the positive impact of green behavior intention on green behavior.
For nurses, green behavior is an individual behavior consistent with environmentally sustainable development goals, and the positive green behavior of leaders encourages employee green behavior. Ethical leadership promotes nurses' green initiatives by sharing leaders' views on the environment with their subordinates, establishing organization value, and encouraging mutual awareness [25]. Ethical leaders show their values through their positive green behavior, and setting an example is a chance for managers to deliver value to subordinates [26]. When an organization formally implements a sustainable development plan, its importance is signaled by managers' active green behavior [26]. When managers engage in green behavior, nurses feel the support of managers for green behavior. The more active the green advocacy, the more employees perceive green behavior as being recognized, thus promoting nurses' green behavior. Ethical leadership can have a complementary effect to that of the green behavior intentions of nurses.
Hypothesis 2. Ethical leadership moderates the relationship between green behavior intentions and green behavior; that relationship is stronger when ethical leadership is high in the organization than when ethical leadership is low.
In brief, this study is aimed at exploring the relationship between green behavior intentions and green behavior and analyzing the moderating role of ethical leadership in this relationship.
Study Design and Procedure

This is a cross-sectional study, with surveys collected from 489 nurses in 3 hospitals in China. Of the initial cohort of 489 nurses, 89.6% (n = 438) were female and 10.4% (n = 51) were male. As for age, 66.9% (n = 327) were 26-30 years of age, and 33.1% (n = 162) were 31 years of age or older. Regarding organizational tenure, 54.6% (n = 267) had worked for 10 years or less and 45.4% (n = 222) for 11 years or more. We received formal approval from the Ethics Committee for Research of the School of Business at Liaocheng University before conducting the survey.
We sought the consent of the relevant hospital leaders and distributed an anonymous questionnaire under the guidance of the office director. The researcher then described the aim of the research to the nurses in the meeting room. Nurses completed the survey anonymously, and all questionnaires were completed during nurses' work hours. The nurses were sampled by stratified cluster sampling, stratified by general hospital units. Following the principle of convenience sampling, 489 valid questionnaires (complete on all items and excluding invalid responses, such as a single score given for the entire questionnaire) were received.
This study used Harman's single-factor test to assess common method variance. When principal component analysis was conducted with all items fixed to one component, the factor with the largest explanatory power accounted for 31.22% of the variance, which is below the 40% threshold, confirming that common method variance was not a serious problem.
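As a minimal sketch of this kind of single-factor check, the Python snippet below extracts the first principal component's share of variance from a matrix of survey items; the data here are random placeholders, not the study's responses.

```python
# Harman-style single-factor check on hypothetical survey data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
items = rng.normal(size=(489, 18))  # 489 respondents x 18 placeholder items

pca = PCA(n_components=1).fit(items)
first_factor_share = pca.explained_variance_ratio_[0]
print(f"variance explained by first factor: {first_factor_share:.2%}")
# A common rule of thumb flags concern only when this share exceeds ~40%.
```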
Measures
2.2.1. Green Behavior Intentions. The green behavior intentions scale consisted of a three-item questionnaire developed by Norton et al., with items such as "I intend to perform proenvironmental behaviors while at work," rated on a 5-point Likert scale. Researchers have validated the same scale in the Asian context [27]. As in previous studies, this study considered only the overall green behavior intention level, recording the average of the item scores as the overall level. Cronbach's alpha coefficient was 0.86.
Ethical Leadership.
The ethical leadership scale developed by Brown et al. [15] contains 10 statements, such as "My leader disciplines others in the unit who violate ethical standards," rated on a 5-point Likert scale. Cronbach's alpha coefficient was 0.87. Researchers have validated the same scale in the Asian context [17].
Employee Green Behavior. The employee green behavior scale, developed by Norton et al. [7], contains five questions, such as "Thinking about your work today, to what extent did you avoid waste?" Cronbach's alpha coefficient was 0.96. Researchers have validated the same scales in the Asian context [28].
Control Variables. Consistent with previous research, we controlled for demographic variables such as age, gender, job tenure, and education.
Data Analysis and Availability. SPSS 22.0 was used for descriptive analysis, Pearson's correlation analysis, and regression analysis [29]. The SPSS PROCESS macro was used to calculate the moderation effect and conditional effects [30]. The data used to support the findings of this study are available from the corresponding author upon request.
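For readers without SPSS, the hedged Python sketch below reproduces the core of a PROCESS-style moderation analysis: an OLS regression with a mean-centered interaction term, plus simple slopes at low and high levels of the moderator. The variable names, coefficients, and data are invented placeholders, not the study's data.

```python
# Moderation sketch (intention x ethical leadership -> behavior) on simulated
# data; statsmodels' OLS stands in for the SPSS PROCESS macro.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 489
df = pd.DataFrame({
    "intention": rng.normal(0, 1, n),
    "leadership": rng.normal(0, 1, n),
})
df["behavior"] = (0.32 * df.intention + 0.20 * df.leadership
                  + 0.28 * df.intention * df.leadership + rng.normal(0, 1, n))

# Mean-center predictors before forming the interaction term.
for col in ["intention", "leadership"]:
    df[col] -= df[col].mean()

model = smf.ols("behavior ~ intention * leadership", data=df).fit()
print(model.params)

# Simple slopes of intention at -1 SD and +1 SD of the moderator.
sd = df["leadership"].std()
for label, z in [("low (-1 SD)", -sd), ("high (+1 SD)", sd)]:
    slope = model.params["intention"] + model.params["intention:leadership"] * z
    print(f"simple slope at {label}: {slope:.3f}")
```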
Results
Preliminary data analysis showed that the fit indices of each questionnaire were good: average variance extracted (AVE) values were between 0.66 and 0.70, and composite reliability was between 0.91 and 0.95, meeting the requirements of AVE > 0.5 and composite reliability CR > 0.5. Overall, the convergent validity of each measurement scale was very good. Variance inflation factor (VIF) values of the variables were all in the range of 1.33 to 3.21, below the critical value of 10, indicating no serious multicollinearity problem. The results of the discriminant validity test showed that the three-factor model fits best; that is, the main research constructs all have good discriminant validity.
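For reference, AVE and composite reliability are conventionally computed from standardized factor loadings as in the small hedged Python sketch below; the loadings are placeholders, not values from this study.

```python
# Average variance extracted (AVE) and composite reliability (CR)
# from standardized factor loadings (placeholder values).
import numpy as np

loadings = np.array([0.81, 0.84, 0.79, 0.86, 0.82])  # hypothetical loadings
errors = 1 - loadings**2                             # item error variances

ave = np.mean(loadings**2)
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")
```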
From the data in Table 1, it is evident that green behavior intentions were related to green behavior (r = 0.47, p < 0.01).
As a precaution, we retested the hypothesis without including control variables [31]. The test model and the results were unchanged when the model was tested without age, gender, job tenure, or education included in the analysis.
Discussion
Climate change is largely caused by the adverse effects of human activities, and the success of environmental action often depends on the individual actions of employees [32]. The purpose of this study was to explore the effect of green behavior intentions on green behavior. This study was conducted in the context of China, providing support for green-driven growth and entrepreneurship at the micro level. As argued by Kim et al., to achieve sustainable development, the importance of green behavior must be recognized at different levels (individual, organizational, and national). As green development gains more and more acceptance worldwide, its application at the micro level in the Chinese context can lead to new progress in sustainable development.
Theoretical Implications.
First, our study examines whether green behavior intentions lead to green behavior in China, extending research in the field of green behavior. Our research focused on the influencing mechanism of green development at the micro level, helping to fill a gap in the existing research, which has focused primarily on the macro and middle levels. This study confirmed the impact of green behavior intention on green behavior: green behavior intentions promote nurses' green behavior. This finding is consistent with the theory of planned behavior [19]. Individuals' behavior is determined by their intentions, which are directly impacted by their attitudes; attitudes, in turn, are influenced by expectations and evaluations of behavioral results (result beliefs), so that beliefs about the results of behaviors indirectly influence behavior through behavioral intentions. This result is also consistent with Holland et al., who found a positive relationship between intentions and recycling behavior [12]. Second, we further clarify why high green behavior intentions do not always lead to green behavior and analyze the relationships among green behavior intentions, ethical leadership, and green behavior. We found that green behavior intentions and ethical leadership complement each other and that green behavior intentions translate into green behavior more readily under a higher level of ethical leadership. There is more employee green behavior among employees who work under a higher level of ethical leadership than among those who work under a lower level, and the impact coefficient of green behavior intentions on green behavior was stronger for employees who worked under a higher level of ethical leadership. Ethical leadership plays a moderating role in the relationship between green behavior intentions and green behavior. Employee participation is a key factor in promoting environmental sustainability [32]. Organizational managers influence many behavioral outcomes, such as organizational safety and environmental performance [33]. Ethical leaders set an example through their green behavior and influence the sustainable development of organizations through their environmental ethical commitments. Environmental ethics leaders tend to have a collectivist spirit that transcends their own interests; they usually extend their commitment to environmental ethics to the organization's environmental management practices, and even to the sustainable development of the environment, in order to influence employees' green behaviors.
Practical Implications.
The results provide a new perspective for environmental pollution management and can help leaders achieve green-driven growth and development. This research offers a new perspective, focused on management practice, for sustainable development research. The ethical and social responsibility of enterprise managers is an important factor that constrains further improvement of enterprise environmental performance. Hospitals should make environmental ethics culture an important part of corporate culture, so that environmental ethics is publicized everywhere and awareness of environmental ethics takes deep root among staff. Hospitals need to be fully permeated with the concept of environmental protection. Hospitals can use bulletin boards, work briefings, environmental knowledge manuals, environmental microfilms, environmental moral public welfare volunteer activities, and similar means to convey national and enterprise laws, policies, systems, and regulations related to environmental protection [34]. Leaders need to advocate that staff adopt the concepts of environmental conservation, environmental protection, and sustainable development; cultivate the environmental protection ethics of the enterprise and its members; and establish a sense of social responsibility and mission.
The ethical level and moral behavior of managers guide the overall ethical level of enterprises, which influences the sustainable development of organizations. To improve employees' green behavior, managers should establish an ethical concept of environmental protection and lead with integrity, serving as environmental ethical role models and incorporating strict environmental ethics into their personal behavior to build a strong corporate environmental moral atmosphere and improve the level of environmental protection in enterprise management. Managers should avoid blindly pursuing short-term economic interests and should have a long-term vision and a high degree of social responsibility. Leaders need to take environmental ethical standards as an important aspect of their organization's social responsibility, identify them as development directions and goals of the enterprise, integrate them into the core values of the enterprise, actively advocate for an environmental ethical culture, and uphold environmental protection and energy conservation as moral standards.

There are a few limitations to our study. First, all of the data were from self-assessment questionnaires. Participants self-reported their green behavior, which is more subjective than regulators' or peers' evaluations, comments, or records, and self-reports can be influenced by social desirability. Hence, in addition to self-reporting, future research could attempt to collect objective data. Second, we used general items to measure employees' green behavior intentions but specific items to measure actual green behavior; the relationship between these variables would likely be strengthened if specific items were used for both measures. Recent studies have used more specific concepts of green behavior than our study. For example, one study distinguished between demanded (i.e., formally required) and proactive forms of green behavior and showed that they were associated with different antecedent variables [35]. Future research on green behavior could also distinguish between behavior that is enacted and behavior that is not.
Conclusions
In conclusion, this research demonstrates the importance of considering the moderating role of ethical leadership in the relationship between green behavior intentions and green behavior. The intention to engage in green behavior is positively associated with nurses' green behavior, and this relationship is stronger when ethical leadership is high in the hospital than when ethical leadership is low. The results of this study can help both academics and practitioners to understand the micromechanism of environmentally sustainable development in more detail and to identify the mechanisms and boundary conditions of green behavioral intentions and green behavior.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Additional Points
Implications for Nursing Management. Leaders should promote nurses' green initiatives by sharing leaders' views on the environment with their subordinates, establishing organization value, and encouraging mutual awareness.
Conflicts of Interest
No conflict of interest has been declared by the authors. | 2021-05-07T05:22:10.504Z | 2021-04-16T00:00:00.000 | {
"year": 2021,
"sha1": "4e5ad1490e969167064a26203542f99cd3afa8f3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2021/6628016.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e5ad1490e969167064a26203542f99cd3afa8f3",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231700110 | pes2o/s2orc | v3-fos-license | A method of purifying alpha-synuclein in E. coli without chromatography
Research has implicated alpha-synuclein (aSyn) in the pathological protein aggregation observed in almost all patients with Parkinson's disease and more than 50% of patients with Alzheimer's disease. An easy and inexpensive method of purifying aSyn and developing an in vitro model system of Lewy body formation would enhance basic biomedical research. We report an aSyn purification technique that leverages the amyloidogenic property of aSyn and is suitable for purifying monomeric aSyn without chromatography or denaturing agents. We expressed full-length, untagged aSyn in Rosetta(DE3) pLysS and purified ~60 μg of aSyn from 500 mL culture within 24 h. After IPTG-induced expression of aSyn in E. coli, we disrupted the cells with a sonicator. We centrifuged the cell lysate in a 15 mL tube, which led to aSyn-induced aggregation of native E. coli proteins. After removing aggregates, centrifugation in a 30 kDa cut-off filter followed by a 10 kDa cut-off filter yielded purified water-soluble aSyn. The identity of aSyn was confirmed by Western blot using an anti-aSyn antibody and by Edman sequencing. Its mass was determined to be 14.6 kDa using a MALDI TOF-MS mass spectrometer. The majority of aSyn led to water-suspended (as opposed to precipitated) aggregation of E. coli proteins with visible fibrous structures. The broad-spectrum binding and amyloidogenic property of aSyn is thus not only useful for inexpensive aSyn production for diverse applications but also expands the study of its possible roles in human physiology. The aggregate of E. coli proteins induced by aSyn during the purification process may serve as a Lewy body model.
Introduction
Alzheimer's disease (AD) and Parkinson's disease (PD) are the leading causes of neurodegeneration in the older population. 5.8 million patients are currently living with AD in the US [1]. This number includes an estimated 5.6 million people age 65 and older and approximately 200,000 individuals under the age of 65 who have early-onset AD, which may double by 2025 with a projected total cost of care of more than $1 trillion per year in 2050. Also, ~1 million patients already have PD in the US [2], and ~60,000 new cases per year appear in the US, with a total cost of care estimated to be ~$52 billion per year. More than 50% of patients with AD and almost all patients with PD [3] show Lewy bodies, intraneuronal cytoplasmic protein aggregates. The typical protein in Lewy bodies that may be causally related to the etiology of AD and PD is alpha-synuclein (aSyn) [4].
The physiological functions of aSyn remain unclear, but its primary localization in the cytoplasm of mainly neuronal cells [5,6,7] and at presynaptic terminals [5,8,9,10] implies a role in neuronal function. Also, aSyn is associated with the distal reserve pool of synaptic vesicles [7,11,12]. An overexpression [13,14] or knock out [15,16,17] of aSyn leads to dysregulation of synaptic transmission. These studies suggest that aSyn plays an essential role in regulating neurotransmitter release and synaptic function [18]. A growing body of evidence from in vitro and in vivo genetic, biochemical, and biophysical studies suggests that aSyn oligomerization [19,20] and fibrillation [21,22] play a central role in Lewy body pathologies, including AD and PD. There is evidence that the C-terminal truncation of aSyn by proteases leads to protein aggregation and Lewy body formation. Research has shown that asparaginyl endopeptidase (AEP) [23] causes partial cleavage of aSyn. A growing body of evidence suggests that matrix metalloproteases can also partially cleave aSyn [24,25,26,27,28]. However, the mechanisms and cellular pathways leading to neurodegeneration caused by aSyn-induced aggregation remain unknown despite many biophysical studies [29,30,31]. Addressing these knowledge gaps will require further biophysical and biochemical studies of the mechanisms underlying protein aggregation induced by aSyn [32], which in turn would require purified aSyn as well as a model of protein aggregates.
Since the first identification of synucleins in the human brain and the E. coli based purification of recombinant synucleins in 1994 [9], several recombinant purification methods have been reported for aSyn [33,34,35,36]. Protein purification usually requires time-consuming and expensive chromatography steps to separate proteins of interest from other proteins. Generally, cell lysate goes through a series of purification steps such as precipitation by ammonium sulfate or acid, ion exchange, and size exclusion chromatography [37]. For higher specificity, several fusion strategies such as the glutathione S-transferase (GST) system [38] and the chitin-binding domain/intein system [39] are also useful. Leveraging periplasmic localization of expressed aSyn in E. coli, a shorter purification protocol was developed [40]. For NMR studies, isotopically labeled aSyn was purified in E. coli [41]. For the delivery of aSyn inside cells, researchers have purified Tat-fused recombinant aSyn in E. coli [42]. For purifying aSyn with post-translational modifications, Gerding et al. reported a method of purifying aSyn with 3-nitrotyrosine in E. coli [43]. These methods can provide good purity and homogeneity of purified aSyn within three or more days. Recombinant aSyn purified in E. coli has enabled many in vitro and even in vivo studies, revealing many insights into structure-function relationships of aSyn. Based on these studies using recombinant protein, aSyn is generally considered a "natively unfolded" ~14.6 kDa monomer [35] that can form secondary α-helical structures upon binding to lipid membranes and detergents [44].
In contrast to the monomeric form of the recombinant protein, aSyn purified from the human brain [45], live human cells, and neuronal and non-neuronal cell lines predominantly shows a "natively folded" ~58 kDa tetrameric form [46]. Additional purification efforts destabilize the tetrameric forms, leading to the monomeric form [45]. Several factors affect aSyn studies, including purification [37,45], denaturing steps, high concentration, and post-translational modification [47]. To enable further studies of the structure-function relationships of aSyn, we developed a quick and inexpensive method of aSyn purification in E. coli. We describe a method to purify untagged recombinant aSyn within 24 h, in which we have eliminated chromatography by exploiting the propensity of aSyn to form aggregates. Our core technology involves "self-purification" of aSyn, which is possible because aSyn leads to aggregation of E. coli proteins while leaving soluble full-length aSyn. As a result, we obtained purified aSyn and a model of aSyn-induced protein aggregates to enable further biochemical and biophysical studies.
aSyn is toxic for E. coli
We expressed full-length recombinant human aSyn with 140 residues in E. coli. The molecular weight of full-length aSyn is ~14.5 kDa. We inserted the sequence of aSyn into the pET11a vector between NdeI (N-terminal) and HindIII (C-terminal) restriction sites. We transformed the plasmid into the Rosetta (DE3) pLysS competent E. coli strain for aSyn expression. The aSyn in the plasmid is under the control of the lac operator, and as such, we induced aSyn expression using 1 mM Isopropyl β-d-1-thiogalactopyranoside (IPTG) at an optical density (OD600) of 0.17. As shown in Figure 1a, IPTG-induced production of aSyn inhibited E. coli growth. However, E. coli growth was unaffected for 2.5 h after IPTG induction, indicating a threshold concentration of intracellular aSyn for toxicity. We used Congo red, a stain for amyloid [49,50], to test whether aSyn production after IPTG induction leads to amyloid formation. E. coli culture without IPTG induction served as the control. We incubated cultures with and without IPTG induction at the same OD, centrifuged and discarded the supernatant, reconstituted the cells in water, and measured Congo red fluorescence (see Methods for the detailed procedure). Figure 1b shows that Congo red fluorescence with IPTG induction is higher than without IPTG induction at each OD, suggesting that aSyn forms amyloid inside E. coli and reduces growth, as observed in Figure 1a. The supernatant of centrifuged cell lysates was analyzed using SDS PAGE, as shown in Figure 1c, which showed a band at ~15 kDa due to aSyn after induction with IPTG. As a negative control, cells without IPTG induction did not produce aSyn. A comparison of protein bands with and without IPTG revealed that E. coli protein bands were of lighter intensity when aSyn was present. In combination with the reduced growth (Figure 1a), this observation suggests that aSyn leads to aggregation of E. coli proteins and causes toxicity.

Fig. 1c caption: Expression of aSyn with and without IPTG. A comparison of protein expression with and without IPTG showed a band at ~16 kDa due to aSyn, which is higher than the actual mass. Published reports have shown aSyn at a slightly higher apparent weight in SDS PAGE [26,40,41,48] and at the expected value in SDS PAGE [27]. See the supplementary information (Fig. S2) for SDS PAGE showing both higher and expected molecular weights depending on the experimental conditions. For the full gel, see Fig. S3a.
aSyn leads to water-suspended structures and forms oligomers upon centrifugation
We observed that centrifugation of E. coli lysate at 10000 rpm in a 15 mL tube led to water-suspended structures that did not precipitate (Figure 2a). The water-suspended structures revealed different morphologies under a light microscope (Figure 2b). The soluble portion in Figure 2a showed oligomers in Western blot (WB) imaging (Figure 2c). Centrifugation of the soluble part using a 30 kDa filter led to further aggregation and a clear flow-through (Figure 2d). Interestingly, the water-suspended structures did not form in a 50 mL tube, suggesting that centrifugal force in combination with narrow tube diameters leads to the water-suspended aggregation.
Aggregation of E. coli proteins induced by aSyn facilitates purification
We leveraged the aggregation of E. coli proteins to purify aSyn without chromatography using 30 kDa and 10 kDa centrifugation filters (Figure 3a). The identity of aSyn was confirmed by WB using an anti-aSyn antibody (Figure 3b). We used the Bradford assay to quantify the concentration of purified aSyn. Figure 3c shows the standard curve generated using bovine serum albumin (BSA), where the red symbol indicates the data point due to aSyn. We obtained ~60 μg of purified aSyn in solution from 500 mL culture within 24 h. Note that we obtained a relatively low amount of purified aSyn because we also formed aggregates of native E. coli proteins by leveraging the amyloidogenic property of aSyn. Also, we performed mass spectrometry and Edman sequencing of purified aSyn at the Tufts Core Facility. Mass spectrometry (Figure 3d) confirmed that the purified aSyn did not have higher-molecular-weight contamination. Further, Edman sequencing confirmed the first seven amino acids to be MDVFMKG, as expected (see Fig. S1).
In conclusion, we have developed an inexpensive and quick method of purifying ~60 μg of recombinant aSyn in E. coli within 24 h. Figure 4 shows the purification flowchart. Two critical steps are the use of sonication and of narrow-diameter tubes for centrifugation. We compared purified aSyn with commercially available aSyn (Fig. S2), which shows comparable purity. In the future, functional studies such as fibrillation kinetics are needed to compare the two sources of aSyn. In contrast to the chromatography-free method in this paper, chromatography-based purification techniques using denaturing agents take a few days. Recently, Gerding et al. reported a process for purifying ~1 mg of aSyn in E. coli [43] involving two chromatographic steps. Although 60 μg of aSyn from 500 mL of E. coli culture in our paper is significantly less than ~1 mg of aSyn, aSyn-induced protein aggregates are an important byproduct of the method described in this paper. The formation of aggregates enables chromatography-free purification but leads to the lower yield of purified aSyn. These aggregates might be useful as a model of Lewy bodies; however, further studies are needed to confirm whether or not they can serve as such a model. The production of aSyn-induced aggregates, in addition to purified monomeric aSyn, is unique relative to prior reports on aSyn purification. We observed oligomers of aSyn in the supernatant, as suggested by WB (Figure 2c), before purifying monomeric aSyn (Figure 3a). The observation of aSyn oligomers is consistent with previous reports of the tetrameric form of aSyn [45,46]. However, we cannot conclude the precise stoichiometry because we did not perform any size exclusion chromatography in our chromatography-free method. The dramatic substrate promiscuity of aSyn enables purification and affects E. coli growth (Figure 1a). We used Congo red [49,50] to detect amyloid formation in E. coli (Figure 1b). In the future, we need functional data to define the toxicity of purified aSyn in mammalian cells and animal models. To this end, thioflavin T (ThT) [51] and 1-anilinonaphthalene-8-sulfonic acid (ANS) [52] may provide additional information about fibrillation. Both ThT and ANS emit enhanced fluorescence upon binding fibrils; however, ThT emits stronger fluorescence upon binding the β-sheet conformation of amyloid fibrils, whereas ANS emits enhanced fluorescence upon attaching to hydrophobic surfaces. As such, ANS is a suitable probe for partially unfolded protein conformations. The promiscuity of aSyn with E. coli proteins suggests that aSyn may have more physiological implications than neurodegeneration alone.
Transformation of the plasmid
The plasmid contains the DNA sequence for 140-residue wild-type aSyn inserted into the pET11a vector between NdeI (N-terminal) and HindIII (C-terminal) restriction sites. We transformed the plasmid into Rosetta (DE3) pLysS E. coli (Millipore, Cat# 70956-4). We thawed 50 μL of cells on ice, and 10 μL of plasmid at 100 ng/mL concentration was added to the thawed cells.
Growth of Rosetta (DE3) pLysS cells
We prepared 35 mL of seed culture in LB media (Sigma-Aldrich L7658) from 3 mL of E. coli cells' glycerol stocks. We added 35 μL of ampicillin at 100 mg/mL concentration and 35 μL of chloramphenicol at 34 mg/mL concentration. The seed culture was grown overnight for 15 h in a shaker-incubator at 37 °C with 250 rpm orbital agitation until growth reached OD600 = 1.8. 10 mL of the seed culture was added to 500 mL sterile LB media with the same final concentrations of antibiotics in two 1 L conical flat-bottom glass flasks and grown at 37 °C with 250 rpm orbital agitation. After the growth reached OD600 = 0.2, we induced the cells in one flask with 500 μL of 1 M IPTG (ChemCruz, Cat# SC-202185B). The cells in the other flask were not induced with IPTG to serve as the negative control. After induction, we grew cells in both flasks for 5 h in the incubator-shaker at 37 °C with 250 rpm orbital agitation. We measured OD using a cell density meter (WPA Biowave, Model# C0800) to quantify the growth, as shown in Figure 1a. The cells were harvested and centrifuged in 15 mL centrifuge tubes at 10000 rpm for 10 min using a fixed-angle centrifuge (Sorvall Lynx 4000 centrifuge with F12-6X500 rotor, Cat# 75006580). Note that LB media was sterilized by an autoclave (Panasonic, Model# MLS-3781L) following the preset liquid sterilization program (121 °C for 15 min).
Congo red staining
We collected 1 mL of culture from the 500 mL stock cultures with and without IPTG at OD600 = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8. We centrifuged the 1 mL cultures in 1.5 mL microcentrifuge tubes at 10,000 rpm for 10 min. After discarding the supernatant, we added 200 μL of Congo Red stain (Abcam, Cat# ab150663) to each sample tube and vortexed briefly to reconstitute the cells. We incubated the cells with Congo red at 37 °C for 10 min. After centrifugation of the samples at 10,000 rpm for 10 min, we removed the excess stain and added 500 μL of deionized water to reconstitute the stained E. coli by brief vortexing. We loaded 200 μL of each sample into thin-walled PCR tubes and measured the Congo red emission in the range of 565-650 nm using a DeNovix Fluorometer QFX with 470 nm excitation (Figure 1b).
Cell lysis and SDS PAGE to confirm the expression of aSyn
We took 10 mL of each growth and disrupted the cells using a sonicator (Branson Digital Sonifier, Model# BBT16031593A) at 30% amplitude for 10 min with a sequence of 10 s pulse ON and 20 s pulse OFF. We centrifuged the cell lysates and collected the supernatant containing soluble proteins. These samples were prepared for SDS PAGE by 1:1 dilution in sample buffer with β-mercaptoethanol (Sigma-Aldrich, Cat# M3148) made from 2x Laemmli buffer (Biorad, Cat# 161-0737).

Fig. 3 caption: WB using an anti-aSyn antibody; a colorimetric image (gray bands on the left) of the markers superimposed with the WB image. (c) Bradford standard curve generated using BSA as the standard. The solid line is the best fit to a line y = ax; the best-fit parameter is a = 0.7 ± 0.1 (standard deviation of three replicates). The red square due to aSyn results in a concentration of ~60 μg/mL. (d) MALDI-TOF mass spectrum of aSyn shows the purity of aSyn. Two peaks appear due to aSyn: peak 1 (7.3 kDa and 7.5 kDa) and peak 2 (14.5 kDa and 14.7 kDa). For the full gel and WB, see Fig. S3c-d.
We boiled the samples with the dye for 10 min and loaded 30 μL of each sample into the wells of a 15% polyacrylamide gel. The gel was run in 1X SDS running buffer at 70 V and 40 mA. After 30 min, we increased the voltage to 120 V and allowed the gel to run for an additional 1 h. Gels were removed from the running tank and stained in 100 mL of stain prepared using 30% ethanol, 10% acetic acid, and 0.5% Coomassie Brilliant Blue R-250 (Biorad, Cat# 161-0436). The gel was stained for 2 h and washed twice in DI water to remove excess dye, followed by destaining in 200 mL of 30% ethanol and 10% acetic acid for 18 h. We imaged the gel using an imager (Figure 1c).
Protein aggregation induced by centrifugation of high aSyn concentration in narrow tubes
We caused protein aggregation by aSyn as an integral part of our purification to pull down native E. coli proteins, so that aSyn effectively purified itself. We reconstituted a 1 g cell pellet in 6 mL of protein buffer (50 mM Tris, 100 mM NaCl, pH 9.0) in a 15 mL tube and disrupted the cells using a sonicator as described before. The step of lysate centrifugation in a 15 mL tube at 10000 rpm for 10 min is critical. After centrifugation, we observed water-suspended structures. Note that the water-suspended structures did not form when we centrifuged the cell lysate in a 50 mL tube. We also noted that even with longer centrifugation, the water-suspended structures did not precipitate to form a pellet at the bottom of the tube but remained suspended in solution. The suspended insoluble structures were highly viscous and cohesive, like mucus; we could pull the structures out of the solution using a 1 mL pipet tip. The soluble supernatant was filtered using a 30 kDa Amicon filter tube (Millipore, Cat# UFC903024) at 5000 rpm for 20 min. The flow-through of this filtration was then loaded into a 10 kDa Amicon filter tube (Millipore, Cat# ACS501012) and centrifuged at 5000 rpm for 20 min. The retentate from this filtration contained purified aSyn, which we checked using a 15% SDS PAGE. The gel was run at 150 V for 10 min and then 200 V for 45 min. After purification, we obtained 1 mL of ~60 μg/mL aSyn. We quantified the protein concentration using the Bradford assay (Figure 3c).
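To illustrate the through-origin line fit used for the Bradford standard curve (y = ax in Figure 3c) and the inverse prediction that converts an absorbance reading into a concentration, here is a small hedged Python sketch; all absorbance values and the sample reading are placeholders, not the study's measurements.

```python
# Fit a Bradford standard curve y = a*x through the origin and invert it
# to estimate an unknown concentration (placeholder data throughout).
import numpy as np

bsa_conc = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # µg/mL BSA standards
a595 = np.array([0.000, 0.017, 0.036, 0.052, 0.071]) # blank-subtracted A595

# Least-squares slope for a line through the origin: a = sum(x*y) / sum(x^2)
a = np.dot(bsa_conc, a595) / np.dot(bsa_conc, bsa_conc)

sample_a595 = 0.042  # hypothetical reading for the aSyn sample
print(f"slope a = {a:.4f} per (µg/mL)")
print(f"estimated concentration = {sample_a595 / a:.1f} µg/mL")
```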
Identification of aSyn using Western blotting
We ran the SDS PAGE of the protein sample in a 15% polyacrylamide gel as before. Then, we transferred the proteins to a nitrocellulose membrane using the Trans-Blot Turbo transfer system (Biorad, Model# 170-4155). We dipped the membrane with transferred proteins into 30 mL of blocking buffer (skim milk) and placed it on a shaker for 2 h. After removing the blocking buffer, we added the primary anti-aSyn antibody (Abcam, Cat# 138501) to a final concentration of 0.5 μg/mL in 10 mL of blocking buffer and placed the membrane on a shaker at 4 °C for 16 h. After removing the primary antibody solution, we washed the blot 3 times with 30 mL of protein buffer (50 mM Tris, 100 mM NaCl, pH 9.0). After washing, we dipped the membrane into 50 mL of the secondary antibody (Bosterbio, Cat# BA1054) at a final concentration of 0.5 μg/mL for 2 h.
We removed the secondary antibody solution and rewashed the blot with 30 mL of protein buffer. 12 mL of the chemiluminescent substrate (Biorad, Clarity Western ECL substrate, Cat# 170-5060) was added to the blot, which was then imaged using a Biorad imager.
Identification and quantification of aSyn
In addition to WB, we confirmed the identity of purified aSyn using mass spectrometry and Edman sequencing performed by the Tufts Core Facility at Tufts Medical School. To quantify aSyn concentration, we used the Pierce Detergent Compatible Bradford Assay Kit (Thermo Scientific, Cat# 23246). We created a standard curve using BSA with known concentrations in protein buffer (100 mM NaCl, 50 mM Tris, pH 9.0). We measured the absorbance at 595 nm and subtracted the blank reading without BSA from the absorbance measurements. We fitted the standard curve to a line y = ax (Figure 3c).

Susanta K. Sarkar: Developed the purification scheme; Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper.
Funding statement
One grant to S.K.S. and J.K.S. from the National Institutes of Health (RGM137295A) partially supported this work.
Data availability statement
Data included in article/supp. material/referenced in article.
Declaration of interests statement
Patent pending.
Additional information
Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2020.e05874. | 2021-01-26T05:29:48.928Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3239a7365e5c8a8789899c4d70d59d3acb767489",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S240584402032716X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3239a7365e5c8a8789899c4d70d59d3acb767489",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21536599 | pes2o/s2orc | v3-fos-license | The Control of Myocardial Contraction with Skeletal Fast Muscle Troponin C*
The present study describes experiments on the myocardial trabeculae from the right ventricle of Syrian hamsters whose troponin C (TnC) moiety was exchanged with heterologous TnC from fast skeletal muscle of the rabbit. These experiments were designed to help define the role of the various classes of Ca2+-binding sites on TnC in setting the characteristic sensitivities for activations of cardiac and skeletal muscles. Thin trabeculae were skinned and about 76% of their troponin C extracted by chemical treatment. Tension development on activations by Ca2+ and Sr2+ was found to be nearly fully blocked in such TnC-extracted preparations. Troponin C contents and the ability to develop tension on activations by Ca2+ and Sr2+ were permanently restored after incubation with 2-6 mg/ml purified TnC from either rabbit fast-twitch skeletal muscle (STnC) or the heart (CTnC, cardiac troponin C). The native (skinned) cardiac muscle is characteristically about 5 times more sensitive to activation by Sr2+ than fast muscle, but the STnC-loaded trabeculae gave responses like fast muscle. Attempts were also made to exchange the TnC in psoas (fast-twitch muscle) fibers, but unlike cardiac muscle, the tension response of the maximally extracted psoas fibers could be restored only with homologous STnC. CTnC was effective in partially extracted fibers, even though the uptake of CTnC was complete in the maximally extracted fibers. The results in this study establish that the troponin C subunit is the key element in setting the characteristic sensitivity for tension control in the myocardium above that in the skeletal muscle. Since a major difference between skeletal and cardiac TnCs is that one of the trigger sites (site I, residues 28-40 from the N terminus) is modified in CTnC and has reduced affinity for Ca2+ binding, the possibility is raised that this site has a modulatory effect on activation in different tissues and limits the effectiveness of CTnC in skeletal fibers.
The thin filament-linked regulation of contraction in vertebrate striated muscles is initiated by the binding of Ca2+ to a special class of sites (Ca2+-specific) on the troponin C (TnC)1 moiety (Leavis and Gergely, 1984). This produces dynamic conformational changes in distant regions of TnC and thereby modulates the interactions between the various regulatory components to promote activation (Ebashi and Endo, 1968; Grabarek et al., 1986). However, the detailed nature and pathways for the transfer of information of conformational changes to each of the components are still being worked out, and the question has remained whether the Ca2+ binding to the particular sites is also the final determinant of the actual activation characteristics in different tissues. To address this question, in the present study, we have adapted the extraction procedures for troponin C, recently worked out for myofibrils and skinned fibers of skeletal muscle (Cox et al., 1981; Brandt et al., 1984; Babu et al., 1986), to the myocardium. The extracted moiety could be replaced with troponin Cs from skeletal and cardiac muscles, and their influence on the contractile properties was studied.

TnC from the fast-twitch skeletal muscle is quite similar to the subunit from cardiac muscle in its molecular weight, and both have two separate classes of Ca2+-binding sites (Ca2+-specific and Ca2+-Mg2+). Ca2+-specific sites are putatively the trigger sites during activation. However, the amino acid sequence of one of the Ca2+-specific sites in cardiac muscle (site I, residues 28-40 from the N terminus) indicates several amino acid replacements (van Eerd and Takahashi) and a tremendous reduction in its Ca2+-binding affinity. STnC binds 4 mol of Ca2+ (2 mol to the low affinity Ca2+-specific sites I and II, 2 mol to the high affinity Ca2+-Mg2+ sites III and IV), but CTnC thus binds only 3 mol and has only one trigger site in the physiological range (Holroyde et al., 1980). Thus the ability to exchange CTnC for STnC offered the possibility of studying the functional role(s) of the Ca2+-specific sites. The relationships between isometric tension development and free Ca2+ for cardiac and skeletal muscles appear to be quite similar; but with Sr2+, which can replace Ca2+ in physiological (Donaldson and Kerrick, 1975; Kitazawa, 1976; Gulati and Babu, 1985a) and biochemical (Ebashi and Endo, 1968) studies, major differences are seen in the sensitivity of the two tissues for tension development (Kitazawa, 1976) and actomyosin ATPase activities (Ebashi and Endo, 1968). These differences in the Sr2+ activations of skeletal and cardiac muscles are exploited here in combination with TnC exchange in the myocardium both to investigate the role of the Ca2+-specific sites in defining the activation characteristics in skeletal and cardiac muscles and thereby to gain further insights into the nature of the conformational changes in the TnC moiety needed to initiate muscular contraction. The present study with myocardium seemed particularly worthwhile because recent similar attempts, in the converse experiment using skeletal muscle with CTnC, gave variable results and led to conclusions diametrically opposite of each other (Kerrick et al., 1985; Moss et al., 1986). Several new studies of TnC exchange were also made with skeletal fibers in an effort to seek plausible explanations for these differences.

* This work was supported by National Institutes of Health Grants AM-33736 and HL-18824 and National Science Foundation Grant PCM-8303045, the Blakeslee Fund for Genetics (Smith College) and the Pew Foundation. Brief progress reports of these results have been presented (Babu, A., and Gulati, J. (1986) Biophys. J. 49, 83a; Babu, A., and Gulati, J. (1986) J. Mol. Cell. Cardiol. 18, 10 (Suppl. 3); and Gulati, J., Scordilis, S., and Babu, A. (1986) Fed. Proc. 45, 3603). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

‡ To whom requests for reprints should be addressed.

1 The abbreviations used are: TnC, troponin C; TnI, troponin I; TnT, troponin T; STnC, skeletal fast-twitch muscle TnC; CTnC, cardiac TnC; LCs, light chains (C-LC1 and C-LC2 of cardiac myosin and S-LC1, S-LC2 and S-LC3 of fast skeletal myosin); EGTA, ethylene glycol bis(β-aminoethyl ether)-N,N,N',N'-tetraacetic acid; SDS-PAGE, sodium dodecyl sulfate-polyacrylamide gel electrophoresis; pK, pCa or pSr for half-maximal activation; Po, tension developed in pCa4 (180 mM ionic strength) by the native preparations.
Our results at near physiological ionic strength (180-190 mM) demonstrate the possibility of significant plasticity in the activation mechanism of the cardiac muscle contractile apparatus. On the basis of these findings, the differences in the half-maximal activations for tension generation by divalent metal ions in different tissues could be assigned to variations in the properties of the regulatory sites of the TnC subunits. In addition, the modification of a trigger site (I) in cardiac TnC appeared to be critical for tension control in skeletal muscle in 180 mM salt.
MATERIALS AND METHODS
Fiber Preparations-Skinned preparations of adult (6-9-month-old) Syrian hamsters (Strain RB, Dr. M. J. Sole, University of Toronto) consisted of 80-150 μm (width) by 1-3 mm (length) trabeculae from the right ventricle. The skeletal single fiber preparation (30-80 μm wide) was from the psoas muscle. Skinning was accomplished by 30-min treatment at 10 °C with 0.5% Lubrol-WX detergent in 140 mM KCl, 10 mM imidazole, 5 mM MgCl2, 5 mM ATP, 5 mM EGTA, 5 mM creatine phosphate. The relaxing and activating solutions contained about 100 mM KCl, 20 mM imidazole, 5 mM MgCl2, 5 mM ATP, 20 mM creatine phosphate and 250 units/ml of creatine phosphokinase, and either EGTA or Ca-EGTA, Sr-EGTA. The pH of each solution was adjusted to 7.00 ± 0.01 at the appropriate temperature. The ionic strength of the solutions was kept between 180-190 mM, which is close to the physiological range.2 Free Mg2+ was kept at 1 mM, which is also close to the in vivo value (Gupta and Moore, 1981; Baylor et al., 1982).
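Since activation levels are specified as pCa/pSr set by Ca-EGTA (or Sr-EGTA) buffering, the free divalent ion concentration follows from the total metal, the total EGTA, and an apparent dissociation constant. A minimal sketch in R, assuming a single apparent Kd (in reality it depends on pH, temperature and ionic strength, so the numbers here are illustrative only):

```r
# Free Ca2+ in a Ca-EGTA buffer from mass action:
#   Kd = [Ca][EGTA] / [CaEGTA]  =>  quadratic in the free [Ca]
free_ca <- function(ca_tot, egta_tot, kd) {
  b <- egta_tot - ca_tot + kd
  (-b + sqrt(b^2 + 4 * kd * ca_tot)) / 2
}

# Example: 4.5 mM total Ca, 5 mM EGTA, assumed apparent Kd = 0.2 uM
ca_free <- free_ca(4.5e-3, 5e-3, 2e-7)
-log10(ca_free)   # pCa of the solution
```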
Preparation of TnC-extracted Fibers-To achieve TnC extraction from both skeletal and cardiac preparations, they were first transferred from the relaxing solution at 4 °C to a rigor solution containing 20 mM imidazole, 165 mM KCl, 2.5 mM EGTA, 2.5 mM EDTA, and pH 7.0. The temperature was raised to 30 °C and, after 5 min, the preparations were placed in the extracting solution (5 mM EDTA, 10 mM imidazole, and pH 7.2) at 30 °C (Babu et al., 1986). The preparations were returned to the relaxing solution and checked for tension response in pCa4 (20 °C) at intervals of 5 min in the extraction solutions; extraction was stopped when the tension was down to 0-10% of the native fiber. This is referred to as "maximal" extraction in the present study. Extraction time for the trabeculae in the present study was 20-50 min, and for the skeletal fibers from 5-30 min. In a few cases, the skeletal fiber preparation was extracted at 4 °C and this is pointed out in the text when applicable. Reconstitution was attempted by 30-120-min incubation with 2-6 mg/ml TnC in the relaxing solution at 15-20 °C. We made activations at 20 °C in pCa4 and pSr4 except where indicated. The sarcomere length was adjusted at 2.2 μm (and in a few cases at 2.5 μm) for psoas fibers and 2.2 μm for the trabeculae, using laser diffraction, and was monitored throughout the experiment.
Selection of Fast-Twitch Fibers-Hamster psoas is a mixed muscle (80% dark (fast-twitch) and 20% light (slow-twitch)) by histochemical staining at alkaline (9.7) pH for ATPase. Thus, to assure that only fast-twitch fibers were employed in this study, we selected the fibers for experiments by the tension response to pSr5 activation, as described earlier (Babu et al., 1986). Fast fibers gave nearly zero force in pSr5 and slow fibers gave full force, about equal to that in pCa4. Such slow fibers were discarded. Fast-twitch fibers had unloaded shortening velocities in the range 4.5-9 lengths/s (20 °C), by slack-test (Gulati and Babu, 1985b).

2 Inspection of the known values for the major intracellular constituents contributing to ionic strength (Table 6.2 in Kernan, 1972) gives an estimate for the intracellular ionic strength of mammalian fibers as at least 170 mM (assuming full activity; Palmer and Gulati, 1976). Also, under maximal activations, skinned frog fibers were found to be more stable in 180-200 mM salt than in 100-140 mM in the 0-25 °C temperature range (Thames et al., 1974; Gulati and Podolsky, 1981; Gulati and Babu, 1985b).

FIG. 1 (legend, in part). First trace in each set shows the tension response prior to extraction, 2nd trace after extraction, and 3rd trace after reloading. Note nearly complete elimination of tension development by the extracted fiber, indicating effectively full TnC extraction (horizontal bar, 10 s; vertical bar, 50 kN/m2). (C) 15% SDS-PAGE runs on three skinned fiber segments: control (native), TnC-extracted, STnC-loaded. Silver stained. The identification of TnI, TnT, and tropomyosin (TM) bands was similar to Schachat et al. (1985).
Mechanical Set-up-The attachment of the skinned preparations and the force transducer and the servo-motor were the same as described before (Gulati and Babu, 1985b), and the reaction solutions during the experiments were contained in thermoelectrically controlled chambers similar to those described by Gulati and Podolsky (1978).
Troponin C Purification-Purified troponin Cs used in the studies for reconstituting the extracted preparations were made according to the method of Szynkiewicz et al. (1985) from rabbit heart and psoas muscle. The final peak from the column was dialyzed against an actin-polymerization buffer overnight, and any contaminating actin was removed by centrifugation at 100,000 x g for 90 min.
Experimental Protocol-Fig. 1 shows the typical protocol on hamster muscles. The protocol was first established on psoas fibers before applying it to the trabeculae. Two sets of tension traces on fibers are given. Activations were by Ca2+ (pCa4) and Sr2+ (pSr4). In each case the first trace is the maximal tension response (Po) of the "native" skinned fiber. Both Sr2+ and Ca2+ tensions were eliminated after TnC extraction (2nd trace in each set, Fig. 1) and could be recovered almost fully on loading with purified skeletal TnC (3rd trace in each set in Fig. 1). After the experiment, the fibers were subjected to SDS-PAGE (panel C in Fig. 1; see "Gel Electrophoresis" below for technical details), confirming both the loss of TnC on extraction and reconstitution following incubation with purified TnC.

Ca2+ and Sr2+ Activations: Relation between Skeletal and Cardiac Muscles-Fig. 2 compares the pCa-force and pSr-force relationships for the hamster skeletal (at sarcomere length, 2.2 μm) and cardiac preparations (also 2.2 μm). The data and the computer fits of Hill's equations are indicated. Sensitivities for Ca2+ activations appear to overlap for the two tissues at these sarcomere lengths (left-hand plot), but the marked difference in the sensitivities of skeletal and cardiac muscles is brought out by Sr2+ activations. The native (skinned) cardiac preparation of the hamster is found to be about 5 times more sensitive to Sr2+ than the skeletal fiber (pK = 4.4 for psoas, 5.0 for the trabeculae). The disparity in the Sr2+ relationships between skeletal and cardiac muscles greatly facilitated testing of the effects of TnC exchange in these tissues.
Gel Electrophoresis-To establish the extraction procedure for trabeculae, cardiac bundles (5-10 times the sample size used for mechanical measurements) were used for initial gel runs. The treated bundles were dissolved in SDS sample application buffer with the addition of 6 M urea and analyzed by SDS-PAGE according to the method of Laemmli (1970). Fig. 3 shows the results. These gels (15%) were stained with Coomassie Brilliant Blue R250 and scanned on a Beckman DU-8 spectrophotometer at 584 nm to detect the peak maxima for apparent molecular weight determinations and for quantitation of troponin C. To correct for unequal loading of the gel lanes, these data were normalized to both the 38- and 26-kDa bands.
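The loading correction described here amounts to dividing each TnC peak area by a reference-band area in the same lane before comparing lanes. A toy illustration in R, with invented peak areas standing in for densitometer readings:

```r
# Hypothetical densitometer peak areas (arbitrary units) for three lanes
tnc_area <- c(native = 1.00, extracted = 0.14, reloaded = 0.95)
ref_area <- c(native = 1.00, extracted = 0.92, reloaded = 1.05)  # e.g. 38-kDa band

tnc_norm <- tnc_area / ref_area          # correct for unequal lane loading
round(tnc_norm / tnc_norm["native"], 2)  # TnC relative to the native lane
```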
Densitometric scans of the lanes in these initial runs showed that 80-90% of the TnC had been extracted (the tension response of the extracted bundles (pCa4) was 0.07 Po). Proteins in the TnC band were the major ones extracted, with the exception of one other band that had an apparent molecular weight of 11,000. No effort was made to identify this low molecular weight component.
The gel runs (10 or 15%) of all other experimental fibers and trabeculae were silver stained for improved sensitivity (the expected TnC < 10 ng in our fiber segments) and scanned with an LKB laser densitometer (Ultroscan XL). The gels were fixed in 5% glutaraldehyde overnight prior to staining; the solvents and the equipment (Protean II) were obtained from Bio-Rad. The tissue samples for the gels were carefully dissolved in SDS sample buffers (without urea) with ultrasonication using a tapered micro-tip (Branson Sonifier, Model 200). The fixed gels were thoroughly washed with double-distilled (with a 3-stage Millipore Filter System) water prior to staining. To check the resolution of silver staining, test gel lanes were run with two amounts of sample solution (20 and 40 μL), and the TnC, TnI, LC1, LC2, and LC3 bands were found to be within 10% of the expected values (normalization in these cases was generally to the LC1 band, by area of the densitometer peak). The results of analysis from the experimental preparations are discussed below.

FIG. 2. Comparison of activation characteristics for fast-twitch muscle fibers and cardiac muscles. Number of skeletal fiber preparations was four and cardiac five, and the sarcomere length was 2.2 μm. At 2.5 μm, the activation curves for psoas fibers were shifted to the left by 0.15 units (not shown), and such shifts are consistent with results in the literature (Stephenson and Wendt, 1984).
Statistics-All data are given as mean ± S.E. Curve fittings, wherever appropriate, were computed by the method of least squares on a microcomputer (Hewlett Packard-85).
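The least-squares Hill fits used for the activation curves can be reproduced with nonlinear regression. A sketch in R using the conventional pCa parameterization F = 1/(1 + 10^(n(pCa - pK))), where pK is the half-maximal activation; the data points below are invented for illustration, not measured values:

```r
# Illustrative pCa-tension data (relative force)
pCa   <- c(6.5, 6.2, 6.0, 5.8, 5.6, 5.4, 5.0, 4.5)
force <- c(0.02, 0.08, 0.20, 0.45, 0.70, 0.88, 0.98, 1.00)

# Hill equation in pCa form; pK = pCa at half-maximal tension,
# n = Hill coefficient
hill <- nls(force ~ 1 / (1 + 10^(n * (pCa - pK))),
            start = list(pK = 5.7, n = 2))
coef(hill)
```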
RESULTS
Extraction of Troponin C from the Myocardium and Reconstitution-Fig. 4 shows the tension responses to Ca2+ activations on two typical trabeculae preparations before and after exposure to the TnC extraction procedure. As with skeletal fibers, TnC was extracted until force was close to zero (middle traces in Fig. 4, a and b). The last two traces in Fig. 4, a and b, show the reconstitution with purified cardiac and skeletal TnC, respectively. Nearly full recovery of the tension was found on the trabeculae with both types of TnC. Resting tension in the relaxing solution was not affected by TnC extraction.
The gel runs (silver stained) on these preparations in Fig. 4c show the presence and absence of cardiac TnC, and the pooled quantitative data from gel scans on successful skeletal and cardiac preparations are summarized in Table I. Fig. 4c clearly shows the restored CTnC band, but the presence of STnC is difficult to ascertain in the third lane as STnC runs in the same spot as cardiac LC2. These results show 1) that tension development in cardiac muscle drops to practically zero when 75% or more of the native TnC is extracted, 2) that nearly complete reconstitution of the trabeculae is made on incubation with purified TnC, 3) that the loss of material other than TnC (e.g. the 11,000-dalton component seen in the runs in Fig. 3 and a possible loss of at most 10% C-LC2 in Table IB) during the extraction procedure was not important for tension generation by the contractile apparatus in the cardiac muscle (S-LC2 loss was significant in psoas fibers, as explained under "Discussion"), and 4) that any differences in the skeletal and cardiac TnCs are not critical in the reconstituted myocardium to develop the original level of cardiac tension. Additional studies to determine the effects of various TnCs on the sensitivity to Sr2+ are given below.

FIG. 4 (legend, in part). c, 10% gels on CTnC- and STnC-loaded trabeculae (silver stained). The CTnC band was close to that in the native trabeculae when the data were normalized to the intensity of the LC1 band (see Table I). Also, note the near absence of CTnC in the 3rd lane.
Further evidence against the possibility of deleterious effects of the extraction procedure was derived from the stiffness measurements in rigor, since this parameter gives an index of the maximum possible number of cross-bridge attachments in the fiber. Table II compares the rigor stiffness of native and TnC-extracted preparations. For both the skeletal fiber and the myocardium, rigor stiffness was found to be unchanged by TnC extraction (Table II).
TABLE II. Fiber stiffness in rigor. Stiffness was measured by stretching the fiber by 0.5% of its length in 750 μs. Temperature 5 °C. Po was measured with pCa4 prior to TnC extraction.

pCa-Force and pSr-Force Relationships of Reconstituted Fibers and Trabeculae: Loading with the Native-type (Homologous) TnC-Fig. 5 shows results on the reconstituted preparations over the entire activation range. The top two plots give the data (circles) on psoas fibers loaded with STnC following extraction. These experiments with psoas fibers serve as controls for the studies on trabeculae loaded with STnC (see below, Fig. 6). The lower plots in Fig. 5 (triangles) are on the trabeculae loaded with the homologous CTnC. The solid lines are transferred from Fig. 2 on native fibers and trabeculae, respectively, and they seem to adequately describe the data on both preparations extracted and reloaded with homologous TnCs.

Loading the Cardiac Muscle with Foreign TnC-The results in Fig. 4 of maximal Ca2+ activations had indicated that the myocardial preparation was able to develop the original tension level even when loaded with skeletal muscle TnC. Similar observations were made with pSr4. To further examine the efficacy of the STnC in the myocardium, we next determined the entire pSr-tension relationships. The data are given in Fig. 6. They show that the force response of the STnC-loaded myocardium was shifted to the right by 0.7 pSr units (compare filled triangles with half-filled triangles for CTnC-loaded trabeculae in Fig. 6). The shifted pK value (4.3) is similar to that found from the typical skeletal fiber pSr-force response (Fig. 2). From the fact that the skeletal fiber loaded with purified STnC gave a normal response (filled triangles in Fig. 6), it is very unlikely that the shift in the STnC-loaded trabeculae was due to a modification of STnC on purification.

Studies with Skeletal Fibers Reconstituted with Cardiac Troponin C-This was done next to test the efficacy of CTnC in the converse situation from the STnC myocardium at 180 mM salt. To our initial surprise, the skeletal fibers, extracted so that the tension in pCa4 was between 0-10% (10-30-min extraction period; residual TnC 20-30%, Table I), showed only marginal recovery of Ca2+- and Sr2+-activated tensions on loading with CTnC. These results are summarized in Fig. 7 (top left panel). The same fibers could recover nearly full tension (mean value, 0.8 Po) with STnC (compare bars 3 and 4 in the top left panel of Fig. 7), indicating that the extraction procedure was not deleterious. Also, the same solution of CTnC used on skeletal fibers was fully effective in restoring the tension responses in the trabeculae, indicating that the limitation of the CTnC-loaded skeletal fiber was not due to inactivity of CTnC on purification.
A number of additional experiments were performed to understand the limitation of CTnC in skeletal fibers. The possibility was considered that CTnC did not enter the fiber, but this was ruled out both with gels and with physiological studies. The gel runs in Fig. 8 compare an unextracted (native) psoas fiber segment (1st lane) with an STnC-reconstituted fiber (3rd lane) and a CTnC-reconstituted fiber (2nd lane). The 2nd lane indicates the loading with CTnC (the physiological response of this segment is shown by the force traces in the lower left end of Fig. 9). The quantitative data from gel scans on a number of CTnC-loaded fibers are summarized in Table IA and show that CTnC was accumulated to the same level as the original level of native STnC. In another experiment (Fig. 10), a cardiac TnC-loaded skeletal fiber (pCa4 tension after extraction, ~0.1 Po; after loading, 0.29 Po) was incubated with STnC for additional loading (for 90 min), but we found that there was no further effect on the tension level (0.29 Po) with the second loading. This is additional evidence that CTnC loads into the denuded TnC sites in the fiber. The same doubly loaded fiber was next re-extracted and now reloaded directly with STnC, and the tension response was then closer to the original level (f in Fig. 10). These results suggest that partial recovery with CTnC at 180 mM ionic strength was due directly to the reduced effectiveness of CTnC in skeletal fibers.

FIG. 7 (legend, in part). Sr2+ activations of CTnC-loaded psoas fibers were made on two fibers and they were similar to Ca2+. Note that in this case cardiac TnC is less effective than the skeletal TnC. In contrast, tension in the trabeculae was equally well restored with cardiac-type and fast-muscle-type TnC.

FIG. 8 (legend, in part). The 2nd lane (+CTnC) was more heavily loaded than the rest to help distinguish CTnC from residual STnC, but all the data were normalized to the intensity in the LC1 band (Table I).

Fig. 9 shows the results of a series of experiments on CTnC-loaded skeletal fibers where the extraction time was varied. The maximally extracted fibers (extraction time, 20-30 min; tension with pCa4 prior to CTnC-loading, 0-0.1 Po) showed relatively little tension recovery, as above. However, as the extraction time was reduced so that the tension level prior to loading with CTnC was >0.1 Po, the tension recovery following CTnC loading was progressively enhanced. These results show that the effectiveness of CTnC was increased in moderately extracted fibers, which suggests the possibility of some cooperative interaction between the two types of TnCs under these conditions.
DISCUSSION
The successful reconstitution of the TnC-extracted myocardium with both CTnC and STnC indicates that the intrinsic differences amongst these moieties (e.g. the additional regulatory site in STnC) do not interfere in activating the contractile proteins in cardiac muscle. On the other hand, our results showing that CTnC is less effective in skeletal muscle fibers at close to physiological ionic strength indicate that the modified site I (of the class of regulatory sites I and II, which trigger activation) is important in this situation. Despite that, however, the uptake of CTnC by the skeletal fiber was normal (Table I). Since the putative nonspecific Ca2+-Mg2+ sites III and IV are similar in the two TnCs, our results thus provide additional support for the idea (see Leavis and Gergely, 1984) that these nonspecific sites help largely in maintaining the structural integrity of the troponin complexes in the fiber.
Characteristic Activation Curves: The Role of TnC Subunit-The data in Figs. 2 and 5 indicated that Sr2+ sensitivities for the activations of native (skinned) and CTnC-reconstituted cardiac muscles are about 5-fold greater than skeletal muscle. Since on loading the trabeculae with STnC, the cardiac muscle now behaved like skeletal fibers (Fig. 6), our results indicate that the TnC moiety in the regulatory complex has the key role in setting the activation characteristics and that the origin of the difference in sensitivities between cardiac and skeletal muscle is in the trigger sites I and II. The possibility is raised that site I, which is modified in CTnC, has a major role in positioning the activation curves in addition to placing the limitation on CTnC in the fast-twitch skeletal fiber for maximal activation. The data on mammalian slow-twitch fibers are also consistent with the present results. The amino acid sequence of TnC in slow-twitch fibers is similar to CTnC rather than STnC (Wilkinson, 1980); in tension response also the native slow fibers similarly show greater sensitivity with Sr2+ (and also with Ca2+) than fast fibers (Sr2+ by about 1 pSr unit and Ca2+ by 0.2 pCa unit, which disparities are actually slightly greater than with cardiac muscle in Fig. 2; unpublished data on hamster psoas and soleus at 2.5-μm sarcomere length and 20 °C4; also, Stephenson and Wendt, 1984).
Increased Effectiveness of Cardiac TnC in Moderately Extracted Skeletal Fibers-Although CTnC was largely ineffective in the maximally extracted psoas fibers, the results in Fig. 9 indicated that the effectiveness of cardiac TnC, as judged by Ca2+-activated tension, tended to increase in lightly extracted fibers (1-10-min extraction). One possible explanation was that prolonged extraction (10-30 min) might have caused fiber deterioration, but this appears unlikely since the homologous (skeletal) TnC restored the tension response of similarly extracted fibers close to the native level (Figs. 1 and 7). Also, this effect was found to be the same whether the fiber was extracted at 4 °C or 30 °C, except that the extraction periods were greatly prolonged at the lower temperature.
The results on CTnC-loaded fibers could be explained if the changes produced by Ca2+ binding to a TnC moiety could be communicated to the adjacent TnC sites, possibly through the 410-423 Å-long tropomyosin molecules. Accordingly, the increased residual STnC in the moderately extracted fibers would exert a greater cumulative influence on the interspersed CTnC, and this cooperative effect might be sufficient to switch on the entire thin filament. If the 50% tension response of the lightly extracted fiber implied that the residual STnC was at a level 50-60% of the original TnC (see Fig. 8 of Moss et al., 1985), and assuming uniform distribution on the thin filament, our results would suggest that each cardiac TnC on average must be separated by no more than one tropomyosin molecule from the native TnC for the cooperative action to be effective at 180 mM ionic strength. This explanation thus implies that while the various individual segments of the actin filament may be activated in an isolated fashion, ideally, during maximal activation, the entire thin filament is turned on in a concerted manner with communication between the adjacent segments.
If the presence of active cross-bridges affected the properties of TnC, this might also explain our results. Moderately extracted fibers on Ca2+ activation would have a greater number of bridges as a result of the higher level of residual STnC, and these bridges could in turn exert an increased influence on CTnC, improving the apparent effectiveness of CTnC. Gordon and his co-workers (Ridgway and Gordon, 1984) as well as Guth et al. (1986) have recently shown that cross-bridges may increase the Ca2+ sensitivity of TnC, but this explanation of the present results demands in addition that, in the case of CTnC in moderately extracted skeletal fibers, the more numerous bridges should help achieve a more complete conformational change in TnC during activation. As such, our results would also raise the possibility that the overall (conformational) changes produced in TnC by Ca2+ in native cardiac muscle are below those in native skeletal muscle.
Relation to Other Studies on TnC Exchange-Previously there was debate on the molecular origin for the marked increase in sensitivity to Sr2+ of the cardiac muscle over skeletal fast-twitch fibers, and the extent of TnC involvement was questioned. Ebashi and Endo (1968) were the first to find the differences in the sensitivities of these tissues. They evaluated the actomyosin ATPase activities by making superprecipitation measurements of the various composites of actomyosin, troponin, and tropomyosin. Both the skeletal and cardiac actomyosin preparations had greater sensitivity with cardiac troponin than with skeletal troponin. This is consistent with our results on skinned fibers. Indeed, since our experiments were done by replacing TnC (instead of whole troponin), the results fix TnC as the main determinant of the characteristic selectivity. In contrast, Kerrick et al. (1980), on repeating the experiments of Ebashi and Endo (1968) by measuring the rates of ATP hydrolysis by skeletal actomyosin, found little difference between the regulation effects of cardiac and skeletal troponin-tropomyosins. More recently they (Kerrick et al., 1985) made studies with skeletal fibers comparing the results with STnC and CTnC and found no difference here too, which is opposite to the results with trabeculae in the present study. On the other hand, Moss et al. (1986), also using CTnC-loaded fibers, obtained complicated results, possibly because the residual STnC was quite substantial and the salt concentration too low (Babu et al., 1987), but arrived at a similar conclusion to ours, that the properties of TnC determine the activation curves in fibers.5 The tension recovery in Kerrick et al. (1985) was incomplete whether the loading was with STnC or CTnC (maximal tension after loading was about 55%), and this is a major caution in interpreting those results.

4 A. Babu and J. Gulati, unpublished data.
In the present instance, then, cardiac muscle was a more convenient preparation for the TnC exchange studies.
Comments on the Results from SDS-PAGE of Extracted Fibers-Close inspection of the data in Table I, A and B, provides some additional interesting insights. For instance, the distribution of the light chains in the native (skinned) fibers indicates LC1:LC2:LC3 of 1.6:2:0.4 in single fibers from hamsters, which is similar to that reported on myosin purified from rabbit muscle (1.35:2:0.65; Sarkar, 1972). Thus the majority of the myosin heads in the psoas fiber (about 75%, after taking the difference in molecular weights into account) are in the LC1:LC2 configuration and the remaining few (less than 25%) of the heads are in the LC3:LC2. On the other hand, the trabeculae results appear to be consistent with an equimolar LC1:LC2 configuration for all heads, which is the expected result since the myocardial tissue has no alkali LC3 moiety.
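The ~75% figure follows from converting stain intensities to molar amounts, i.e. dividing each alkali light chain band by its molecular mass. A worked check in R; the light chain masses are approximate assumed values, not from this paper:

```r
# Converting densitometric areas to molar fractions of alkali light chains
area <- c(LC1 = 1.6, LC3 = 0.4)    # relative stain intensity (Table I ratios)
mw   <- c(LC1 = 21,  LC3 = 16.5)   # kDa, approximate assumed masses

molar <- area / mw
round(molar / sum(molar), 2)       # ~0.76 of heads in the LC1:LC2 configuration
```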
Further inspection of the data shows that the TnC extraction procedure caused no significant loss of LC3 or TnI in either the myocardium or skeletal muscle. In the trabeculae there was less than 10% loss of LC2, which is within the uncertainties of gel measurements. There was a greater tendency for loss of S-LC2 in the psoas fibers.6

5 In part because the fibers were only lightly extracted of STnC (40-50%) in Moss et al. (1986b), a near maximal tension level was achieved with CTnC loading. Under these conditions of a mixture of STnC and CTnC, they found that CTnC had a major effect on "n" and not on pK of the activation curve (Hill's coefficient n is known to be lower in native slow fibers than in native fast fibers; Stephenson and Wendt, 1984; also unpublished experiments of A. Babu and J. Gulati). These results may also point to interactions between STnC and CTnC when present as mixtures in fibers.

6 A tighter binding of LC2 to cardiac myosin than to skeletal myosin is indicated. This could be the result of intrinsic differences within the heavy chains. Alternatively, recent studies indicated effects of LC2 on the exchangeability of the alkali light chains in purified skeletal (Pastra-Landis and Lowey, 1986) and scallop myosins (Ashiba and Szent-Gyorgyi, 1985), thereby suggesting a definite interaction between the two types of light chains (LC2 and alkali) on their binding to the heavy chains. Thus it is worth considering that LC1, LC3 heterogeneity may have a different effect on LC2 in skeletal muscle than pure LC1 in cardiac muscle. Consistent with this, in tests on four fibers we found additional recovery of tension (to 0.91 Po) with purified LC2 on top of STnC loading; LC2 also increased the tension recovery of a CTnC-loaded fiber by the same amount in a separate experiment, but the final tension of this maximally extracted fiber was still far below the STnC-loaded fibers.
The finding that the mean values for residual TnC in extracted fibers and trabeculae were still close to 25% of the original level when the Ca2+-activated tension fell nearly to zero (less than 0.1 Po) may deserve a comment as well. Since at the present sarcomere lengths approximately 25% of the thin filament is under nonoverlap, this could have suggested that the unextracted TnC was restricted to this region. This possibility is opposed by the findings of Yates et al. (1986) that the extractions were enhanced at long sarcomere lengths. As another possibility, the 25% residual level of TnC for zero tension may be indicative of a threshold level for thin filament activation to initiate cross-bridge cycling, which incidentally also would be consistent with the action of the thin filament as a cooperative unit. | 2018-04-03T04:07:31.059Z | 1987-04-25T00:00:00.000 | {
"year": 1987,
"sha1": "b40d5028a792b1d6989055b8a457443c099613a5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)45648-7",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5558f8f56c1d0fdb3ba3f25b6374d7ae203c4a18",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
236882070 | pes2o/s2orc | v3-fos-license | Clinical Characteristics and In-Hospital Mortality of Cardiac Arrest Survivors in Brazil: A Large Retrospective Multicenter Cohort Study
Supplemental Digital Content is available in the text.
In contrast, the impact of early indicators of systemic severity and organ dysfunction on mortality after return of spontaneous circulation (ROSC) is not completely understood. Furthermore, high heterogeneity of patient populations and care practices (e.g., TTM implementation, palliative care practices) may influence the outcome of CA survivors (7). We hypothesized that a lag exists between modern guideline recommendations for post-CA care practices and their implementation in developing countries such as Brazil, which may be addressed in future initiatives aiming at improving CA outcomes. The aims of our study were to describe patient profiles and survival rates, as well as to identify clinical and process-of-care-related predictors of in-hospital mortality in a large sample of CA patients admitted to Brazilian ICUs.
Design, Setting, and Patients
We performed a retrospective analysis of prospectively collected data from the ORganizational CHaracteriSTics in cRitical cAre study, a multicenter cohort study of critical care organization and outcomes in Brazilian ICUs (8). We retrieved deidentified data from the Epimed Monitor System (Epimed Solutions, Rio de Janeiro, Brazil), a cloud-based registry for ICU quality improvement and benchmarking purposes (9). The local ethics committee at the D'Or Institute for Research and Education (Approval Number 334.835) and the Brazilian National Ethics Committee (CAAE 19687113.8.1001.5249) approved the study and waived the need for informed consent.
All consecutive patients with a CA diagnosis, either as a primary admission diagnosis or occurring after admission, admitted to 92 ICUs from 55 public and private hospitals in Brazil from January 2014 to December 2015 were included. Readmissions to the ICU and patients less than 16 years old were excluded.
Patient Level Covariates and Outcomes
Age, gender, comorbidities (individually and the Charlson Comorbidity Index [CCI]), disease severity scores at admission (Simplified Acute Physiology Score [SAPS] III and Sequential Organ Failure Assessment [SOFA]), and arterial lactate (normal < 2, 2-4, and > 4 mmol/L) on the first day of admission were recorded. Additional variables evaluated included source of admission (operating room, emergency department [ED], cardiac catheterization laboratory, outside hospital transfers), organ support requirements (i.e., vasopressors, mechanical ventilation) during the ICU stay, and use of TTM. Additionally, several process-of-care-related variables were recorded, such as delayed admission to the ICU (defined as remaining in the ED for > 24 hr), the presence of a rapid response team, having a TTM protocol implemented, and being a public or private hospital. CA-specific data, including type of arrest rhythm, time to ROSC, and definite CA location (in-hospital CA [IHCA] vs out-of-hospital CA [OHCA]), were not available. Clinical status and functional outcomes data after hospital discharge were also not available. The primary outcome was in-hospital mortality.
Statistical Analysis
Descriptive statistics were reported as medians with interquartile ranges (IQRs) for continuous data and counts and percentages for categorical data. To compare patient characteristics between survivors and nonsurvivors, we used the chi-square or Fisher exact test and the Wilcoxon rank-sum test for categorical and continuous variables, respectively. We performed univariate analyses of in-hospital mortality using Kaplan-Meier survival curves. We considered age, gender, sources of admission, primary diagnoses, and markers of organ dysfunction as the variables of clinical relevance that were available. Differences among survival curves were evaluated with the log-rank test (significance level: 0.05).
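A minimal sketch of this univariate survival analysis using the survival package in R; the toy data frame stands in for the registry variables and is invented purely for illustration:

```r
library(survival)

# Invented stand-in data: follow-up time (days), death indicator,
# and first-day arterial lactate category
set.seed(1)
df <- data.frame(
  time        = rexp(300, rate = 1 / 20),
  death       = rbinom(300, 1, 0.8),
  lactate_cat = sample(c("<2", "2-4", ">4"), 300, replace = TRUE)
)

km <- survfit(Surv(time, death) ~ lactate_cat, data = df)
plot(km, xlab = "Days from ICU admission", ylab = "Survival probability")
survdiff(Surv(time, death) ~ lactate_cat, data = df)   # log-rank test
```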
To assess the independent association between each predictor and hospital mortality at the patient level, we used a random-effects multivariable Cox proportional hazards model where the hazard is death. Due to the different case-mix among the hospitals, we considered the hospital variable as a source of random variability (random intercept). We estimated the hazard ratio (HR) and its corresponding 95% CI for each variable. We reported the full final model including all nonredundant variables associated with the primary outcome. We also reported a sensitivity analysis with a reduced model that included age, gender, and significant nonredundant covariates, in order to assess the robustness of the estimates found in the main model. Some variables such as age and gender were forced into the models based on their clinical relevance (10), and meaningful interactions were tested and reported. Results from the models are presented as the HR for in-hospital mortality with 95% CIs. We used the tolerance statistic and variance inflation factor to assess multicollinearity within the model.
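A hedged sketch of such a random-intercept Cox model using the coxme package; the variable names and toy data are assumptions, with (1 | hospital) supplying the hospital-level random intercept:

```r
library(coxme)

# Invented patient-level data standing in for the registry
set.seed(2)
df <- data.frame(
  time     = rexp(500, rate = 1 / 15),
  death    = rbinom(500, 1, 0.8),
  age      = rnorm(500, 65, 12),
  female   = rbinom(500, 1, 0.45),
  sofa     = rpois(500, 7),
  hospital = factor(sample(1:20, 500, replace = TRUE))
)

fit <- coxme(Surv(time, death) ~ age + female + sofa + (1 | hospital),
             data = df)
fit                  # fixed effects and hospital random-effect variance
exp(fixef(fit))      # hazard ratios (point estimates) for the fixed effects
```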
We performed all analyses in R 4.0.2 (R Core Team, Vienna, Austria, 2020).
Missing Data
There was no missing information regarding admission diagnosis or hospital outcome. For clinical data at admission and ICU resource use, if the number of missing values was less than 1%, we imputed using the most frequent category; otherwise, we used a multiple imputation technique using chained equations (11).
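The chained-equations step maps onto the mice package in R; a minimal sketch using the package's built-in example data (the number of imputations and the method are placeholders, not the study's exact settings):

```r
library(mice)

# nhanes ships with mice and contains missing values for demonstration
imp <- mice(nhanes, m = 5, method = "pmm", seed = 2021, printFlag = FALSE)
df_complete <- complete(imp, action = 1)   # extract one completed data set
head(df_complete)
```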
Survivors Versus Nonsurvivors
As shown in Table 1, age was higher and female gender was more frequent among nonsurvivors. The CCI was worse in nonsurvivors, whereas individual premorbid conditions and frailty were not significantly different. In univariate analyses, survival over 90 days was progressively worse for patients with increasing SOFA scores and higher lactate levels (Fig. 1, B and D; log-rank p < 0.001 for both). Also, patients arriving in the ICU transferred from the operating room or catheterization laboratory and those with a primary CA admission diagnosis had higher survival rates (Fig. 1A, log-rank p < 0.001, and Fig. 1C, log-rank p = 0.029, respectively). In addition, survival was higher in patients who did not present with temperature dysregulation or hypotension upon ICU arrival. Stratification by age and sex was not associated with survival in univariate analysis (Supplementary Fig. 2, http://links.lww.com/CCX/A706).
Predictors of In-Hospital Mortality
We performed a multivariate random-effects Cox proportional hazards regression analysis to identify characteristics independently associated with in-hospital mortality (Fig. 2). After adjusting for age, gender, and severity, each additional point in the SOFA score increased the hazard of death by 6%. Furthermore, markers of shock and hypoperfusion (systolic blood pressure [SBP] < 100 mm Hg and arterial lactate > 4 mmol/L) were associated with higher mortality.
DISCUSSION
To our knowledge, this is the largest cohort of CA survivors from South America. We found an exceedingly high in-hospital mortality rate in post-CA patients admitted to medical-surgical ICUs in Brazil and negligible rates of TTM in postarrest care. Almost half of patients died within 48 hours of admission, and death was associated with organ dysfunction, temperature dysregulation, source of admission other than the operating room or catheterization laboratory, and CA secondary to other primary admission diagnoses.
Reported mortality rates of CA survivors vary significantly depending on the location and etiology of arrest, type of arrest rhythm, and specific measures of post-CA care (1,(12)(13)(14)(15)(16)(17)(18). Even in the setting of clinical trials of OHCA with cardiac etiology undergoing TTM, survival rates ranged from 27% to 48% (19,20). In a randomized trial of TTM following OHCA and IHCA with nonshockable rhythm, overall reported mortality reached 82% (21).
A recent analysis of OHCA patients from the International CA Registry treated with TTM showed profound differences in the rates of good functional outcomes, which persisted after adjustment for patient-specific factors, with risk-adjusted good outcomes ranging from 20% to 50% (22). High-performing centers reported greater use of temperature targets of 33°C, faster TTM implementation, and higher rates of early cardiac revascularization. In a large mixed IHCA and OHCA population from an observational multicenter European study, in-hospital mortality reached 53% (23). Although data suggest temporal trends toward improved survival for IHCA (24), outcomes remain worse than OHCA in most cohorts. Analyses of large registry data showed highly variable mortality rates across centers, with unadjusted mortality of 82% in the United Kingdom and adjusted mortality rates ranging from 77% to 88% in the United States (13,25). A recent systematic review including 40 studies found an overall 1-year survival rate of 13%, with large between-study variability, and increased survival trends over time (18).
In comparison with previous studies, ours had higher in-hospital mortality than most cohorts (83%), but similar to large registries of real-life IHCA and to a pragmatic TTM clinical trial in nonshockable rhythm. This finding may be partially explained by the characteristics of the population studied. Although data specifically related to CA location were not prospectively collected, our cohort comprised IHCA in its majority, as at least 65% suffered CA while in the ICU. Furthermore, of the 35% who were admitted to the ICU with a CA diagnosis, some may have also experienced the CA in the hospital (ward or ED). Furthermore, our study included only medical-surgical ICUs, excluding dedicated coronary care units. Although cardiac patients from general ICUs were analyzed in our study, this may have introduced a selection bias reducing the prevalence of cardiac etiologies, which are known to have better outcomes. Multivariate analyses showed that the presence of spontaneous normothermia at admission and transfer from the operating room or the catheterization laboratory were associated with better survival. In addition, clinical markers of shock (elevated arterial lactate and hypotension with SBP < 100 mm Hg) were independently associated with increased mortality. Since almost all patients did not receive TTM, we hypothesize that temperatures between 35.5ºC and 36.5ºC in the first hour of ICU stay were probably related to less severely ill patients and may have been protective in comparison with those presenting with hyperthermia. Transfer from the catheterization laboratory suggests a cardiac-related etiology (18,26). Mortality rates of patients with a cardiac-related cause (i.e., ST-elevation myocardial infarction [STEMI]) are lower than those of patients with other etiologies such as acute respiratory failure (27). A recent analysis of the International Cardiac Arrest Registry showed that overall survival was greater in those with STEMI compared with those without (28), whereas a systematic review found that patients with a likely cardiac cause had better survival compared with noncardiac etiology patients (18). Our findings that patients with presumed cardiac etiology (i.e., transfer from catheterization laboratory, CA as primary admission) had better survival reinforce this. Among patients who died, these variables were independently associated with better survival. Additionally, early hypotension and markers of hypoperfusion (i.e., elevated lactate) have previously been associated with worse mortality (29, 30). In our cohort, mortality was more likely if each of these factors were present. Thus, correct identification of these characteristics may identify patients who would benefit from a more aggressive and structured care pathway of management. In a systematic review, the implementation of structured care pathways that included early coronary intervention, TTM, and standardized post-CA care was associated with a higher likelihood of favorable functional outcome compared with standard care (1).
Finally, we found that delayed admission to the ICU modified the association of probable OHCA and better survival. Patients admitted with a primary diagnosis of CA and who remained in the ED for greater than 24 hours had similar mortality to those admitted with another primary diagnosis and arrested in the ICU (all IHCA). Time from CA to ICU admission has been linked to worse survival after CA (12, 31). However, these studies revealed differences of a few hours when comparing patients with favorable and unfavorable outcomes. In our cohort, 16% of patients remained in ED for at least 1 day despite having survived a CA. This may be due to reduced availability of ICU beds and/or underestimation of patients' severity. Our results underline the absolute necessity of access to an ICU bed for these patients, better screening, and streamlined ICU transfer. Additionally, unlike the majority of published CA cohorts, only 6% of patients in our study underwent limitation or withdrawal-of-life support. This finding is probably related to cultural and legal issues in Brazil and raises concerns and opportunities. With such an elevated overall mortality, we hypothesize that many patients may have received futile treatment with prolonged length of stay and very little chance of functional recovery (32). In contrast, observational studies from developed countries showed that early withdrawal-of-life support is common and associated with excess mortality (33, 34). As early withdrawal-of-life support is not part of standard of care in Brazil (35), studies evaluating late prognostication in developing countries become feasible and could improve our understanding of the natural history of recovery after CA.
Our study has significant limitations. First, the database analyzed lacked specific arrest-related data (e.g., arrest location, initial rhythm [shockable vs nonshockable], time to ROSC and bystander cardiopulmonary resuscitation), which precluded a more granular analysis. Data collection was not diagnosis specific; however, we could identify patients with ventricular fibrillation and estimate IHCA using surrogates such as admission diagnosis and time of CA during hospitalization. Second, our database did not provide data on percutaneous coronary intervention, which has been associated with better outcomes. However, we were able to show that patients coming from the catheterization laboratory had lower mortality than those who did not. Third, long-term functional outcomes were not available. In-hospital mortality, although, is an important clinical outcome after CA and may be used as a target measure of improved management and processes of care. Fourth, we used routinely collected data to conduct this analysis at scale, and thus some degree of information was unavailable or missing. However, we used robust imputation techniques to account for the missing data. Finally, data on TTM and palliative care implementation were not mandatory in the registry. This may have underestimated the actual use of both measures. However, due to the limited availability of TTM devices and implemented TTM protocols in Brazil, we hypothesize the data reflected the reality of postarrest care (7).
CONCLUSIONS
We demonstrated in a large cohort of CA survivors that in-hospital mortality is elevated and that TTM implementation is negligible in Brazilian ICUs. Furthermore, nearly half of nonsurvivors died within 48 hours of ICU admission with severe hemodynamic compromise and organ dysfunction. These findings unveil great opportunities to improve post-CA care in developing countries. Future studies should focus on the implementation of structured pathways including prompt ICU transfer and TTM in comatose patients. Implementing a combination of these evidence-based measures may positively impact CA outcomes in low- and middle-income countries. | 2021-08-04T05:33:05.536Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "04714de89c1455b69ee881e70638de9c0bf723a5",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/cce.0000000000000479",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "2d0bfde85ff53b6ebcfc2520ef478cd85f114c85",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
89359227 | pes2o/s2orc | v3-fos-license | Alleviatory activities in mycorrhizal tobacco plants subjected to increasing chloride in irrigation water
The effects of the presence and absence of the arbuscular mycorrhizal (AM+ and AM-) fungus (AMF) Glomus intraradices on agronomic and chemical characteristics of field-grown tobacco (Nicotiana tabacum L.) Virginia type (cv. K-326) plants exposed to varying concentrations of chloride, 10, 40, 70 and 100 mg Cl L-1 (C1-C4), were studied over two growing seasons (2012-2013). Mycorrhizal plants had significantly higher uptake of nutrients in shoots and number of leaves regardless of the intensity of chloride stress. The cured leaf yields of AM+ plants under C2-C4 chloride-stressed conditions were higher than those of AM- plants. Leaf chloride content increased in line with the increase of chloride level, while AMF-colonised plants maintained low Cl content. AM+ plants produced tobacco leaves that contained significantly higher quantities of nicotine than AM- plants. AM inoculation ameliorated the chloride stress to some extent. Antioxidant enzymes like superoxide dismutase, catalase, ascorbate peroxidase, and glutathione reductase as well as non-enzymatic antioxidants (ascorbic acid and glutathione) also exhibited great variation with chloride treatment. Chloride stress caused great alterations in the endogenous levels of growth hormones, with abscisic acid showing an increment. AMF-inoculated plants maintained higher levels of growth hormones and also allayed the negative impact of chloride. The level of 40 mg L-1 in combination with arbuscular mycorrhizal inoculation can be considered the acceptable threshold to avoid adverse effects on Virginia tobacco.
Introduction
Soil salinity problems have arisen in recent years in the coastal plains of Mediterranean areas, caused mainly by the poor quality of the irrigation water, and they increase noticeably during the dry season (Selvakumar et al., 2014; Sifola and Postiglione, 2002).
Excessive quantities of chloride in the cured leaf reduce the rate of burn and cause certain adverse effects such as increased hygroscopicity, dinginess, uneven colours and undesirable odors in cured tobacco leaves (Karaivazoglou et al., 2006). Salt stress causes physiological drought in plants, imbalance in nutrient composition and excessive toxicity due to Na and Cl ions, thereby leading to reduction in the osmotic potential of plants and disruption of cell organelles and their metabolism. These effects ultimately impair plant growth and reduce yield.
Arbuscular mycorrhizal fungi (AMF) are associated with the roots of over 80% of terrestrial plant species (Smith and Read, 1997), including halophytes, hydrophytes and xerophytes. AMF have been shown by many researchers to promote plant growth and salinity tolerance; they do so through various mechanisms, such as enhancing nutrient acquisition (Abeer et al., 2015), producing plant growth hormones, improving rhizospheric and soil conditions (Selvakumar et al., 2014), increasing root hydraulic conductivity, enhancing water uptake through extraradical hyphae, promoting osmotic adjustment that maintains turgor, accumulating antioxidant compounds (Colella et al., 2014), and altering the physiological and biochemical properties of the host (Gamalero et al., 2010). In addition, AMF can improve host physiological processes such as the water absorption capacity of plants by increasing root hydraulic conductivity and favourably adjusting the osmotic balance and composition of carbohydrates (Kumar et al., 2015; Ruiz-Lozano, 2003). This may lead to increased plant growth and subsequent dilution of toxic ion effects (Daei et al., 2009). These benefits have prompted AMF to be considered suitable candidates for bio-amelioration of saline soils.
To date, no information is available about the interaction between AM fungi and high chloride concentrations in irrigation water on the agronomical and physiological responses of tobacco. Therefore, the purpose of this research was to determine the effect of different levels of chloride stress and AM inoculation on various morpho-biochemical parameters of tobacco. This study may help to further the understanding of salt tolerance mechanisms in AM plants.
Materials and methods

The soil was kept fallow for a year to reduce indigenous mycorrhizal fungi and to decompose the root fragments of the previous crop in order to eliminate propagules. Since the number of mycorrhizal spore propagules extracted from the native soil was extremely low (1-2 per kg), no attempt was made to fumigate the soil. A vermiculite-based mycorrhizal inoculum carrying the arbuscular mycorrhizal fungus Glomus intraradices Schenck & Smith (in recent years known as Rhizophagus irregularis; Schußler and Walker, 2010), which is known to colonize the roots of N. tabacum L. (Cosme and Wurst, 2013), was applied in a tobacco nursery at a rate of 100 g m-2 just prior to sowing seeds, on sowing lines marked at a distance of 5 cm apart. The inoculum was originally cultured in maize roots; heavily colonized roots (100 g) carrying the propagules (spores, infected roots, soil) were diluted in sterile vermiculite (1 kg). The mycorrhizal and non-mycorrhizal tobacco nurseries were maintained separately. The fertility status of the nursery soil was similar to that of the experimental soil. The inoculated and non-inoculated tobacco plants were irrigated once every 2-3 days. Tobacco roots were tested for mycorrhizal colonization at the end of 4 weeks after inoculation (28 days after sowing). After establishment of the symbiosis (approximately 40% root colonization), mycorrhizal and non-mycorrhizal tobacco plants were transplanted into the main field at a spacing of 0.5 m between plants and 1 m between rows, during early June of both years (2012 and 2013). On average, 2 plants per square meter were maintained in each plot measuring 10 m length × 6 m width (60 m2). All experimental plots were surrounded by earth dikes, and a distance of 3 m between plots was left bare in order to prevent the lateral spread of water. The tillage practices, including cultivation and disking, were the conventional practices common in the region.
The plants were drip irrigated with four levels of chloride: 10, 40, 70 and 100 mg Cl L-1 (C1-C4). Chloride was added to the water as CaCl2. Since a concentration of 10 mg Cl L-1 in water is considered very low and without adverse effects on tobacco (Karaivazoglou et al., 2006), this chloride concentration was taken as the control.
Before transplanting, 53 kg ha-1 of P and 125 kg ha-1 of K were added to the top 0.2 m of soil. N fertilizer (120 kg ha-1) was distributed as follows: 50% at transplanting as ammonium sulphate (21% N) and 50% as ammonium nitrate (26% N) in side dressing, the latter split into two applications, one at seedling establishment and one at the beginning of rapid stem elongation.
During the first half of August, when approximately 50% of the plants per plot were flowering, the plants were topped at a height of 24-25 leaves per plant. An average of about 64 plants per treatment was harvested from the central part of each plot (32 m2) to determine yield.
Rainfall and temperatures during the cultivation period (May-September) in the two experimental years are shown in Table 1. Rainfall and temperature during the two years of experiments were similar and in accordance with the regional average. The same number (10) of irrigations (with the same amounts of chloride) was applied in each year.
A 2×4 factorial randomized block design, with two mycorrhizal treatments (with AM, AM+, or without AM, AM-) and four chloride levels in irrigation water (C1-C4), replicated four times, was used to assess the agronomic and chemical properties of Virginia tobacco (cv. K-326; the selected cultivar is among the highest-quality commercial cultivars in the north of Iran).
Photosynthesis measurement
Carbon exchange rate (CER), transpiration rate (E) and stomatal conductance (gs) were measured with an infrared gas analyser (Li-6400; LI-COR, Lincoln, NE, USA) on four replications per treatment, from 9:30 to 10:40 am on a sunny day before harvest. Measurements were recorded when the total coefficient of variation was less than 0.5%.
Growth measurements and biochemical analysis
Ten plants were randomly selected from each experimental plot in each replication at the flowering stage and the following parameters were recorded: dry shoot mass, dry root mass, number of leaves, percentage of mycorrhizal root colonization, pigments, plant growth regulators, and enzymatic and non-enzymatic antioxidants. Water use efficiency (WUE) was calculated by dividing the total leaf yield (kg ha-1) by the quantity of water consumed inclusive of precipitation (mm), as indicated by Boyer (1995), and mycorrhizal dependency was calculated at each chloride level. Mycorrhizal dependency (MD), or the response to mycorrhizal colonization, was calculated for plants in each chloride treatment using the following formula (Gerdemann, 1975): MD (%) = 100 × (dry mass of AM+ plant − dry mass of AM- plant) / dry mass of AM+ plant. The AM fungi spore count in the native field soil was minimal (~3 spores 100 g-1 air-dried soil). Root colonization by AM was determined on 1 g root samples from each experimental unit, prepared and stained according to the method of Phillips and Hayman (1970), and colonization was scored using the gridline-intersect method (Giovannetti and Mosse, 1980).
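As a minimal illustration of these two indices (a sketch only; the function and variable names are ours, and the MD expression assumes the standard Gerdemann (1975) formulation cited above):

    def water_use_efficiency(leaf_yield_kg_ha, water_mm):
        """WUE sensu Boyer (1995): cured-leaf yield per unit of water
        consumed (irrigation plus precipitation, in mm)."""
        return leaf_yield_kg_ha / water_mm

    def mycorrhizal_dependency(dry_mass_am, dry_mass_non_am):
        """MD (%): relative growth gain attributable to AMF colonization."""
        return 100.0 * (dry_mass_am - dry_mass_non_am) / dry_mass_am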
Each plant was extracted from the soil by digging a trench around it, 0.3 m2 in area and 0.6 m deep, and removing it as a block. The roots were washed well to remove all traces of soil, and the plants were then separated into leaves, stalks and roots. The fresh weight of the sampled plant parts was recorded, and the samples were then dried to a constant weight in an oven at 70°C, whereupon the dry weight was recorded.
Leaves from control and Cl-stressed plants were excised at flowering stage to measure relative water content (RWC) and osmotic potential (Ψs) according to Turner (1981) and Martinez-Ballesta et al. (2004), respectively.
Extraction and quantification of pigments and plant growth regulators
Indole acetic acid (IAA) and abscisic acid (ABA) were extracted and purified as described by Kusaba et al. (1998). The method described by Lee et al. (1998) was followed for the extraction and estimation of gibberellic acid (GA3) by gas chromatography-mass spectrometry (GC-MS). Photosynthetic pigments (chlorophyll a, chlorophyll b and carotenoids) in the leaves were determined by the method described by Moran (1982).
Free proline content and electrolyte leakage (EL) in plant material were determined by the methods described by Bates et al. (1973) and Dionisio-Sese and Tobita (1998), respectively.
Cured leaf yield and chemical composition
Leaf yield was determined from an average of 64 plants per plot harvested from the central part of each plot. All plants were harvested by hand, in 5 primings, by removing 4-5 leaves each time at weekly intervals starting 6 weeks after transplanting. The harvested leaves were cured in a typical Virginia tobacco curing oven. The yield of cured leaves was determined at a standard moisture content of 19% for each of the four stalk positions.
Leaf chloride content was analysed using the standard AOAC (1997) method. Total N was analysed employing the Kjeldahl procedure (Bremmer and Mulvaney, 1982). Nicotine and reducing sugars were measured using CORESTA recommended methods No. 35 (CORESTA, 1994a) and No. 38 (CORESTA, 1994b), respectively. In addition, K, P, Ca and Mg were determined: K by flame emission spectroscopy, P by the molybdenum blue-ascorbic acid method (Olsen and Sommers, 1982), and Ca and Mg by atomic absorption spectroscopy.
Statistical analysis
Yield and the other agronomic and chemical traits were subjected to analysis of variance (ANOVA). Since the same number (10) of irrigations (with the same amounts of chloride) was applied in each year, the response to chloride was relatively similar from year to year; Bartlett's test was therefore used to check whether a combined analysis of the two growing seasons was permissible. Bartlett's X2 test showed that combining the data from both years was acceptable. In the analysis that follows, all values given are averages of the combined data for the two years. Means were compared using the least significant difference test at the 5% level.
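A minimal sketch of this analysis pipeline (illustrative only; the input file and its column names are hypothetical placeholders, not the authors' actual data):

    import pandas as pd
    import scipy.stats as st
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # one row per plot, with hypothetical columns:
    # 'year', 'am' (AM+/AM-), 'chloride' (C1-C4), 'yield_kg_ha'
    df = pd.read_csv('plots.csv')

    y2012 = df.loc[df.year == 2012, 'yield_kg_ha']
    y2013 = df.loc[df.year == 2013, 'yield_kg_ha']
    chi2, p = st.bartlett(y2012, y2013)   # homogeneity of variances across years
    if p > 0.05:                          # combining the two seasons is acceptable
        model = smf.ols('yield_kg_ha ~ C(am) * C(chloride)', data=df).fit()
        print(anova_lm(model, typ=2))     # 2x4 factorial ANOVA table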
Mycorrhizal colonisation and mycorrhizal dependency
Twenty-eight-day-old mycorrhiza-inoculated tobacco seedlings had 40% colonisation at the time of transplanting, while non-inoculated seedlings registered only 1-2% colonisation. After a month of exposure to varying chloride levels in the main field, none of the tobacco plants in the non-inoculated treatments were colonised by AM when examined during the experiment (Table 2). The highest AM root colonisation was observed in the C1 treatment, and it decreased significantly with increasing chloride levels (Table 2). Previous research has shown that salinity negatively affects not only the host plant but also the AMF: it can hamper colonisation capacity, spore germination, growth of fungal hyphae and hyphal spreading after initial colonisation (Kumar et al., 2010, 2015; Miransari, 2010).
The data presented in Table 2 reveal that the mycorrhizal dependency (MD) of tobacco plants was significantly reduced by increased chloride levels. In AM tobacco, the highest MD values belonged to C1 in comparison with the other chloride levels. The decrease in MD at higher chloride levels (C2-C4) could be due to the inhibitory effect of chloride on AM fungal growth and spore density (Kumar et al., 2015).
Plant growth parameters
Increasing chloride levels significantly decreased all the growth attributes, such as number of leaves, dry shoot mass and dry root mass, of both AM- and AM+ tobacco plants (Table 2). However, in all growth parameters, AM+ plants were higher than AM- plants regardless of the chloride level. At the highest chloride level, AM+ plants were comparable to C1 AM- plants.
In Vigna unguiculata L., Abeer et al. (2015) demonstrated that length as well as fresh and dry biomass of shoot and root declined with increasing salinity. Exposure to stress reduces hydraulic conductivity and disturbs cell wall extension, causing a considerable decline in the morphological attributes of plants (Selvakumar et al., 2014). An increased root length and density, or an altered root system morphology that enhances soil exploration and water extraction, have been hypothesised as potential mechanisms for the improved stress resistance of mycorrhized plants (Gamalero et al., 2010; Candido et al., 2013, 2015).
Cured leaf yield and water use efficiency
The cured leaf yield and water use efficiency (WUE) of tobacco decreased significantly under increasing concentrations of chloride. Conversely, mycorrhizal inoculation enhanced tobacco leaf production and WUE regardless of the chloride level (Table 2). Mycorrhizal colonisation improved growth, water status, nutrient content, yield and quality of tobacco leaf when plants were exposed to varying chloride levels. The 2-year field study suggests that AM inoculation improves the salinity tolerance of tobacco plants as a secondary consequence of the enhanced nutritional status of the host plant, especially of N and P.
Tobacco leaf yield decreased with salinity, in agreement with many studies on tobacco (Karaivazoglou et al., 2005; Sifola and Postiglione, 2002). Reduced yield under chloride stress treatments could be attributed to CaCl2 increasing the osmotic potential of the solution and the activity of Cl in the root zone (Karaivazoglou et al., 2005, 2006). Those changes may have affected plant growth and, consequently, yield through their effects on plant-water relationships and on nutritional imbalances (Abeer et al., 2015). Direct effects of salinity include the accumulation of salt in old leaves, which may hasten leaf death. This prevents the supply of assimilates or hormones to the growing regions, which eventually affects plant growth. The improved nutritional status and relative water content brought about by mycorrhizal colonisation would have alleviated salinity impacts and promoted tobacco leaf production under varying chloride levels. Because mycorrhizal treatments consistently increased leaf yields under varying chloride levels, the WUE of AM plants was much higher than that of control plants.
In this study, mycorrhizal tobacco plants had higher WUE values compared with non-mycorrhizal plants, as indicated by lower water loss and higher RWC, probably because of improvement of water absorption capacity by AM fungus.
Our results also confirmed that the water status (WUE and RWC) of tobacco plants was positively correlated with photosynthetic activity. Mycorrhizal plants showed higher levels of photosynthetic activity, indicating that AM symbiosis had a positive impact on the mass flow of water to the leaf surface and increased water absorption by extraradical hyphae (Zhu et al., 2011). On the other hand, AM-inoculated plants had a better water status, which allowed the host plants to sustain higher stomatal conductance and transpiration rates (Table 3), consequently reducing leaf epidermal resistance and improving photosynthetic activity.
Leaf mineral composition
The effect of chloride on the content of K, N, Mg and P was not significant, although a slight decreasing trend was recorded with increasing chloride in the irrigation water (Table 2). Colonisation by AMF increased these mineral ions as compared to non-AMF control plants and also reduced the impact of chloride stress to a marked extent (Table 2). It is well known that salinity causes nutrient imbalance in plants, and AMF help plants take up more nutrients. The chloride ion has an antagonistic relationship with several other ions, such as K. In the present study, the higher concentration of chloride and the lowered concentrations of other ions such as K, P and Mg under increasing chloride concentrations are in concurrence with the findings of Bilgili et al. (2011) for canola and Abeer et al. (2015) for cowpea. AMF not only reduced the deleterious effect of excess chloride by reducing its uptake, but also caused a significant increase in the uptake of other important mineral elements such as K, P and Mg. AMF colonisation in wheat significantly increased the shoot concentrations of P, K and Zn, whereas it decreased Na and Cl concentrations (Daei et al., 2009).
Regression analysis between chloride levels in the irrigation water and chloride concentrations in the leaves showed that leaf chloride concentrations responded linearly to the rate of irrigation-water chloride (Figure 1); the rate of linear increase of leaf chloride concentration was higher in AM- plants than in AM+ plants. The chloride concentration in leaves can thus be predicted from the chloride level in the irrigation water and the arbuscular mycorrhizal inoculation, using the equations shown in Figure 1. It is considered that an acceptable Virginia tobacco should contain less than 1% chloride; leaves with a higher chloride concentration are of poor quality, with a reduced burning rate (Sifola and Postiglione, 2002). Based on the above results, it is preferable to use irrigation water with a chloride concentration below 25 mg L-1, since at this level the chloride concentration in the leaves remained around 1%. On the other hand, a chloride level of 40 mg L-1 in the irrigation water, in combination with AMF, can be considered the upper threshold limit. At such high concentrations the use of AMF is recommended, because it keeps the leaf chloride concentrations around the acceptable level (Figure 1).
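The threshold reasoning above can be sketched in a few lines: fit the linear relation between water chloride and leaf chloride, then invert it at the 1% quality limit (an illustration only; the function and argument names are ours, and real per-plot measurements would replace the arguments):

    import numpy as np

    def water_cl_threshold(cl_water, leaf_cl, target_leaf_cl=1.0):
        """Fit leaf Cl (%) ~ irrigation-water Cl (mg L-1) by ordinary
        least squares and return the water chloride level at which the
        predicted leaf Cl reaches the target (1% for acceptable quality)."""
        slope, intercept = np.polyfit(cl_water, leaf_cl, deg=1)
        return (target_leaf_cl - intercept) / slope

    # usage, separately per inoculation treatment (AM+ or AM-):
    # threshold = water_cl_threshold(np.array([10, 40, 70, 100]), measured_leaf_cl)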
Nicotine and reducing sugar
In the present study, reducing sugar content was significantly enhanced by an increase in chloride level from C1 to C4, regardless of mycorrhizal treatment (Table 2). AM+ plants had a considerably higher amount of reducing sugar than AM- plants (Table 2). Increasing chloride levels enhanced the soluble carbohydrate concentration in tobacco regardless of mycorrhizal treatment, mainly because carbohydrates play a crucial role in maintaining osmotic balance in plants exposed to salt stress and hence protect the plant from adverse salt effects (Datta and Kulkarni, 2014). Under increasing chloride levels, AMF colonisation in tobacco increased reducing sugar accumulation, which is required for better osmoprotection; this kind of result is supported by Datta and Kulkarni (2014).
The effect of chloride on the nicotine concentration of leaves was not significant and showed an inconsistent trend (Table 2). The nicotine concentration in the leaves was enhanced by AMF regardless of the chloride level (Table 2). Similarly inconsistent results and a very slight effect of chloride on nicotine content in Virginia, Burley and Maryland tobacco have been reported by others (Collins and Hawks, 1993).
In our study, the direction of microbial effects on nicotine concentrations in the leaves was often associated with the direction of their effects on N concentration, and was independent of leaf yield. AMF are known to incorporate inorganic N outside the roots into amino acids and to translocate it from the extraradical to the intraradical mycelium as arginine (Cosme and Wurst, 2013), which is a precursor of nicotine (Fritz et al., 2006). This may be the reason why, in our set-up, the positive effects of AMF on leaf N concentration were associated with the greatest increases in nicotine.
Photosynthetic activity, chlorophyll contents and carotenoids
Increasing concentrations of chloride in the irrigation water markedly decreased the carbon exchange rate (CER) and E in AM- plants (Table 4). Chloride stress decreased gs in all plants. Mycorrhizal plants had higher CER, E and gs than AM- plants under all chloride levels (Table 4).
A significant reduction in photosynthesis was previously found in salt-stressed tobacco (Sifola and Postiglione, 2002). In the present study, increasing concentrations of chloride also reduced CER, E and gs in both AM- and AM+ plants. However, mycorrhizal plants had significantly higher CER, E and gs than non-mycorrhizal plants under varying chloride levels (Table 4). A number of studies have demonstrated that, during abiotic stress, mycorrhizal plants often maintain higher gas exchange rates than non-mycorrhizal plants (Ruiz-Lozano, 2003). These positive effects may also have accounted for the enhanced plant growth of AM-colonised plants, most probably by enhancing CO2 fixation under salt stress. Higher values of photosynthetic pigments were likewise observed in mycorrhizal plants (Table 4). The stress of increasing chloride concentrations significantly decreased the concentrations of chlorophyll a, chlorophyll b and carotenoids (Table 4). The reduction in chlorophyll contents with increasing chloride level is in line with the findings of Karaivazoglou et al. (2005), Kumar et al. (2015) and Abeer et al. (2015), who reported a considerable decline in the chlorophyll contents of Nicotiana tabacum, Jatropha curcas and Vigna unguiculata plants exposed to salinity stress. AMF inoculation increases chlorophyll content because of its direct influence on the uptake of Mg, which is an important component of the chlorophyll pigment (Abeer et al., 2015).
Membrane stability index and proline content
Electrolyte leakage from the cellular membranes of tobacco plants increased considerably under increasing chloride levels (Table 4). However, application of AMF significantly checked the electrolyte leakage in the AM-inoculated plants exposed to chloride stress (Table 4). Proline accumulation increased in AMF-inoculated as well as chloride-stressed plants as compared to control (Table 2); the increase was more conspicuous in chloride-stressed plants, and inoculation with AMF in chloride-stressed plants further enhanced the accumulation of proline (Table 2). The greater decrease of electrolyte leakage in AM-colonised tobacco than in non-AMF plants seems to be related to a high accumulation of proline in the shoots, as proline may act as an osmolyte and a protein stabiliser.
Membrane permeability, usually appraised as electrolyte leakage, is a key indicator of membrane integrity in plants subjected to stress conditions (Datta and Kulkarni, 2014). Mycorrhizal colonisation lowered electrolyte leakage in tobacco plants. Hence it can be suggested that mycorrhizal associations in tobacco helped to improve membrane structure and its stability under chloride stress conditions. Similar findings were observed when mycorrhizal Acacia arabica and Lycopersicon esculentum plants were grown under saline conditions and showed less membrane permeability than non-mycorrhizal plants (Datta and Kulkarni, 2014; He et al., 2007).
In many plants, various solutes such as proline have been shown to accumulate during salinity. Their accumulation might be important in regulating cytosolic pH and the NAD+/NADH ratio, stabilising proteins and scavenging hydroxyl radicals, thereby protecting cells from the adverse effects of ROS (Abeer et al., 2015). Similar results were reported by Kumar et al. (2015), who observed that the proline level increases in stressed AM-colonised Jatropha curcas plants.
Osmotic potential and relative water content
The leaf relative water content (RWC) and osmotic potential of AM+ and AM- tobacco plants were altered significantly (Table 4). The RWC decreased progressively with increasing chloride levels and, consequently, the osmotic potential also decreased in order to maintain turgor potential values, which even increased with chloride concentration (Table 4). However, chloride-stressed AM+ tobacco plants maintained a higher RWC regardless of the chloride concentration in the irrigation water and were comparable to C1 AM- plants. Mycorrhizal inoculation increased the osmotic potential and consequently decreased the turgor potential of AM+ tobacco plants.
Improved water uptake as a result of AMF is possibly due to the direct influence of AMF hyphae on root morphology and to the improved N and P nutritional status (Wu et al., 2014). The P content of mycorrhizal tobacco plants was consistently higher than that of non-mycorrhizal plants regardless of the intensity of chloride stress. A close relationship between P content and salt tolerance has been reported earlier (Selvakumar et al., 2014).
Antioxidant enzymes and non-enzymatic activities
Results regarding the activities of antioxidant enzymes are depicted in Table 3. Chloride stress caused a significant increase in the activities of the antioxidant enzymes studied, and the increase was consistent with the increase in chloride concentration (Table 3). AMF alone increased the activities of SOD, CAT, GR and APX (Table 3). In combination with the chloride treatment, AMF inoculation further enhanced the activities of the antioxidant enzymes studied.
Results pertaining to the combined effect of chloride and AMF on ASA, GSSG and GSH are depicted in Table 3. Increasing chloride levels reduced the ASA content, whereas GSSG and GSH increased. However, inoculation with AMF caused a considerable increase in these attributes (Table 3), and AMF inoculation in chloride-stressed plants further enhanced the contents of GSSG and GSH (Table 3).
Antioxidant enzymes play an important role in the scavenging of reactive oxygen species and hence in averting the damaging effects of oxidative stress on several sensitive molecules such as proteins, nucleic acids and lipids. In our results, the increase in activities of SOD, CAT, GR and APX due to chloride stress is in concurrence with the findings of Abd_Allah et al. (2015) for Sesbania sesban. SOD is involved in the dismutation of superoxide radicals into oxygen and hydrogen peroxide (Mittler, 2002). The H2O2 produced is converted into water and oxygen by either CAT or APX (Mittler, 2002). The increased activities of antioxidant enzymes in AMF plants support the findings of Abd_Allah et al. (2015) for Sesbania sesban and Latef and Chaoxing (2011) for tomato. GR, APX, reduced glutathione (GSH), oxidized glutathione (GSSG) and ascorbic acid (ASA) are important components of the ascorbate-glutathione pathway, which is actively involved in the scavenging of ROS (Mittler, 2002). The ascorbate-glutathione cycle involves a series of redox reactions in which the net electron flow is from NADPH to H2O2, resulting in the conversion of H2O2 into water. Increased activity of GR helps in the enhanced production of reduced glutathione. Reduced glutathione, produced from the reduction of oxidised glutathione, acts as an electron donor during the conversion of dehydroascorbate (DHA) into ASA, and ASA acts as an electron donor in the conversion of H2O2 into water and oxygen (Mittler, 2002). The decrease in ASA content and increase in GSH found in our study are in concurrence with the findings of Abd_Allah et al. (2015) for Sesbania sesban.
Endogenous phytohormone
A drastic decline was observed in the endogenous levels of IAA and GA3 due to chloride stress (Table 5). Inoculation with AMF not only increased the growth hormone levels but also ameliorated the chloride-induced deleterious effects (Table 5). However, ABA levels decreased with AMF inoculation, while they increased under chloride stress conditions (Table 5).
Chloride-stressed tobacco plants showed a drastic decline in the endogenous synthesis of IAA and GA3, whereas AMF-inoculated plants showed higher contents of these growth regulators. Endophytic fungi cause an increase in the endogenous levels of IAA and GA (Abd_Allah et al., 2015). ABA plays an important role in plant responses to abiotic stresses, including salinity. As expected, we found that ABA levels in both non-colonised and AM plants increased as a consequence of salinity. Previous studies showed that AM symbiosis can alter the levels of ABA in the host plant and that, under salinity stress, the levels of ABA are lower in AM-colonised than in non-colonised plants (Aroca et al., 2013), as we found here under non-saline conditions. These results, together with those of other physiological parameters, support the view that AM symbiosis improves plant fitness. Babu et al. (2012) demonstrated that salinity-stressed tomato plants showed an increment in the concentrations of ABA and IAA, leading to better adaptation of tomato to salt stress. Hamayun et al. (2010) observed that soybean cultivars subjected to salt stress exhibited an increase in ABA and a decrease in GA3 synthesis.
Conclusions
It is shown here that AM symbiosis alleviates the negative effects of salt stress in tobacco plants by altering hormonal production and affecting plant physiology in the host, allowing plants to grow better under these unfavourable conditions. The results confirm the potential of arbuscular mycorrhizas in protecting host plants against unfavourable environmental conditions and pave the way for applying AM symbiosis in sustainable agriculture under Mediterranean conditions. Due to the variability of plant response to mycorrhizal treatments, however, additional multi-year studies on a wider range of tobacco genotypes are needed. A further indication emerging from this study is that, under the climatic conditions of Northern Iran, the optimum chloride level in irrigation water for acceptable Virginia tobacco is below 25 mg L-1, whereas a level of up to 40 mg L-1 in combination with AMF can be considered the upper threshold limit.
Figure 1. Correlation between levels of irrigation-water chloride and chloride concentrations of leaves in the presence and absence of arbuscular mycorrhizal fungi (average of two growing seasons, 2012-2013).
Table 1. Monthly distribution of the number of irrigations, irrigation volumes, rainfall and temperature in the two years of study. Columns: Month; Irrigation (n), Volume (mm), Rainfall (mm) and Mean temperature (°C), each reported separately for 2012 and 2013. [Table body not recovered.]
Irrigation water supplied at transplanting (average of 22 mm) was not included. | 2019-04-01T13:14:36.179Z | 2016-08-30T00:00:00.000 | {
"year": 2016,
"sha1": "800e40f157672dfd7c9169f77f7a870d32aa9c05",
"oa_license": "CCBYNC",
"oa_url": "https://agronomy.it/index.php/agro/article/download/792/833",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "800e40f157672dfd7c9169f77f7a870d32aa9c05",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
250429802 | pes2o/s2orc | v3-fos-license | Disentangling Hierarchical and Sequential Computations during Sentence Processing
Sentences in natural language have a hierarchical structure that can be described in terms of nested trees. To compose sentence meaning, the human brain needs to link successive words into complex syntactic structures. However, such hierarchical-structure processing could co-exist with a simpler, shallower, and perhaps evolutionarily older mechanism for local, word-by-word sequential processing. Indeed, classic work from psycholinguistics suggests the existence of such non-hierarchical processing, which can interfere with hierarchical processing and lead to sentence-processing errors in humans. However, such interference can arise for two non-mutually-exclusive reasons: interference between words in working memory, or interference between local versus long-distance word-prediction signals. Teasing apart these two possibilities is difficult based on behavioral data alone. Here, we conducted a magnetoencephalography experiment to study hierarchical vs. sequential computations during sentence processing in the human brain. We studied whether the two processes have distinct neural signatures and whether the sequential interference observed behaviorally is due to memory-based interference or to competing word-prediction signals. Our results show (1) a large dominance of hierarchical processing in the human brain compared to sequential processing, and (2) neural evidence for interference between words in memory, but no evidence for competing prediction signals. Our study shows that once words enter the language system, computations are dominated by structure-based processing and largely robust to sequential effects; and that even when behavioral interference occurs, it need not indicate the existence of a shallow, local language prediction system.
Introduction
A central question in linguistics pertains to the nature of linguistic structures and their processing.According to generative linguistics, language processing is enabled via a humanspecific ability to mentally transform and represent sequentially incoming words into a hierarchical tree structure (1)(2)(3)(4).Others, however, claim that sequential structure has considerable explanatory power and that hierarchical processing is often not involved in language use (5).According to this view, language ability does not rely on an innate competence to process hierarchical structures, and it is rather acquired through a process in which children build up a repertoire of increasingly complex constructions by gradually abstracting from sequential transition probabilities (6,7).
The ubiquity of hierarchical structures in language has been shown, however, in numerous analyses of linguistic phenomena such as binding (8), question construction (9), language acquisition (10, 11), prosody (12) and phonology (13, 14). Neuroimaging studies have provided further support for the hierarchical view by showing that neural activity in specific regions of the human brain is sensitive to the hierarchical structure of sentences (e.g., 15-22). Together, these studies provide largely incontrovertible evidence for nested tree-like representations in humans, making the hierarchical view currently dominant in the field. However, hierarchical-structure processing could co-exist with a simpler, shallower, and perhaps evolutionarily older mechanism for local, word-by-word predictions. Indeed, the co-existence of two such distinct mechanisms has already been identified in the processing of sequences of non-linguistic items (23-29). Specifically, using the Local-Global Paradigm, a variant of the auditory oddball paradigm, Bekinschtein et al. (23) showed that transition-based (local) processing can be distinguished from chunking (global) and from possibly higher levels of processing, each eliciting distinct violation signals in different brain areas and at different timings (30-35).
The co-existence of two such distinct mechanisms was also identified in artificial neural language models.Lakretz et al. (36,37) showed that, during training, neural language models develop two types of mechanisms to process word dependencies in language.One mechanism was shown to be sensitive to the latent hierarchical syntactic structure of sentences, and therefore capable of computing long-distance dependencies, whereas the other mechanism was only sensitive to local sequential transitions between words and therefore structure agnostic.
Finally, there is also a long history in psycholinguistics pointing to the existence of two types of cognitive mechanisms involved in the processing of long-range agreement. Much behavioral work from classic psycholinguistics has shown that humans succumb, at least in part, to local effects that interfere with higher, hierarchical processing of sentences. In a classic study, Bock and Miller (38) showed that human participants make many long-range agreement errors on sentences such as (1):

(1) The boy near the girls likes climbing
    Det N1 P Det N2 V N

In this example, the main ('boy') and embedded ('girls') nouns have different numbers (one is singular, the other plural). Such incongruent sentences were found to be more error-prone than sentences in which the two nouns are congruent (e.g., both singular). This local behavioral interference effect was later replicated in many studies across a variety of syntactic structures (e.g., 39-43). Such a behavioral effect could, however, arise for various underlying reasons. In particular, two possible explanations for local interference are discernible. Let N1, N2, and V denote the main noun (e.g., 'boy'), the embedded noun ('girls') and the verb ('likes') in sentences of the form (1) (such that V should agree with N1). One explanation for the local interference of noun N2 could be memory interference between N1 and N2: the mental representations of the two nouns would interact during sentence processing, such that the memorized number of N1 would occasionally be overridden by the more recent number of N2, in memory encoding or retrieval. Such noun-noun interference would be in line with theories from psycholinguistics, such as the cue-based retrieval theory (40). A second, alternative explanation would attribute local interference to conflicting prediction signals from N1 and N2 onto V: the correct, long-distance, or global expectation for the grammatical number of V, based on N1's number, would be affected by a more recent expectation generated by N2. Such a local influence would be in line with findings from non-linguistic auditory stimuli (the local effect observed by Bekinschtein et al. (23)) as well as with the above-described neural language models, where two neural predictive mechanisms coexist and can, in some cases, generate opposing, competing predictions.
Since behavioral evidence alone is compatible with both explanations, here, we tested neurally the possible coexistence of two neural mechanisms in the human brain, one which is sensitive to the latent structure of sentences, and another, simpler, possibly evolutionarily older, which would be sensitive to mere word transitions.If the predictions from the models and from the processing of auditory stimuli hold in the case of language processing in humans, then we would expect to find neural evidence for the two distinct types of mechanisms.Alternatively, it is possible that language processing is entirely based on hierarchical mechanisms and that behavioral interference can be solely accounted for by interference between the two representations of N 1 and N 2 in memory.
To test these hypotheses, we created a 2×2 design in which both local transitions and long-distance relations of agreement were manipulated orthogonally and could be independently respected or violated.We recorded neural activity from both human participants (n = 22, magnetoencephalography; MEG) and a neural language model and studied whether the neural signatures of the two levels can be disentangled.In both cases, we used temporally resolved multivariate decoding techniques (32) to probe the existence of long-distance, local, and memory-interference mechanisms.
To anticipate the results: in the model, we found evidence for two distinct neural effects, corresponding to the co-existence of long-distance and local mechanisms, corroborating our previous results (36, 37). In human brain signals, however, only a main effect related to the hierarchical structure of sentences was found. While this effect was modulated by noun-noun congruity, there was no evidence of any brain signal reflecting a local sequential effect. These findings suggest that, unlike in low-level processing of auditory stimuli, in sentence processing, once words enter the language system, computations are dominated by structure-based processing and are largely robust to sequential effects. Furthermore, they suggest that memory interference between representations of incongruent nouns is the dominant factor underlying the local interference observed behaviorally in psycholinguistic work.
Methods
A total of 22 participants with normal or corrected-to-normal vision were included in the M/EEG experiment. The number of participants was based on previous studies of subject-verb agreement (44), in which an average of 23.3 participants was shown to be sufficient to detect subject-verb agreement violations, with a few studies reporting results with fewer than 15 participants. According to Molinaro et al. (45), a sufficient number of participants for an agreement study is 20. In compliance with the institutional guidelines, all participants gave written informed consent prior to the experiment and were compensated with 100 Euros for their participation. The participants were native English speakers and, prior to participation, had to perform an online sentence reading task (Dialang reading & structures tasks; subjects were accepted with a placement greater than C1 in both tests). The procedure and the consent form were approved by the local ethics committee.
Stimuli
To contrast hierarchical and sequential mechanisms, we make use of long-range grammatical agreement as in sentence (1), where we manipulate:
1. The structural relation between N1 and V.
2. The sequential intervention of N2 on the processing of V.
These two dimensions span the two-by-two design of the paradigm, and its two corresponding main effects are defined as follows (Figure 1). A structural effect contrasts conditions in which N1 and V agree or disagree on grammatical number; in the case of disagreement, a syntactic violation occurs, and a neural response to this violation is expected, as has been extensively studied in the past (46, 47). A transition effect contrasts conditions in which N2 and V match or mismatch with respect to grammatical number. The transition effect corresponds to local word transitions and has not been identified in neural recordings thus far. Following results from the classic local-global paradigm and from simulations in neural language models, we hypothesized that a local number mismatch would violate local word-transition expectations, which, in turn, would generate an identifiable neural response, independently of whether a syntactic violation simultaneously occurs. For example, in sentence (1), the frequency of 'girls likes' is two orders of magnitude smaller than that of 'girl likes' (log-frequency = -8 and -6, respectively; Google n-gram). Low-level brain regions of the language network might be sensitive to such transition probabilities, in which case a greater neural response is predicted for the low- compared to the high-frequency word pair. This is consistent with a predictive-coding framework (48-50), which suggests that cortical circuits form an internal model of input sequences and that this model continuously generates predictions about upcoming items, confronting them with incoming stimuli. The local effect would thus reflect a prediction error that results from an internal model based on transition probabilities.

Figure 1. To disentangle two possible types of processing during sentence comprehension, the experimental design contrasts: (i) a structural dependency between a target verb and a noun, which either holds or creates a violation at the verb, and (ii) a linear (sequential) interaction between the target verb and another noun, which either facilitates or interferes with verb processing. (A) Tree representations of the two sentence constructions explored in the experiments. Below, an illustration of the main effects of the design: the structural effect (orange), which depends on the syntactic relation between the main subject and the target verb (colored path in the tree representation); the transition effect (magenta), which refers to the (mis)match between the target verb and a linearly intervening noun (attractor), with respect to either grammatical number or animacy; and the congruity effect, which refers to the (mis)match between the two nouns. In the left construction, the structural effect is long-range and the transition (linear) one is short-range; in the right construction, it is the opposite. (B) Experimental paradigm: subjects were presented with sentences in a rapid serial visual presentation (RSVP), and their task was to report whether the sentences are grammatically correct. At the end of each trial, visual feedback on their performance was given.
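The transition-probability account above can be made concrete with a small sketch that estimates maximum-likelihood bigram log-probabilities from corpus counts (illustrative only; corpus_tokens stands for a hypothetical tokenized corpus):

    import math
    from collections import Counter

    def bigram_logprob(corpus_tokens, w1, w2):
        """Maximum-likelihood estimate of log P(w2 | w1) from bigram counts.
        A purely local predictive model would find 'girls likes' far more
        surprising (lower log-probability) than 'girl likes'."""
        bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
        unigrams = Counter(corpus_tokens[:-1])
        return math.log(bigrams[(w1, w2)] / unigrams[w1])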
In the construction with a PP, however, the global (structural) and local (transition-based) effects are confounded with linear proximity: the global effect between the subject N1 and the verb V is long-range, whereas the local effect between the intervening noun N2 and the verb is short-range. To decouple syntactic dependency from linear proximity, we included a control construction in the design, in which syntactic dependency and linear proximity are reversed by the replacement of only a single word. Specifically, we replaced the PP with an object-relative clause (ObjRC; Figure 1, Table 1):

(2) ObjRC: "The boy that the girls like. . ." ("Det N1 that Det N2 V")

In this case, the structural dependency is between N2 and V, whereas the intervening noun is the distant N1. The difference between the two sentences (1) and (2) is minimal, only at the third word ('near'/'that'), and the number of words that precede the target verb V is the same. This allows us to test the impact of the length of the subject-verb dependency on the global effect, and the impact of the proximity between N2 and V on the local effect; in particular, to test the prediction that a neural response to local-transition violations would occur only in the PP case but not in the ObjRC case.
So far, the examples shown contained variations of a sentence with respect to the feature of grammatical number. The number feature and the corresponding agreement phenomena are generally perceived as a proxy for syntactic processing. However, violation responses are known to vary depending on whether the violation is semantic or syntactic: while syntactic violations typically generate a late positive neural response, the P600 (51), semantic violations were found to elicit an earlier negative response, the N400 (52). We therefore further manipulated the type of feature in the agreement, and the design includes sentences of the form (1) in which violations are with respect to animacy, for example:

(3) PP: "The boy near the car likes climbing" ("Det N1 near Det N2 V")

In these sentences, the subject N1 and the verb V always agree on number; however, the sentence contains a semantic violation, which is either local ('car likes') or global (as in "The boy near the girl rusts badly").
Table 1 summarizes the three constructions of the design, PP-Number, PP-Animacy and ObjRC-Number, along with example sentences. Note that due to a time constraint on the overall experimental duration, we did not include a fourth case in the design, namely sentences with an object-relative clause and semantic violations. For each of the three constructions, we generated 16 stimuli per block for a total of 10 blocks. Half of these stimuli contained a violation. The stimuli were generated using a simple algorithm that sampled words without replacement from the lexicon. Each participant was presented with a different set of stimuli. The lexicon consisted of 19 animate nouns, 7 inanimate nouns, and a total of 15 verbs. The stimuli were controlled for low-level features such as length and unigram frequency. Each participant was presented with an equal number of sentences, ensuring that the count of sentences with a singular first noun matched that of sentences with a plural first noun. Similarly, for the semantic conditions, the number of sentences featuring an animate first noun was identical to those with an inanimate first noun. Additionally, a sentence could not contain the same noun in both singular and plural form (e.g., a sentence such as "The boy near the boys. . ." was forbidden). More examples of stimuli are presented in Appendix C.
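A minimal sketch of such a stimulus generator for the PP-Number construction (the mini-lexicon below is illustrative; only 'boy', 'girl', 'car' and 'like' actually appear in the text, and the real lexicon had 19 animate nouns, 7 inanimate nouns and 15 verbs):

    import random

    ANIMATE_NOUNS = ['boy', 'girl']   # illustrative subset of the lexicon
    VERBS = ['like']

    def make_pp_number_trial(violation, rng):
        """Build one 'The N1 near the N2 V ...' stimulus. N1 and N2 are
        sampled without replacement (distinct lemmas); the verb agrees
        with N1 unless this is a violation trial."""
        n1, n2 = rng.sample(ANIMATE_NOUNS, 2)
        n1_plural = rng.random() < 0.5
        n2_plural = rng.random() < 0.5
        verb = rng.choice(VERBS)
        verb_plural = n1_plural if not violation else not n1_plural
        noun = lambda w, plural: w + 's' if plural else w
        verb_form = verb if verb_plural else verb + 's'
        return (f"The {noun(n1, n1_plural)} near the "
                f"{noun(n2, n2_plural)} {verb_form} ...")

    rng = random.Random(0)
    print(make_pp_number_trial(violation=True, rng=rng))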
Experimental Paradigm
The participants undertook a rapid serial visual presentation (RSVP) reading task and were asked to report whether a sentence contained a grammatical or semantic violation by pressing a button on a MEG response device. To verify that participants understood the task, they went through a short training phase (10 minutes) prior to the recording. The task was divided into 10 equal runs, each containing the same number of trials (n = 48). A 600 ms fixation-cross interval preceded the onset of the first word (Figure 1B). All sentences had the same length. The words were presented with a stimulus onset asynchrony (SOA) of 500 ms. After the offset of the last word and following a time interval of 1 s, a decision panel with the words "OK" and "WRONG" appeared on the screen. To control for motor preparation, the location of the words (left or right) was randomized on each trial. As soon as the participants stated their decision, the decision panel disappeared and the subjects received immediate visual feedback on their performance: if their response was correct, they were presented with a green cross, otherwise with a red one. Decision duration was limited to 1.5 s, after which a blue fixation cross appeared and the experiment continued. The interval to the next trial (ITI) was 1.5 s. All time intervals were set to multiples of the video projector refresh rate (60 Hz).
M/EEG recordings

Recordings took place in two different MEG centers: NeuroSpin in Saclay, France (N = 15), and ICM in Paris (N = 7). Participants performed the task while sitting in an electromagnetically shielded room. Brain magnetic fields were recorded with a 306-channel, whole-head MEG by Elekta Neuromag® (Helsinki, Finland), arranged in 102 triplets: one magnetometer and two orthogonal planar gradiometers. In NeuroSpin and ICM, EEG was recorded with a 60- and 64-channel MEG-compatible Neuromag EEG cap, respectively. The brain signals were acquired at a sampling rate of 1000 Hz with a hardware high-pass filter at 0.03 Hz. Eye movements and heartbeats were monitored with vertical and horizontal electrooculograms (EOGs) and electrocardiograms (ECGs). The subjects' head position inside the helmet was measured at the beginning of each run with an Isotrak Polhemus Inc. system from the location of four coils placed over frontal and mastoid skull areas. All EEG sensor positions were digitized as well.
Preprocessing

Bad sensors of each sensor type were automatically detected at the run level based on a variance criterion: channels in which the variance exceeded the median channel variance by a factor of 6, or was less than the median variance divided by 6, were marked as bad. A visual inspection followed to verify the detection accuracy. Prior to the variance detection, oculomotor and cardiac artifacts were removed at the run level using signal-space projection (SSP) implemented in MNE-Python (53, 54). To compensate for head movement and reduce non-biological noise, the MEG data were Maxwell-filtered (55) using the implementation of Maxwell filtering in MNE-Python. The bad EEG sensors were interpolated using the spherical spline method (56) implemented in the same package. Following Maxwell filtering, the linear component of the data was removed, and the time series were clipped at the upper and lower bounds of a (-3, 3) interquartile range (IQR) around the median. The data were then bandpass filtered between 0.4 and 50 Hz using a linear-phase FIR filter (Hamming window) with delay compensation, implemented in MNE-Python version 0.16 (53). Finally, the continuous time series were segmented into 3.5 s epochs of interest (first-word onset to panel onset) and the SSP procedure was applied to the epoched data to remove heartbeats and ocular motions.
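A minimal MNE-Python sketch of the core steps of this pipeline (the file name is hypothetical, and bad-channel detection, interpolation, detrending and clipping are omitted for brevity):

    import mne

    raw = mne.io.read_raw_fif('run_01_raw.fif', preload=True)  # hypothetical path
    eog_projs, _ = mne.preprocessing.compute_proj_eog(raw)     # oculomotor SSP
    ecg_projs, _ = mne.preprocessing.compute_proj_ecg(raw)     # cardiac SSP
    raw = mne.preprocessing.maxwell_filter(raw)                # movement/noise compensation
    raw.filter(0.4, 50., fir_design='firwin')                  # linear-phase FIR band-pass
    events = mne.find_events(raw)
    epochs = mne.Epochs(raw, events, tmin=0., tmax=3.5,        # first word to panel onset
                        baseline=None, preload=True)
    epochs.add_proj(eog_projs + ecg_projs).apply_proj()        # SSP applied to epoched data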
Decoding Analyses

We used a temporally-resolved decoding approach to classify neural activity from two conditions at the trial level (32, 57). These analyses were implemented in MNE-Python version 0.16 (53). Prior to model fitting, the data were standardized using the Scikit-Learn package (58). We used a linear classifier (logistic regression) with default Scikit-Learn parameters. The evaluation metric was the Area Under the Curve (AUC). The estimator was trained and tested on data from the same condition. To prevent overfitting, we used a 5-fold stratified cross-validation procedure.
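A sketch of such a time-resolved decoder, assuming epoched data X of shape (n_trials, n_channels, n_times) and binary condition labels y (both hypothetical placeholders):

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from mne.decoding import SlidingEstimator, cross_val_multiscore

    clf = make_pipeline(StandardScaler(), LogisticRegression())
    decoder = SlidingEstimator(clf, scoring='roc_auc')   # one classifier per time point
    scores = cross_val_multiscore(decoder, X, y, cv=5)   # stratified 5-fold CV
    auc_over_time = scores.mean(axis=0)                  # mean AUC at each time sample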
Statistical Analyses
The reported statistics for the behavioral data were obtained with a mixed-effects logistic regression with the subject number as a random variable, performed using the lme4 package in R (59). The reported statistics for the neural data correspond to group-level analyses and were performed using the Statsmodels package in Python 3 and MNE-Python version 0.16 (53). The statistical significance of the decoding performance over time was evaluated and corrected for multiple comparisons using a cluster-based permutation approach (60), with a total of 1000 permutations. The significance threshold (alpha level) for all analyses was set to 0.05.
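For the decoding time courses, the cluster-based correction could look as follows (a sketch; scores is a hypothetical subjects × time-points array of AUC values, tested against the 0.5 chance level):

    from mne.stats import permutation_cluster_1samp_test

    t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
        scores - 0.5,            # deviation of AUC from chance, per subject
        n_permutations=1000,
        tail=1)                  # one-sided: above-chance decoding
    significant_clusters = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]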
Neural Language Models
In the computational experiments, we use a Neural Language Model (NLM). An NLM is a language model implemented by a neural network, defining a probability distribution over sequences of words. It factorizes the probability of a sentence into a product of the conditional probabilities of all words in the sentence, given the words that precede them: P(w_1, ..., w_n) = ∏_i P(w_i | w_1, ..., w_{i-1}). This type of language model can thus be used as a next-word predictor: given the preamble of a sentence, it outputs a probability distribution over potential next words. We exploit this fact in our experiments.
Model description
The specific NLM we use in our experiments is the English NLM made available by Gulordava et al. (62) 4 . It is a recurrent LSTM language model (63), consisting of two layers, each with 650 Long Short-Term Memory units (64), input and output embedding layers of 650 units, and input and output layers of size 50000 (the size of the vocabulary). The weights of the input and output embedding layers are not shared (65). The last layer of the model is a softmax layer, whose activations sum to 1 and as such correspond to a probability distribution over all words in the NLM's vocabulary.
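A minimal sketch of such an architecture in PyTorch, exposing the word-by-word recurrent unit activity that the decoding analyses rely on (our own simplified reconstruction, not the released Gulordava et al. code):

    import torch
    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        """Two-layer LSTM language model with untied input/output embeddings."""
        def __init__(self, vocab_size=50000, emb_size=650, hidden_size=650):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_size)
            self.lstm = nn.LSTM(emb_size, hidden_size,
                                num_layers=2, batch_first=True)
            self.decoder = nn.Linear(hidden_size, vocab_size)  # untied output layer

        def forward(self, tokens, state=None):
            emb = self.embed(tokens)               # (batch, seq, emb)
            hidden, state = self.lstm(emb, state)  # per-word recurrent unit activity
            probs = torch.softmax(self.decoder(hidden), dim=-1)  # next-word distribution
            return probs, hidden, state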
Model training
The weights of an NLM are typically tuned by presenting them with large amounts of data (a training corpus) and providing them feedback on how well they can predict each next word in the running text.This allows them to adjust their parameters to maximize the probabilities of the sentences in the corpus.Our NLM was trained on a sample of the English Wikipedia text, containing 100M word tokens and 50K word types.Further details can be found in Gulordava et al. (62).
Results
We first present the behavioral results from the experiment, based on the performance of the subjects in the forced-choice violation-detection task (Figure 1C). We then present classification results in a time-resolved manner for the main effects of the design, for both humans (MEG & EEG) and artificial language models.

Behavioral Results

Figure 3 shows the mean error rates across all participants for the three main constructions. We present the error rates with respect to the structural and congruity effects, where by 'congruity effect' we refer to whether or not the two nouns N1 and N2 agree on number. In the following, we report results from a mixed-effects logistic regression model (see Methods).
We first tested whether the structural effect (Figure 1C) was present at the behavioral level. Indeed, this effect was significant across all constructions, indicating that participants made more errors in detecting a violation than in affirming that a sentence was grammatical, which is akin to a 'grammatical illusion' (66). We then examined the effect of congruity on error rate. For PP-Number, the effect was significant (b = −1.0592, SE = 0.2268, z = −4.671, p < .001). For ObjRC-Number, the congruity effect was also significant (b = −1.1894, SE = 0.1885, z = −6.308, p < .001). However, for PP-Animacy, the congruity effect was not significant (b = −0.03540, SE = 0.24583, z = −0.144).
Finally, we investigated the transition effect, which corresponds to a mismatch between the intervening noun and the target verb, e.g., N1 and V in (2). For PP-Number, this effect was significant (b = 0.3935, SE = 0.1905, z = 2.066, p = .0388). For ObjRC-Number, the effect was marginally significant (b = 0.2719, SE = 0.1416, z = 1.921, p = .0548). However, for PP-Animacy, the transition effect was not significant (b = 0.03901, SE = 0.19601, z = 0.199). The transition effect for number is consistent with 'grammatical asymmetry' (43), whereby participants make more agreement errors on ungrammatical than on grammatical sentences.
In summary, at the behavioral level, we observed a significant structural effect for both constructions (PP & ObjRC) and both features (grammatical number and animacy). Transition and congruity effects were significant for both constructions, but only for grammatical number. Finally, participants made more errors on agreement in an embedded clause than in a prepositional phrase.
Structural but not Transition Effects are Decodable in Human Data

The transition and congruity effects for grammatical number in the behavioral data suggest that the representations of the two nouns, N1 and N2, interact during sentence processing. There could be two types of interaction: (1) interference in memory, whereby an incongruent number of N2 interferes with that of N1 in memory; or (2) conflicting prediction signals, whereby a (global) expectation for the grammatical number of V, based on N1's number, is affected by a (local) expectation generated by N2. However, the behavioral results cannot tell the two apart, since both alternatives predict the same behavioral outcome. Therefore, we next tested whether these effects are traceable in the neural data.
We sought the effects in both the model and in humans. We presented the same stimuli to human participants and to the model, and recorded network activity after the presentation of each word. For humans, neural activity was recorded with a magnetoencephalography (MEG) machine. For the model, we extracted the hidden activity of all recurrent units of the network (Methods).
To identify the main effects in the data, we used standard decoding techniques: for each effect, at each time point, a linear binary classifier was trained to separate trials from the two conditions, and then tested on unseen data in a cross-validation manner. Figure 4 shows the decodability of the main effects for both the artificial (panel A) and the human data (panel B).
For the model (Figure 4A), all effects were decodable with high performance, measured in terms of the Area Under the Curve (AUC), for all three constructions. The structural effect reached full decodability after the onset of the target word; indeed, prior to the onset of the target verb, the model cannot predict the grammaticality of the sentence. The transition effect also reached full decodability after the onset of the target word; here too, prior to verb onset, a mismatch between the verb and the non-head noun cannot be predicted. Finally, the congruity effect was decodable already after the onset of the second noun; indeed, information about a feature mismatch between the two nouns is already available at this time point.
For the MEG data, and in contrast to the model, only the structural effect was decodable. The onset and the peak decodability of the effect varied across constructions. The effect became significant first for ObjRC-Number (t = 300 ms), then for PP-Number (530 ms), and lastly for PP-Animacy (760 ms). The significance of the decodability was calculated based on cluster-based permutation testing (60; Methods). The decodability of the structural effect reached its highest value for the ObjRC-Number construction.

We observed a discrepancy between the behavioral results, the decoding from the neural-network model, and the decoding from the neural data. At the behavioral level, the structural effect was the dominant factor, and effects emerging from the interaction with the intervening noun were significant only for the number feature. In the model, we observed the same sensitivity across all three main factors, namely, significant effects arising from the intervening noun for both the animacy and the number feature. Lastly, in the human neural data, only the structural effect was decodable, whereas the performance of decoding the other effects remained at chance level.
In summary, we found strong positive evidence from the neural data for a structural effect in humans, but negative evidence for the transition effect. This goes against the conflicting-prediction-signals hypothesis and is in favor of the memory-interference one. We therefore next sought positive evidence in favor of the memory-interference hypothesis.
The influence of the attractor on the structural effect. In our search for positive evidence for the noun-noun congruity effect, we reasoned that this effect should modulate the long-distance congruity effect: on trials where the two nouns are incongruent, the memory of the first noun N1 would suffer, and therefore the brain's response to a long-distance violation would be weaker and/or delayed compared to trials with two congruent nouns. To test this idea, for each construction, we first trained a linear binary classifier on the structural effect, and then, separately, tested its predictions on different conditions in the test data (Figure 5A). At training time, the classifier was trained to separate all violation trials from non-violation trials, regardless of whether they were congruent or not. Then, at test time, the classifier was separately tested on unseen trials that were either congruent (continuous lines) or incongruent (dashed).
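The logic of this train-on-all, test-by-condition analysis can be illustrated with the following sketch, again on synthetic data with a hypothetical congruity mask; a simple split is used here where the real analysis cross-validates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 300, 60))             # trials x sensors x time points
y_violation = np.tile([0, 1], 60)                    # structural (violation) labels
congruent = np.tile([0, 0, 1, 1], 30).astype(bool)   # hypothetical congruity mask

train = np.arange(120) < 80                          # crude split; real analyses cross-validate
test = ~train
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

auc_congruent, auc_incongruent = [], []
for t in range(X.shape[-1]):
    # fit on all training trials, pooled over congruity
    clf.fit(X[train, :, t], y_violation[train])
    # score separately on held-out congruent vs. incongruent trials
    for mask, out in ((test & congruent, auc_congruent),
                      (test & ~congruent, auc_incongruent)):
        proba = clf.predict_proba(X[mask, :, t])[:, 1]
        out.append(roc_auc_score(y_violation[mask], proba))
```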
For PP-Number, we found that the structural effect was modulated by congruity. The difference between the congruent and incongruent trials became significant at 580 ms after the onset of the target and was sustained for 200 ms. For ObjRC-Number, the structural effect was also modulated by congruity. This modulation became significant later than for the PP-Number construction, starting at 780 ms and lasting up until 980 ms after target onset. Finally, for PP-Animacy, the structural effect was not modulated by congruity, in complete agreement with the behavioral results, where no behavioral interference was observed.
In sum, these results provide positive evidence for the congruity effect, in favor of the memory-interference explanation, whereby behavioral interference arises from interference between the two noun representations in memory.
The neural correlates of the markedness effect. Our design also allowed us to study a well-established phenomenon related to the processing of long-range agreement, known as the markedness effect. In classic work on grammatical agreement, it was observed that for sentences with long-range subject-verb agreement, participants make more errors if the attractor is plural rather than singular (38). For example, compare the two following sentences:

1. "The boy near the girls ___".
2. "The boys near the girl ___".
On average, participants made more errors on (1) than on (2). This phenomenon is known as the markedness effect, since in English, plural is the marked form. Thus far, this phenomenon has been studied mainly through behavioral results. We thus took advantage of our data to investigate whether we could identify a neural correlate of this phenomenon. We previously saw that the congruity effect modulated the structural effect (Figure 5A). To investigate the markedness effect, we took this analysis one step further: as before, we trained the classifier on the structural effect, but now we analyzed its performance separately for each value of congruity and attractor number (Figure 5B&C).
When examining the modulation of the structural effect by congruity, we observed no difference between the congruent and incongruent trials in the case of a singular attractor, for any of the three constructions. However, when examining the same effect for the plural attractor, the decodability of the congruent trials was significantly higher than that of the incongruent ones. This difference was significant for the time interval between 620 ms and 810 ms after the onset of the target word (3-way interaction, cluster-based permutation test, p < 0.05; Maris and Oostenveld, 60).
In summary, our results provide the first demonstration of the markedness effect in human neurophysiological data.
Discussion
Human language processing requires the processing of hierarchical representations in the form of nested tree structures. The existence of such representations in humans is hardly in doubt, having been demonstrated across a wide range of studies in linguistics, psycholinguistics, and neuroscience. However, online language processing might also involve other, lower-level processes, unrelated to hierarchical structures and subject to general cognitive constraints.
We focused on long-range subject-verb agreement in language, such as in sentence 1, because this is a domain in which previous behavioral research suggests the involvement of both hierarchical and non-hierarchical processes. Long-range agreement requires hierarchical processing: to determine the correct grammatical number of the verb, one must represent the hierarchical structure of the sentence and use it to neglect other nouns that occur in intermediate prepositional or relative phrases but are not the genuine subject of the verb. For example, in sentences like "The boy near the girl likes climbing," the brain employs hierarchical processing to determine that it is the 'boy', not the 'girl', who enjoys climbing. However, a variety of behavioral experiments also indicate that the processing of such sentences can suffer from interference from non-hierarchically linked nouns, such as N2 in sentence 1 (e.g., 38, 41).
We reasoned, however, that N2 could interfere linearly in the processing of the verb in various ways, which could lead to similar behavioral patterns. We studied two such, non-mutually exclusive, mechanisms. In the first, a local, sequential mechanism sensitive to local transition probabilities, N2 generates a prediction opposite to that of N1 about the upcoming verb. In the second, the processing of N2 interferes with the stored representation of N1, leading to a more fragile or downright erroneous representation of the grammatical number of N1. Both mechanisms would lead to errors in the processing of the verb, but for different reasons: one is based on conflicting predictions, the other on memory interference.
Studies on the processing of sequences of non-linguistic stimuli, in both humans and monkeys, have previously supported the first alternative. They showed that there exist two distinct neural mechanisms involved in the processing of non-linguistic sequence patterns, one sensitive to global regularities in the sequence, and the other sensitive only to local transitions between adjacent items (23). Such global and local predictions would be analogous to predictions from the main (N1) and local (N2) nouns, respectively. Furthermore, analyses of artificial neural networks have concluded that both structure-sensitive and structure-agnostic mechanisms contribute to next-word prediction in AI language models (73). Findings from these models predict that, in humans too, two distinct mechanisms might underlie next-word prediction.
Support for the second alternative comes from numerous studies showing interference effects among words stored in memory (e.g., 74-76). In particular, the cue-based retrieval model (40) suggests that during the processing of the main verb, the local noun N2 can interfere with the retrieval of the main noun N1 and cause an erroneous representation of N1's features (e.g., its grammatical number). This can in turn lead to subject-verb agreement errors.
In this work, we conducted a neuroimaging experiment using magnetoencephalography (MEG) to study the neural correlates of long-distance agreement during sentence processing and to tease apart these alternatives. We found that, contrary to the conflicting-prediction-signals hypothesis, there was no evidence for a local transition effect. Remarkably, the human brain does not generate any decodable error signal in response to local violations of grammatical number such as "the girls likes", as long as these violations can be accounted for by long-distance agreement with a distant subject, for instance, "The boy near the girls likes climbing". The only decodable signals arose from violations of long-distance agreement. This observation is all the more impressive given that, in the non-linguistic domain, exactly the converse is found: local violation signals are early and large, while global violation signals are slower to emerge (31, 33, 77, 78). Furthermore, our results present positive evidence for the memory-interference hypothesis: the congruity of the two nouns modulated the size of the response to long-distance violations, as predicted by the memory-interference account. We next discuss the implications of these results in view of previous findings from the literature.
A. Distinct mechanisms for the processing of sentences versus non-linguistic stimuli. The local-global paradigm is a variant of the auditory oddball paradigm (23), in which participants are presented with sounds in short sequences instead of in a continuous manner as in the classic oddball paradigm, thus allowing participants to chunk them and to predict an entire chunk. The local-global paradigm shows that when participants are presented with aaaab sequence patterns, in which the first four tones are identical and the fifth differs, the deviancy of the last tone generates a mismatch response (MMR) followed by a late surprise-elicited P3b wave (23). Repetition of the same aaaab sequence, however, reduces the P3b component while the "local" MMR effect remains, suggesting that the MMR is an automatic response to local transition probabilities. The disappearance of the P3b component suggests that a "global" expectation for a deviant fifth tone was generated. Indeed, when an aaaaa pattern is subsequently presented, the P3b wave reappears, showing that a monotonic sequence can be surprising if it violates prior expectations (79).
The neural signatures of the local and global effects further differ in several ways. First, the local effect is early (100-200 ms) and transient, whereas the global effect requires an additional 100-200 ms to rise, and it remains stable (32). Second, the local effect is automatic (it does not require attention) and unconscious, whereas the global one disappears when participants are not attending or are unconscious (23, 33). Third, while the local effect has been traced back to auditory cortices (30, 80), the global one is distributed across the superior temporal sulcus, inferior frontal gyrus, dorsolateral prefrontal, intraparietal, anterior, and posterior cingulate cortices (77). Taken together, these findings show the co-existence of two distinct neural mechanisms involved in the processing of sequence patterns, with very different properties, sensitive to regularities at different time scales (34, 35).
In contrast to the processing of auditory stimuli, the results from sentence processing differ substantially. For all three constructions, we found no transition-based effect, and only the structural effect was significant (Figure 4). This suggests that during language processing, structure-based computations dominate over transition-based computations, the opposite of what is found for non-linguistic stimuli. Specifically, in the local-global paradigm with sequences of auditory tones, transition effects are easily detectable, while structural effects can be fragile and reduced, for instance, when subjects are distracted. For language processing, the opposite holds: while the structural effect is large and easily detectable, the transition effect is barely detectable or non-existent. Thus, once sequence items enter the language system, as is the case here for features of number and animacy, sentence-level computations are entirely dominated by structure-sensitive processes, and largely robust to low-level transition effects.
B. Distinct mechanisms underlying long-range agreement in humans and in language models. Recent studies on neural language models show that structure-sensitive neural mechanisms dedicated to the processing of long-range agreement naturally emerge in the models during training (36). Specifically, it was found that long-range agreement is processed in the models by a small neural circuit, composed of a few specialized units. The core of this neural circuit contains two types of units, termed 'Syntax units' and 'Long-range number units'. The syntax units are sensitive to the latent structure of the sentence, and they convey this information to the number units, which in turn carry the grammatical information of the main subject up to the verb. This mechanism was found to emerge in the models during training without explicit supervision (the models were merely trained on a next-word prediction task), consistently across languages and for various grammatical features (grammatical number and gender) (37).
Furthermore, it was found that grammatical agreement is, in fact, processed by both a structure-sensitive and a structure-agnostic mechanism. The latter is a much simpler mechanism, which processes words independently of the latent hierarchical structure of the sentence (36). This second type of mechanism was shown to be sensitive to mere local word transitions in language and to be carried by a larger set of units, termed 'Short-range number units'. The short-range units carry predictions about upcoming words based on surface statistical regularities among word sequences, whereas the long-range units generate predictions based on the hierarchical structure of the sentence.
Thus, in language models, predictions about upcoming words arise from two processes occurring at distinct time scales and distinct levels of encoding of incoming sequences. In the present study, we confirmed this dual organization using decoding. For all three constructions tested, we found significant decoding of the structural, transition, and congruity effects (Figure 4). For the structural and transition effects, maximal decoding was reached at verb onset, whereas for the congruity effect it occurred on the preceding noun, since information about congruity is already available at this point. The presence of both structural and transition effects is in accordance with the predictions from previous analyses, which identified two distinct types of mechanisms in the models.
By contrast, the absence of a transition effect in humans points to a substantial difference between the neural mechanisms underlying long-range agreement in humans and in language models. In humans, the structural effect is the dominant one and transition-based predictions were undetectable. In the models, although both effects were equally well decoded, the transition effect corresponds to a larger number of units than the structural one. Specifically, the transition effect corresponds to neural activity of the 'short-range units', which are abundant in the model, whereas the structural effect corresponds to neural activity of only a small set of units and was shown to be highly 'sparse' (37). Behavioral evidence for such sparsity was also recently reported for the newer Transformer architecture (81). In summary, similarly to the case of the processing of non-linguistic stimuli, the structural effect is the dominant one when it comes to human language processing, unlike what occurs in current language models.
C. Neural correlates of memory interference between the main and embedded nouns during sentence processing.
Memory-based models of sentence processing (40, 43, 82-86) suggest that new incoming material, such as an inflected verb, triggers memory retrieval of previous information, stored in sentence constituents in memory, in order to complete the noun-verb pairing process. The cue-based model is a two-stage mechanism in which the parser predicts the number of the verb and only engages in a retrieval process when this prediction mismatches the bottom-up input. The second stage of this mechanism is sensitive to interference effects and might lead to the retrieval of the wrong feature (e.g., grammatical number).
Note that since predictions about the verb in the cue-based retrieval model depend on a parser, they are only structural, determined by the main noun N1 only. Low-level sequential predictions, as in the case of sequences of non-linguistic stimuli and neural language models, are not considered in the cue-based retrieval model. Therefore, in memory-based models, a local prediction signal arising from N2 alone does not occur and is not needed to explain the observed behavioral interference. Our results are fully consistent with this view, without having to assume any additional low-level prediction mechanisms (as present, for instance, in neural language models). First, our behavioral results show a positive interaction between grammaticality and congruity (i.e., a transition effect; Figure 3). That is, an incongruity between the main noun and the local noun elicited more errors in ungrammatical compared to grammatical sentences, more so than in the congruent conditions. This finding is akin to a previous effect reported in studies using self-paced reading, which was termed 'grammatical asymmetry' (43, 70). In these studies, it was shown that agreement attraction facilitated the processing of ungrammatical but not grammatical sentences in the case of incongruent nouns, in accordance with a cue-based retrieval model: a retrieval process was triggered only when the prediction of the verb mismatched the bottom-up input. Our behavioral results, therefore, replicate the 'grammatical asymmetry' effect.
Furthermore, in MEG signals, we found that the structural effect was modulated by congruity (Figure 5). This provides positive neural evidence for memory interference between incongruent nouns during agreement processing, in support of memory-interference models. Since a transition effect was not observed in the neural data, memory interference provides a sufficient explanation for the agreement errors observed behaviorally and removes the need to assume that transition-based predictions are active alongside structured predictions. In sum, our results reject the conflicting-prediction-signals hypothesis and corroborate the memory-interference one.
D. Discrepancy between the processing of grammatical number and animacy.
The behavioral data revealed a large difference between violations with respect to grammatical number and with respect to animacy. Participants were able to detect both number and animacy violations and, in both cases, were better at affirming that a sentence is grammatical/felicitous than at detecting a violation. However, an intervening incongruent noun with a mismatching grammatical number induced a large behavioral interference in the case of number violations, while a similar intervention of a noun with a mismatching animacy feature did not affect participant performance in the case of animacy violations.
Although both number and animacy are syntactic as well as semantic features, number is borne by an overt morpheme in both nouns and verbs, whereas animacy information can only be retrieved from the lexicon. Thus, number may have been processed at a morphosyntactic stage earlier than, and more susceptible to interference than, the lexico-semantic stage needed to detect animacy violations. Our results suggest that the latter stage is totally structure-dependent and immune to intervention by an incongruent noun. At the very least, they indicate that during language processing, grammatical number and animacy are processed and integrated into an ongoing sentence representation in quite different ways, and that the processing of animacy is more robust to intervening material. This result is, again, consistent with memory-based models of sentence processing (40), for which it was suggested that morphosyntactic processing is relatively 'fragile' compared to the processing of animacy (87). As noted above, in memory-based models, new incoming material such as an inflected verb triggers memory retrieval of previously stored sentence constituents in order to complete the noun-verb pairing. This retrieval process is sensitive to similarities among items in memory, and can therefore explain the observed discrepancy. Morphosyntactic features were found to be weaker cues compared to animacy (87), thus resulting in higher similarities among memory items that only differ in morphosyntactic marking. This could make grammatical-number processing more prone to confusion errors, compared to animacy, and therefore to more erroneous grammaticality judgments. Our results therefore corroborate the robustness of animacy processing compared to grammatical number.

E. General discussion. Our study is subject to several limitations. First, the absence of evidence is not evidence of absence, and the transition effect might have gone undetected due to the low signal-to-noise ratio of the MEG data. However, the modulation of the structural effect by congruity (Figure 5A), and further by grammatical number (Figure 5B&C), shows that our decoding techniques are quite sensitive and highly likely to reflect linguistic processing. This suggests that, even if transition-based computations do occur during sentence processing, their neural traces are small compared to those related to structure-based computations.
Second, our participants were explicitly instructed to detect either grammatical or semantic violations. It is therefore possible that the decoding of neural responses to these violations reflected, at least in part, the imposed task demands rather than pure linguistic processing. Note that we randomized the assignment of the motor responses within each subject, thus making them orthogonal to violations and therefore unlikely to be the source of the observed decoding. Still, task demands could have amplified the detection of violations by drawing attention to them. Nevertheless, we see no reason why attention could not also have amplified a local transition effect if such an effect existed. It is, therefore, reasonable to assume that the core of our conclusions would still hold, albeit with even more reduced and therefore harder-to-detect MEG signals, if we had adopted a passive sentence reading paradigm. In particular, the conclusions regarding the large discrepancy between the structural and transition effects, as well as the modulation of the structural effect by congruity, which is orthogonal to such possible amplification, would still hold. A passive task would also have raised other concerns, for example, the difficulty for participants to stay engaged throughout the experiment, and the relative invisibility of the small morphological markers of agreement in French.
Third, our study is entirely focused on grammatical agreement, and its conclusions are therefore limited to this scope. There might well be transition effects in simpler conditions; indeed, word-transition and syllable-transition probability effects have been reported during story listening (88). Our study merely shows that, once the stimuli enter a linguistic processing stage in which the morphological markers for number are involved, no local transition probability effects are detectable.
Last, we note that our study is not the first to provide neural evidence in favor of the cue-based retrieval model (e.g., 89). However, the unique contribution of our setup is to directly contrast hierarchical and sequential processing during sentence comprehension, and to further disentangle two possible explanations of behavioral interference effects.
In conclusion, we summarize the main contributions of our study. First, in contrast with the processing of non-linguistic stimuli and with neural language models, sentence processing in the human brain seems largely robust to low-level, transition-based predictions. The present study provides neural evidence in favor of the dominance of structure-based over transition-based processing during sentence processing. Second, we showed positive neural evidence for memory interference during sentence processing, which supports cue-based retrieval models in explaining long-range subject-verb agreement errors. Last, we provide, to our knowledge, the first neural evidence for the markedness effect in sentence processing.
Fig. 1. Structural vs. linear intervention in sentence processing: experimental design and paradigm. To disentangle two possible types of processing during sentence comprehension, the experimental design contrasts: (i) a structural dependency between a target verb and a noun, which either holds or creates a violation at the verb, and (ii) a linear (sequential) interaction between the target verb and another noun, which either facilitates or interferes with verb processing. (A) Tree representations of the two sentence constructions explored in the experiments. Below is an illustration of the main effects of the design: the structural effect (orange), which depends on the syntactic relation between the main subject and target verb (colored path in the tree representation); the transition effect (magenta), which refers to the (mis)match between the target verb and a linearly intervening noun (attractor), with respect to either grammatical number or animacy; and the congruity effect, which refers to the (mis)match between the two nouns. In the left construction, the structural effect is long-range and the transition (linear) one is short-range; in the right construction, it is the opposite. (B) Experimental paradigm: subjects were presented with sentences in rapid serial visual presentation (RSVP), and their task was to report whether the sentences were grammatically correct. At the end of each trial, visual feedback on their performance was given.
Table 1. Design and prototypical examples. The experiment uses two linguistic constructions, a Prepositional Phrase (PP) and an Object Relative Clause (ObjRC), as well as two features of interest (number and animacy). The manipulation of the violation factor corresponds to the structural effect.
Fig. 2. Graphical description of a two-layer recurrent neural language model with LSTM cells (not discussed here; see, e.g., 61). At each time step, the model processes an input word and outputs a probability distribution over the potential next words in the sentence. The prediction of the output word depends on both the input word and the previous state of the model, which serves as longer-term context (the horizontal arrows in the figure represent the recurrent connections carrying the previous state forward).
Fig. 3. Behavioral results. Interaction plots for the structural and congruity effects (N = 22). The main structural and congruity effects, as well as their interaction, are significant in the number constructions (p < 0.05). In the animacy condition, only the structural effect is significant. Error bars indicate the standard error of the mean (SEM) calculated across participants.
Fig. 4. Structural but not linear effects are decodable in human data; in contrast, all effects are decodable in LSTM activations. (A) Decoding of the main effects from the activations of the LSTM architecture. All main effects are decodable. (B) Neural decoding of the three main effects using all sensor types (magnetometers, gradiometers, and EEG). A different decoder was evaluated per time point. The evaluation metric is the Area Under the Curve (AUC). Only the main effect of violation is decodable. Dotted lines indicate statistically significant time intervals (p < 0.05; corrected, spatio-temporal clustering permutation test). Decoding of the main effects of transition and congruity remained at chance level until the end of the time window of interest. Results are shown for correct responses (see S3 for all responses). Data were smoothed with a 100-ms moving Gaussian kernel for visualization purposes.
Fig. 5. Modulation of the structural effect by congruity and neural correlates of the markedness effect. (A) A classifier was trained on the structural effect (grammatical vs. ungrammatical trials), per construction, and subsequently evaluated on the structural effect when contrasting the congruent (continuous lines) and incongruent (dashed) trials. Color stripes indicate statistically significant time intervals (p < 0.05; corrected using a spatio-temporal clustering permutation test). Congruity was found to modulate the structural effect for both PP-Number and ObjRC-Number, but not for PP-Animacy. (B&C) For each construction, a classifier was trained on the structural effect and subsequently tested on the same effect with trials split by both congruity and attractor number. For PP-Number, a statistically significant difference was found for the plural but not the singular case (p < 0.05; corrected, cluster-based permutation test). For ObjRC-Number and PP-Animacy, no significant differences were found. Results are shown for correct responses only.
"year": 2023,
"sha1": "92f374e8cd324850e53cc9b8d52cd293d585e940",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/07/10/2022.07.08.499161.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "92f374e8cd324850e53cc9b8d52cd293d585e940",
"s2fieldsofstudy": [
"Linguistics",
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Effect of Hypoxia Conditioning on Body Composition in Middle-Aged and Older Adults: A Systematic Review and Meta-Analysis
Background: The effects of hypoxia conditioning, which involves recurrent exposure to hypoxia combined with exercise training, on improving body composition in the ageing population have not been extensively investigated.

Objective: This meta-analysis aimed to determine if hypoxia conditioning, compared to similar training near sea level, maximizes body composition benefits in middle-aged and older adults.

Methods: A literature search of PubMed, EMBASE, Web of Science, Scopus and CNKI (China National Knowledge Infrastructure) databases (up to 27 November 2022) was performed, including the reference lists of relevant papers. Three independent reviewers extracted study characteristics and health outcome measures. Search results were limited to original studies of the effects of hypoxia conditioning on body composition in middle-aged and older adults.

Results: Twelve studies with a total of 335 participants were included. Hypoxia conditioning induced greater reductions in body mass index (MD = -0.92, 95%CI: -1.28 to -0.55, I² = 0%, p < 0.00001) and body fat (SMD = -0.38, 95%CI: -0.68 to -0.07, I² = 49%, p = 0.01) in middle-aged and older adults compared with normoxic conditioning. Hypoxia conditioning improved lean mass, but this effect was not larger than that of equivalent normoxic interventions in either middle-aged or older adults (SMD = 0.07, 95%CI: -0.12 to 0.25, I² = 0%, p = 0.48). Subgroup analysis showed that exercise in moderate hypoxia (FiO2 > 15%) had larger effects than more severe hypoxia (FiO2 ≤ 15%) on body mass index in middle-aged and older adults. Hypoxia exposure of at least 60 min per session resulted in larger benefits for both body mass index and body fat.

Conclusion: Hypoxia conditioning, compared to equivalent training in normoxia, induced greater body fat and body mass index improvements in middle-aged and older adults. Adding hypoxia exposure to exercise interventions is a viable therapeutic solution for effectively managing body composition in the ageing population.
Key Points
• A meta-analysis was conducted to evaluate the effects of hypoxia conditioning on body composition in middle-aged and older adults.
• Compared to the normoxic equivalent, hypoxia conditioning provides greater reductions in body fat and body mass index. However, the magnitude of improvement in lean mass was not greater in hypoxic than normoxic conditions.
• Hypoxia conditioning of at least 60 minutes per session in moderate hypoxia (FiO2 > 15%) is a promising intervention to improve body composition in middle-aged and older adults.
Keywords Hypoxic training, Normobaric hypoxia, Body fat, Lean mass, Older adults
Background
The number and proportion of middle-aged adults (aged 40-65 years) and older adults (aged over 65 years) is increasing globally [1]. Body composition changes occur with ageing, including fat accumulation and muscle loss, which may lead to obesity and sarcopenia in the ageing population [2-4]. A smaller muscle mass is related to cardiovascular and metabolic diseases, compromised quality of life, and increased risk of falls, fractures, and mortality [5-8]. Obesity is another important health risk factor affecting middle-aged and older adults that leads to insulin resistance, chronic inflammation, cardiovascular diseases, frailty, and mental disorders [9-11]. Therefore, identifying effective, enjoyable, and safe interventions to improve body composition in the ageing population is a priority.
Exercise is a cornerstone therapy that improves body composition by reducing body fat and activating signalling pathways controlling muscle mass [12-14]. Various exercise modalities, including aerobic training, high-intensity interval training, and resistance training, can improve body composition in the elderly [15, 16]. Compared to those without training, older adults with sarcopenic obesity undertaking eight weeks of moderate-intensity aerobic training, resistance training, or combined aerobic and resistance training significantly improved muscle mass and fat mass [17]. Twelve weeks of elastic-band resistance training combined with walking/running on a treadmill and bicycle significantly decreased body weight, body mass index (BMI), and body fat percentage in obese older men [18]. While exercise training provides valuable health benefits for managing, and even reversing, some signs of ageing, no single exercise modality can be recommended.
Exercising regularly at a moderate-to-vigorous intensity, and combining activities that challenge the heart, lungs, and muscles, assists in managing cardio-metabolic health [19-21]. Higher exercise intensities have generally been associated with larger improvements in body composition [22-25]. For instance, training at an intensity corresponding to 70% rather than 50% of maximal oxygen consumption leads to larger benefits for fat mass, total abdominal fat, and subcutaneous abdominal fat in middle-aged females [22]. Despite larger improvements in body composition, exercising at a higher intensity is accompanied by a higher injury risk [26] and reduced pleasure [27], likely preventing its wide application in sedentary populations. Furthermore, sedentary middle-aged and older adults often suffer from an increased prevalence of chronic musculoskeletal disorders (i.e., osteoarthritis or osteoporosis), in turn decreasing training adherence, especially for higher-intensity workouts [28]. Innovative approaches are warranted for effective body composition management in ageing populations beyond what has been achieved to date.
Hypoxia conditioning relates to active (i.e., during exercise) exposure to systemic (whole-body) and/or local (tissue) hypoxia, resulting in a decrease in arterial oxygen availability [29]. During the last decade, the popularity of training at low-to-moderate exercise intensity combined with systemic hypoxia (low oxygen conditions), to obtain equivalent or additional physiological and functional benefits compared with high-intensity exercise in normoxia (normal oxygen conditions), has grown in obese cohorts [30]. Several studies recruiting young, apparently healthy individuals have demonstrated that resistance training in hypoxia is effective for improving skeletal muscle fibre cross-sectional area, lean body mass, muscle strength, and exercise capacity [31-36]. To date, however, whether hypoxia conditioning leads to greater improvements in body composition than normoxia in middle-aged and older adults remains controversial. Training in hypoxia (inspired oxygen fraction [FiO2] = 15%) twice weekly for four weeks induced larger improvements in physical fitness and body composition than similar training in normoxia, with the additional advantage of lower sustained workloads during the actual training sessions in hypoxia [37]. In overweight and obese adults, training in hypoxia was also more effective at reducing abdominal fat than in normoxia [38, 39]. Conversely, others failed to report greater improvements in markers of body composition (i.e., BMI, total lean mass, and total fat mass) after training in oxygen-deprived conditions [35, 40-43].
Therefore, this meta-analysis aimed to determine if hypoxia conditioning, compared to similar training near sea level, maximizes body composition benefits in middle-aged and older adults.
Methods
This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Review and Meta-analyses (PRISMA) 2020 Statement [44].
Eligibility criteria
Each eligible study had to meet all of the following criteria for type of study, participants, interventions, and outcomes: (1) randomized controlled trial; (2) participants aged 40 years and older with no serious diseases; (3) a study design comparing the same exercise training protocol in hypoxia (simulated altitude ≥ 2000 m or FiO2 < 16.4%) and in normoxia; (4) reported outcomes of body composition (BMI, lean body mass, body fat percentage, fat-free mass and/or fat mass). Studies were excluded if at least one of the following criteria was met: (1) abstracts or reviews; (2) not written in English or Chinese; (3) non-randomized controlled trials; (4) participants aged under 40; (5) participants with severe diseases including cancer, cardiovascular (i.e., heart failure and coronary artery disease), neurological (i.e., Parkinson's disease, Alzheimer's disease, multiple sclerosis), and respiratory (i.e., end-stage chronic obstructive pulmonary disease, pulmonary fibrosis, or severe asthma) conditions; (6) blood flow restriction as the hypoxia intervention; (7) full text or data extraction unavailable; (8) no physical activity or exercise prescribed as part of the intervention protocol. Titles and abstracts of the initially retrieved studies were assessed by three independent researchers, and any disagreement was discussed until consensus was reached. Subsequently, the full text of the potentially eligible studies was independently evaluated by two researchers. Disagreements were solved initially via discussion between the two independent researchers, while a third researcher was consulted for dispute resolution.
Data extraction
Data were extracted independently by two researchers into a pre-designed spreadsheet, including: (1) basic information (i.e., authors, year of publication); (2) experimental design (i.e., frequency, duration, intervention); (3) participant characteristics (i.e., age, sex, BMI, and the number of participants); (4) outcome data suitable for analysis based on mean, standard deviation (SD), and sample size. The corresponding author of an included study was contacted directly if the original data were not reported or were incomplete. For studies that did not report pre-post change outcomes as "Mean ± SD", or that presented outcomes as "Mean ± SE/SEM" (standard error/standard error of the mean), calculations were conducted using the following standard formulas:

SD = SE × √n

Mean_change = Mean2 − Mean1

SD_change = √(SD1² + SD2² − 2 × R × SD1 × SD2)

where SD1, SD2 = standard deviations at pre and post; SE1, SE2 = standard errors at pre and post; Mean1, Mean2 = means at pre and post; and R = 0.4/0.5 is the assumed pre-post correlation. Values presented as figures were digitized using graph digitizer software (WebPlotDigitizer), and the means and SDs were measured manually at the pixel level against the scale provided in the figure.
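As an illustration, these conversions can be scripted. The following is a minimal sketch of the standard Cochrane-style calculations described above; the helper names and the example numbers are ours, not taken from the included studies.

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover an SD from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def mean_change(mean_pre: float, mean_post: float) -> float:
    """Pre-to-post change in means."""
    return mean_post - mean_pre

def sd_change(sd_pre: float, sd_post: float, r: float = 0.5) -> float:
    """Change-score SD under an assumed pre-post correlation r (0.4 or 0.5 here)."""
    return math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)

print(sd_from_se(0.5, 20))          # -> ~2.24
print(sd_change(2.1, 1.9, r=0.4))   # -> ~2.20
```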
Methodological quality assessment
The quality of each included study was determined by two researchers independently using the Cochrane Collaboration's tool for assessing risk of bias [45], followed by a cross-check of the results. The assessment was conducted according to the Cochrane handbook 5.1.0 across the following domains: (1) random sequence generation; (2) allocation concealment; (3) blinding of participants and personnel; (4) blinding of outcome assessment; (5) incomplete outcome data; (6) selective reporting; (7) other sources of bias. For each domain, a judgement was made as to whether studies were at 'low risk', 'high risk', or 'unclear risk'. Any disagreement between the two researchers on assessment results was solved initially via discussion, while a third researcher was consulted for dispute resolution.
Statistical analysis
All statistical analyses were conducted using Review Manager software (version 5.3, The Cochrane Collaboration, Oxford, UK). Heterogeneity between trials was assessed using the I² statistic and the Chi-squared test.
The Cochrane guidelines interpret I² statistics as follows: I² < 25%, low heterogeneity; I² between 25% and 75%, moderate heterogeneity; I² > 75%, high heterogeneity [46]. A random-effects model was chosen when I² > 25%; otherwise, a fixed-effects model was used. If the outcomes were continuous variables, the sample size, mean values, and SDs were extracted and used for statistical analyses. Mean difference (MD) with 95% confidence intervals (95%CI) or standardized mean difference (SMD) with 95%CI was used depending on whether the outcome data were reported in the same unit or not [45]. Significance was set at p < 0.05. The criteria used to interpret the magnitude of the effect size (Cohen's d) were as follows: < 0.2 = no effect; 0.20-0.49 = small effect; 0.50-0.79 = moderate effect; ≥ 0.80 = large effect. Small-study effects were explored using funnel plots of MD versus SE and by quantifying Egger's linear regression intercept [45]. A large and statistically significant Egger statistic indicates the presence of a small-study effect. Subgroup analyses were performed to investigate potential moderators, including age (middle-aged vs. older adults), hypoxia severity (with FiO2 = 15% as the cut-off for moderate hypoxia [47]), and hypoxia exposure duration per session (< 60 min vs. ≥ 60 min).
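For readers who wish to verify such pooled estimates, a fixed-effect inverse-variance pooled MD with Cochran's Q and I² can be computed as below. This is a minimal sketch with made-up study values; RevMan was the actual tool used here.

```python
import numpy as np

def pooled_md(md, se):
    """Inverse-variance fixed-effect pooled MD, its 95% CI, and I^2 heterogeneity."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                               # inverse-variance weights
    pooled = (w * md).sum() / w.sum()
    q = (w * (md - pooled) ** 2).sum()            # Cochran's Q
    df = len(md) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    half_width = 1.96 / np.sqrt(w.sum())          # 95% CI half-width
    return pooled, (pooled - half_width, pooled + half_width), i2

# illustrative study-level MDs and SEs (this toy example yields I^2 = 0%)
print(pooled_md([-0.8, -1.1, -0.9], [0.30, 0.25, 0.40]))
```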
Risk of bias of included studies
All included studies used randomization methods, while none of them reported the specific method of allocation concealment (evaluated as 'unclear risk' of bias). The blinding of participants and personnel was not described clearly in two studies and was evaluated as 'high risk' of bias in one study. Besides, ten studies did not report the blinding of outcome assessment ('unclear risk' of bias), while two studies were classified as 'low risk' of bias. Except for one study that was considered 'high risk' of bias for incomplete outcome data, all other studies were evaluated as 'low risk' of bias for the remaining three assessment domains (incomplete outcome data, selective reporting, and other bias). Given that complete blinding may not be feasible in exercise interventions, the overall risk of bias of the included studies was evaluated as 'low-to-moderate' (Fig. 2).
Discussion
This meta-analysis is the first to compare the effects of hypoxia conditioning with similar training in normoxia on body composition in middle-aged and older adults. A total of twelve studies with 335 participants, training with various exercise modalities (resistance exercise, aerobic exercise, whole-body vibration training, high-intensity interval training, and a combination of aerobic and resistance exercise), were selected. Overall, results showed that hypoxia conditioning induces larger improvements in body fat and BMI, but not in lean mass.
Effects of hypoxia conditioning on body fat
This meta-analysis demonstrated that, compared to normoxia, hypoxia conditioning is more effective for reducing body fat in adults aged between 40 and 80 years.
Our results for body fat are consistent with previous studies [37, 47, 48, 51] and are also in accordance with findings obtained in cohorts of different ages (16 to 40 years old) and in individuals with obesity [30, 54, 55]. One potential mechanism underlying the lower body fat reported with hypoxia conditioning relates to appetite reduction, potentially through a decrease in acylated ghrelin concentration associated with acute hypoxic exposure, thus restricting energy intake and contributing to energy deficit [56]. Energy deficit is the most important consideration for losing weight, regardless of the dietary approach being used [57]. It has been reported that hypoxia can modulate leptin levels and increase adipocytokines, metabolic rate, mitochondrial function, and fat oxidation [56, 58]. Fat oxidation can be enhanced by an upregulation in the activity of mitochondrial enzymes [59]. On the basis that hypoxia-inducible factor-1 expression induced by repeated hypoxic exposure can maximize the number and efficiency of mitochondria, exercise training in oxygen-deprived conditions is now considered a viable therapeutic approach for treating obesity [39, 47]. There were several differences between the included studies, including intervention design, dietary control, and age span, despite the relatively low heterogeneity. Firstly, one study [48] showed a much larger effect than the other investigations for reducing body fat. Hypobaric hypoxia was used in this study, whereas all the other studies included exercise sessions performed only in normobaric hypoxia. This difference in the nature of the hypoxic stimulus may be a source of the observed heterogeneity. Besides, the daily dietary and energy intake of participants should be strictly controlled, given that it might significantly influence the magnitude of fat loss [57]. None of the included studies reported daily energy intake during the intervention period, while six studies simply requested (i.e., did not verify) that participants maintain their normal diet. As a result, studies investigating the effect of hypoxia conditioning with strict control of daily energy intake may need to be conducted in order to evaluate any additional benefits of hypoxia exposure for reducing body fat.
Effects of hypoxia conditioning on body mass index
Hypoxia conditioning favoured a reduction in BMI compared to normoxia. Baseline BMI of the tested participants ranged from 23.9 to 33.1 kg/m², indicating that only overweight-to-obese individuals were recruited. Given that lean mass did not increase significantly in most of the included studies (see next section), and the overall effect calculated from this meta-analysis was not significant either, we can speculate that the BMI improvements are mainly due to a reduction in body fat. Among all investigations, the study by Nishiwaki et al. [48] is the only one showing a significant effect on BMI. Interestingly, its exercise program was conducted in hypobaric hypoxia (at a simulated altitude), whereas all the other included studies were performed in normobaric hypoxia. A higher metabolic rate, reduced food intake, and a rise in leptin levels caused by hypobaric hypoxia [58] could explain why the largest effects on both BMI and body fat were found in the only hypobaric hypoxia study [48]. Regardless, the validity of BMI as a body fat predictor continues to be debated, since individuals with high muscle mass may be incorrectly classified as obese based on their BMI [60]. However, participants included in this meta-analysis were inactive middle-aged and older adults with normal muscle mass. Therefore, larger improvements in BMI with hypoxic exposure should be considered a positive outcome for managing body composition.
Effects of hypoxia conditioning on lean mass
Hypoxia conditioning was not superior to equivalent normoxic training for modifying lean mass, and there were also no significant differences between middle-aged and older adults. These findings are inconsistent with previous observations made in younger adults [61-64]. These authors reported that resistance exercise at moderate intensity (70-85% 1RM) in hypoxia is more effective for improving lean body mass and fibre cross-sectional area than higher exercise intensities (> 85% 1RM) in normoxia.
The metabolic stress caused by both resistance exercise and hypoxia can lead to increased muscle fibre recruitment, elevated hormonal release, altered myokine production, and cellular swelling [62, 65], in turn explaining larger increases in force production capacity compared to normoxic interventions. In our meta-analysis, however, the superiority of hypoxia conditioning over normoxia for increasing muscle mass in middle-aged and older adults could not be verified.
Several suggestions can be made to explain why hypoxic exposure had no additional benefits on muscle mass. First, the hormonal response observed in younger adults might not occur, or might occur to a relatively low degree, in middle-aged and older adults. In fact, resistance training in hypoxia suppressed the growth hormone response to exercise in older adults, while other hormones and metabolic markers were unaffected both acutely and chronically by hypoxia [66]. Reportedly, acute erythropoietin expression after hypoxia exposure in young people was significantly higher than in older individuals [67]. Secondly, the exercise interventions reviewed here may not be optimally designed for muscle growth in middle-aged and older adults. Of all studies that considered lean mass as a main outcome, three used resistance exercise, four selected aerobic training, two performed whole-body vibration training, one used high-intensity interval training, and one combined aerobic and resistance exercises. Progressive overload is an effective strategy for inducing muscle hypertrophy with resistance exercise, but not necessarily with other forms of exercise [68, 69]. Consequently, additional hypoxia exposure might not be effective for inducing muscle hypertrophy with 'non-resistance' exercise modalities. Thirdly, given that sufficient energy and protein intake are required to maximize the hypertrophic response, the apparent lack of a significant effect in the included resistance training studies may be partly due to an uncontrolled diet [70]. Pending confirmatory research, a hypoxia-induced appetite reduction may also explain this observation. Future studies on resistance exercise in hypoxia should consider interventions designed specifically to induce a hypertrophic response (e.g., intensity, volume, time under tension, rating of perceived exertion (RPE)) [65], with strict control of energy and protein intake.
Subgroup analysis

Age
Subgroup analysis showed that the effects of exercise in hypoxia on body fat did not differ between middle-aged and older adults. In normal-weight and active subjects, ageing had no deleterious effect on cardiac and ventilatory responses to hypoxia [71]. As such, middle-aged and older adults, likely responding similarly to exercise in hypoxia, should achieve comparable benefits from hypoxia conditioning. Improved insulin sensitivity through hypoxic training is linked with a decrease in body fat in both obese and elderly individuals [47]. However, several factors could contribute to age-induced modifications of body composition, such as decreases in physical activity, energy production within the cell, and mitochondrial protein synthesis. The exact mechanisms responsible for the comparable effects of exercise in either hypoxia or normoxia for reducing body fat remain unknown.
Hypoxia severity and duration
Hypoxia severity ranged from 2000 to 4500 m of simulated altitude (FiO2 = 12.0-16.4%) in the included studies. In our subgroup analysis, we used an arbitrary value of FiO2 = 15% to delimit moderate (FiO2 > 15%) and more severe (FiO2 ≤ 15%) hypoxic levels [47]. A previous meta-analysis showed significant body weight and fat mass reductions induced by moderate altitude exposure combined with exercise [72]. Contrastingly, in our study, there was no significant effect favouring hypoxia conditioning for improving body fat in groups exposed to either moderate or more severe hypoxia. However, for body fat, a moderate effect size favouring hypoxia was obtained in the subgroup with less severe hypoxia levels (FiO2 > 15%). This observation may be due to the limited number of eligible studies. Besides, hypoxia exposure duration per session varied among the studies that used moderate hypoxia, which may explain why non-significant effects were reported. Within the moderate hypoxia subgroup, one study exposed participants to hypoxia for 120 min per session (with significantly larger effects) [48], while the other used only 45 min (with a smaller effect) [43]. Compared with normoxia, the effects of hypoxia conditioning on both body fat (moderate) and BMI (large) were significant in the subgroup with the longer exposure duration (≥ 60 min). Studies have reported that sufficient daily hypoxia exposure time and a prolonged training program duration significantly reduce body fat and body weight [72, 73], which is in agreement with our findings. Therefore, exercise in moderate hypoxia with an exposure duration of at least 60 min is recommended for improving body fat and BMI in middle-aged and older adults.
Limitations
This study is not without limitations. With most of the included studies published in the past five years, hypoxia conditioning can be considered a fast-developing research area. Because only a few studies have focused on the effects of exercise in hypoxia on body composition in middle-aged and older adults, the number of eligible studies was small. The heterogeneity of the protocols used, including resistance exercise, aerobic exercise, whole-body vibration training, high-intensity interval training, and a combination of aerobic and resistance exercise, poses a challenge when determining the effects of hypoxia conditioning. This challenge is further exacerbated by the limited number of investigations available for subgroup analysis. As such, we were unable to conduct subgroup analyses of the effects of different exercise modes, intensities, volumes, and frequencies, all known to modulate cardio-metabolic stimulation and, potentially, the efficacy of hypoxia conditioning.
The inclusion of studies on endurance exercise, which is not anticipated to increase lean mass regardless of oxygen conditions, in a meta-analysis focusing on changes in lean mass may introduce inaccuracies. Even within a given exercise modality, such as resistance exercise or high-intensity interval training, the effect of adding hypoxia is likely to differ depending on a combination of factors. These factors include exercise structure (i.e., exercise-to-rest ratio, exercise duration), the hypoxic dose (i.e., exposure duration and severity), and the background of the participants being tested [74]. It is common to observe variable results, with some individuals experiencing greater improvements in health markers with a specific form of training, a higher training dose, and/or hypoxia [75].
Another limitation was the utilization of an age limit of > 40 years to select studies that focused on middle-aged or older individuals. This criterion disregards the mean age of the population under investigation in each study, potentially resulting in the inclusion of individuals below 40 years old and introducing a potential source of bias. This is especially significant considering the SD reported for age by some studies (Table 1).
Further research considerations
In the absence of a well-accepted metric for defining the 'hypoxic dose' (severity and duration of the hypoxic stimulus), it remains challenging to directly compare literature findings. Recently, an index that integrates both the external (FiO2) and internal (arterial oxygen saturation, or SpO2) stimuli to characterize individual responses to normobaric hypoxia has been introduced (the so-called 'SpO2 to FiO2 ratio' [74]). This metric, based upon the magnitude of the stimulus (i.e., SpO2 as a reflection of the 'internal' physiological stimulation) rather than the altitude elevation alone (i.e., only representing the 'external' stress), and also considering the duration of hypoxia exposure, might be relevant for comparing studies [76]. Additionally, sufficient energy and protein intake are required to maximize the hypertrophic response in terms of the effects of an intervention on lean mass. However, none of the studies included in this meta-analysis were designed to specifically induce a maximal hypertrophic response (e.g., intensity, volume, time under tension, RPE) with strict control of energy and protein intake. In addition to modulating energy intake, exposure to hypoxia likely increases the body's reliance on carbohydrate as a fuel for substrate oxidation relative to normoxia [77]. While these considerations are important when designing interventions to lose weight and effectively manage body composition, diet was apparently not carefully controlled in most included studies. Future studies are required to delineate the isolated and combined effects of hypoxia and diet. Finally, the included studies did not account for potential differences between men and women, and it is plausible that a sex-related effect could have influenced the observed results.
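As a minimal illustration of how such an index behaves, the sketch below computes an SpO2/FiO2-style ratio for two participants at the same 'external' dose; the exact scaling conventions of the published index may differ, so treat this purely as a toy example.

```python
def spo2_fio2_ratio(spo2_percent: float, fio2_fraction: float) -> float:
    """Ratio of the 'internal' response (SpO2, %) to the 'external'
    stimulus (FiO2, expressed as a percentage); lower values indicate
    a stronger individual hypoxic stimulus."""
    return spo2_percent / (fio2_fraction * 100.0)

# Same external dose (FiO2 = 0.15), different internal responses:
print(spo2_fio2_ratio(88.0, 0.15))  # ~5.87 -> marked desaturation
print(spo2_fio2_ratio(94.0, 0.15))  # ~6.27 -> milder internal stimulus
```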
Conclusion
Hypoxia conditioning, compared to equivalent training in normoxia, induced greater body composition improvements in terms of body fat and BMI in middle-aged and older adults. Adding hypoxia exposure to exercise interventions is a viable therapeutic solution to effectively manage body composition in ageing populations. Our findings provide a valuable starting point for health professionals to explore innovative treatment options aimed at improving body composition in middle-aged and older adults. It is important to note that when prescribing hypoxic conditioning, there are multiple effective approaches, and no single modality can be universally recommended as the best for improving health outcomes. However, a guiding principle is to incorporate hypoxia conditioning sessions lasting at least 60 min within a moderate hypoxic environment (FiO2 > 15%) to support middle-aged and older adults in achieving improved health outcomes. It is crucial to consider contextual factors such as individual characteristics, exercise preferences, availability of time and resources, as well as access to hypoxicators and/or simulated altitude chambers when implementing these interventions.
Fig. 1 Flow diagram of study selection
Fig. 3 Meta-analysis of the effects of exercise in hypoxia versus normoxia on lean mass in middle-aged and older adults. 'a', 'b', 'c', 'd' represent different outcome indicators of lean mass reported in the same study. Filled green squares represent study-specific estimates, and the filled diamond represents the pooled random-effects estimate. CI confidence interval, SD standard deviation
Fig. 4 Meta-analysis of the effects of exercise in hypoxia versus normoxia on BMI in middle-aged and older adults. Filled green squares represent study-specific estimates, and the filled diamond represents the pooled random-effects estimate. CI confidence interval, SD standard deviation
Fig. 5 Hypoxia exposure severity subgroup analysis (BMI). Filled green squares represent study-specific estimates, and the filled diamond represents the pooled random-effects estimate. CI confidence interval, SD standard deviation
Fig. 9 Hypoxia exposure duration (mins per session) subgroup analysis (body fat). 'a', 'b', 'c', 'd' represent different outcome indicators of body fat reported in the same study. Filled green squares represent study-specific estimates, and the filled diamond represents the pooled random-effects estimate. CI confidence interval, SD standard deviation
Table 1
Studies investigating changes in body composition markers following exercise training in hypoxia compared to normoxia in middle-aged and older adults. M male, F female, NR not reported, HYP hypoxia, NOR normoxia, BMI body mass index, wks weeks, RM repetition maximum, RPE rating of perceived exertion, mins minutes, FiO2 fraction of inspired oxygen, SpO2 arterial oxygen saturation, Wpeak maximal workload, V̇O2peak peak oxygen uptake, V̇O2max maximal oxygen uptake, HRmax maximal heart rate, LM lean mass, FM fat mass, FM% fat mass percentage, LM% lean mass percentage, MM% muscle mass percentage, BF body fat | 2023-09-25T13:45:19.627Z | 2023-09-25T00:00:00.000 | {
"year": 2023,
"sha1": "d27265c687a5e6b94ce7dc651a88a18d14f7fc2b",
"oa_license": "CCBY",
"oa_url": "https://sportsmedicine-open.springeropen.com/counter/pdf/10.1186/s40798-023-00635-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3cbdc2f719c73de8b71d12aeee80bd481b5dc6bb",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211132391 | pes2o/s2orc | v3-fos-license | Understanding the Limitations of Conditional Generative Models
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to determine whether they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) detection of worst-case outliers in the form of adversarial examples; (2) detection of average-case outliers in the form of ambiguous inputs; and (3) detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification.
INTRODUCTION
Figure 1: Linear interpolations of inputs and respective outputs of a conditional generative model between two MNIST and CIFAR10 images from different classes. The x-axis is interpolation steps and the y-axis negative log-likelihood in bits/dim (lower is more likely under the model). MNIST interpolated images are far less likely than real images, whereas for CIFAR10 the opposite is observed, leading to high-confidence classification of ambiguous out-of-distribution images.
Conditional generative models have recently shown promise to overcome many limitations of their discriminative counterparts. They have been shown to be robust against adversarial attacks (Schott et al., 2019;Ghosh et al., 2019;Song et al., 2018;Li et al., 2018;Frosst et al., 2018), to enable robust classification in the presence of outliers (Nalisnick et al., 2019b) and to achieve promising results in semi-supervised learning (Kingma et al., 2014;Salimans et al., 2016). Motivated by these success stories, we study the properties of conditional generative models in more detail.
Unlike discriminative models, which can ignore class-irrelevant information, conditional generative models cannot discard any information in the input, potentially making it harder to fool them. In this work, we analyze conditional generative models by assessing them on a spectrum of robustness tasks: (1) detection of worst-case outliers in the form of adversarial examples; (2) detection of average-case outliers in the form of ambiguous inputs; and (3) detection of incorrectly labeled in-distribution inputs. If a generative classifier is able to perform well on all of these, it will naturally be robust to noisy, ambiguous or adversarially perturbed inputs.
Outlier detection in the above settings is substantially different from general out-of-distribution (OOD) detection, where the goal is to use unconditional generative models to detect any OOD input. For the general case, likelihood has been shown to be a poor detector of OOD samples. In fact, often higher likelihood is assigned to OOD data than to the training data itself (Nalisnick et al., 2019a). However, class-conditional likelihood necessarily needs to decrease towards the decision-boundary for the classifier to work well. Thus, if the class-conditional generative model has high accuracy, rejection of outliers from the wrong class via likelihood may be possible.
Our contributions are:
Provable Robustness We answer: Can we theoretically guarantee that a strong conditional generative model can robustly detect adversarially attacked inputs? In section 2 we show that even a near-perfect conditional generative model cannot be guaranteed to reject adversarially perturbed inputs with high probability.
Assessing the Likelihood Objective We discuss the basis to empirically analyze robustness in practice. We identify several fundamental issues with the maximum likelihood objective typically used to train conditional generative models and discuss whether it is appropriate for detecting out-of-distribution inputs.
Understanding Conflicting Results
We explore various properties of our trained conditional generative models and how they relate to the fact that the model is robust on MNIST but not on CIFAR10. We further propose a new dataset in which we combine MNIST digits with CIFAR backgrounds, making the generative task as hard as CIFAR while keeping the discriminative task as easy as MNIST, and investigate how it affects robustness.
CONFIDENT MISTAKES CANNOT BE RULED OUT
The most challenging task in robust classification is accurately classifying or detecting adversarial attacks; inputs which have been maliciously perturbed to fool the classifier. In this section we discuss the possibility of guaranteeing robustness to adversarial attacks via conditional generative models.
Detectability of Adversarial Examples
In the adversarial spheres work (Gilmer et al., 2018), the authors showed that a model can be fooled without changing the ground-truth probability of the attacked datapoint. This was claimed to show that adversarial examples can lie on the data manifold and therefore cannot be detected. While (Gilmer et al., 2018) is an important work for understanding adversarial attacks, it has several limitations with regard to conditional generative models. First, just because the attack does not change the ground-truth likelihood does not mean the model cannot detect the attack. Since the adversary needs to move the input to a location where the model is incorrect, the question arises: what kind of mistake will the model make? If the model assigns low likelihood to the correct class without increasing the likelihood of the other classes, then the adversarial attack will be detected, as the joint likelihood over all classes moves below the threshold of typical inputs. Second, on the adversarial spheres dataset (Gilmer et al., 2018) the class supports do not overlap. If we were to train a model of the joint density p θ (x, y) (which does not have 100% classification accuracy) then the KL divergence KL(p θ (x, y)||p(x, y)), where p(x, y) is the data density, is infinite due to division by zero (note that KL(p(x, y)||p θ (x, y)) is what is minimized with maximum likelihood). This poses the question of whether small KL(p(x, y)||p θ (x, y)) or small Jensen-Shannon divergence is sufficient to guarantee robustness. In the following, we show that this condition is insufficient.
Figure 2: Counter-example construction. Shown on the left are the two class data densities; on the right, the Bayes-optimal classifier for this problem (assuming λ1 > λ2) and the model we consider. Despite being almost optimal, the model can be fooled with undetectable adversarial examples (red arrows). Detailed description in section 2.
Why no Robustness Guarantee can be Given The intuition why conditional generative models should be robust is as follows: if we have a robust discriminative model, then the set of confident mistakes, i.e. where the adversarial attacks must reside, has low probability but might be large in volume. For a robust conditional generative model, the set of undetectable adversarial attacks, i.e. high-density high-confidence mistakes, has to be small in volume. Since the adversary has to be ∆-close to this small-volume set, the ∆-neighborhood around this small-volume set should still be small. This is where the idea breaks down due to the curse of dimensionality: expanding a set by a small radius can lead to a much larger one, even with smoothness assumptions. Based on this insight we build an analytic counter-example for which we can prove that even if

KL(p || q) ≤ ε, (1)

where p = p(x, y) is the data distribution and q = q(x, y) is the model, we can with probability ≈ 0.5 take a correctly classified input sampled from p and perturb it by at most ∆ to create an adversarial example that is classified incorrectly and is not detectable.
We note that the probability in every ball with radius ∆ can be made as small as desired, excluding degenerate cases. We also assume that the Bayes optimal classifier is confident and is not affected by the attack, i.e. we do not change the underlying class but wrongfully flip the decision of the classifier.
The counter-example goes as follows: let U(a, b) denote the density of a uniform distribution on an annulus in dimension d, {x ∈ R^d : a ≤ ||x|| ≤ b}. Each class-conditional data density is then a mixture of two such distributions, uniform on the unit sphere and uniform on an annulus, as shown in Fig. 2, with p(y = 0) = p(y = 1) = 1/2. The model distribution is constructed so that for y = 1 the model is perfect, while for y = 0 we replace the mixture with a uniform distribution over the whole domain. If λ1 >> λ2 then points in the sphere with radius 1 should be classified as class y = 0 with high likelihood. If λ2 >> 1/(1 + ∆)^d then the model classifies points in the unit sphere incorrectly with high likelihood. Finally, if 1 >> λ1 then almost half the data points will fall in the annulus between 1 and 1 + ∆ and can be adversarially attacked with distance less than or equal to ∆ by moving them into the unit sphere, as seen in Fig. 2. We also note that these attacks cannot be detected, as the model likelihood only increases. In high dimensions almost all the volume of a sphere is in the outer shell, and this can be used to show that in high enough dimensions we can satisfy the condition in Eq. 1 for any value of ε and ∆ (and also the confidence of the mistakes δ). The detailed proof is in the supplementary material.
This counter-example shows that even under very strong conditions, a good conditional generative model can be attacked. Therefore no theoretical guarantees can be given in the general case for these models. Our construction, however, does not depend on the learning model but on the data geometry. This raises interesting questions concerning the source of the susceptibility to attacks: Is it the model or an inherent issue with the data?
THE DIFFICULTY IN TRAINING CONDITIONAL GENERATIVE MODELS
Most recent publications on likelihood-based generative models primarily focus on quantitative results of unconditional density estimation (van den Oord et al., 2016; Kingma & Dhariwal, 2018; Salimans et al., 2017b; Kingma et al., 2016; Papamakarios et al., 2017). For conditional density estimation, either only qualitative samples are shown (Kingma & Dhariwal, 2018), or it is reported that conditional density estimation does not lead to better likelihoods than unconditional density estimation. In fact, it has been reported that conditional density estimation can lead to slightly worse data likelihoods (Papamakarios et al., 2017; Salimans et al., 2017b), which is surprising at first, as extra bits of important information are provided to the model.
Explaining Likelihood Behaviour One way to understand this seemingly contradictory relationship is to consider the objective we use to train our models. When we train a generative model with maximum likelihood (either exactly or through a lower bound) we are minimizing the empirical approximation of E_{x,y∼P}[−log(P_θ(x, y))], which is equivalent to minimizing KL(P(x, y)||P_θ(x, y)). Consider now an image x with a discrete label y, which we are trying to model using P_θ(x, y). The negative log-likelihood (NLL) objective decomposes as

E_{x,y∼P}[−log(P_θ(x, y))] = E_{x∼P}[−log(P_θ(x))] + E_{x,y∼P}[−log(P_θ(y|x))].

If we model P_θ(y|x) with a uniform distribution over classes, then the second term has a value of log(C), where C is the number of classes. This value is negligible compared to the first term E_{x∼P}[−log(P_θ(x))] and therefore the "penalty" for completely ignoring class information is negligible. So it is not surprising that models with strong generative abilities can have limited discriminative power. What makes matters even worse is that the penalty for confident mis-classification can be unbounded. This may also explain why the conditional ELBO is comparable to the unconditional ELBO (Papamakarios et al., 2017). Another way this can be seen is by thinking of the likelihood as the best lossless compression. When trying to encode an image, the benefit of the label is at most log(C) bits, which is small compared to the whole image. While these few bits are important for users, from a likelihood perspective the difference between the correct p(y|x) and a uniform distribution is negligible. This means that when naively training a class-conditional generative model by minimizing E_{(x,y)∼P}[−log(P_θ(x|y))], discriminative performance as a classifier is typically very poor.
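To make the bit-accounting concrete, the short sketch below compares the coding cost of a CIFAR10 image against the at-most-log2(10) bits contributed by its label; the 3.5 bits/dim figure is an assumed, ballpark value for a strong density model, used purely for illustration.

```python
import math

# Assumed ballpark: a strong CIFAR10 density model reaches ~3.5 bits/dim.
bits_per_dim = 3.5
n_dims = 3 * 32 * 32
image_bits = bits_per_dim * n_dims   # ~10,752 bits to encode the image x
label_bits = math.log2(10)           # at most ~3.32 bits to encode label y

print(f"image: {image_bits:.0f} bits, label: {label_bits:.2f} bits")
print(f"label share of the objective: {label_bits / (image_bits + label_bits):.4%}")
# ~0.03%: completely ignoring p(y|x) costs the MLE objective almost nothing.
```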
OUTLIER DETECTION
Another issue arises when models trained with maximum likelihood are used to detect outliers. The main issue is that maximum likelihood, which is equivalent to minimizing KL(P(x, y)||P_θ(x, y)), is known to have a "mode-covering" behavior. It has been shown recently in (Nalisnick et al., 2019a) that generative models trained using maximum likelihood can be quite poor at detecting out-of-distribution examples. In fact, it has been shown that these models can give higher likelihood values, on average, to datasets different from the test dataset that corresponds to the training data. Intuitively, one can still hope that a high-accuracy conditional generative model would recognize an input conditioned on the wrong class as an outlier, as it was successfully trained to separate these classes. In section 4.2 we show this is not the case in practice.
While (Nalisnick et al., 2019a) focuses its analysis on dataset variance, we propose that this is an inherent issue with the likelihood objective. If this is correct, then the way conditional generative models are trained is at odds with their desired behaviour, and useful conditional generative models will require a fundamentally different approach.
EXPERIMENTS
We now present a set of experiments designed to test the robustness of conditional generative models. All experiments were performed with a flow model where the likelihood can be computed in closed form as the probability of the latent space embedding (the prior) and a Jacobian correction term; see Sec. A.1 for a detailed explanation. Given that we can compute p(x, y) for each class, we can easily compute p(y|x) and classify accordingly. Besides allowing closed-form likelihood computation, the flexibility in choosing the prior distribution was important for conducting various experiments. In our work we used a version of the GLOW model; details of the models and training are in the supplementary material, sec. B. We note that the results are not unique to flow models, and we verified that a similar phenomenon can be seen when training with the PixelCNN++ autoregressive model (Salimans et al., 2017a) in sec. E.
TRAINING CONDITIONAL GENERATIVE MODELS
Here we investigate the ability to train a conditional generative model with good likelihood and accuracy simultaneously. Usually in flow models the prior distribution in latent space z is Gaussian. For classification we used a class-conditional mixture of 10 Gaussians, p(z|y) = N(µ_y, σ²_y). We compare three settings: 1) a class-conditional mixture of 10 Gaussians as the prior (Base); 2) a class-conditional mixture of 10 Gaussians trained with an additional classification loss term (Reweighted); 3) our proposed conditional split prior (Split). As we can see, especially on CIFAR10, pushing up the accuracy to values that are still far from state-of-the-art already results in non-negligible deterioration of the likelihood values. This exemplifies how obtaining strong classification accuracy without harming likelihood estimation is still a challenging problem. We note that while the difference between the split prior and re-weighted version is not huge, the split prior achieves better NLL and better accuracy in both experiments. We experimented with various other methods to improve training with limited success; see sec. C in the supplementary material for further information. Next we show that even conditional generative models which are strong classifiers do not see images with corrupted labels as outliers. To understand this phenomenon, we first note that if we want the correct class to have a probability of at least 1 − δ, then it is enough for the corresponding logit to be larger than all the others by log(C) + log((1 − δ)/δ), where C is the number of classes. For C = 10 and δ = 1e−5 this is about 6, which is negligible relative to the likelihood of the image, which is on the scale of thousands. This means that even for a strong conditional generative model which confidently predicts the correct label, the pair {x_i, y_w} with y_w ≠ y_i (where w is the leading incorrect class) cannot be detected as an outlier according to the joint distribution, as the gap log(p(x_i|y_i)) − log(p(x_i|y_w)) is much smaller than the variation in likelihood values. In Fig. 3 we show this by plotting the histograms of the likelihood conditioned both on the correct class and on the most likely wrong class over the test set. In other words, in order for log(p(x_i|y_w)) to be considered an outlier, the prediction would need to be extremely confident, much more than we expect it to be considering the test classification error.
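The required logit gap can be computed directly. The snippet below evaluates it in both natural-log and base-10 units (the "about 6" figure above matches base-10 units; in nats the gap is ~13.8, still tiny against image log-likelihoods in the thousands):

```python
import math

def required_logit_gap(n_classes: int, delta: float, base: float = math.e) -> float:
    """Gap log(C) + log((1 - delta)/delta) that suffices for the correct
    class to receive posterior probability at least 1 - delta."""
    return (math.log(n_classes) + math.log((1.0 - delta) / delta)) / math.log(base)

print(required_logit_gap(10, 1e-5))              # ~13.8 nats
print(required_logit_gap(10, 1e-5, base=10.0))   # ~6.0 base-10 log units
# Either way, orders of magnitude below typical |log p(x)| values.
```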
ADVERSARIAL ATTACKS AS WORST CASE ANALYSIS
We first evaluate the ability of conditional generative models to detect standard attacks, and then try to detect attacks designed to fool the detector (the likelihood function). We evaluate both the gradient-based Carlini-Wagner L2 attack (CW-L2) (Carlini & Wagner, 2017b) and the gradient-free boundary attack (Brendel et al., 2018). Results are shown in table 2 on the left. It is interesting to observe the disparity between the CW-L2 attack, which is easily detectable, and the boundary attack, which is much harder to detect.
Attacking Classification and Detection Next we modify our attacks to try to fool the detector as well. With the CW-L2 attack we follow the modification suggested in (Carlini & Wagner, 2017a) and add an extra loss term that additionally optimizes the adversarial example to evade the likelihood-based detector. For the boundary attack we turn the C-way classification into a (C + 1)-way classification by adding another class, "non-image", and classify any image above the detection threshold as such. We then use a targeted attack to try to fool the network into classifying the image as a specific original class. This simple modification to the boundary attack will typically fail because it cannot initialize: the standard attack starts from a random image, and all random images are easily detected as "non-image" and therefore do not have the right target class. To address this we start from a randomly chosen image from the target class, ensuring the original image is detected as a real image from the desired class. From table 2 (right side) we can see that even after the modification, CW-L2 still struggles to fool the detector. The boundary attack, however, succeeds completely on CIFAR10 and fails completely on MNIST, even when it managed to sometimes fool the detector without directly trying. We hypothesize that this is because the area between two images of separate classes, through which the boundary attack needs to pass, is correctly detected as out of distribution only for MNIST and not CIFAR10. We explore this further below.
AMBIGUOUS INPUTS AS AVERAGE CASE ANALYSIS
To understand why the learned networks are easily attacked on CIFAR but not on MNIST with the modified boundary attack, we explore the probability density of interpolations between two real images. This is inspired by the fact that the boundary attack proceeds along the line between the attacked image and the initial image. The minimum we would expect from a decent generative model is to detect the intermediate images as "non-image" with low likelihood. If this were the case and each class were a disconnected high-likelihood region, the boundary attack would have a difficult time when starting from an image of a different class.
Given images x_0 and x_1 from separate classes y_0 and y_1, and for α ∈ [0, 1], we generate an intermediate image x_α = α·x_1 + (1 − α)·x_0, and run the model on various α values to see the model prediction along the line. For endpoints we sample real images that are classified correctly and are above the detection threshold used previously. See Fig. 1 for interpolation examples from MNIST and CIFAR.
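The probing procedure is simple to reproduce. Below is a minimal sketch; `nll_fn` is a placeholder for the trained model's density evaluation, and the thresholding shown in the comment is our reading of the detector described above.

```python
import numpy as np

def interpolation_nll(x0: np.ndarray, x1: np.ndarray, nll_fn, n_steps: int = 100):
    """Evaluate model NLL along the pixel-space line between two images.

    nll_fn: callable mapping an image to its negative log-likelihood
    (e.g. in bits/dim); stands in for the trained conditional flow.
    """
    alphas = np.linspace(0.0, 1.0, n_steps)
    nlls = np.array([nll_fn(a * x1 + (1.0 - a) * x0) for a in alphas])
    return alphas, nlls

# Example usage (hypothetical model and threshold):
# alphas, nlls = interpolation_nll(img0, img1, model_bits_per_dim)
# frac_flagged = np.mean(nlls > detection_threshold)
```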
In figure 4 (a) we see the average results for MNIST over 1487 randomly selected pairs. As expected, the likelihood goes down as α moves away from the real images x_0 and x_1. We also see the probability of both classes drop rapidly as the network predictions become less confident on the intermediate images.
Sampling 100 α values uniformly in the range [0, 1], we can also investigate how many of the intermediate images are flagged by the detector. On CIFAR images, using 1179 pairs, we get a very different picture (see fig. 4 (b)). Not only does the intermediate likelihood not drop, it is even higher on average than on the real images, albeit to a small degree. In classification we also see a very smooth transition between classes, unlike the sharp drop in the MNIST experiment. Lastly, 100% of the interpolated images lay above the detection threshold and none are detected as a "non-image" (for reference, the detection threshold has 78.6% recall on real CIFAR10 test images). This shows that even with good likelihood and reasonable accuracy, the model still "mashes" the classes together, as one can move from one Gaussian to another without passing through low-likelihood regions in between. It also clarifies why the boundary attack is so successful on CIFAR but fails completely on MNIST. We note that the basic attack on MNIST is allowed to pass through these low-density areas, which is why it sometimes succeeds.
4.5 CLASS-UNRELATED ENTROPY IS TO BLAME
In this section, we show that the difference in performance between CIFAR10 and MNIST can largely be attributed to how the entropy in the datasets is distributed, i.e., how much the uncertainty in the data distribution is reduced after conditioning on the class label. For MNIST digits, a large source of uncertainty in pixel-space comes from the class label. Given the class, most pixels can be predicted accurately by simply taking the mean of the training set in each class. This is exactly why a linear classifier performs well on MNIST. Conversely, on CIFAR10 considerable uncertainty remains after conditioning on the class label. Given that the class is "cat," there still exist many complicated sources of uncertainty, such as where the cat is and how it is posed. In this dataset, a much larger fraction of the uncertainty is not accounted for after conditioning on the label. This is not a function of the domain or the dimensionality of the dataset; it is a function of the dataset itself.
To empirically verify this, we have designed a dataset which replicates the challenges of CIFAR10 and places them onto a problem of the same discriminative difficulty as MNIST. To achieve this, we simply replaced the black backgrounds of MNIST images with randomly sampled (downsampled and greyscaled) images from CIFAR10. In this dataset, which we call background-MNIST (BG-MNIST), the classification problem is identically predictable from the same set of pixels as in standard MNIST but modeling the data density is much more challenging.
To further control the entropy in a fine-grained manner, we convolve the background with a Gaussian blur filter with various bandwidths to remove varying degrees of high-frequency information. With high blur, the task begins to resemble standard MNIST and conditional generative models should perform as they do on MNIST. With low and no blur we expect them to behave as they do on CIFAR10. Table 3 summarizes the performance of conditional generative models on BG-MNIST. We train models with a "Reweighted" discriminative objective as in Appendix A.2. The reweighting allows them to perform well as classifiers, but the likelihood of their generative component falls below CIFAR10 levels. More strikingly, when we now interpolate between datapoints we observe behavior identical to our CIFAR10 models. This can be seen in Figure 6. Thus, we have created a dataset with the discriminative difficulty of MNIST and the generative difficulty of CIFAR10.
RELATED WORK
One common belief is that adversarial attacks succeed by moving the data points off the data manifold, and therefore can possibly be detected by a generative model, which should assign them low likelihood values. Although this view has been challenged in (Gilmer et al., 2018), we now discuss how their setting needs to be extended to fully study robustness guarantees of conditional generative models.
Recent work (Song et al., 2018; Frosst et al., 2018; Li et al., 2018) showed that a generative model can detect and defend against adversarial attacks. However, there is a caveat when evaluating detectability of adversarial attacks: the attacker needs to be able to attack the detection algorithm as well. Not doing so has been shown to lead to drastically false robustness claims (Carlini & Wagner, 2017a). In (Li et al., 2018) the authors report difficulties training a high-accuracy conditional generative model on CIFAR10, and resort to evaluation on a 2-class classification problem derived from CIFAR10. While they do show robustness similar to our Carlini-Wagner results, they do not apply the boundary attack, which we found to break our models on CIFAR10. This highlights the need to utilize a diverse set of attacks. In (Schott et al., 2019) a generative model was used not just for adversarial detection but also for robust classification on MNIST, leading to state-of-the-art robust classification accuracy. The method was only shown to work on MNIST, and is very slow at inference time. However, overall it provides an existence proof that conditional generative models can be very robust in practice. In (Ghosh et al., 2019) the authors also use generative models for detection and classification, but only show results with the relatively weak FGSM attack, and on simple datasets. As we see in Fig. 1 and discuss in section 4, generative models trained on MNIST can display very different behavior than similar models trained on more challenging data like CIFAR10. This shows how success on MNIST may often not translate to success on other datasets.
CONCLUSION
In this work we explored limitations, both in theory and in practice, of using conditional generative models to detect adversarial attacks. Most practical issues arise from likelihood, the standard training objective and evaluation metric for generative models. We conclude that likelihood-based density modeling and robust classification may fundamentally be at odds with one another, as important aspects of the problem are not captured by this training and evaluation metric. This has wide-reaching implications for applications like out-of-distribution detection, adversarial robustness and generalization, as well as semi-supervised learning with these models.
A.1 LIKELIHOOD-BASED GENERATIVE MODELS AS GENERATIVE CLASSIFIERS
We present a brief overview of flow-based deep generative models, conditional generative models, and their applications to adversarial example detection.
Flow-based models define an invertible mapping from the input x = z_0 through a sequence z_i = f_i(z_{i−1}), i = 1, ..., N, where z_N has a known simple distribution, e.g. Gaussian, and all f_i are parametric functions for which the determinant of the Jacobian can be computed efficiently. Using the change of variables formula we have log(p(x)) = log(p(z_N)) + Σ_{i=1}^{N} log(|det(J_i(z_{i−1}))|). The standard way to parameterize such functions f_i is by splitting the input z_{i−1} into two parts, z_{i−1} = (z¹_{i−1}, z²_{i−1}), and choosing z_i = (z¹_{i−1}, z²_{i−1} ⊙ s(z¹_{i−1}) + t(z¹_{i−1})), which is invertible as long as s(z¹_{i−1})_j = s_j ≠ 0, and we have log(|det(J_i(z_{i−1}))|) = Σ_j log(|s_j|). For images the splitting is normally done in the channel dimension. These models are then trained by maximizing the empirical log-likelihood (MLE).
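The coupling mechanics can be written in a few lines. The sketch below uses toy stand-in networks for s(·) and t(·) (both assumed, for illustration); the log-determinant is exactly the sum of log|s_j| as stated above.

```python
import numpy as np

def affine_coupling_forward(z, scale_net, shift_net):
    """One affine coupling step: keep the first half of z, affinely
    transform the second half conditioned on the first; also return
    the exact log|det J| contribution, sum_j log|s_j|."""
    z1, z2 = np.split(z, 2)
    s = scale_net(z1)                 # elementwise scales, must be nonzero
    t = shift_net(z1)
    z_out = np.concatenate([z1, z2 * s + t])
    log_det = np.sum(np.log(np.abs(s)))
    return z_out, log_det

rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
scale_net = lambda h: np.exp(np.tanh(W_s @ h))  # strictly positive scales
shift_net = lambda h: W_t @ h

z_out, log_det = affine_coupling_forward(rng.normal(size=8), scale_net, shift_net)
print(z_out.shape, round(float(log_det), 3))
```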
A straightforward way to turn this generative model into a conditional generative model is to make p(z_N) a Gaussian mixture model (GMM) with one Gaussian per class, i.e. p(z_N|y) = N(µ_y, Σ_y). Assuming p(y) is known, maximizing log(p(x, y)) is then equivalent to maximizing log(p(x|y)) = log(p(z_N|y)) + Σ_{i=1}^{N} log(|det(J_i(z_{i−1}))|). At inference time, one can classify by simply using Bayes' rule. Note that directly optimizing log(p(x|y)) results in poor classification accuracy, as discussed in section 3. This issue was also addressed in the recent hybrid model work of Nalisnick et al. (2019a).
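Concretely, once per-class log-likelihoods are available, the Bayes-rule classification step looks as follows. The numbers are invented for illustration, chosen to echo the earlier point that the winning class leads by only a few log units while the totals are in the thousands.

```python
import numpy as np
from scipy.special import logsumexp

def classify(log_px_given_y: np.ndarray, log_py: np.ndarray):
    """Turn class-conditional log-likelihoods log p(x|y) into the
    posterior log p(y|x) via Bayes' rule."""
    log_joint = log_px_given_y + log_py          # log p(x, y)
    log_post = log_joint - logsumexp(log_joint)  # log p(y|x)
    return int(np.argmax(log_post)), log_post

log_px_given_y = np.array([-3452.0, -3466.0, -3460.0])  # hypothetical, nats
log_py = np.full(3, np.log(1.0 / 3.0))
pred, log_post = classify(log_px_given_y, log_py)
print(pred, np.exp(log_post).round(6))   # class 0 wins with p ~ 0.9997
```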
We will now describe some of the various approaches we investigated in order to train the best possible flow-based conditional generative models, achieving a better trade-off between classification accuracy and data likelihood compared to commonly used approaches. We also discuss some failed approaches in the appendix.
A.2 REWEIGHTING
The most basic approach, which has been used before in various works, is to reweight the discriminative part in eq. (4). While this can produce good accuracy, it can have an unfavorable trade-off with the NLL where good accuracy comes with severely sub-optimal NLL. This tradeoff has also been shown in Nalisnick et al. (2019a) where they train a somewhat similar model but classify with a generalized linear model instead of a Gaussian mixture model.
A.3 ARCHITECTURE CHANGE
Padding channels has been shown to increase accuracy in invertible networks (Jacobsen et al., 2018; Behrmann et al., 2019). This helps ameliorate a basic limitation of bijective mappings (see Eq. (5)) by allowing the number of channels to be increased as a pre-processing step. Unlike the discriminative i-RevNet, we cannot just pad zeros, as that would not yield a continuous density. Instead we pad channels with uniform(0,1) random noise. In effect we do not model MNIST and CIFAR10 as is typically done in the literature, but rather the padded versions of those datasets. While the ground-truth likelihoods for the padded and un-padded datapoints are the same, due to the independence of the uniform noise and its unit density, this is not guaranteed to be captured by the model, making likelihoods very similar but not exactly comparable with the literature. This is not an issue for us, as we only compare models on the padded datasets.
A.4 SPLIT PRIOR
One reason MLE is bad at capturing the label is that a small number of dimensions has a small effect on the NLL. Fortunately, we can use this property to our advantage. As the contribution of the conditional class information is negligible for the data log-likelihood, we choose to model it in its own distinct subspace, as proposed by Jacobsen et al. (2019). Thus, we partition the hidden dimensions z = (z_s, z_n) and only enforce the low-dimensional z_s to act as the logits. This has two advantages: 1) we do not enforce class-conditional dimensions to be factorial; and 2) we can explicitly up-weight the loss on this subspace and treat it as standard logits of a discriminative model. A similar approach is also used by semi-supervised VAEs (Kingma et al., 2014). This lets us jointly optimize the data log-likelihood alongside a classification objective without requiring most of the dimensions to be discriminative. Using the factorization p(z_s, z_n|y) = p(z_s|y) · p(z_n|z_s, y), we model p(z_s|y) as a Gaussian with class-conditional mean e_y = (0, ..., 0, 1, 0, ..., 0) and a covariance matrix scaled by a constant. The distribution p(z_n|z_s, y) is modeled as a Gaussian whose mean and variances are functions of y and z_s.
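A minimal sketch of the split-prior log-density under these modeling choices is given below; the stand-in `mean_fn`/`logvar_fn` are hypothetical placeholders for the learned conditional parameters of p(z_n | z_s, y).

```python
import numpy as np

def split_prior_logprob(z_s, z_n, y, n_classes, sigma_s=1.0,
                        mean_fn=None, logvar_fn=None):
    """log p(z_s, z_n | y) = log p(z_s | y) + log p(z_n | z_s, y), with
    p(z_s|y) Gaussian centred on the one-hot vector e_y."""
    e_y = np.eye(n_classes)[y]
    log_p_zs = -0.5 * np.sum((z_s - e_y) ** 2 / sigma_s ** 2
                             + np.log(2.0 * np.pi * sigma_s ** 2))
    mu = mean_fn(z_s, y) if mean_fn else np.zeros_like(z_n)
    logvar = logvar_fn(z_s, y) if logvar_fn else np.zeros_like(z_n)
    log_p_zn = -0.5 * np.sum((z_n - mu) ** 2 / np.exp(logvar)
                             + logvar + np.log(2.0 * np.pi))
    return log_p_zs + log_p_zn

print(split_prior_logprob(np.zeros(10), np.zeros(32), y=3, n_classes=10))
```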
B IMPLEMENTATION DETAILS
We pad MNIST with zeros so both datasets are 32x32 and subtract 0.5 from both datasets to have a [-0.5,0.5] range. For data augmentation we do pytorch's random crop with a padding of 4 and 'edge' padding mode, and random horizontal flip for CIFAR10 only.
The model is based on GLOW with 4 levels, affine coupling layers, 1x1 convolution permutations and actnorm in a multi-scale architecture. We choose 128 channels and 12 blocks per level for MNIST and 256 channels and 16 blocks for CIFAR10. In both MNIST and CIFAR10 experiments we double the number of channels with uniform(0,1) noise, which we scale down to the range [0, 2/256] (taking it into account in the Jacobian term). One major difference is that we do the squeeze operation at the end of each level instead of the beginning, which is what allows us to use 4 levels. This is possible because with the added channels the number of channels is even and the standard splitting is possible before the squeeze operation.
The models are optimized using Adam for 150 epochs. The initial learning rate is 1e−3, decayed by a factor of 10 every 60 epochs. For the reweighted optimization the objective is loss = −log(p(x|y))/D − log(p(y|x)), where D is the data dimension (3x32x32 for CIFAR10, 1x32x32 for MNIST).
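For reference, the reweighted objective is a one-liner; the per-example values below are invented and only show that dividing the generative term by D puts the two terms on a comparable scale.

```python
def reweighted_loss(log_px_given_y: float, log_py_given_x: float, data_dim: int) -> float:
    """loss = -log p(x|y)/D - log p(y|x), with D the data dimension
    (3*32*32 for CIFAR10, 1*32*32 for MNIST)."""
    return -log_px_given_y / data_dim - log_py_given_x

# Hypothetical per-example values (nats):
print(reweighted_loss(log_px_given_y=-7500.0,
                      log_py_given_x=-0.05,
                      data_dim=3 * 32 * 32))   # ~2.49
```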
For adversarial detection we use a threshold of 1.4 for MNIST (100% of test data are below the threshold) and 4.0 for CIFAR10 (78.6% of test images are below the threshold).
C NEGATIVE RESULTS
In this work we explored many ideas in order to achieve a better trade-off between accuracy and likelihood; the following approaches had little or no impact.
C.1 ROBUST PRIORS
Since the Gaussian prior is very sensitive to outliers, one idea was that confident misclassifications carry a strong penalty, which might result in "mashing" all the classes together. A solution would be to replace the Gaussian with a more robust prior, e.g. Laplace or Cauchy. Another idea we explored is a mixture of Gaussian and Laplace or Cauchy using the same location parameter. In our experiments we did not see any significant difference from the Gaussian prior.
C.2 LABEL SMOOTHING
Another approach to address the same issue is a version of label smoothing. In this model the Gaussian clusters are a latent variable that is equal to the real label with probability 1 − ε and uniform on the other labels with probability ε. Using this bounds the error for confident misclassification as long as the data is close to one of the Gaussian centers.
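A sketch of this smoothed prior is below; the cluster centres are toy values and ε = 0.05 is an arbitrary choice, since the text does not specify one.

```python
import numpy as np
from scipy.stats import multivariate_normal

def smoothed_prior_logprob(z, y, means, eps=0.05):
    """log p(z | y) when the latent cluster equals the observed label
    with prob. 1 - eps and is uniform over the other labels with prob.
    eps; this bounds the penalty for confident misclassifications."""
    C = len(means)
    log_comp = np.array([multivariate_normal.logpdf(z, mean=m) for m in means])
    w = np.full(C, eps / (C - 1))
    w[y] = 1.0 - eps
    a = np.log(w) + log_comp                 # log of weighted components
    m = a.max()
    return m + np.log(np.exp(a - m).sum())   # stable log-sum-exp

means = 5.0 * np.eye(10)                     # toy cluster centres
print(smoothed_prior_logprob(means[2], y=7, eps=0.05))
```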
C.3 FLOW-GAN
As we claimed the main issue is with the MLE objective, a seemingly better objective is to optimize KL(p_θ(x, y)||p(x, y)) or the Jensen-Shannon divergence, as this KL term heavily penalizes misclassification. It is also more natural when considering robustness against adversarial attacks. Optimizing this directly is hard, but generative adversarial networks (GANs) (Goodfellow et al., 2014) should in theory also optimize this objective. Simply training a GAN would not work, as we are interested in the likelihood value for adversarial detection, and GANs only let you sample; they do not give you any information regarding an input image.
Since flow models are bijective, we could combine the two objectives, as was done in the flow-GAN paper (Grover et al., 2018). We trained this approach with various conditional-GAN alternatives and found it very hard to train. GANs are known to be unstable to train, and combining them with the unstable flow generator is problematic.
D DETAILED PROOF
2. y_q(x̃) ≠ y(x̃), while y(x̃) = y(x), where x̃ is the perturbed input. We change the model's prediction without changing the ground-truth label.
5. The density q(x̃) is greater than or equal to the median density, making the attack undetectable by observing q(x).
6. For ∆ < 1 the probability in any ∆-radius ball can be made as small as desired.
7. The total variation of the distribution can be made as small as desired.
The last two conditions exclude degenerate trivial counter-examples: condition 6 excludes the case where the whole distribution support lies in a ∆-radius ball, ensuring that ∆ does indeed represent a small perturbation; condition 7 excludes "pathological" distributions, e.g. misclassification on a dense zero-measure set like the rationals.
This boils down to ensuring d is large enough so that a valid λ_1 exists; the required inequality holds for large enough d, as the l.h.s. decays exponentially while the r.h.s. decays only linearly.
Condition 6 is trivial, as the radius of the support is fixed, so as long as ∆ < 1 the probability in any ∆-radius ball decays exponentially. Regarding total variation, we note that by the divergence theorem it can be bounded by a term that depends on the surface area of spheres with fixed radius, which decreases to zero as d goes to infinity.
E PIXELCNN++
We trained a conditional PixelCNN++ where, instead of predicting each new pixel using a mixture of 10 components, we use one mixture component per class. Using reweighting, we train with the objective −log(p(x|y))/dim − α·log(p(y|x)). As one can see from table 4, standard training, i.e. α = 0, results in very poor accuracy, while reweighting the classification score results in much better accuracy but worse NLL. | 2019-06-04T02:56:14.000Z | 2019-06-04T00:00:00.000 | {
"year": 2019,
"sha1": "47f258ce89e17088121702cc085db47669c72130",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2cfb6271a5b9e47951ccd9a79ddd74b1cf457fef",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
240678013 | pes2o/s2orc | v3-fos-license | Respiratory aspects of Covid-19 in children: what pediatricians need to know
This is an article prepared collectively by members of the Scientific Department of Pulmonology of the Brazilian Society of Pediatrics (BSP) on the respiratory aspects of COVID-19 in childhood, considering the clinical and diagnostic peculiarities of this age group and discussing the imaging methods that can assist in this process. Pulmonary involvement in the disease is notorious and occurs in varying degrees of severity. Although less frequently than in adults, children and adolescents can also develop severe conditions. Imaging exams are part of the investigation of the patient with COVID-19, since they can assist in the initial diagnosis and in the assessment of the evolution and prognosis of the disease. The indications for chest X-ray and computed tomography (CT) and their most relevant characteristics are presented. Most studies emphasize that the radiological findings in children are similar to those found in adults, but with lower frequency, intensity and extension. Recently, however, authors who studied 34 children with COVID-19 in China reported that irregular, high-density opacities were common, while the ground-glass pattern, typical in adults, was rarely seen on CT scans. Chest X-ray is less sensitive for identifying alterations, with CT being the best imaging test to visualize SARS-CoV-2 lesions; however, it must be ordered with precise indications, as it alone is not sufficient for diagnosis. Monitoring of cases through oximetry is also discussed. After a vast literature review, it is concluded that the severity of cases must be assessed by clinical signs, imaging exams and oximetry, among others, always in combination.
INTRODUCTION
In December 2019, China informed the World Health Organization (WHO) that an outbreak of pneumonia of unknown origin had occurred in the city of Wuhan, Hubei province. On January 7, 2020, a new type of coronavirus was identified as the etiologic agent of these pneumonias 1,2 . On 02/11/2020, this disease was named COVID-19 (Coronavirus disease 2019) and on the same day the WHO named the virus SARS-CoV-2. On 01/20/2020, the first case involving a child was reported, in the city of Shenzhen, also in China 3 . Since then, the disease has spread throughout the world, and was classified as a pandemic by the WHO on 11/03/2020 2 .
SARS-CoV-2 is the seventh virus identified in the Coronavirus family, with a single-stranded RNA, being common to different species, including humans. It probably originated from bats due to the similarity with the virus that affects this species 4 .
Transmission
The most important form of virus transmission is through respiratory secretions (droplets and aerosols containing viruses), released through speech, breathing, coughing and sneezing. These secretions can infect other people up to 2 meters away, and are the main route of spread of the disease 4,5 . Sick individuals are the main source of contagion, but people who are asymptomatic or who are still within the incubation period are also potential transmitters 4,5 . Within this context, although children present milder symptoms than adults, they can still act as contaminants and sources of disease spread, due to the nature of their contacts and their poor adherence to respiratory etiquette 1,2 .
The most severe cases have a higher viral load and are significant sources of infection, which explains the high number of healthcare professionals getting sick and makes nosocomial transmission an important point of concern in the epidemiology of the disease 4 .
Another form of contamination is direct contact with secretions taken to the mouth, nose or eyes by contaminated hands that have had contact with surfaces containing the virus 4 . SARS-CoV-2 can survive for different periods on different surfaces, with contamination of some of them, such as door handles, sinks, protective equipment, bathroom towels, among others, being common. The virus survives longer on surfaces at lower temperatures. There is also the possibility of contamination by air vents, such as air conditioning devices 4 .
Vertical transmission has been reported in the literature, but in small numbers, as case reports, especially during the last weeks of pregnancy, with possible virulence in the newborn and neurological manifestations 6,7 . Yagnin 3 reported a series of 10 cases of mothers with positive RT-PCR for COVID-19, but none of the newborns tested positive soon after birth. To date, there is no evidence of the presence of SARS-CoV-2 in breast milk 7 .
SARS-CoV-2 has already been found in the urine and feces of infected children, which is yet another epidemiological concern. It was isolated in feces weeks after diagnosis, suggesting contamination by this route for a prolonged period 1 .
Pathogenesis
After transmission, the virus is deposited along the respiratory tract 5 . It uses Angiotensin-Converting Enzyme 2 (ACE-2) to penetrate the cell, where it can replicate 8 .
In the initial phase, the virus causes local symptoms of the upper airways and general symptoms such as adynamia, myalgia and fever. This phase is contagious, and the disease can resolve at this stage, which occurs in around 80% of those infected 4 . About seven days after this period, the pulmonary phase may occur, with infiltration and proliferation of the virus in the lungs, causing pneumonia with vasodilation, increased endothelial permeability, leukocyte recruitment and lung injury with hypoxia. There may be associated cardiopulmonary stress 4 . In the third stage (inflammatory phase) there is systemic inflammation, called the cytokine storm, in which Interleukin-6 (IL-6), activated by leukocytes, plays a central role, acting on a large number of cells and triggering this response. There is an increase in serum levels of ferritin, interleukins and C-reactive protein. This inflammatory process can affect other organs: cardiac injury with myocarditis, cardiac muscle contraction deficit, and liver injury with elevated transaminases are most frequently described 4 . Another impairment is disseminated intravascular coagulation, with an increased risk of thromboembolism in the lungs or other regions 4 .
Children have had milder symptoms and signs compared to adults, especially the elderly and adults with comorbidities, such as arterial hypertension, heart disease and diabetes 1,2 . The causes of these milder signs and symptoms are still unknown. Some hypotheses are: children tend not to have the dysregulation of the immune system that occurs in adults, since they maintain normal lymphocyte counts (reduced in 3.5% of cases, against 70% of adults), as well as normal C-reactive protein, D-dimer and liver function 4,8 . Another hypothesis is that children have lower expression of ACE-2, which makes it difficult for the virus to enter the cell cytoplasm 8 . Finally, they present significant production of antibodies against other viruses, which would somehow act against SARS-CoV-2 4 .
Clinical status: how to assess the respiratory disease?
Based on current data, children have mild disease in about 80% of reported cases. In those with comorbidities, however, the disease may have a severe course, progressing to severe acute respiratory syndrome (SARS) and multiple organ dysfunction. The main symptoms are similar to those of common viral diseases frequently found among children attending schools or daycare centers 9 .
The clinical spectrum of COVID-19 in children varies from asymptomatic to severe acute respiratory distress (Table 1) 10 . Dong et al. 11 published a pediatric series involving 2,143 pediatric patients registered in the China Center for Disease Control and Prevention database: laboratory-confirmed cases corresponded to 34%, and 66% were characterized as suspected. The median age (IQR) was 7 (2-13) years and 1,213 cases (57%) were boys. Among laboratory-confirmed cases, the proportion of asymptomatic, mild, moderate, severe and critical infections was 12.9%, 43.1%, 41%, 2.5% and 0.4%, respectively. Children with a confirmed diagnosis had a milder clinical course than suspected cases (which may include other viruses prevalent in this age group, such as respiratory syncytial virus, influenza and parainfluenza), indicating that disease caused by SARS-CoV-2 may be milder than other acute respiratory infections in this age group. The average time from the beginning of the disease until diagnosis was 2 days (range: 0 to 42 days). The proportion of "severe and critical" cases was 10.6%, 7.3%, 4.2%, 4.1% and 3.0% for the age groups of <1, 1 to 5, 6 to 10, 11 to 15 and >15 years, respectively, indicating that young children, especially babies, were more vulnerable to severe SARS-CoV-2 infection; one child (14 years old) died. This study did not describe the frequency of individual symptoms in its population 11 .
According to another series of pediatric cases, with 171 patients with a median age of 6.7 years (range from 1 day to 15 years) admitted to a hospital in Wuhan, China, all patients tested positive for COVID-19; there were 27 (15.8%) asymptomatic patients, 33 (19.3%) with upper airway symptoms and 111 (64.9%) with pneumonia. Seventy-one patients had fever (41.5%), lasting from 1 to 16 days (median, 3 days). Three patients were admitted to the intensive care unit, all with comorbidities (hydronephrosis, leukemia under chemotherapy, and intussusception). The patient with intussusception was 10 months old and died 12 . Various skin rashes with variable clinical presentations have recently been seen in some pediatric cases 10 . Table 1 shows the patterns of COVID-19 clinical presentation according to severity.
In a systematic review involving 38 studies (1,124 cases), the authors described the main clinical, laboratory and radiological characteristics of children infected with SARS-CoV-2. Of all cases, 1,117 had their severity classified: 14.2% were asymptomatic, 36.3% mild, 46% moderate, 2.1% severe and 1.2% critical. The most prevalent symptom was fever (47.5%), followed by cough (41.5%), nasal symptoms (11.2%), diarrhea (8.1%) and nausea/vomiting (7.1%). One hundred and forty-five (36.9%) children were diagnosed with pneumonia, and 43 (10.9%) had infections of the upper airways. The authors concluded that the clinical manifestations of children with COVID-19 differ widely from the cases recorded in adults. Fever and respiratory symptoms should not be considered hallmarks of COVID-19 in children. The distribution of the clinical manifestations of children with COVID-19 in the selected studies is shown in Table 2 1 .
Patients with more severe clinical manifestations develop hypoxemia and poor perfusion, usually by the end of the first week. Commonly described complications are severe acute respiratory syndrome (SARS), myocarditis, septic shock, disseminated intravascular coagulation, acute kidney injury and liver dysfunction. Increased concentrations of procalcitonin, CRP and IL-10 and decreased IgA levels and percentage of CD4+CD25+ T lymphocytes have been associated with pneumonia in children with COVID-19 13 .
The main clinical manifestations of children with COVID-19 described in selected studies are contained in Table 2.
How to manage children with respiratory disease
All children with severe acute respiratory syndrome (SARS) should undergo RT-PCR for SARS-CoV-2. However, in many services, children with mild illness and without a history of contact are not tested for SARS-CoV-2. Due to the high percentage of acute respiratory tract infections of other etiologies, such as acute viral bronchiolitis or severe acute asthma, many patients may meet the case definition of Severe Acute Respiratory Syndrome (SARS) and be admitted to a hospital.
Table 1. Clinical presentations of COVID-19 according to severity

Asymptomatic infection
No clinical signs or symptoms of the disease, normal chest X-ray or chest CT scan, associated with a positive test for SARS-CoV-2.
Mild infection
Symptoms of upper airway involvement, such as fever, cough, sore throat, rhinorrhea and sneezing, as well as myalgia and fatigue. Normal respiratory examination. Some cases may present without fever, or with gastrointestinal symptoms such as vomiting, nausea, abdominal pain and diarrhea.
Moderate infection
Clinical signs of pneumonia. Persistent fever and cough, dry at first and productive later; there may be crackles and wheezing on respiratory auscultation, but at this stage without respiratory distress. Some patients may show no clinical signs or symptoms, while the chest CT scan shows typical pulmonary lesions.
Severe infection
The initial respiratory symptoms may be associated with gastrointestinal symptoms, such as diarrhea. Clinical deterioration usually happens within one week, with the patient developing dyspnea and hypoxemia (SpO2 < 94%).
Critical infection
Patients may rapidly deteriorate to acute respiratory distress syndrome or respiratory failure, and may develop shock, encephalopathy, myocardial damage or heart failure, coagulopathy, acute kidney injury and multiple organ dysfunction.
Adapted from: Carlotti et al. 38
In view of the current probability of children being exposed to the new virus, although with a lower likelihood of infection than adults, it is appropriate to examine them in places separate from other children suspected of having COVID-19. Thus, children with suspected SARS-CoV-2 infection awaiting laboratory results should be admitted separately from adults and from other children. Parents should stay with isolated children and receive appropriate PPE. A quick turnaround time for diagnostic test results will also reduce the risk of exposure. Official guidelines currently recommend hospital admission for confirmed cases, particularly those classified as "severe pneumonia" and "critical", for closer care (Figure 1). The following criteria can be considered for admission (any one of them suffices; a minimal sketch of this triage logic follows the list):
1. Respiratory discomfort (tachypnea, correlated with age group)
2. SpO2 < 92% in room air
3. Shock/poor peripheral perfusion
4. Poor oral intake, especially in babies and young children
5. Lethargy, especially in babies and young children
6. Seizures/encephalopathy
To date, we do not have specific guidelines for children with underlying diseases, such as chronic respiratory diseases, immunosuppression, uncorrected heart disease, chronic kidney disease, etc. This group needs more intensive monitoring and early therapy 14 .
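For illustration only, the any-of admission rule above can be written as a simple check; the field names are our own, and such a sketch is decision support, never a substitute for clinical judgement or local protocols.

```python
from dataclasses import dataclass

@dataclass
class PediatricAssessment:
    respiratory_discomfort: bool      # tachypnea for age
    spo2_room_air: float              # % in room air
    shock_or_poor_perfusion: bool
    poor_oral_intake: bool
    lethargy: bool
    seizures_or_encephalopathy: bool

def meets_admission_criteria(a: PediatricAssessment) -> bool:
    """Any single criterion from the list above is sufficient."""
    return (a.respiratory_discomfort
            or a.spo2_room_air < 92.0
            or a.shock_or_poor_perfusion
            or a.poor_oral_intake
            or a.lethargy
            or a.seizures_or_encephalopathy)

child = PediatricAssessment(False, 90.0, False, False, False, False)
print(meets_admission_criteria(child))   # True: SpO2 < 92% in room air
```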
Differential Diagnosis
The differential diagnosis of COVID-19 in children is complex, since many pediatric diseases have similar signs and symptoms.
Cough and fever of mild to moderate intensity were the symptoms most frequently reported in a meta-analysis involving 551 children with positive tests for SARS-CoV-2. 15 Thus, in general, a differential diagnosis of COVID-19 is considered for all respiratory conditions associated with acute infections: from those with isolated upper airway involvement (differential with mild cases of COVID-19), to pneumonia (differential with moderate cases of COVID-19), to severe acute respiratory syndrome (SARS) (differential with severe cases of COVID-19), and to severe sepsis from other etiologies, such as bacterial sepsis and staphylococcal or streptococcal toxic shock syndrome, as well as severe infections that develop with myocarditis, such as those caused by other enteroviruses (differential with critical cases of COVID-19). 11,16 Various respiratory viruses, common bacteria and atypical bacteria present with clinical syndromes similar to those caused by SARS-CoV-2, according to Table 1. 17,18 Other infectious diseases can also present with similar symptoms, especially in the presence of fever and systemic symptoms, such as those listed in Chart 2. 19 More recently, we have learned that children of different nationalities have presented a multisystemic inflammatory syndrome with clinical manifestations and changes in complementary exams similar to those found in children and adolescents with Kawasaki syndrome, incomplete Kawasaki disease and/or toxic shock syndrome, opening a new front in the differential diagnosis. These children had high and persistent fever (38-40 °C), rash of different presentations, non-purulent conjunctivitis, edema of the hands and feet, abdominal pain, vomiting and diarrhea. The vast majority evolved to shock (with arterial hypotension and tachycardia), mainly cardiogenic, with elevation of myocardial enzymes, requiring vasoactive drugs for hemodynamic stabilization. Many had inflammation of the serosa, with pleural and pericardial effusions and ascites. Almost all of them needed ventilatory support, even though respiratory manifestations were not prominent. Some cases are so severe that they become a differential diagnosis of familial hemophagocytic lymphohistiocytosis or macrophage activation syndrome, seen in children with rheumatological diseases. 20,21 At the present moment, the investigation of epidemiological data, laboratory and imaging exams and, especially, the search for identification of the virus are essential for confirmation of the disease.
Table 1 (fragment). Respiratory pathogens in the differential diagnosis of COVID-19 and their clinical syndromes:
- Bordetella pertussis: low-grade fever, paroxysmal cough in spells followed by vomiting and perioral cyanosis, excess saliva; there may be cyanosis and apnea.
- RSV, influenza A and B, parainfluenza 1 and 3, coronaviruses (SARS-CoV, SARS-CoV-2, MERS-CoV), and common and atypical bacteria: fever, cough, tachypnea, chest pain, desaturation; there may be systemic involvement with prostration and signs of toxemia.
- Hantavirus: high fever, myalgia, headache; the first week courses with acute respiratory failure.
Chart 2. Other diseases in the differential diagnosis of COVID-19.
Faced with a child with nonspecific symptoms, a critical and systematic approach is prudent, initially categorizing cases as suspected (based on clinical and/or epidemiological and/or radiological criteria). During follow-up and investigation, cases are then classified as confirmed, or as of high or low probability, according to the suggestion presented in Chart 3. 22
Risk factors for severe COVID-19 in children
Descriptive, observational studies have reported certain pre-existing conditions in children who developed severe disease and fatal outcomes in SARS-CoV-2 infection, pointing to a tendency for certain underlying diseases to act as risk factors. Dong et al. 11 reported that younger children, particularly those younger than one year, were more vulnerable to severe disease. Lu et al. 12 , during an observation period, reported three children who needed ventilatory support, all of whom had pre-existing conditions (hydronephrosis, leukemia under chemotherapy, and intussusception).
Likewise, She et al. 23 described only two critically ill patients, both of whom had a history of underlying disease (congenital heart disease with malnutrition, and bilateral hydronephrosis with lithiasis).
Another study described two cases of severe SARS-CoV-2 pneumonia, one in a student in remission from acute lymphoblastic leukemia and another in an obese adolescent. The same study draws attention to the careful assessment of risk factors, since the prevalence of comorbidities was sometimes the same in children with and without severe COVID-19 (severe group versus non-severe group, P = 1.00). 24 In line with the aforementioned studies, underlying conditions such as congenital heart disease, bronchopulmonary hypoplasia, abnormalities of the respiratory tract, abnormal hemoglobin level, severe malnutrition, primary immune deficiency or the prolonged use of immunosuppressants seem to be criteria for more severe disease in children. 25 From the point of view of complementary assessment, the involvement of more than three lung segments was associated with a higher risk of severity (odds ratio = 25.0, p = 0.006). Elevations in IL-6 levels, high total bilirubin and D-dimer can also help identify patients with potential severity early on. 24
The characteristics of the risk groups for severe COVID-19 in children are not yet clearly defined. So far, the literature suggests certain underlying diseases and some laboratory and imaging findings, but a longer observation period and a larger group of children may, in the near future, accurately define this group of special interest.
Radiological findings
Imaging exams are part of the investigation of patients with COVID-19, since they can assist in the initial diagnosis, in addition to contributing to the assessment of the disease's evolution and prognosis. 2,26 The exam considered the gold standard for diagnosis, identification of viral RNA by reverse-transcriptase polymerase chain reaction (RT-PCR), can give false-negative results in around 30% of cases, depending on the quality of sample collection and laboratory logistics. 26 The literature on radiological findings in children is sparse. Between 1 and 2% of disease notifications occur in patients younger than 18 years of age. Most of these patients do not need to undergo imaging tests, since only 1 to 4% develop more severe disease. 2 Chest radiography is easy to perform, inexpensive and involves little radiation, but it is less accurate than chest computed tomography (CT) in showing the changes resulting from COVID-19, both in children and in adults. 2,27 The findings most frequently seen on chest x-ray are: 2
- Peripheral ground-glass pattern in the lower regions
- Irregular bilateral consolidations
CT provides better visualization of changes that cannot be observed on x-ray. Its use must be judicious, especially in children, due to the higher dose of radiation, in addition to the higher cost and limited availability in smaller centers. The main changes found are: 1,2,4,27,28
- Peripheral and irregular ground-glass pattern
- Peripheral consolidations
- Inverted halo sign
- Mosaic perfusion
- Irregular and discrete opacities
These are predominantly located in the subpleural region and in the lower lobes, bilaterally, affecting more than one lobe (Figures 3, 4, 5 and 6).
In a study involving 20 children with positive RT-PCR for SARS-CoV-2 and a mean age of 2 years and 1 month, Xia et al. 28 reported that the x-ray did not detect lung injuries and that such injuries were more visible on CT. Of the CT scans, 20% were normal; among the altered CTs, 60% showed ground-glass opacity, 50% consolidations with an inverted halo sign, 20% irregular opacity, and 3% small nodules. There was no pleural effusion or lymphadenomegaly.
Steinberger et al. 29 evaluated 30 children with RT-PCR-positive SARS-CoV-2 and a mean age of 10 years, from 6 different centers. Nine patients were asymptomatic, all with normal CT. Of the total, 23/30 (77%) had normal CT and only 7 (23%) had changes: 86% with a ground-glass pattern, 14% with ground glass plus consolidations, 29% with mosaic perfusion and 29% with an inverted halo sign. The changes were more frequent in patients over 14 years of age, with 71% in more than one lobe, 71% bilateral and 86% in the periphery. There was no pleural effusion or lymphadenomegaly.
In a study with 171 children with confirmed SARS-COV2, Lu et al. 12 found 32.7% with ground-glass pattern on CT; 18.7% with local opacity and 12.3% with bilateral opacity; 15.8% had no radiological changes. Twelve patients (9.8%) had radiological changes, but were asymptomatic.
Wang et al. 30 reported abnormalities in 66% of CT scans of children with COVID-19, with the ground-glass pattern in more than one lobe, the most frequent alteration in 35% of cases. Carlotti et al. 10 showed that the most severe cases had bilateral condensations in posterior basal regions.
Zhang et al. 31 studied 34 children with COVID-19 in 4 hospitals in China and reported that irregular, high-density opacities were common while the ground-glass pattern was rarely seen on CT.
The imaging alterations of COVID-19 described in papers involving pediatric patients become more evident from the fourth day of the disease, both on x-ray and on CT. The evolution of the images is described in Figure 7. 32 In summary, the radiological findings in children are similar to those found in adults, but with lower frequency, intensity and extension. The x-ray is less sensitive in identifying changes, and the CT scan is the best imaging test to show SARS-CoV-2 lesions, but it must be ordered with precise indications, as it alone is not sufficient for diagnosis.
Other viral infections may show similar images, and there may be co-infections. There is international concern regarding the excessive use of CT in children, as ionizing radiation may cause future harm (low-radiation devices are lacking). In addition, use of the equipment may expose healthcare professionals and other patients to contamination.
When to order a CT scan?
The discussion about the role of radiological examinations in pediatric patients with COVID-19 has major repercussions in clinical practice. Some questions intrigue physicians who care for children with COVID-19, namely: i) when should imaging tests be performed in the initial diagnosis? ii) how should disease progression be assessed? and iii) is there a radiological pattern with prognostic value? 2,33,34 In adults, chest tomography has been performed early in the initial approach, as a form of diagnostic screening. This is because laboratory diagnostic tests take time to yield results and, in addition, some RT-PCR results (the gold standard for the disease) may be false negative, due to the timing in the disease course or to the collection technique. It is important to stress that the Radiological Society of North America (RSNA) and the Brazilian College of Radiology do not endorse this practice. 2 At the beginning of the pandemic, with the general fear in the medical community about how the disease would behave in the pediatric population, it was believed that chest computed tomography (CT) should be performed early for the initial diagnosis in children. However, with the publication of a series of papers showing the more favorable evolution in children, a consensus was reached that chest CT can be performed only in selected cases. Furthermore, the initial changes may be similar to those of other respiratory viruses and are therefore nonspecific. In pediatric practice, a CT scan is indicated for patients who progress to more severe conditions and hospitalization. 3,29 Thus, as in the adult population, chest radiography has less specificity than CT, which should be indicated for patients with unsatisfactory clinical evolution or in risk groups. However, it is important to note that, whenever possible, it should be performed under low-radiation protocols. 44 In general, CT in adults shows multifocal peripheral opacities and ground-glass opacities with predominant distribution in the lower and posterior lobes and a bilateral presentation. In the literature reviewed, tomographic findings in the pediatric age group are more subtle, with fewer lung lobes involved and even with unilateral presentations. The main tomographic findings in children infected with SARS-CoV-2 range from multiple irregular bilateral ground-glass opacities to sparse and irregular ground-glass opacities, infiltrating the middle third or periphery of the lung or the subpleural space. 41 There are descriptions of subpleural infiltrates and of unilateral or bilateral condensations with halos. On tomographic examination, progression of the disease determines an increase in the extent and bilaterality of the images, with high-density condensations and peribronchial thickening affecting both lungs. Pleural effusion is not described. In the remission phase of the disease, that is, from the 14th day on, some halo images and parenchymal bands remain and can persist for months. 44 It is important to note that 30% of patients may have negative CT scans, and a normal CT does not rule out the diagnosis. In addition, CT is not indicated in asymptomatic patients. 3 In adults, CT should be performed in mildly, moderately and severely symptomatic patients with risk factors for disease progression, such as older age, diabetes and hypertension, in the search for complications and to rule out alternative diagnoses, and likewise in patients whose respiratory condition worsens.
It is important to note that, in general, CT is not indicated in mildly symptomatic patients without risk factors. In the pediatric population, this criterion of greater attention for the risk group can be followed and, evidently, applied in patients with clinical deterioration. 34 Other situations in which CT scans have been ordered in Brazil are patients residing in cities without access to laboratory tests, or awaiting examination, or with a negative RT-PCR nasal swab but strong clinical suspicion, if moderately or severely symptomatic. 3 One of the classifications used to quantify the extent of the disease is based on the percentage of affected lung: i) mild involvement: < 25%; ii) moderate involvement: 25 to 50%; iii) marked involvement: > 50%. 2 The main tomographic findings are ground-glass opacity, reticular opacities, consolidations and the inverted halo sign. The findings can be classified into 4 categories 2 : a. Findings compatible with a viral infection (
Considerations about imaging tests and oximetry: is there already a severity score for children?
Children usually have mild cases of COVID-19, and chest radiography is not able to identify lesions or details. Chest computed tomography (CT) is more sensitive for identifying lesions, but care should be taken not to expose children unnecessarily to radiation. Thus, the question remains whether the x-ray or the measurement of hemoglobin saturation by oximetry can be useful in the evaluation of children with COVID-19, considering that the findings in children differ from those in adults. 3 There are few studies in children with COVID-19, and studies in adults are cited with the objective of verifying whether it is possible to establish an association between clinical picture, imaging and oximetry variables.
Steinberger et al. 29 evaluated 30 patients aged 10 months to 18 years (median = 10 years), immunocompetent and without other morbidities. Of these, 9/30 (30%) were asymptomatic. Fourteen of 23 patients (61%) had normal CT and at least one symptom. All 7 patients with abnormal CT (ground-glass opacities and/or consolidation) were symptomatic. The most frequent clinical findings were fever > 38 °C (53%) and cough (27%). No patient required oxygen, intubation or ICU admission. The number of affected lobes and the degree of involvement of each lobe were assessed. The severity of involvement was classified according to Chang et al. 48 : no lobe involvement, score 0; minimal (1 to 25%), score 1; mild (26 to 50%), score 2; moderate (51 to 75%), score 3; and severe (76 to 100%), score 4. Severity was assessed as the sum of the scores of the 5 evaluated lobes (each lobe scored from 0 to 4, giving a total ranging from 0 to 20). Two patients had involvement of 1 lobe, 3 patients of 2 lobes, and 2 patients of 3 and 4 pulmonary lobes, respectively.
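The lobar score just described is simple enough to express in a few lines of code. The sketch below is ours, not from any of the cited studies, and the function names are illustrative; it only encodes the percentage cut-offs and the 0-20 sum described above.

```python
def lobe_score(percent_involved):
    """Map the percentage involvement of one lobe to a 0-4 score
    (cut-offs as in the Chang et al. scheme described above)."""
    if percent_involved <= 0:
        return 0
    if percent_involved <= 25:
        return 1
    if percent_involved <= 50:
        return 2
    if percent_involved <= 75:
        return 3
    return 4

def ct_severity_score(lobes_percent):
    """Sum the per-lobe scores of the 5 lobes; the total ranges from 0 to 20."""
    assert len(lobes_percent) == 5
    return sum(lobe_score(p) for p in lobes_percent)

# Example: a child with involvement of 2 lobes (30% and 10%)
print(ct_severity_score([30, 10, 0, 0, 0]))  # -> 3
```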
The findings of this study (Table 3) show that there was no relationship between tomographic findings and arterial desaturation, since there is no report that patients with lung injuries required O2; that is, they did not desaturate.
Mussolino et al. 36 consecutively evaluated 10 children (median age 11 years) admitted with COVID-19. Lung ultrasonography was performed in all of them, and 1 patient underwent CT, with findings similar to those of Steinberger et al. 29 All patients were symptomatic upon admission, with fever (80%), cough (50%) and diarrhea (20%), and all had pulmonary involvement (areas of subpleural consolidation and pleural irregularities). There were no reports of dyspnea, desaturation or need for oxygen. That is, the initial clinical picture (fever and cough) and the lung lesions (consolidation) found in the initial phase in children did not correlate with desaturation or dyspnea.
Qiu et al. 32 evaluated 36 children with COVID-19, with a mean age of 8 ± 3.5 years. Severity was classified as mild or asymptomatic (n = 17) or moderate (n = 19) according to the criteria of Chen et al. 38 Ground-glass opacity was seen on CT in 53% and 100% of patients with mild and moderate disease, respectively. The mean arterial saturation upon admission was 98%, but 6 (17%) of the children with moderate disease needed oxygen. All progressed well, and the average length of stay was 14 days; of the total, 10 (28%) were asymptomatic, 7 (19%) had upper airway manifestations, and one patient had dyspnea. A striking feature of COVID-19 in these patients was the involvement of vital organs such as the lungs and heart (altered myocardial enzymes), even in patients with mild and moderate disease (31%). Although the study did not evaluate CT severity criteria or their relationship with oximetry, it shows that patients with mild to moderate clinical disease can present radiological changes and elevated cardiac enzymes with normal oxygen saturation, yet progressively come to need oxygen. This suggests that, in children with mild to moderate clinical disease, it is not possible to establish a severity score, or even an association between imaging and transcutaneous measurement of hemoglobin saturation; despite myocardial changes, all progressed well.
Severity markers of lung injury based on peripheral oxygen saturation (SpO2) are suitable substitutes for those based on the arterial partial pressure of oxygen (PaO2) in children with respiratory failure and SpO2 between 80% and 97%. Both should be used in clinical practice to characterize risk, increase participation in clinical trials and assess disease prevalence. 39
Pulse oximetry and pulmonary ultrasound can be useful tools to track or rule out low oxygenation or pulmonary changes consistent with severe acute respiratory syndrome (SARS) in places with few resources, where arterial blood gases and chest x-ray are not available. 40
The CONFIDENCE study (The Coronavirus Infection in Pediatric Emergency Departments study) evaluated a cohort of 100 Italian children under 18 years of age with COVID-19 confirmed by nasopharyngeal swab PCR. The median age was 3.3 years, and the infection originated outside the home in 55% of cases. The most frequent signs were: sick appearance (12%), fever ≥ 37.6 °C (54%), cough (44%) and difficulty with, or refusal of, feeding (23%). Four percent of the children had SpO2 < 95% as assessed by pulse oximetry, and all of these had imaging evidence of pulmonary involvement (x-ray or lung ultrasound); 9 children needed oxygen during hospitalization due to desaturation or respiratory distress. Eleven patients with pulmonary involvement (on chest x-ray or ultrasound) had normal oxygenation (pulse oximetry), probably because their lesions were at an initial stage. No patient worsened. Nine patients were admitted to the ICU (4 neonates, 3 infants younger than 3 months, and the others due to clinical condition or comorbidities). Of these, one required mechanical ventilation (a patient with encephalopathy and epilepsy). This study demonstrates that, in children, the indication for hospital admission depended on the clinical condition and other morbidities, and not on imaging. During admission, 4 children had SpO2 < 95%, but nine needed oxygen support. The indication for oxygen was not based on imaging results, as 11 patients had changes on radiography or ultrasound and did not require oxygen. It was not possible to establish an association between clinical severity, saturation levels and radiological changes in the early stages of COVID-19 in children. 27
Pulse oximeters have an accuracy of ± 1-2% when SpO2 is above 75-80%, and no accuracy is reported for values below 70%, because there is no ethical way to test it. This problem can be mitigated when monitoring patients with COVID-19 by working with saturation thresholds above 75-80%, although it must be recognized that, for cheap and easily purchased pulse oximeters, accuracy information is generally not reported for any degree of hypoxemia. Even with this knowledge of the accuracy limitations and possible failures that may interfere with the values obtained, the ease of use and low cost of this equipment, together with the burden of COVID-19 and the risks of silent hypoxemia, make it a reasonable solution for monitoring individuals at risk. 41
Lipnick et al. 42 compared 6 low-cost oximeters in 22 healthy individuals to assess measures of arterial saturation over a range of 70 to 100%. Of the 6 devices tested, only 2 achieved the accuracy criterion established by the International Organization for Standardization (accuracy root mean square, Arms < 3%). Four of the oximeters showed Arms values > 3.0%, three of them at saturations of 80 to 90%, and on four devices this value was > 5% at saturations of 70-80%. The lack of accuracy of the device used may partly explain the difficulty of establishing associations between variables (for example, CT versus pulse oximetry).
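Arms, the metric invoked above, is simply the root mean square of the differences between the oximeter reading (SpO2) and the arterial reference (SaO2). The short sketch below is ours, with made-up paired readings, and only illustrates the computation.

```python
import math

def arms(spo2_readings, sao2_reference):
    """Accuracy root mean square (Arms): the square root of the mean
    squared difference between oximeter SpO2 and reference SaO2,
    in percentage points. The ISO criterion cited above is Arms < 3%."""
    diffs = [s - r for s, r in zip(spo2_readings, sao2_reference)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Made-up paired readings; note the larger errors at lower saturations.
spo2 = [97, 92, 88, 83, 76]
sao2 = [96, 93, 85, 80, 71]
print(f"Arms = {arms(spo2, sao2):.2f}%")  # -> Arms = 3.00%
```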
The decision to institute mechanical ventilation is made according to the physician's clinical judgment, taking into account SpO2, dyspnea, respiratory rate, the chest x-ray and other factors. Many patients with COVID-19 are intubated due to hypoxemia, often with little dyspnea or distress. 43
In order to verify whether a chest x-ray score correlates with hospitalization, intubation, length of stay and death, 338 adults with COVID-19 between 21 and 50 years of age were evaluated. The chest x-ray score was obtained by dividing each hemithorax into 3 zones and assigning 1 point to each zone with opacities (total range, 0 to 6). In the initial assessment, a radiography score ≥ 2 was an independent predictor of hospital admission (n = 145), and a score ≥ 3 (n = 28) of intubation. The authors concluded that, for patients with COVID-19 between 21 and 50 years old, the chest x-ray score is an independent predictor of severity, of hospital admission and of intubation. These findings probably occur because, despite the low sensitivity of chest x-rays, they are altered in the most severe cases. 44
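The zone score above reduces to counting affected zones and comparing the count against the reported thresholds. The sketch below is ours, with illustrative names, and encodes only what the study description states.

```python
def cxr_score(zones_with_opacity):
    """6 booleans: upper/middle/lower zones of the right and left
    hemithorax; each zone with opacities scores 1 point (total 0-6)."""
    assert len(zones_with_opacity) == 6
    return sum(bool(z) for z in zones_with_opacity)

def risk_flags(score):
    return {
        "predicts_admission": score >= 2,   # threshold reported for admission
        "predicts_intubation": score >= 3,  # threshold reported for intubation
    }

score = cxr_score([True, True, False, True, False, False])
print(score, risk_flags(score))
# -> 3 {'predicts_admission': True, 'predicts_intubation': True}
```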
Yang et al. 45 evaluated 102 patients with COVID-19, aged 15-79 years, 84 of whom had mild disease. The CT score was adapted from Chang et al. 35 and was obtained by dividing the lungs into 20 regions; each region received 1 or 2 points if its involvement was < 50% or ≥ 50%, respectively, so the score could vary from 0 to 40. The patients were divided into 2 groups (mild and severe), with severe defined as: respiratory rate ≥ 30/min; O2 saturation ≤ 93%; PaO2/FiO2 ≤ 300 mmHg; and/or need for mechanical ventilation, shock, or failure of another organ.
The CT score value best able to discriminate between mild and severe patients was 19.5, with 83.3% sensitivity and 94% specificity. O2 saturation was 97% (96-98) in patients with mild disease and 92% (88-93) in critically ill patients. It was therefore possible to state that patients with more severe disease had a higher CT score and lower oximetry levels. However, the SpO2 variable carries an assessment bias, since it is an inclusion criterion for severe disease.
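For readers unfamiliar with how a cut-off such as 19.5 translates into the sensitivity and specificity quoted above, the sketch below works through the arithmetic on made-up scores; none of these numbers come from the study.

```python
def sens_spec(scores, is_severe, cutoff):
    """Sensitivity and specificity of classifying 'severe' as score > cutoff."""
    tp = sum(s > cutoff and sev for s, sev in zip(scores, is_severe))
    fn = sum(s <= cutoff and sev for s, sev in zip(scores, is_severe))
    tn = sum(s <= cutoff and not sev for s, sev in zip(scores, is_severe))
    fp = sum(s > cutoff and not sev for s, sev in zip(scores, is_severe))
    return tp / (tp + fn), tn / (tn + fp)

# Made-up CT scores (0-40 scale) and clinical labels:
scores    = [5, 12, 18, 22, 25, 30, 8, 21]
is_severe = [False, False, True, True, True, True, False, False]
sens, spec = sens_spec(scores, is_severe, cutoff=19.5)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# -> sensitivity = 0.75, specificity = 0.75
```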
In order to check whether CT findings are related to the evolution of patients with COVID-19, 380 patients with a mean age of 53.62 ± 16.66 years (66.1% male) were evaluated in a cross-sectional study. The most frequent CT findings (low radiation dose and 4 mm slices) were peripheral changes (86.6%), ground-glass opacities (54.1%) and peribronchovascular interstitial involvement (34.6%). The CT score was assessed according to Pan et al. 46
In this scoring system, the 5 pulmonary lobes were each evaluated and scored from 0 to 5 according to their involvement: 0 = no involvement; 1 = < 5%; 2 = 5 to 25%; 3 = 26 to 49%; 4 = 50 to 75%; and 5 = > 75% involvement. There was a correlation between the mean CT severity score and mortality (13.68 versus 8.72; p < 0.0001). SpO2 levels were higher in patients who survived than in those who died: 93.82 ± 5.88 versus 87.13 ± 6.72 (p = 0.002). Interestingly, when blood gas values were evaluated, there was no difference in PO2 or PCO2. This calls into question the hemoglobin saturation values found and highlights the difficulty of establishing a relationship between oximetry and CT scores. There is no description of how the saturation levels were measured or of which device was used, challenging the accuracy of the device. 47 To check whether quantitative CT imaging can determine clinical severity, Li et al. 48 evaluated patients with COVID-19 divided into 3 groups according to Chinese guidelines: mild (few symptoms and normal CT); moderate (respiratory symptoms with pneumonia); and severe-critical (respiratory rate ≥ 30/min; O2 saturation ≤ 93%; PaO2/FiO2 ≤ 300 mmHg; or respiratory failure requiring mechanical ventilation, shock or failure of other organs requiring ICU). Tomographic findings were evaluated according to the score of Chung et al. 49 and compared with the clinical classification. There were 78 patients included, 40 of them female, classified as mild in 24 (30.8%), moderate in 46 (59%) and severe-critical in 8 (10.2%). The median CT score was higher in the severe-critical group (10; range, 8-18) than in the moderate group (5; range, 5-11), with p < 0.001. However, 32/46 (70%) of the moderately ill patients, with SpO2 ≥ 94%, had CT showing involvement of more than 2 pulmonary lobes, and 37/46 (80.4%) had involvement of both lungs. Severe-critically ill patients (n = 8), who by definition had SpO2 ≤ 93%, had the same degree of imaging involvement. It was not possible to establish a relationship between the number of pulmonary lobes involved and the SpO2 level. There is a need to analyze not only the number but also the degree of involvement of each lobe.
FINAL REMARKS
a. There are few COVID-19 studies in children, and those available evaluated a small number of patients with CT; this limits our capacity to establish score parameters that could link oximetry and CT, or chest x-ray, to clinical severity.
b. There is no standardized CT severity score in children with COVID-19. The studies use adult scores, which in turn also have limitations, as they were applied to patients with SARS after discharge and not upon hospital admission.
c. Chest x-rays have limited use in children due to their low sensitivity and the fact that children have milder disease. If CT is necessary, it should be performed with low radiation dosages. In adults, the imaging score is a marker for hospitalization and intubation.
d. Oximeters should be used, due to their low cost and ease of use, but it is important to know their limitations: difficulty of use in young children, limited accuracy and the need for proper technique.
e. Severity assessment should be based on the combination of clinical signs, oximetry and imaging, and not on isolated variables, especially because the situation is dynamic.
"year": 2020,
"sha1": "c56afc1581b7e1ac90d49938e1bd06357e2911e7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.25060/residpediatr-2020.v10n2-349",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "94ed8bb76cbe0223723ae5ef6e56acf92b8f1d7e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Evaluation and Selection of Thin-layer Drying Models for Paddy Dried in Static Flat-bed Batch Dryer
A static flat-bed batch dryer was developed for drying paddy from harvesting moisture content (20-22%) down to 12% for safe storage. The dryer mainly consisted of a blower, a heating chamber, a plenum chamber and a drying chamber. Twenty kg of paddy was dried in the developed dryer at two different inlet airflow rates (1 and 1.26 m³/min). The machine has a capacity of 20 kg, and the drying air temperatures were 60 and 55 °C, respectively. The moisture content was recorded at 15-minute intervals, and moisture ratio plots were generated. The experimental data were fitted to 8 different thin-layer drying models, and statistical parameters along with the model constants were obtained. It was found that the Wang and Singh model, with the highest values of R² and the lowest values of RMSE under the selected drying conditions, gave the best fit. The Henderson and Pabis and Newton models were also found suitable for describing the drying kinetics of paddy in the developed dryer.
INTRODUCTION
India accounts for one third of the total paddy-cultivated area (83 million hectares). Northeastern India, southern India and the river valleys are the major regions in which paddy production is centred in the country. The per capita consumption of rice worldwide has remained remarkably stable since the year 2000 and amounted to about 53.9 kilograms per year in 2018-2019 [1]. In India, the states of West Bengal, Orissa, Andhra Pradesh, Tamil Nadu and Bihar are the major cultivators of paddy. Paddy is harvested at a relatively high moisture content of about 24% to minimise shattering losses after harvest. Drying of agricultural materials such as grains is a non-linear process with long time delays and considerable complexity. Therefore, it is very difficult to establish a precise mathematical model for grain drying control [2]. Various studies have been conducted to develop thin-layer drying models and evaluate their predictive accuracy in various dryers. Golmohammadi et al. [3] studied the intermittent drying characteristics of various Iranian rice varieties to determine the drying kinetics and effective moisture diffusivity. Yadollahinia [4] conducted experiments on the drying kinetics of paddy at five air temperatures ranging from 30 to 70 °C and air velocities from 0.25 to 1 m/s. Drying curves obtained from the experimental data, fitted to eight thin-layer models and compared using three statistical parameters, showed that the two-term model could predict moisture change with greater accuracy than the other models. Similar results were reported by Hasan et al. [5], in which five thin-layer drying equations were fitted to the experimental data and the Midilli equation was found to be the best, followed by the two-term exponential equation. Modelling the drying behaviour of different agricultural products often requires various statistical methods of regression and correlation analysis. This study was conducted to investigate the drying kinetics of paddy in the static flat-bed batch dryer and to evaluate various thin-layer models to find the most suitable model for the process.
MATERIALS AND METHODS
A static flat-bed batch dryer was developed and evaluated at the Department of Agricultural Engineering, University of Agricultural Sciences, GKVK Bengaluru. The static flat-bed dryer, a type of on-farm dryer, was designed and developed based on anthropometric data for agricultural workers to allow easy operation. In the fabrication process, the plenum chamber was fabricated first. The drying chamber was designed to hold 20 kg of paddy in a batch drying process. The electric heating coils were assembled in the heating chamber of the dryer and the connections were arranged. The blower, with a maximum capacity of about 1.47 m³/min, was placed at ground level to blow air into the heating chamber. The air flows across the heating coils and then underneath the drying chamber.
Drying experiments were conducted in the developed dryer by drying 20 kg of freshly harvested paddy at 22-24% initial moisture content. The experiments were conducted at two different airflow rates of 1 and 1.26 m³/min; the corresponding temperatures developed in the dryer were 60 and 55 °C. The decline in moisture content was recorded at 15-minute intervals. The moisture ratio was calculated for the corresponding change in moisture, and a plot of moisture ratio against time was generated. The grain in the drying chamber was mixed uniformly every 15 minutes so that all the grain received uniform exposure to the drying air.
The obtained moisture ratio was plotted against time, and its fit to various thin-layer drying models was checked for the different treatment combinations. A curve-fitting tool was used to check the fit to the various models, and the values of the model constants were found for 8 different thin-layer drying models (Table 1).
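As a concrete illustration of this fitting step, the sketch below (our code, not the authors' tool) fits the Wang and Singh model, MR = 1 + a*t + b*t^2, to moisture-ratio data with scipy. The drying data are made up for illustration, and the moisture ratio is taken in its usual form, MR = (Mt - Me)/(M0 - Me).

```python
import numpy as np
from scipy.optimize import curve_fit

def wang_singh(t, a, b):
    """Wang and Singh thin-layer model: MR = 1 + a*t + b*t**2."""
    return 1.0 + a * t + b * t ** 2

# Illustrative (made-up) drying run: time in minutes, moisture ratio
# MR = (Mt - Me) / (M0 - Me), which starts at 1 and decays toward 0.
t = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
mr = np.array([1.00, 0.82, 0.66, 0.52, 0.40, 0.31, 0.25])

(a, b), _ = curve_fit(wang_singh, t, mr)
print(f"a = {a:.5f} per min, b = {b:.7f} per min^2")
```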
Various statistical parameters, such as the coefficient of determination (R²), the error sum of squares (SSE), and the root mean square error (RMSE), were also found with the help of the same tool to judge the quality of fit. Linear and non-linear regression models are important tools for finding the relationship between different variables. The goodness of fit of the tested models to the experimental data was judged by the coefficient of determination (R²) and the root mean square error (RMSE) [6]. Model studies help us to identify the best-suited equation for the drying kinetics of a particular commodity dried in a dryer and hence to predict its drying parameters, such as drying rate.
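The three fit statistics named above follow directly from the residuals between observed and model-predicted moisture ratios. A self-contained sketch (our code, with illustrative values only):

```python
import numpy as np

def fit_stats(observed, predicted):
    """SSE, RMSE and R^2 between observed and model-predicted moisture ratios."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = observed - predicted
    sse = float(np.sum(residuals ** 2))             # error sum of squares
    rmse = float(np.sqrt(np.mean(residuals ** 2)))  # root mean square error
    ss_tot = float(np.sum((observed - observed.mean()) ** 2))
    r2 = 1.0 - sse / ss_tot                         # coefficient of determination
    return sse, rmse, r2

# Illustrative observed vs. model-predicted moisture ratios:
obs = [1.00, 0.82, 0.66, 0.52, 0.40, 0.31, 0.25]
pred = [0.99, 0.83, 0.67, 0.53, 0.41, 0.30, 0.24]
sse, rmse, r2 = fit_stats(obs, pred)
print(f"SSE = {sse:.5f}, RMSE = {rmse:.5f}, R2 = {r2:.4f}")
```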
RESULTS AND DISCUSSION
The curve-fitting approach helped to calculate the statistical parameters for the various thin-layer drying models and hence to identify the best-fit model for the drying characteristics of paddy in the developed dryer. In the drying experiments, it was found that the Wang and Singh model had the best fit to the experimental results (Table 2).
CONCLUSION
A static flat-bed batch dryer was developed and its performance was evaluated by drying 20 kg of paddy. The total drying time was 90 minutes for moisture reduction to 12% from a harvesting moisture content of 24%. The experimental data were fitted against 8 thin-layer models, and the Wang and Singh model was found to give the best fit to the experimental data. Thus, this model can be confidently used for predicting the drying behaviour of paddy in the developed dryer. The use of these models helps in optimising the drying performance of the dryer.
The developed dryer can be appropriately used for small-scale drying of paddy. The drying behaviour of paddy grain dried under a thin-layer drying process in the developed dryer can be predicted using the evaluated Wang and Singh model.
"year": 2020,
"sha1": "15f40f0aa60d3d0540695b92a49a1beb61072a80",
"oa_license": null,
"oa_url": "https://www.journalcjast.com/index.php/CJAST/article/download/30546/57322",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3358adec9e0a20863e1410a9bab393263b1b77c9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Update on and Future Directions for Use of Anti–SARS-CoV-2 Antibodies: National Institutes of Health Summit on Treatment and Prevention of COVID-19
Anti–SARS-CoV-2 antibodies have been used in health care and clinical settings to prevent and treat COVID-19. The National Institutes of Health convened a virtual summit on 15 June 2021 to summarize existing knowledge and to identify key unanswered scientific questions to catalyze future development of these antibodies.
As the fourth wave of the SARS-CoV-2 pandemic encircles the globe, there remains an urgent challenge to identify safe and effective treatment and prevention strategies that can be implemented in a range of health care and clinical settings. Substantial advances have been made in the use of anti-SARS-CoV-2 antibodies to mitigate the morbidity and mortality associated with COVID-19. On 15 June 2021, the National Institutes of Health, in collaboration with the U.S. Food and Drug Administration, convened a virtual summit to summarize existing knowledge on anti-SARS-CoV-2 antibodies and to identify key unanswered scientific questions to further catalyze the clinical development and implementation of antibodies.
As the fourth wave of the COVID-19 pandemic encircles the globe, there remains a continuing urgent priority to develop safe and effective treatment and prevention strategies for those at risk for infection with SARS-CoV-2, the virus that causes COVID-19, and for those who are already infected. The World Health Organization estimates that as of September 2021 there had been more than 232 million confirmed cases and more than 4.7 million deaths worldwide (1). In the search to identify safe and viable interventions to alleviate the morbidity and mortality associated with COVID-19, anti-SARS-CoV-2 antibodies, including convalescent plasma (CP), hyperimmune globulin (HIG), and monoclonal antibodies (mAbs), have been used in a range of health care settings and clinical studies (2). The rationale for administering passive antibody therapy is based on biological plausibility and successful use for treatment of other infectious diseases (3,4).
To date, the U.S. Food and Drug Administration (FDA) has issued an approval for 1 antiviral drug to treat hospitalized patients and granted Emergency Use Authorizations (EUAs) for several single and combination mAbs to treat persons in outpatient settings with mild to moderate COVID-19 who are at risk for clinical progression to severe disease (5) and for 2 mAb combinations for use as postexposure prophylaxis in certain scenarios (6,7). Agents that moderate the host immune or inflammatory response are used in later stages of COVID-19 and include dexamethasone, recommended for hospitalized patients requiring supplemental oxygen, as well as tocilizumab (interleukin-6 inhibitor) or baricitinib (Janus kinase inhibitor), recommended for certain patients with severe disease receiving corticosteroids (8). Both tocilizumab and baricitinib have been issued EUAs. The FDA has also granted 3 EUAs for COVID-19 vaccines (1 of which has now been fully approved) (9). The continuing emergence of SARS-CoV-2 variants has caused clinicians and scientists to reconsider how to proceed with the development and use of anti-SARS-CoV-2 antibodies.
On 15 June 2021, the National Institutes of Health, in cooperation with the FDA, convened the third virtual Summit on COVID-19. The meeting, entitled "Anti-SARS-CoV-2 Antibodies for Treatment and Prevention of COVID-19 - Lessons Learned and Remaining Questions," highlighted a "snapshot" of the current state of the science and served to inform future directions in this rapidly evolving field (Table 1). The participants included researchers and clinicians from academia, industry, and federal government agencies. The videocast (accessible at https://videocast.nih.gov/watch=42078) was open to the public and had more than 1500 participants.
The meeting launched with presentations highlighting the most recent clinical trial data on the use of antibodies to treat or prevent COVID-19 and the global landscape of emerging variants of concern (VOCs).
CONVALESCENT PLASMA
One of the first interventions evaluated to treat patients with COVID-19 was the transfusion of CP, which is blood plasma derived from patients who have recovered from COVID-19. The rationale for use of CP was based on its administration as treatment of Argentine hemorrhagic fever in a clinical trial that provided compelling evidence for the efficacy of CP for viral infections (10), as well as use of CP for treatment in previous influenza and coronavirus outbreaks over several decades (4, 11-14).
Several studies have evaluated the safety and efficacy of CP for treatment of COVID-19. Findings from a retrospective matched cohort study in patients treated with CP soon after admission showed some survival benefit compared with administration later during the disease course (15). A few trials reported efficacy of CP in early-stage disease, but CP treatment did not seem to benefit patients with advanced COVID-19 (15-17). The RECOVERY (Randomised Evaluation of COVID-19 Therapy) trial (ClinicalTrials.gov: NCT04381936) showed that high-titer CP did not improve survival or other prespecified clinical outcomes in patients hospitalized with COVID-19 (18). Benefits did, however, seem to accrue to immunocompromised patients in some studies (19-21).
In April 2020, the national CP Expanded Access Protocol (EAP) (ClinicalTrials.gov: NCT04374370) was initiated to provide access to a therapy with possible clinical benefit for patients with COVID-19. The EAP was administered by the Mayo Clinic as a single-group clinical protocol treating more than 100 000 patients at about 2700 sites. Although patients received many concomitant treatments over the course of the study, the EAP found that only CP from donor units with high antibody titers was associated with modest clinical benefit based on improvements in 7-day survival in patients who were not intubated, as well as in those who were not intubated, were aged 80 years or younger, and received CP within 72 hours of diagnosis (22).
These studies suggested that CP acts like a conventional antiviral, so benefit may be seen only when CP is administered early in the disease course with donor units containing potent, high-titer antibodies. Disappointingly, however, the C3PO (Convalescent Plasma in Outpatients With COVID-19) trial (ClinicalTrials.gov: NCT04355767) in recently diagnosed outpatients failed to show benefit (23).
ANTI-SARS-COV-2 MABS AND HIG
Similar challenges have been encountered in achieving clinically significant improvements in hospitalized patients treated with mAbs and HIG in addition to standard of care. Anti-SARS-CoV-2 mAbs targeting the spike protein were developed rapidly and integrated into research studies. Hyperimmune globulin is composed of highly purified anti-SARS-CoV-2 antibodies from multiple donors who have recovered from COVID-19, rendering a product whose SARS-CoV-2 neutralization titer is several times higher than that of single-donor CP (24).
In 2 trials, ACTIV-3/TICO (Accelerating COVID-19 Therapeutic Interventions and Vaccines-3: Therapeutics for Inpatients With COVID-19) (ClinicalTrials.gov: NCT04501978) and ITAC (INSIGHT 013: Inpatient Treatment of COVID-19 With Anti-Coronavirus Immunoglobulin) (ClinicalTrials.gov: NCT04546581), patients with COVID-19 hospitalized within 12 days of symptom onset were randomly assigned to an investigational agent or a placebo group, both with standard of care. ACTIV is a public-private partnership among federal agencies, academia, and numerous industry partners managed by the Foundation for the National Institutes of Health (25). Overall, 5 antibody agents were studied in TICO, including the single mAbs bamlanivimab (Eli Lilly) and sotrovimab (GlaxoSmithKline and Vir Biotechnology), as well as 2 mAb combinations, BRII-196 plus BRII-198 (Brii Biosciences) and AZD7442 (AstraZeneca). The fifth agent evaluated was an HIG (CSL Behring, Emergent BioSolutions, Grifols, and Takeda Pharmaceutical), a product that targets multiple epitopes and has been shown to be effective against several SARS-CoV-2 VOCs (26). The first 4 completed evaluations of bamlanivimab, BRII-196 plus BRII-198, sotrovimab, and HIG showed that the first 2 did not result in favorable outcomes and the latter 2 resulted in modest but statistically nonsignificant favorable outcomes compared with placebo (27-29). Studies of 2 agents, MP0420 (Molecular Partners-DARPin technology) and AZD7442, are ongoing. Thus, as with CP, it seems that mAbs and HIG administered after multiple days of illness may not be useful.
The results in outpatients were much more encouraging. The safety and efficacy of mAbs to treat COVID-19 in nonhospitalized patients has been evaluated in several clinical trials. The BLAZE-1 (Blocking Viral Attachment and Cell Entry with SARS-CoV-2 Neutralizing Antibodies) trial (ClinicalTrials.gov: NCT04427501) evaluated intravenous bamlanivimab for early COVID-19. The phase 2 trial included a preplanned interim analysis (when the last patient randomly assigned to bamlanivimab reached day 11) that showed lower rates of hospitalization and emergency department visits for those receiving bamlanivimab than those receiving placebo, at 1.6% and 6.3%, respectively (30). This trial provided the basis for the first EUA for an mAb to treat COVID-19. Sotrovimab was evaluated in the COMET-ICE (COVID-19 Monoclonal Antibody Efficacy Trial - Intent to Care Early) phase 3 trial (ClinicalTrials.gov: NCT04545060). The trial was stopped early for efficacy because the mAb treatment substantially prevented progression of COVID-19, with an adjusted relative risk reduction of 85% (97.24% CI, 44% to 96%; P = 0.002) and with 1% of patients progressing in the sotrovimab group versus 7% in the placebo group (31). The FDA issued an EUA for sotrovimab in May 2021 (32).
The BLAZE-1 extension trial (ClinicalTrials.gov: NCT04427501) and several trials of REGEN-COV (Regeneron) (ClinicalTrials.gov: NCT04425629) evaluated mAb combinations. The BLAZE-1 extension compared bamlanivimab-etesevimab versus either bamlanivimab alone or placebo in outpatients with mild to moderate COVID-19 (33). Compared with placebo, this mAb combination showed a 70% reduction in rates of hospitalization or death in the phase 3 trial among nonhospitalized patients with COVID-19; specifically, 2.1% of patients in the bamlanivimab-etesevimab group compared with 7.0% in the placebo group were hospitalized or died (absolute risk difference, −4.8 percentage points [95% CI, −7.4 to −2.3 percentage points]; P < 0.001) (34). The REGEN-COV trials assessed the mAb combination casirivimab-imdevimab delivered intravenously in outpatients. The phase 2 trial showed that participants who were seronegative for SARS-CoV-2 at study entry benefited more from the treatment than those who were seropositive, considering viral and clinical end points (35). The phase 3 trial results for 600 mg of casirivimab plus 600 mg of imdevimab included a reduction in symptom duration and a 70% relative risk reduction in hospitalizations and deaths; specifically, 1% of patients treated with REGEN-COV versus 3% in the placebo group were hospitalized or died (P = 0.0024) (36, 37). A phase 2 dose-ranging trial (ClinicalTrials.gov: NCT04666441) testing intravenous and subcutaneous delivery of casirivimab-imdevimab in outpatients with SARS-CoV-2 infection showed similar viral load reductions independently of the dose and route of administration compared with placebo (36). Additional results were pending at the time of the Summit. Other mAbs are currently under evaluation in the ACTIV-2 (A Study for Outpatients With COVID-19) clinical trial (ClinicalTrials.gov: NCT04518410), including subcutaneous BMS-986414 (C135-LS) plus BMS-986413 (C144-LS) (Bristol Myers Squibb and Rockefeller University), AZD8895 plus AZD1061 (AstraZeneca), and intravenous SAB-185 (SAb Biotherapeutics), a polyclonal antibody product.
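The absolute and relative risk reductions quoted throughout this section follow directly from the raw event rates. A minimal worked sketch (our code), using the BLAZE-1 phase 3 rates above; note that the trial's reported adjusted difference of −4.8 percentage points differs slightly from this unadjusted arithmetic.

```python
def risk_reductions(rate_treated, rate_placebo):
    """Absolute risk reduction (ARR) and relative risk reduction (RRR)."""
    arr = rate_placebo - rate_treated
    rrr = arr / rate_placebo
    return arr, rrr

# BLAZE-1 phase 3 event rates quoted above: 2.1% vs 7.0%.
arr, rrr = risk_reductions(0.021, 0.070)
print(f"ARR = {arr * 100:.1f} percentage points, RRR = {rrr * 100:.0f}%")
# -> ARR = 4.9 percentage points, RRR = 70%
```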
In the prevention setting, mAbs could offer immediate protection for unvaccinated persons exposed to SARS-CoV-2 or those who have no specific exposure but work in high-risk settings. They could also be administered to patients who are unlikely to respond to COVID-19 vaccines or, in rare cases, to those who are allergic to vaccine components. Target populations for such preventive use of mAbs may include residents of nursing homes, household contacts, immunocompromised hosts, and certain individuals in high-incidence workplaces.
Nursing homes are areas of particularly high incidence, with nursing home residents and workers making up approximately one third of all COVID-19 deaths in the United States (38). Findings from the phase 3 BLAZE-2 study of postexposure prophylaxis (ClinicalTrials.gov: NCT04497987) served as proof of concept for the use of mAbs in this setting, showing reduced incidence of COVID-19, reduced symptoms, and no deaths among patients in nursing homes who were administered bamlanivimab versus placebo (39).
In the REGEN-COV 2069 phase 3 study (ClinicalTrials. gov: NCT04452318), the combination of casirivimabimdevimab was administered subcutaneously to all contacts of a household in which 1 member had been diagnosed with COVID-19. Household contacts who received REGEN-COV showed no symptomatic cases of COVID-19 and a 50% reduction in overall rates of infection with SARS-CoV-2 compared with the placebo group (40). Data from the full study showed a relative risk reduction of approximately 81% between the REGEN-COV and placebo groups in the incidence of symptomatic SARS-CoV-2 infection; specifically, 1.5% of patients in the REGEN-COV group versus 7.8% in the placebo group had symptomatic infection (odds ratio, 0.17; P < 0.001) (41). The FDA recently expanded the REGEN-COV EUA for postexposure prophylaxis in persons who are at high risk for progression to severe COVID-19 (6).
The Summit included an update on emerging SARS-CoV-2 variants based on data from the Global Initiative on Sharing All Influenza Data as of 9 June 2021 (42). All emerging variants of interest have mutations in the N-terminal domain or the receptor-binding domain; many also carry mutations at the furin cleavage site (43). Many of these mutations have been shown to confer partial resistance to convalescent sera and neutralizing antibodies, indicative of immune pressure as a selective force (43).
The mutations and variants of SARS-CoV-2 have implications for treatment and vaccine design. Although the EUA for bamlanivimab administered alone was subsequently revoked by the FDA because of the emergence of variants that affect the mAb epitope, other mAb combinations administered together under EUA, including bamlanivimab-etesevimab, sotrovimab, and casirivimab-imdevimab, remain effective against the Delta variant (45). However, when the Delta variant acquired the K417N mutation, the neutralization activity of the bamlanivimab-etesevimab combination was reduced by more than 1000-fold (Table 2) (46). Transitions to global prevalence can occur quickly; the transition to the G clade (SARS-CoV-2 carrying the D614G mutation in the spike protein) in spring 2020 took 6 to 10 weeks. The Delta variant is currently the major variant globally. Although the exact trajectory of future variants cannot be predicted, patterns of convergence and covariation can be studied to determine relevant variants and forms that can occur. This knowledge will inform the improvement of current antibody-based interventions and the design and development of next-generation COVID-19 vaccines.
SESSION 1 KEY THEME: CHARACTERIZATION OF FUNCTIONAL ANTIBODIES TARGETING SARS-COV-2
Central to the selection of functional antibodies that can be used as potential therapeutic or prophylactic agents is the characterization of the antibodies that can neutralize SARS-CoV-2 or eliminate SARS-CoV-2-infected cells. This requires a detailed analysis of the specific epitopes targeted by the antibody, delineation of the mechanisms of antiviral effect, characterization of the ability of the antibody to control viruses harboring mutations of concern, and determination of the efficacy of the antibody in animal models followed by evaluation in clinical studies.
The Coronavirus Immunotherapy Consortium (CoVIC) (47) was launched to evaluate potential therapeutic antibodies in side-by-side in vitro analyses using standardized platform assays and in vivo models. Findings from these comparative studies are used to create a profile of antibody activity that correlates with protective clinical efficacy and predicts clinically successful outcomes (47, 48).
To date, CoVIC has compiled 350 candidate therapeutics from more than 50 groups in academia, research institutions, industry and biotechnology companies, and government agencies (47, 48). These antibodies are evaluated by 8 partner laboratories according to CoVIC standardized testing protocols; CoVIC also assesses the ability of their compiled antibodies to retain their neutralizing activity to the VOCs (47, 48).
The session panelists noted an ongoing need to focus on VOCs, particularly on the location of mutations in the spike protein and which antibody classes can be combined to address commonly occurring escape mutants. This gap is being addressed by the ACTIV Tracking Resistance and Coronavirus Evolution initiative, designed to provide actionable information on emerging SARS-CoV-2 variants (49). The panelists also proposed that combining 2 receptor-binding domain binders with 1 N-terminal domain binder would provide more coverage on the spike protein and allow antibody cocktails that are more potent; however, structural factors must be considered to ensure that antibodies with different orientations do not block each other.
A focus on conserved epitopes is also an aspect that needs to be considered for identifying mAbs against variants, especially for development of bispecific antibodies. The panel proposed the use of broadly neutralizing antibody cocktails as a potential approach because target sites for these antibodies on the spike proteins are not routinely recognized by other antibodies (50)(51)(52). They noted that additional research is essential to better understand how antibodies with different Fc-mediated effector functions and specificities drive optimal antiviral activity.
The session panelists proposed that as additional mAbs are identified, these should be provided to CoVIC for comparative analysis. They concluded that these critical studies would provide a better understanding of how anti-SARS-CoV-2 antibodies target spike antigens and how mutations in the spike proteins result in immune escape. These research findings will also inform the selection of mAbs with relevant breadth and potency that can be combined as future therapeutics or prevention strategies.
SESSION 2 KEY THEME: PRECLINICAL DELIVERY, PHARMACOLOGY, AND EFFICACY OF ANTI-SARS-COV-2 ANTIBODIES
The COVID-19 pandemic brought an urgent need to develop animal models to recapitulate the human disease phenotype to better understand the virus pathogenesis and to test therapeutic and preventive interventions. Many models were developed, including mouse, hamster, ferret, and nonhuman primate. Infection with SARS-CoV-2 manifests differently in different animal models. Nonhuman primates are typically a replication model with mild pathology, whereas ferrets and hamsters are useful as transmission models that develop severe disease with weight loss and prolonged recovery. A key goal for these models is to develop cell targets like those that become infected during natural infection in humans, including ciliated cells in the airways and type II alveolar epithelial cells in the alveoli. Current models show evidence of the biphasic disease patterns seen in humans, with early replication followed by clearance and immunopathogenic severe disease; however, the disease course is compressed in small animal models.
The human angiotensin-converting enzyme 2 (HuACE2) is the receptor of SARS-CoV-2 (53,54). The mouse ACE2 is not sufficient to allow SARS-CoV-2 entry and replication, so mouse systems have been generated to express the HuACE2 gene as a transgene or through viral vectors. The K18 mouse model expresses the HuACE2 under the cytokeratin (K18) promoter (55). The Ad5-HuACE2 transduced mice were widely used to test the ability of multiple mAbs to prevent or treat COVID-19-like disease (56). Another approach was to create a mouse-adapted virus capable of infecting target cells and causing disease (57). Animal models developed to date have been useful in evaluation of mAbs; however, the therapeutic window is very narrow.
The role of the interaction between the Fc portion of antibodies and Fc receptors can also be evaluated in transgenic mouse models (58). Interactions between Fc and FcγR were shown to be required for the in vivo protective activity of neutralizing anti-SARS-CoV-2 mAbs by comparing the efficacy of antibody treatment in animals expressing human FcγRs versus genetically defined FcγR-null mice. Furthermore, Fc domain variants engineered for selectively enhanced binding to activating FcγRs exhibited improved efficacy in the mouse-adapted SARS-CoV-2 model (58). The Collaborative Cross mice, a panel of recombinant inbred mouse strains (59, 60), have been valuable in addressing the natural genetic variation linked to SARS-CoV-2 pathogenesis. These mice were used to identify susceptibility loci, with the aim of extending the therapeutic window. Using a genetic mapping approach, 6 genes associated with susceptibility to infection were identified on mouse chromosome 9 (Baric RS. Personal communication.). The orthologous genes, at the chromosome 3p21.31 gene cluster, were linked to severe COVID-19 in human genome-wide association studies (61). The Syrian hamster model has also been used extensively because hamsters are susceptible to the original strain and new variants. These hamsters develop disease and serve as a model of viral transmission between animals (62).
The session panelists suggested that an optimal approach to mitigate variants is to focus on conserved SARS-CoV-2 epitopes, half-life extension of mAbs and Fc-mediated effector function, and use of antibody combinations. The concept of the conserved epitope is important for pandemic preparedness-there is a need to identify and develop mAbs that can protect against other coronaviruses in the future.
The remaining knowledge gaps highlighted during the session included exploring strategies to extend the therapeutic window, identifying correlates of protection from infection and disease, defining characteristics of antibodies essential to providing long-term immunity, defining the effect of antibody biodistribution on its performance, and developing animal models that recapitulate the longer-term effects of SARS-CoV-2 infection.
SESSION 3: REAL-WORLD CLINICAL USE OF ANTI-SARS-COV-2 ANTIBODIES
This session focused on lessons learned, challenges, and potential future directions for real-world clinical use of anti-SARS-CoV-2 antibodies. Because clinical trials of mAbs have demonstrated efficacy in treatment of early COVID-19 (33-35), resulting in issuance of EUAs for mAb products, much of the discussion centered on these agents.
The panelists discussed challenges encountered regarding delivery and infrastructure, both during clinical trials evaluating mAbs and during use as therapeutic agents after EUA. These challenges include the need for an appropriate outpatient setting with sufficient clinical staffing and resources to see trial participants or deliver parenteral mAb treatment to patients from an infection control perspective. This may be amplified when health care resources are stretched, when inpatients with severe disease are prioritized, and where clinical sites are in academic centers rather than community settings. Given the need to administer mAbs early in the disease course, it is necessary to promptly identify eligible patients and to coordinate where they can receive treatment; alternative routes of administration (39) may overcome certain aspects of these challenges.
Lessons learned and challenges encountered regarding public health messaging and health inequities were also discussed. As knowledge has evolved during the pandemic and mAbs have become available as therapeutic agents, public health messaging to both clinicians and patients has changed. Early in the pandemic, patients were told to isolate at home if they had COVID-19 and were not sick enough to require hospitalization. With the availability of mAbs to treat early COVID-19, certain patients at high risk for progression to severe disease were urged to seek early treatment in the outpatient setting after a positive test result for SARS-CoV-2. Adapting to this evolving messaging was difficult for both patients and clinicians. In addition, there have been numerous barriers to equitable delivery of mAbs, including systemic and structural inequities related to race/ethnicity and socioeconomic status. Geographic and transportation barriers to health care systems can also be important because many infusion centers for administering mAbs are located far from some patients' homes or sites of diagnosis.
Future directions and unanswered questions about the use of anti-SARS-CoV-2 antibodies were addressed. One question raised was whether the availability of routes of administration other than intravenous for antibody treatments could overcome some identified challenges and barriers to implementation. All FDA-authorized mAbs are currently delivered intravenously, with the exception of REGEN-COV, which is also authorized for subcutaneous administration (63). The potential use of mAbs as preventive treatments for immunocompromised persons who do not mount a protective immune response to vaccination was discussed. Although the field has advanced substantially, much remains to be done to achieve certain personalized approaches to treatment, such as rapid point-of-care diagnostic tests to identify variants to ensure that the correct treatment is provided.
CONCLUSION
The Summit highlighted advances that have been made using anti-SARS-CoV-2 antibodies for prevention and treatment of COVID-19. The presenters and panelists illustrated the clinical benefits when potent mAbs are administered early in the disease course for patients at high risk for progression to severe COVID-19. They also discussed ongoing studies to determine the potential benefit of high-titer CP antibodies or HIG in treating patients with COVID-19.
Several key knowledge gaps were identified on the basis of this snapshot of the field, including characterizing the specific spike protein epitopes and conserved regions targeted by anti-SARS-CoV-2 antibodies, delineating the mechanisms of action by which these antibodies bind to the targeted epitopes to prevent or control disease, studying the different Fc-mediated effector functions and specificities driving optimal antiviral activity and pharmacokinetics, developing alternate routes of administration, and improving animal models. The potential effect of mAb infusions on COVID-19 vaccine immunogenicity and efficacy remains unclear. Several preclinical and clinical studies (ClinicalTrials.gov: NCT04852978 and NCT04952402) addressing this issue are ongoing with the support of the U.S. Countermeasures Acceleration Group for the federal COVID-19 response in collaboration with the study product sponsors. The continuing emergence of SARS-CoV-2 variants underscores the critical need to identify classes of mAbs that can be successfully and effectively combined and to develop and evaluate broadly neutralizing antibody cocktails and bispecific antibodies as potential therapeutics. In addition, it is critical to develop solutions to the identified challenges associated with real-world clinical use of antibody therapies. Although this field has rapidly advanced, additional progress is critical for prevention and treatment of COVID-19 in the United States and worldwide.
"year": 2021,
"sha1": "cb25bdc20c362a4008efee72ae5c56ebacb2e1bb",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc8559823?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ef1bc8971ab5525177160dd225c85833132b27a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
IL-17 and its role in inflammatory, autoimmune, and oncological skin diseases: state of art
Abstract Recent data support the theory of the involvement of IL-17 in the pathogenesis of several chronic inflammatory skin diseases (psoriasis, atopic dermatitis, acne, hidradenitis suppurativa) and autoimmune skin diseases (alopecia areata, vitiligo, bullous diseases). Even if the role of IL-17 in inflammatory and autoimmune diseases has been reported extensively, its role in tumors is still controversial. Some reports show that Th17 cells eradicate tumors, while others reveal that they promote the initiation and early growth of tumors. Herein, we review the involvement of IL-17 in some common dermatologic diseases: psoriasis, atopic dermatitis, hidradenitis suppurativa, acne, vitiligo, melanoma, and nonmelanoma skin cancers.
Introduction
IL-17 is a cytokine whose family consists of six members, from IL-17A to IL-17F, even if the term IL-17 usually refers to IL-17A. 1 All the family members share some activities: IL-17A, IL-17F, and their heterodimer IL-17A/F have inflammatory effects of differing strength; IL-17B, IL-17C and IL-17D are proinflammatory cytokines with as yet unknown roles. The IL-17E cytokine, also named IL-25, is involved in the Th2 cell response. 2 IL-17 is produced mainly by T helper type 17 (Th17) cells, CD8+ T cells, γδ T cells, invariant natural killer T (iNKT) cells, natural killer (NK) cells, natural Th17 cells, and lymphoid tissue inducer (LTi) cells. 1 IL-17 exerts many physiological functions including: neutrophil recruitment, Th2 stimulation to provide an effective response against extracellular organisms, macrophage production of IL-1β and TNF-α, and induction of inflammatory mediators such as matrix metalloproteinases (MMPs). 1 Despite the important role of the IL-17 cytokine in regulating the adaptive and innate immune systems, 3 its overproduction could be involved in several diseases. 4 In recent years, different studies have tried to associate the IL-17 pathway with an increasing number of inflammatory diseases, such as asthma, chronic obstructive pulmonary disease (COPD), lupus, polymyalgia rheumatica, giant cell arteritis, Behçet disease, dry-eye syndrome, Sjögren's syndrome, Crohn's disease, and multiple sclerosis. However, the role of IL-17 is still unclear. 4 Th17 cells and IL-17 have also been identified as implicated in the pathogenesis of several human autoimmune diseases, such as rheumatoid arthritis, multiple sclerosis, inflammatory bowel disease, systemic sclerosis, primary Sjögren's syndrome, alopecia areata, and vitiligo. 5 In an attempt to contribute to the description of the relationship between IL-17 pathways and the most frequent dermatological diseases, we report the state of the art on the role of IL-17 in psoriasis, atopic dermatitis, acne, hidradenitis suppurativa, vitiligo, alopecia areata, nonmelanoma skin cancer, and melanoma.
Psoriasis
Recent evidence has highlighted the importance of IL-17A in the pathogenesis of psoriasis and psoriatic arthritis. IL-17A upregulates the expression of inflammation-related genes in keratinocytes and fibroblasts, resulting in the production of inflammatory cytokines, chemokines, and other mediators and the consequent clinical features. 8 Liang et al. support the emergent role of IL-17 in psoriasis pathogenesis, showing that it increases the expression of antimicrobial peptides (e.g. S100) and chemokines (e.g. CCL20) that activate and recruit neutrophils and T cells in keratinocyte cultures. 9 Recent data have reported a serum increase of IL-17 in psoriatic patients. 10 The binding of IL-17 to its receptor (IL-17R) expressed by keratinocytes promotes the aberrant differentiation and proliferation of keratinocytes, the expression of chemokines (CXCL1, CXCL2, and CCL20) which enhance recruitment of Th17 and dendritic cells to the skin, and the expression of antimicrobial peptides, and it reduces the expression of cell adhesion molecules, resulting in disruption of skin barrier function. 11 Zhang et al., in a recent work on a pediatric population, reported a significantly higher frequency of Th17 cells and their cytokines in psoriatic patients' blood compared with healthy controls, pointing out a significant positive relationship between the increase in Th17 blood frequency and patient PASI. 12 Recently, the central role of IL-17 in systemic inflammation and cardiovascular comorbidity in psoriasis patients has also been highlighted. [18][19][20] Finally, the evidence for the role of IL-17 in the pathogenesis of psoriasis has been supported by the proven efficacy of treatment with anti-IL-17 in psoriatic patients. 8,21,22 In particular, the biological agent brodalumab has proved its efficacy and safety in generalized pustular psoriasis. 23
Atopic dermatitis
The central role of cytokines in the pathogenesis of atopic dermatitis (AD) has been widely highlighted. Interleukin-4 and interleukin-13 are major causes of inflammation and itch in patients. 24 Recently, the role of Th17 cells and IL-17 in the pathogenesis of AD has been investigated. Th17 cells play a potential role in immune activation, including attraction of neutrophils. 25 An increase in IL-17A, IL-17E, IL-17F, and IL-23 serum concentrations has been demonstrated in children suffering from AD, which correlated with disease severity. 26 Elevated IL-17A and IL-17E levels were found in the papillary dermis of AD lesions, especially in the acute phase. 27 In AD mouse models, IL-17E is upregulated, inducing the expression of endothelin-1, which is an important factor in the development of pruritus. 28 IL-17E may be responsible for skewing the immune response toward Th2 dominance and decreases filaggrin production in keratinocytes, resulting in disorders of keratinization and impaired skin barrier function/homeostasis. 29 Considering the role of IL-17 in the pathogenesis of AD, it is reasonable to speculate that anti-IL-17 could be effective in the treatment of AD. However, as anti-IL-17A therapy in psoriasis increases the risk of neutropenia and of skin and oral candida infections, it seems reasonable to be cautious, as AD patients are more prone to skin and mucosal infections than patients suffering from psoriasis. 30
Acne
The evidence of the presence of inflammatory factors in the very earliest stages of acne lesions, prior to hyperproliferation of the follicular epithelium, raises questions about the primary key role of inflammation in the pathogenesis of acne. 31 Recently, the role of Th17 in acne lesions has come under investigation. Examination of inflammatory changes in early acne lesions in two separate patient cohorts revealed a significant overexpression of cytokines involved in Th17 pathways. 32
IL-17 and Oncological Skin Diseases
It has been hypothesized that IL-17 may stimulate cancer cells to produce angiogenic factors (VEGF), thereby enhancing tumor angiogenesis. 58 These results could suggest a direct impact on the biological behavior of tumor cells in the local microenvironment and could explain the increased expression of VEGF and angiogenesis in invasive melanomas.
In patients with advanced inoperable melanoma, a better understanding of the regulation of cytokine and cellular immune activity in response to melanoma may be important in order to predict therapeutic response to immunotherapy, including treatment with ipilimumab. A recent study showed that, in patients with regionally advanced melanoma enrolled in a trial of neoadjuvant therapy with ipilimumab at 10 mg/kg, baseline IL-17 level was significantly associated with the risk of subsequent development of severe immune-mediated diarrhea/colitis. 61 Further investigation in larger trials is necessary to confirm these findings.
Conclusion
We have reported a global, up-to-date overview of the role of IL-17 in inflammatory, autoimmune, and oncological skin diseases. The literature data suggest that this cytokine is implicated in these diseases through different pathways. It could be hypothesized that IL-17 represents a link among these diseases, underlying a common pathogenic basis in which inflammation is the distinctive hallmark and this cytokine serves as a marker. Further clinical studies are needed to confirm this correlation and to determine its implications for the development of new therapeutic approaches as well as for the monitoring of disease activity and therapeutic response.
Questions (answers provided after references)
True/False
1 All the members of the IL-17 family have an inflammatory effect.
2 The pathogenetic mechanisms triggered by the binding of IL-17 with its receptor in patients with psoriasis result in disruption of skin barrier function.
3 There is no evidence about the efficacy of treatment with anti-IL-17 in psoriatic patients.
4 Psoriatic patients are more prone to skin and mucosal infections than patients suffering from atopic dermatitis.
5 Some drugs, such as dihydroxyvitamin D3, retinoids, vitamin A, and zinc, have a successful role in the treatment of acne because of their inhibition of inflammatory Th17.
6 In the case of HS refractory to antibiotic therapy or other biological drugs, there is evidence that supports the role of IL-17 in the pathogenesis of HS and provides a rationale for treating hidradenitis with anti-IL-17.
7 In a recent study, it is confirmed that Th17 cells exert a direct effect on melanocytes.
10 Better understanding of the regulation of IL-17 and cellular immune activity in response to melanoma, in order to predict therapeutic response to immunotherapy, could be an advantage for patients with advanced inoperable melanoma.
"year": 2019,
"sha1": "82018c1899bdf5e179b7977532ab2872c47e2a29",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ijd.14695",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c26d4cc730f805df60b91014912dbb548efa074a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Improvement of n-butanol tolerance in Escherichia coli by membrane-targeted tilapia metallothionein
Background: Though n-butanol has been proposed as a potential transportation biofuel, its toxicity often causes oxidative stress in the host microorganism and is considered one of the bottlenecks preventing its efficient mass production. Results: To relieve the oxidative stress in the host cell, metallothioneins (MTs), which are known as scavengers of reactive oxygen species (ROS), were engineered in E. coli hosts for both cytosolic and outer-membrane-targeted (osmoregulatory membrane protein OmpC fused) expression. Metallothioneins from human (HMT), mouse (MMT), and tilapia fish (TMT) were tested. The host strain expressing membrane-targeted TMT showed the greatest ability to reduce oxidative stresses induced by n-butanol, ethanol, furfural, hydroxymethylfurfural, and nickel. The same strain also allowed for an increased growth rate of recombinant E. coli under n-butanol stress. Further experiments indicated that the TMT-fused OmpC protein could not only function in ROS scavenging but also regulate either glycine betaine (GB) or glucose uptake via osmosis, and the dual-functional fusion protein could contribute to an enhancement of the host microorganism's growth rate. Conclusions: The abilities of these engineered E. coli strains to scavenge intracellular or extracellular ROS were examined, and TMT showed the best ability among the three MTs. Additionally, the membrane-targeted fusion protein, OmpC-TMT, improved host tolerance to 1.5% n-butanol, compared with only 1% for cytosolic TMT. These results indicate potential novel approaches for engineering stress-tolerant microorganism strains.
Background n-Butanol has many advantages over ethanol, including a higher energy density due to two extra carbons, and can be used in gasoline engines without modification. n-Butanol is less hygroscopic and evaporative than ethanol and has been recently regarded as a more viable transportation biofuel than ethanol [1]. Additionally, n-butanol is also a permitted artificial flavoring and is used in a wide range of industries, including the food and plastic industries [2]. n-Butanol often occurs as a metabolic product of the microbial fermentation using sugars and other carbohydrates as carbon sources. However, during the production of n-butanol, its accumulation is known to be highly toxic to both natural producers and engineered hosts [3,4]. This toxicity makes it difficult to produce large titers of n-butanol at levels needed for economic efficiency.
The cellular membrane is a vital factor that allows cells to acclimate to external stresses and is also one of the components most affected by organic solvents [5,6]. Most toxicity researchers have proposed that the plasma membrane is the primary target of organic solvents and plays a significant role in adapting to stress. Additionally, the length of the carbon backbone of organic solvents could alter the toxicity mechanism; increasing the hydrophobicity of the solvent could also raise the level of toxicity [7]. Long- and short-chain alcohols are known to cause stress during biofuel production by changing membrane fluidity. Ethanol and n-butanol are known to respectively decrease and increase the membrane fluidity [6,8,9]. Understanding the membrane stress response to solvents and alcohols could facilitate engineering microorganisms for improved toxin tolerance. As such, the stress responses of organisms such as E. coli to ethanol exposure have been widely studied [10], and information from these studies has been successfully adapted to engineering improved ethanologenic hosts [11]. To understand the effect of n-butanol toxicity on the host, cell-wide studies have been conducted to obtain a global view of the n-butanol stress response at the transcript, protein, and metabolite levels. In Clostridium acetobutylicum, transcript analysis indicated that the primary response was an accumulation of transcripts encoding chaperones, proteases, and other heat shock-related proteins [12]. In E. coli, several transcriptional analyses have been performed to understand the stress caused by alcohols including ethanol, n-butanol, and isobutanol [13][14][15][16]. Additionally, observations from fluorescent dye-staining indicated a large increase in reactive oxygen species during n-butanol stress [15]. This increasing oxidative stress is a response of the cell to extracellular xenobiotics, which may mediate macromolecular damage. These free radicals could directly attack the membrane by lipid peroxidation [17].
ROS include molecules that are either oxidants (such as hydrogen peroxide, H2O2) or reductants (such as the superoxide anion, O2−). All are typical side products of cellular aerobic metabolism. To decrease ROS-generated oxidative damage, microorganisms synthesize many antioxidant enzymes, including catalases, superoxide dismutases and glutathione peroxidase [18,19]. Recently, metallothioneins (MTs), a beneficial class of antioxidant proteins that occurs widely in mammals, plants and fungi, have been identified [20]. MTs are heat-stable, low-molecular-weight and cysteine-rich intracellular proteins that are responsible for maintaining the homeostasis of essential metals, such as Cu2+ and Zn2+, and for the detoxification of toxic metal ions, such as Cd2+ and Hg2+ [20][21][22]. In addition, MTs also play a role as a defense system against oxidative stress through their ROS-targeted scavenging abilities [23]. For example, the tilapia fish (Oreochromis mossambicus), which serves as a biomarker for the contamination level of aqueous environments, has the ability to survive in highly polluted environments because of its MTs' function [24,25]. Furthermore, purified tilapia MT (TMT) has been shown to have a higher ability than glutathione (GSH) to scavenge both 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS•+) radicals [26]. These observations have prompted us to postulate that TMT may serve as a good candidate for the purposes of metal absorption and free radical scavenging in microorganisms during biofuel production.
It is known that the levels of intracellular reactive oxygen species increase in E. coli after exposure to n-butanol [15]. In this study, we demonstrate that engineered E. coli strains expressing OmpC-fused MTs can elevate n-butanol tolerance by scavenging intra- and extracellular free radicals, and that the fusion protein still contributes to osmoregulation via either GB or glucose uptake.
Results and discussion
Alcohol tolerance assay
Alcohol tolerances in a variety of microorganisms have been reported by many previous studies (Table 1). A few naturally occurring microorganisms present a high alcohol tolerance: as high as 6% n-butanol in Pseudomonas [27] and 14% ethanol in Candida [3,4]. However, E. coli is highly sensitive to these alcohols, with tolerances of only 0.5-1% n-butanol and 4-5% ethanol (Table 1). In this study, we attempted to improve the alcohol tolerance of E. coli via an MTs-expression approach.
Therefore, alcohol tolerance was measured for the engineered E. coli strains expressing cytosolic MTs (HMT, MMT and TMT) and for the strains expressing membrane-targeted, OmpC-fused MTs (OmpC-HMT, OmpC-MMT and OmpC-TMT) (Table 2). The tolerance assays of these engineered E. coli strains examined 0-2.5% n-butanol and 0-5% ethanol, and the relative growth rate was defined as
relative growth rate (%) = [(A600)challenge,t12 − (A600)challenge,t0] / [(A600)no challenge,t12 − (A600)no challenge,t0] × 100.
When either 1-3% ethanol or 0.5% n-butanol was added, there was no significant difference among the different engineered E. coli strains. When 4% ethanol or 1% n-butanol was added, the TMT strain showed the best tolerance among the engineered strains; at all higher alcohol concentrations, none of the cytosolically expressed MT strains showed any further tolerance (Figure 1). Additionally, we also tested the hypothesis that strains expressing membrane-targeted MTs could enhance alcohol tolerance by decreasing damage to the cell membrane. All of the engineered strains expressing MTs on the outer membrane were observed to enhance alcohol tolerance up to 5% ethanol; however, only the OmpC-MMT and OmpC-TMT strains were able to tolerate 1.5% n-butanol (Figure 1). Our data indicated that the TMT strains showed a higher capacity for alcohol detoxification than the MMT strains, and the HMT strains showed the lowest detoxification capacity. Furthermore, all of the membrane-targeted MTs strains showed higher alcohol tolerances than the strains expressing cytosolic MTs at the higher concentrations of alcohols (1.5% n-butanol or 5% ethanol). The membrane-targeted MTs strains thus showed the better tolerance capability for both alcohols.
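For concreteness, this calculation can be written as a small function; the sketch below is a minimal illustration with hypothetical OD600 readings, and none of the names come from the original study.

```python
def relative_growth_rate(a600_challenge_t0, a600_challenge_t12,
                         a600_control_t0, a600_control_t12):
    """Percent growth of a challenged culture relative to its
    unchallenged control, from OD600 readings at t0 and t12 h."""
    delta_challenge = a600_challenge_t12 - a600_challenge_t0
    delta_control = a600_control_t12 - a600_control_t0
    return delta_challenge / delta_control * 100.0

# A strain that gains 0.30 OD units under 1% n-butanol while its
# toxin-free control gains 1.20 units shows 25% relative growth.
print(relative_growth_rate(0.10, 0.40, 0.10, 1.30))  # -> 25.0
```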
In previous studies, MTs were shown to increase cellular tolerance to toxins by scavenging free radicals produced during stress [33,34]. In this study, we hypothesized that the increased alcohol tolerance of the engineered E. coli strains, particularly the TMT strains, was due to the higher scavenging efficiency of MTs, as previously reported [26]. Overall, the membrane-targeted MMT and TMT strains tolerated three times more n-butanol (1.5% versus 0.5%) and 1.25 times more ethanol (5% versus 4%) than the control E. coli strain (pET30a). Interestingly, the OmpC-overexpressing E. coli strain without MTs also showed enhanced alcohol tolerance, to 1% n-butanol and 4% ethanol; this phenomenon was also observed in another study, in which an E. coli strain, EbN1, was observed to tolerate phenol by expressing OmpC [35]. We hypothesize that OmpC might not only act as a membrane-targeting protein but may also exert its osmoregulatory function, leading to the accumulation of compatible solutes that mitigate solvent stress.
Free radical scavenging ability
Toxins and other stressors cause oxidative stress, leading to elevated radical levels in cells. MTs are well-known antioxidants that scavenge radicals, and alcohols are known to cause oxidative stress in E. coli [35]. It is therefore worthwhile to investigate whether cytosolic and membrane-targeted MTs function as radical scavengers and increase the host's toxin tolerance. In this study, we examined the capacity of MTs to scavenge free radicals when the host cells were treated with 0 to 1.5% n-butanol. We then detected the ROS content of cells with 5(6)-carboxy-2′,7′-dichlorodihydrofluorescein diacetate (carboxy-H2DCFDA) (Figure 2). It was observed that free radicals in all strains increased with increasing concentrations of n-butanol from 0-1.5%. However, both the cytosolic- and membrane-targeted MT strains had lower levels of radicals than the control pET30a strain up to 1% (Figure 2). Moreover, at the lower n-butanol concentrations (less than 1%), the TMT strain showed a greater capacity for scavenging free radicals than either the MMT or HMT strain. Notably, the membrane-targeted MT strains showed higher radical scavenging capacities than the strains expressing cytosolic MTs. In the higher n-butanol (1.5%) treatment, both membrane-targeted strains OmpC-MMT and OmpC-TMT, but not OmpC-HMT, showed the highest radical scavenging capacities among all tested strains (Figure 2). These results suggest that the expression of MT proteins can lower the levels of free radicals and enhance tolerance to n-butanol. Interestingly, the non-MT, OmpC-only strain also showed both lower free radical levels and enhanced n-butanol tolerance. In particular, the OmpC strain was observed to have the lowest level of radicals among all of the engineered strains when treated with 0-1% n-butanol (Figure 2). It has been suggested that osmoregulation can enhance solvent tolerance [36], and our results from overexpressing OmpC support this suggestion. In addition, the slightly increased tolerance capacity of the OmpC-MMT and OmpC-TMT strains under 1.5% n-butanol stress (Figure 1b) might be attributed to the combination of osmoregulation and elevated extracellular radical scavenging capacities, especially in the presence of increased ROS levels originating from lysed cells. The results of these ROS assays at higher n-butanol concentrations indicate that the slight growth observed in the OmpC-TMT and OmpC-MMT strains (Figure 1) is due to decreased oxidative stress resulting from increased scavenging of extracellular radicals.
The roles of outer membrane (OM) proteins
Previous studies have reported that cellular osmoregulation can promote the uptake of compatible solutes, such as proline, choline, proline betaine and GB, through active transport by transmembrane proteins such as OmpC in E. coli [36,37]. To determine whether n-butanol tolerance depends on the presence of OmpC in our engineered E. coli strains, the pET30a, TMT, OmpC and OmpC-TMT strains were cultured in M9 minimal medium containing 1% n-butanol, with or without 10 mM GB. After culture for 12 hours at 37°C with 1% n-butanol as the stressor, adding GB to the M9 medium did not enhance the growth of the TMT strain (growth was in fact slightly worse). On the other hand, without 10 mM GB, the relative growth rates of the OmpC and OmpC-TMT strains were 5.21% and 4.99%, respectively, while tolerances were slightly increased when the same strains were cultured with GB (OmpC: 6.32%; OmpC-TMT: 6.81%) (Figure 3). These results indicate that GB in the medium did not increase the tolerance capacity of strains lacking overexpressed OmpC, whereas for the OmpC-related strains GB contributed to tolerance (Figure 3). From these results, we suggest that strains overexpressing OmpC accumulate compatible solutes through OmpC into the cytosol, slightly elevating n-butanol tolerance. The results also suggest that our construction strategy for the membrane-targeted OmpC-MT fusion protein did not abolish the function of OmpC, as the dual-functional OmpC-MT fusion protein could not only regulate compatible solutes but also reduce radicals to elevate the host's n-butanol tolerance.
In PYG medium, the growth rate of the OmpC-overexpressing strains was nearly four times higher than that of the strains without overexpressed OmpC (Figure 3). Meanwhile, the OmpC-overexpressing strains cultured in M9 minimal medium showed growth rates nearly 5- to 6.5-fold lower than the same strains cultured in PYG medium. The growth rate of the TMT strain in M9 medium was also 1.65-fold lower than that of the PYG-cultured TMT strain. Previous reports have observed that the porins OmpF and OmpC are differentially regulated by glucose concentration, because the two porins constitute the main glucose entry channels into the periplasm when the carbon source is present at a higher concentration of 0.2 mM (0.036 g/l) [38]. Cellular growth rate has been correlated with the uptake of glucose via OmpF and OmpC. Based on this evidence, we suggest that overexpressed OmpC could increase growth not only through osmoregulation of compatible solutes such as GB but also by regulating glucose-uptake capacity in M9 minimal (2 g/l glucose) and PYG (10 g/l glucose) medium. It was also found that the cytosolic TMT strain showed higher growth rates than the OmpC and OmpC-TMT strains in M9 minimal medium. As non-rich medium can generate radicals in the cytoplasmic matrix, this phenomenon is most likely related to the free radical scavenging ability of cytosolic TMT.
Figure 3. Assay of osmoregulation capacity in PYG or M9 medium with/without glycine betaine. The OD600 values of engineered E. coli strains were measured for cells cultured in PYG or M9 medium with 1% n-butanol (vol/vol) at 37°C. The M9 minimal medium is a simple growth medium containing 1% glucose, sodium chloride and phosphate salt; the compatible solute, GB, was added at 10 mM concentration for evaluation of osmoregulation capacity. The relative growth rates were compared with the strain controls cultured in PYG medium without alcohols.
Tolerance assay of lignocellulose pretreatment's toxins
In the biofuel industry, the pretreatment of lignocellulosic substrates is a complex process, typically involving dilute acid and steam treatment, that releases many toxins, including furfural, hydroxymethylfurfural and heavy metals [39]. In this study, the engineered E. coli strains were also tested for their tolerance to these compounds.
Furfural and hydroxymethylfurfural (HMF) are dehydration products of hemicellulose hydrolysates and act as fermentation inhibitors and potential toxins [39][40][41]. The relative growth data for the engineered E. coli strains in furfural-containing media indicated that all strains except the two HMT-expressing strains increased furfural tolerance at 15 mM (Figure 4a). The MMT strain, which expressed cytosolic MTs from mouse, showed the best growth among the cytosolically expressed MT strains at all furfural concentrations. The OmpC-TMT strain, which expressed membrane-targeted MTs from tilapia, showed the highest furfural tolerance capacity (35 mM). However, the TMT strain, which expressed cytosolic MTs from tilapia, did not show a tolerance enhancement above 20 mM furfural, relative to the MMT strain. In the HMF stress tests, both the cytosolically expressed and membrane-targeted MTs from tilapia and mouse showed a high relative growth rate (30 to 40%) in 4.5 mM HMF (Figure 4b). The OmpC strain, which only overexpressed the OmpC protein, also showed increased tolerance to both furfural and HMF; this could also be explained by its osmoregulation ability. Interestingly, MMT performed better than TMT against furfural (Figure 4a). In our previous study [26], we found that MTs display different scavenging capacities toward two kinds of radicals (ABTS•+ and DPPH•); we suggest that MMT preferentially scavenges the type of ROS generated by furfural.
Furthermore, it was expected that the OmpC-MT fusion strategy would produce increased growth rates due to the combination of osmoregulation and the extracellular free radical scavenging ability of MTs. Indeed, the OmpC-TMT strain was observed to have twice and 1.3 times the growth rate of the OmpC strain in furfural and HMF, respectively, and grew better than the pET30a strain (Figure 4a and 4b).
Heavy metals, such as nickel, may also be present in the host cell environment and are likely sourced from the substrates or their solubilized byproducts during lignocellulose pretreatment [39]. In addition to the other toxins, nickel tolerance was also tested. All engineered E. coli strains showed a significant nickel tolerance compared to the control strain (pET30a) in 1 mM nickel (Figure 4c). When 2 mM nickel-supplemented medium was used, only the OmpC-TMT strain showed distinctly higher relative growth than the control strain (47.9%). It is predicted that E. coli strains expressing the OmpC-TMT protein could chelate metals in the external milieu and could also decrease the toxin-induced oxidative stress in the cytosol. This mechanism was also suggested by a previous study, which used Hg to test toxin tolerance in E. coli [26].
Conclusions
This study uses a novel approach to develop E. coli strains that express cytosolic and membrane-targeted MTs to improve cell tolerance to toxins derived from the fermentation process. From these results, we suggest that our construction strategy for the membrane-targeted OmpC-MT fusion protein did not abolish the function of OmpC, as the dual-functional OmpC-MT fusion protein could not only regulate compatible solutes and glucose but also scavenge radicals, elevating the host's tolerance to toxins.
Reagents
All of the chemicals and reagents used were purchased from the Sigma-Aldrich Co. USA, unless mentioned otherwise. The reagents, when available, were molecular biology grade. All solutions were prepared using these reagents and sterile distilled water.
Bacterial strains, culture media and culturing conditions
MT-expressing engineered constructs, protein expression and their locations in recombinant E. coli hosts were confirmed in our previous study [26]. Batch cultures were grown in 10 ml PYG medium (5 g of peptone, 10 g of yeast extract, 10 g of glucose, 5 g of tryptone, 40 ...). The engineered strains (Table 2) were grown in medium supplemented with 30 μg/mL kanamycin at 37°C. When culture density reached O.D. 0.6, isopropyl-β-D-thiogalactopyranoside (IPTG) was added to a final concentration of 0.6 mM. After eight hours of incubation, cells were harvested for tolerance experiments. All solvent concentrations in media are reported as % (v/v).
Figure 4. Tolerance assay of lignocellulose pretreatment's toxins. Toxin tolerance assays were conducted with different concentrations of (a) furfural, (b) hydroxymethylfurfural, and (c) nickel. The OD600 values of engineered E. coli strains cultured in PYG medium were also measured. The relative growth rates were compared with the strain controls cultured in PYG medium without toxins.
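For concreteness, the induction arithmetic works out as follows; the 1 M IPTG stock concentration is an assumption for illustration (the stock used is not stated in the text).

```python
def iptg_stock_volume_ul(final_mM, culture_ml, stock_M=1.0):
    """Volume of IPTG stock (in microliters) needed to reach the
    target final concentration in the given culture volume."""
    moles_needed = final_mM * 1e-3 * culture_ml * 1e-3  # mol
    return moles_needed / stock_M * 1e6                 # liters -> microliters

# 0.6 mM final in a 10 ml batch culture requires 6 ul of a 1 M stock.
print(iptg_stock_volume_ul(0.6, 10.0))  # -> 6.0
```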
Tolerance assay of toxins
The above-described pre-cultures of E. coli BL21 (DE3) strains harboring different pET-30a plasmids (Table 2) were inoculated at an initial O.D. of 0.1 in PYG medium containing 0.6 mM IPTG and 0-2.5% n-butanol (v/v) or other toxins (furfural, hydroxymethylfurfural (HMF) and nickel). The cells were assessed after 12 hours of growth at 37°C. The relative growth rates were derived from cell densities measured at a wavelength of 600 nm by spectrophotometer (GE Healthcare Life Sciences GeneQuant 1300). Densities of toxin-treated cultures were normalized by the density of their respective toxin-free controls under otherwise identical growth conditions [42]. From each tolerance assay, percent tolerance relative to unchallenged cultures was estimated at each challenge level and sample time as follows:
tolerance (%) = [(A600)challenge,t12 − (A600)challenge,t0] / [(A600)no challenge,t12 − (A600)no challenge,t0] × 100
Reactive oxygen species detected by carboxy-H2DCFDA under n-butanol stress
The engineered E. coli strains were pre-cultured in PYG medium containing 0%, 0.5%, 1%, 1.5%, 2% and 2.5% n-butanol. Aliquots of 100 μl of pre-cultured strains were each re-suspended in 5 ml M9 medium; 140 μl of each diluted sample was transferred to a 96-well plate followed by incubation at 37°C for 45 min. The assay method was adapted from a previous study [15]. All samples were treated with 10 μl of 25 mM carboxy-H2DCFDA (Invitrogen, Co., Carlsbad, CA) and incubated at 37°C for 15 min. The optical density at 600 nm and the fluorescence excitation/emission at 535/600 nm of each sample were measured by a plate reader. Tert-butyl hydroperoxide (TBHP) (Invitrogen, Carlsbad, CA) is a known stressor that produces intracellular H2O2; a set of positive controls for the ROS assay was prepared with the strains cultured without n-butanol and treated by the same steps as above, except for an initial 45 min incubation with 10 μl of 7.78 M TBHP.
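A minimal sketch of one way the ROS readout could be normalized is shown below; normalizing fluorescence by OD600 and expressing it relative to the TBHP positive control are assumptions for illustration, not the study's stated analysis, and all names are hypothetical.

```python
def normalized_ros_signal(fluorescence, od600):
    """Fluorescence (535/600 nm ex/em) per unit cell density."""
    return fluorescence / od600

def relative_ros(sample_fluor, sample_od, tbhp_fluor, tbhp_od):
    """ROS level of a sample as a fraction of the TBHP positive
    control measured on the same plate."""
    return (normalized_ros_signal(sample_fluor, sample_od)
            / normalized_ros_signal(tbhp_fluor, tbhp_od))

# e.g., a well at OD600 0.5 with fluorescence 1200 versus a TBHP
# control at OD600 0.4 with fluorescence 4000 gives 0.24.
print(round(relative_ros(1200, 0.5, 4000, 0.4), 2))  # -> 0.24
```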
"year": 2013,
"sha1": "c471420be72015c7281bf3095d9bd515a15b9ca6",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/1754-6834-6-130",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6022510467d0cd4a023be53c670a123f41390e3a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Intellectual Property Rights, Foreign Direct Investment, and Industrial Development
This paper develops a North-South product cycle model in which Southern imitation and the North-South flow of foreign direct investment (FDI) are endogenously determined. In the model, a strengthening of IPR protection in the South reduces the rate of imitation, which, in turn, increases the flow of FDI. The increase in FDI more than offsets the decline in production undertaken by Southern imitators, so that the South's share of goods produced by the global economy increases. Furthermore, real wages of Southern workers increase even though prices of goods produced by multinationals exceed those of Southern imitators. The preceding results hold when Northern innovation is endogenously determined; in addition, the rate of innovation increases with a strengthening of Southern IPR protection.
Introduction
How does the strengthening of intellectual property rights (IPR) protection by developing countries impact their industrial development? How does it affect their ability to attract foreign direct investment (FDI)? These and related questions have been at the heart of an ongoing debate that was brought into sharp relief during the negotiations preceding the ratification of the WTO's Agreement on Trade Related Aspects of Intellectual Property Rights (TRIPS) in 1995. Opposition to stronger IPR regimes in developing countries rests on two general arguments. First, there is concern that consumer welfare may be adversely impacted by enhancing the monopoly powers of innovators. Second, there is fear that stronger IPR protection in developing countries will hamper their ability to absorb foreign technologies without having any appreciable effect on Northern innovation. 1 On the other side, TRIPS supporters argue that stronger IPRs world-wide will not only increase incentives for innovation but also foster industrial development in developing countries by encouraging multinationals to shift production there. In this paper, we seek to illuminate this important debate by developing a North-South product cycle model in which Southern imitation as well as the North-South flow of FDI respond endogenously to changes in the degree of Southern IPR protection available to Northern firms. Building on the research tradition established by Grossman and Helpman (1991), the model provides a unified framework for assessing some of the key arguments for and against stronger IPR regimes in developing countries. The theoretical product cycle literature on the effects of Southern IPR protection has been built on two types of growth models analyzed in great detail in Grossman and Helpman (1991) - the variety expansion model and the quality ladders model. Important contributions to this literature were subsequently made by Helpman (1993) and Lai (1998), both of which utilized the variety expansion model, and Glass and Saggi (2002), who adopted the quality ladders approach. This research established that the effects of increased IPR protection in the South on the Northern rate of innovation depend very much on whether production shifts to the South via imitation of Northern firms or via North-South FDI. Furthermore, Helpman (1993) forcefully drove home the point that while stronger Southern IPR protection can indeed increase the pace of Northern innovation, such a policy change does not necessarily benefit the South since it reallocates production in favor of Northern firms whose prices tend to be higher than those of Southern ones. Thus, international production shifting matters not just for the nature and the extent of innovation but also for welfare. Accordingly, we develop a North-South product cycle model with two important features. First, like Lai (1998), the level of North-South FDI responds endogenously to changes in the degree of Southern IPR protection. Second, like Grossman and Helpman (1991b), imitation is treated as a costly activity and the Southern rate of imitation is endogenously determined. 2 To ease the exposition of our main results and to focus on the effects of Southern IPR protection on activities that occur in the South - i.e. Southern imitation, production by local firms, and production by Northern multinationals - we first analyze a benchmark model in which imitation and FDI are endogenous whereas innovation is exogenously given.
The results obtained in this benchmark model are then shown to hold when the rate of Northern innovation is endogenously determined. Apart from tractability, an important advantage of the simpler model is that it allows us to analyze the effects of a strengthening of Southern IPR protection when it does not have any effect on the Northern rate of innovation. This is important because opposition to stronger IPRs in the South is often based on the premise that since Northern innovation is unlikely to respond to changes in the South's IPR regime, the South does not have much to gain from such a policy change. As our analysis below shows, this position is not entirely correct.
Making both imitation and FDI endogenous helps push forward the literature on North-South product cycle models of international trade. Furthermore, since imitation is a costly activity in the real world, analyses that treat it as exogenous fail to capture how changes in the Southern IPR regime alter the allocation of Southern resources between imitation and production. In addition to realism, an important reason for treating imitation as an endogenous activity is that North-South product cycle models with exogenous imitation have yielded remarkably different conclusions regarding the relationship between imitation and innovation from those that have treated it as endogenous. In a model with endogenous imitation and innovation, Grossman and Helpman (1991b) uncovered a positive relationship between the two activities while Lai (1998) found that a decline in the (exogenously given) rate of imitation leads to an increase in innovation if Northern firms can undertake FDI in the South. 3 Our model sheds light on the relationship between innovation and imitation when both FDI and imitation are endogenously determined.
In our model, a strengthening of IPR protection in the South reduces the incentive of Southern firms to imitate Northern multinationals. This decline in imitation makes the South a more attractive location for Northern multinationals. Furthermore, we find that the intra-regional reallocation of Southern production (from local imitators to Northern multinationals) that results from a strengthening of Southern IPR protection is dominated by the accompanying inter-regional reallocation of production: in other words, the share of the global basket of goods produced in the South increases with a strengthening of Southern IPR protection.
Our analysis also provides some interesting insights with respect to the effects of Southern IPR protection on prices and wages in the two regions. First, by making the South a more attractive location for production and thereby shifting labor demand from the North to the South, a strengthening of IPR protection by the South lowers the North's relative wage. 4 Second, since Northern multinationals charge lower prices relative to firms that produce in the North, the increase in FDI helps lower prices. However, this beneficial effect on prices is partially offset by the intra-regional reallocation of Southern production from local imitators to multinationals since a typical imitator charges a lower price than a multinational. Due to the nature of pricing behavior under Dixit-Stiglitz (1977) preferences (prices are mark-ups over marginal costs), these changes in prices and nominal wages translate into clear-cut effects on real wages in the two regions: while Northern real wages decline due to stronger Southern IPR protection, Southern real wages increase. More specifically, the purchasing power of Southern workers in terms of Northern goods increases whereas their ability to purchase goods produced by Southern imitators and multinationals remains unaffected.
As noted earlier, a key argument in favor of weak IPR protection in the South is that Southern imitation lowers prices. Since Southern imitators price below Northern multinationals, this channel is also operative in our model. However, this argument ignores the labor market effects of international production shifting induced by stronger IPR protection in the South. By contrast, in our model, a strengthening of IPR protection by the South raises the real wages of its workers. 5 In Section 4 of the paper we show that all of the preceding results regarding wages, prices, and the allocation of production across regions as well as within the South continue to hold when the Northern rate of innovation is endogenously determined. The main additional result that emerges under endogenous innovation is that a tightening of IPR protection in the South raises the rate of innovation. As in Lai (1998), this happens for two reasons. One, the reduction in imitation risk increases the duration for which Northern multinationals enjoy their profit stream, and since all Northern firms are free to become multinationals, the reward to innovation goes up. Second, the reduction in imitation risk implies a greater North-South flow of FDI, and this helps move Northern resources from production into innovation.
The relationship between FDI and IPR protection has received significant empirical scrutiny in the literature. 6 As the survey by Park (2008) notes, as far as US data is concerned, there appears to be a clear positive relationship between the degree of IPR enforcement in developing countries and investment by US firms - see, for example, Lee and Mansfield (1996) and Nunnenkamp and Spatz (2004). Results derived from non-US data portray a more mixed picture. 7

Given the central role of FDI in our model, it is worth noting that, consistent with a large number of empirical studies discussed in Markusen (1995), we also find that an increase in the productivity of Northern R&D leads to an increase in the flow of FDI as well as in the sales of Northern multinationals. Furthermore, we show that the use of FDI incentives by the South in the form of a reduction in the tax rate on the profits earned by multinationals has effects quite like those of IPR reform: it increases North-South FDI, real wages in the South, as well as the Northern rate of innovation. These results not only help clarify the structure of our model but are also quite relevant because incentives toward FDI are widespread in the global economy (UNCTAD, 2003) and a host of recent firm-level empirical studies document a negative relationship between FDI (particularly by US firms) and host country tax rates. 8

The rest of the paper is organized as follows. Section 2 presents our benchmark model. Section 3 describes the effects of a strengthening of Southern IPR protection on FDI, Southern production, wages, and prices. Section 4 presents the fully endogenous model and also considers the effects of Southern tax reductions toward Northern multinationals. Section 5 concludes while Section 6 constitutes the appendix.

5 The real wage effects captured by our model would not arise in partial equilibrium models that ignore the labor market effects of IPR reforms. Furthermore, such effects should only be expected to arise when IPR reforms are undertaken on an economy-wide basis as opposed to being focused on a few sectors.
6 For a nuanced and detailed discussion of this literature, see Maskus (2000).
7 Following Feenstra and Rose (2000), they also construct for each reforming country an annual count of "initial export episodes" - the number of 10-digit commodities for which recorded U.S. imports from a given country exceed zero for the first time. This serves as a rough indicator of the net rate at which production shifts to the reforming countries, capturing changes in multinational production as well as indigenous imitation. This net rate of production shifting increases sharply after IPR reform, suggesting that any decline in indigenous imitation is more than offset by the increase in the range of goods produced by multinational affiliates.
Model
Consider a world comprised of two regions: North and South. Labor is the only factor of production and region i's labor endowment equals L_i, i = N, S. As in Grossman and Helpman (1991b), preferences are identical in the two regions and a representative consumer chooses instantaneous expenditure E(\tau) to maximize utility at time t:

U(t) = \int_t^{\infty} e^{-\rho(\tau - t)} \log D(\tau) \, d\tau

subject to the intertemporal budget constraint

\int_t^{\infty} e^{-r(\tau - t)} E(\tau) \, d\tau \leq A(t) + \int_t^{\infty} e^{-r(\tau - t)} I(\tau) \, d\tau

where \rho denotes the rate of time preference; r the nominal interest rate; I(\tau) instantaneous income; and A(t) the current value of assets. The instantaneous utility D(\tau) is given by

D = \left[ \int_0^n x(j)^{\alpha} \, dj \right]^{1/\alpha}

where x(j) denotes the consumption of good j; n the number of goods available and 0 < \alpha < 1.
As is well known, under the above assumptions, the consumer's optimization problem can be broken down into two stages. First, he chooses how to allocate a given spending level across all available goods. Second, he chooses the optimal time path of spending. The instantaneous utility function D implies that the elasticity of substitution between any two goods is constant and equals \varepsilon = 1/(1 - \alpha), and demand for good j (given expenditure E) is given by

x(j) = \frac{E \, p(j)^{-\varepsilon}}{P^{1-\varepsilon}}

where p(j) denotes the price of good j and P a price index such that

P = \left[ \int_0^n p(j)^{1-\varepsilon} \, dj \right]^{1/(1-\varepsilon)}.

Furthermore, as is well known, under the two-stage procedure the optimal spending rule is given by

\dot{E}/E = r - \rho,

i.e. nominal consumption spending grows at a rate equal to the difference between the interest rate and the subjective rate of time preference.
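As a quick numerical check of this demand system, the sketch below (with hypothetical parameter values, and a discrete set of varieties standing in for the continuum of goods) verifies that spending across varieties exhausts total expenditure E:

```python
import numpy as np

alpha = 0.5                         # hypothetical preference parameter
eps = 1.0 / (1.0 - alpha)           # elasticity of substitution = 2
E = 100.0                           # aggregate expenditure
prices = np.array([1.0, 1.2, 0.8])  # three illustrative varieties

# Dixit-Stiglitz price index and the implied demands.
P = np.sum(prices ** (1 - eps)) ** (1.0 / (1 - eps))
x = E * prices ** (-eps) / P ** (1 - eps)

# Spending across varieties sums to E, as the demand system requires.
print(round(float(np.sum(prices * x)), 6))  # -> 100.0
```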
Product market
Three types of firms produce goods: Northern firms (N), Northern multinationals (M), and Southern imitators (S). Denote firms by J where J = N, M, or S. Northern firms can either produce in the North or the South. A firm needs one worker to produce a unit of output in the North, whereas \gamma > 1 workers per unit of output are needed in the South. Intuitively, this is due to the costs of coordinating decisions over large distances and operating in unfamiliar foreign environments. Indeed, the theory of the multinational enterprise argues that such firms rely on 'ownership' advantages derived from technological assets and/or brand names in order to offset the disadvantages they face relative to local firms (see Markusen, 1995).
Given the constant elasticity demand functions, it is straightforward to show that prices of Northern firms are mark-ups over their marginal costs:

p_N = \frac{w_N}{\alpha} \quad \text{and} \quad p_M = \frac{\gamma w_S}{\alpha},

where w_N and w_S denote the Northern and Southern wage, respectively. Southern firms can produce only those goods that they have successfully imitated and they need one worker to produce one unit of output. If successful in imitating a multinational, a Southern firm charges its optimal monopoly price

p_S = \frac{w_S}{\alpha}.

Note that this price can be sustained if and only if it lies below the multinational's marginal cost \gamma w_S. In what follows, we assume \alpha\gamma > 1. 9

Let x_J denote the output level of firm J where J = N, M, or S. We know from the demand functions that

x_J = \frac{E \, p_J^{-\varepsilon}}{P^{1-\varepsilon}}.

Using the pricing equations for the three types of products, we have

\frac{x_S}{x_M} = \gamma^{\varepsilon} \quad \text{and} \quad \frac{x_N}{x_M} = \left( \frac{\gamma w_S}{w_N} \right)^{\varepsilon}.

Flow profit of a Northern producer is given by

\pi_N = (1 - \alpha) p_N x_N.

Similarly, a multinational's flow profit equals

\pi_M = (1 - \alpha) p_M x_M

while that of a Southern firm equals

\pi_S = (1 - \alpha) p_S x_S.
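The pricing and flow-profit formulas can be illustrated numerically; the parameter values below are hypothetical and are not meant to represent an equilibrium of the model.

```python
alpha, gamma = 0.5, 2.5   # mark-up parameter and the FDI cost disadvantage
w_n, w_s = 2.0, 1.0       # illustrative Northern and Southern wages
eps = 1.0 / (1.0 - alpha)

p_n = w_n / alpha         # Northern producer's price
p_m = gamma * w_s / alpha # multinational's price
p_s = w_s / alpha         # Southern imitator's price

# alpha * gamma > 1 ensures the imitator's monopoly price is sustainable:
assert alpha * gamma > 1 and p_s < gamma * w_s

# Demand is proportional to p**(-eps) (the common factor E / P**(1 - eps)
# cancels in comparisons), so flow profit (1 - alpha) * p * x is
# proportional to p**(1 - eps).
for name, p in [("N", p_n), ("M", p_m), ("S", p_s)]:
    print(name, round((1 - alpha) * p ** (1 - eps), 4))
```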
FDI and Imitation
Of the n goods that exist, n_N are produced in the North, n_M are produced in the South by Northern multinationals, and n_I are produced by Southern imitators. Let n_S = n_I + n_M denote all goods produced in the South. In what follows, we will think of the level of Southern industrial development as roughly corresponding to the Southern share of global manufacturing; i.e., the ratio of goods produced in the South to the number of goods that exist at a point in time. Since this measure of industrial development explicitly includes the activities of affiliates of Northern multinationals, the advance of Southern industrial development in our model depends on the rate of FDI.

9 When \alpha\gamma < 1, a Southern imitator limit prices the Northern firm whose product it has copied by setting its price equal to the Northern firm's marginal cost \gamma w_S.
Let the rate of imitation be defined by μ ≡ ṅ_I/n_M, i.e. μ denotes the rate of increase of the stock of imitated goods relative to the total number of goods produced by Northern multinationals. Since both multinationals and Southern imitators produce in the South, imitation simply transfers ownership of a good (and the associated flow of profits) from the hands of a multinational to a Southern imitator.
The rate of North-South FDI is defined by φ ≡ ṅ_S/n_N, where n_N denotes the number of goods produced in the North. In other words, at each instant, the total stock of goods produced in the South increases by φn_N. Note that this measures the inflow of North-South FDI because imitation only targets Northern multinationals and does not, by itself, lead to North-South production shifting. Like Grossman and Helpman (1991b) and Lai (1998), we study a steady state equilibrium in which prices, nominal spending, and all product categories grow at the same rate g. To facilitate exposition, we initially analyze our model under the assumption that the rate of innovation g is exogenously given and then in Section 4 analyze the fully endogenous model. Equations (6) and (14) through (15) imply that in steady state the interest rate equals the sum of the subjective discount rate and the growth rate: r = ρ + g. Furthermore, the steady state allocation of products across the two regions satisfies n_N/n = g/(g + φ) and n_S/n_N = φ/g. Similarly, the ratios of multinationals to their two types of competitors equal n_M/n_N = φ/(g + μ) and n_I/n_M = μ/g. The lifetime value of a Northern firm that opts to produce in the North equals v_N = π_N/(ρ + g). Note that since future products create competition for existing products, an increase in the rate of innovation (g) reduces the lifetime value of a Northern firm. While it is cheaper to produce in the South (as we show below, the Southern relative wage is lower in equilibrium), shifting production to the South invites the risk of imitation, and the value of a Northern multinational firm equals v_M = π_M/(ρ + g + μ). As is clear, in calculating the value of a multinational firm, the flow profit π_M is discounted not just by the effective interest rate (which equals ρ + g) but also by the rate of imitation μ. As in Lai (1998), we assume that imitation targets only Northern multinationals. In other words, the risk faced by Northern firms that refrain from shifting production to the South has been normalized to zero. In reality, Northern firms that do not undertake FDI can also have their technologies imitated, but the risk of imitation they face is probably lower than that of multinational firms that produce in the South. As is known from the work of Mansfield (1994) and Maskus (2000), multinational firms indeed internalize the risk of imitation that they face due to weak IPR protection in host countries. 10 Finally, the lifetime value of a Southern producer (i.e. the reward earned by a successful imitator) equals v_S = π_S/(ρ + g).
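The steady-state relations above follow mechanically from the flow definitions μ ≡ ṅ_I/n_M and φ ≡ ṅ_S/n_N together with ṅ/n = g; the firm values follow from discounting flow profits, which decline at rate g. In the assumed notation:

\[
r = \rho + g, \qquad \frac{n_N}{n} = \frac{g}{g+\phi}, \qquad \frac{n_S}{n_N} = \frac{\phi}{g}, \qquad \frac{n_M}{n_N} = \frac{\phi}{g+\mu}, \qquad \frac{n_I}{n_M} = \frac{\mu}{g},
\]
\[
v_N = \frac{\pi_N}{\rho+g}, \qquad v_M = \frac{\pi_M}{\rho+g+\mu}, \qquad v_S = \frac{\pi_S}{\rho+g} .
\]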
Relative wage
Since all Northern firms have the option of becoming multinationals, we must have v_N = v_M in equilibrium. Note immediately that if the risk of imitation is positive (i.e. μ > 0) then we must have π_M > π_N. This is intuitive: since any Northern firm is free to become a multinational, the flow profit earned by a multinational must be higher in order to compensate for the risk of imitation faced (only) by multinationals. 11 From the definition of profit we have π_M/π_N = (w_R/λ)^(ε−1), where w_R ≡ w_N/w_S. The last two equations allow us to write the Northern relative wage (w_R) as a function of the rates of innovation and imitation as well as some of the exogenous parameters of the model. As is clear, the relative wage in the North increases with the production disadvantage faced by Northern multinationals (λ) as well as with the Southern rate of imitation (μ), since both of these factors discourage Northern firms from relocating production to the South. This reluctance to shift production to the South increases the relative demand for Northern labor and therefore the North's relative wage. As we noted earlier, this result differs from that of Grossman and Helpman (1991b) and is in line with Lai (1998). Why do these models yield such different results regarding the determinants of the North-South relative wage? In Grossman and Helpman (1991b), Southern imitation of firms producing in the North serves as the channel through which international reallocation of production (and therefore labor demand) occurs. By contrast, in our model as well as in Lai (1998), Southern imitation targets multinational firms and North-South FDI is the channel of international reallocation of production. In our model, by lowering the risk of imitation, a strengthening of Southern IPR protection increases FDI and the demand for Southern labor while it reduces demand for Northern labor. In Grossman and Helpman (1991b), the opposite happens: as imitation declines, more production stays in the North and less of it occurs in the South. Hence the North-South relative wage behaves rather differently across these models.
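One consistent derivation of the relative wage, under the notation assumed above:

\[
v_N = v_M \;\Longrightarrow\; \frac{\pi_M}{\pi_N} = 1 + \frac{\mu}{\rho+g},
\qquad
\frac{\pi_M}{\pi_N} = \left(\frac{p_M}{p_N}\right)^{1-\varepsilon} = \left(\frac{w_R}{\lambda}\right)^{\varepsilon-1},
\]
\[
\Longrightarrow\; w_R = \lambda \left( 1 + \frac{\mu}{\rho+g} \right)^{\frac{1}{\varepsilon-1}},
\]

which is increasing in both λ and μ, exactly as the comparative statics in the text require.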
Imitation incentives and Southern IPR protection
At any given point in time, the unit labor requirement in imitation is given by a_I/n_S. In other words, the unit labor requirement in imitation is assumed to decline with the number of goods produced in the South (n_S = n_I + n_M). The idea underlying this formulation is that imitation and Northern FDI generate knowledge spillovers for the South that lower the cost of imitation over time. This decline in imitation cost is necessary to sustain imitation in the long run, since an ongoing expansion in the number of products in the global economy reduces the profitability of imitation over time.
We consider two different formulations of Southern IPR protection. Under our first formulation, the cost function for imitation is given by c_I = k a_I w_S/n_S, where k ≥ 1 is an index of the degree of IPR protection in the South. The idea underlying this formulation is that as IPR protection is strengthened, imitation becomes a more costly activity for Southern firms because evading local enforcement of IPRs becomes more difficult. Under our second formulation, the cost function of imitation is given by c_I = a_I w_S/n_S and a Southern imitator's flow profit equals (1 − k_S)π_S, where 0 ≤ k_S ≤ 1. Under this formulation, IPR policy is akin to a profit tax on imitators: the more stringent is IPR protection, the smaller the rents from imitation. Alternatively, one could view k_S as the share of its profit stream that an imitator must surrender to local authorities for them to willingly turn a blind eye towards the violation of IPRs of Northern multinationals.
Free entry into imitation implies that the reward from imitation should equal its cost: v_S = c_I. Substituting from (12) into the above equation and using (8) gives the sales levels of a Southern imitator and a Northern multinational. Furthermore, combining these expressions yields an equilibrium relationship that involves a composite function A(μ, g) of the rates of imitation and innovation and the model's parameters. The following lemma reports some important properties of the function A(μ, g): Lemma 1: A(μ, g) < 1 and ∂A(μ, g)/∂μ < 0 < ∂A(μ, g)/∂g.
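Under the cost based formulation, the free entry condition can be written out as follows; this is a sketch consistent with the surrounding definitions, not a verbatim reproduction of equations (12) and (8):

\[
v_S = c_I \;\Longrightarrow\; \frac{\pi_S}{\rho+g} = \frac{k\, a_I\, w_S}{n_S}
\;\Longrightarrow\; x_S = \frac{\alpha}{1-\alpha}\, \frac{k\, a_I\, (\rho+g)}{n_S},
\qquad x_M = \lambda^{-\varepsilon} x_S .
\]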
General equilibrium
The conditions for general equilibrium are derived from the resource constraints in the two regions. In the North, when the rate of innovation is exogenously given, all labor is allocated to production. Let L_d^i denote aggregate labor demand in region i, where i = N, S. Then L_d^N = n_N x_N. Substituting from (16) and (26) allows us to write aggregate labor demand in the North as a function of μ and φ, which yields the Northern labor market equilibrium condition (27): L_d^N(μ, φ) = L_N. It is obvious from (27) that aggregate labor demand in the North decreases in φ. Furthermore, Lemma 1 implies that L_d^N decreases in μ. These two properties of L_d^N imply that in the (φ, μ) space, the Northern labor market equilibrium condition, which we will refer to as the NN curve, is downward sloping; in addition, its slope (ii) increases in μ while it decreases in φ and (iii) is undefined for φ = 0. Property (ii) implies that the NN curve is convex in the (φ, μ) space whereas property (iii) implies that it does not intersect the vertical axis. From (27) it is straightforward that the NN curve intersects the horizontal axis at φ_0 > 0. Southern labor is allocated to imitation and to production by multinationals and local firms. Therefore we must have L_d^S = L_S. Substituting into this resource constraint from equations (16), (17), and (24) gives the Southern labor market equilibrium condition (30). Observe from (30) that, in steady state, labor demand in the South is independent of the flow of North-South FDI φ. This implies that in the (φ, μ) space, the Southern labor market equilibrium condition (called the SS curve) is a horizontal line at the equilibrium rate of imitation.
The equilibrium allocation of resources in the global economy is given by the intersection of the NN and SS curves.
Effects of Southern IPR protection
In this section, we study the effects of a strengthening of Southern IPR protection when the rate of innovation (g) is exogenously given. We begin by establishing some crucial properties of the North-South flow of FDI. Solving equation (27) for the flow of FDI in terms of the rate of imitation (μ) gives equation (31), which expresses φ as a function of μ, g, and the parameters. Observe immediately from (31) that, holding μ constant, the numerator of the right-hand side increases with g: recall from Lemma 1 that A(μ, g) increases with g. For the same reason, the ratio φ/g also increases in g. This implies the following: Remark 1: Holding constant the rate of imitation (μ), the flow of FDI (φ) to the South increases with the rate of Northern innovation: ∂φ/∂g > 0. Furthermore, the elasticity of the flow of FDI with respect to the rate of innovation is greater than unity: (g/φ)(∂φ/∂g) > 1.
In this context, it is worth noting that a large number of empirical studies have demonstrated a strong positive correlation between innovation and FDI and, as Markusen (1995) notes, this finding is so pervasive that it has become a cornerstone of the modern theory of the multinational firm.
Suppose now that the South increases the degree of IPR protection (k) available to Northern firms. Note first that equation (30) can be rewritten with an effective labor supply of L_S/k on the right-hand side. In other words, from the viewpoint of the South, holding constant the rates of imitation (μ) and innovation (g), an increase in the degree of IPR protection (k) is an effective reduction in the real resources available (i.e. a decline in L_S/k), since all three activities that the South is engaged in (imitation, production by multinational firms, and production by local imitators) would require more resources if k increases and μ remains unchanged. It is intuitively obvious why an increase in the cost of imitation increases the resources required to sustain a given level of imitation. But why do the two production activities undertaken in the South also become more resource intensive with an increase in the IPR index k? The intuition for this comes from the free entry condition in imitation: as the cost of imitation increases, the sales of a firm that is successful in imitation must also increase in order to maintain the zero profit condition in imitation. Finally, the sales of a multinational (x_M) are proportional to the sales of a Southern imitator (x_S), and if x_S increases, so must x_M.
Direct calculations yield ∂L_d^S/∂μ > 0 (this follows because ε > 1 and α < 1). Since aggregate labor demand in the South L_d^S increases with the rate of imitation and the effective labor supply (L_S/k) falls with the Southern IPR index k, the rate of imitation must fall with k or else the Southern labor market would fail to clear. Now consider how an increase in k affects the NN curve. Since the slope of this curve is independent of k whereas the horizontal intercept φ_0 increases with k, the NN curve shifts outward with an increase in k. The decline in the rate of imitation (i.e. the downward shift in the SS curve) along with the outward shift in the NN curve yields: Proposition 1: When the rate of innovation (g) is exogenously given, a strengthening of Southern IPR protection lowers the rate of Southern imitation (μ) and it increases the rate of North-South FDI (φ): dμ/dk < 0 < dφ/dk.
The logic behind Proposition 1 is easy to see. Recall from Lemma 1 that A(μ, g) decreases in μ. Since μ decreases in k, it follows that kA(μ, g) increases in k. But with g fixed, equation (27) then implies that the North-South flow of FDI necessarily increases with k, or else the Northern labor market cannot clear. Figure 1 illustrates Proposition 1 in the (φ, μ) space. 13 With a strengthening of Southern IPR protection, the SS curve shifts down while the NN curve shifts up, and the equilibrium of the world economy moves from point A to B, where the rate of Southern imitation is lower whereas the rate of North-South FDI is higher. Since Proposition 1 is our core result from which most other results are derived, it is worth checking whether it holds when Southern IPR protection determines how much rent local imitators collect from their investment in imitation as opposed to making imitation more costly. Let a Southern imitator's flow profit from imitation equal π_S^k = (1 − k)π_S = (1 − k)(1 − α)p_S x_S, where k measures the degree of IPR protection and 0 ≤ k ≤ 1. It is straightforward that in the Northern labor market equilibrium condition (27) we simply need to replace 1/k by (1 − k), whereas in the Southern labor market equilibrium condition (30) the same substitution is needed in the second and third terms of the LHS; in the first term of the same equation, k needs simply to be replaced by 1. Since an increase in k in our profit-tax based formulation of IPR protection has effects analogous to an increase in k under our cost based formulation, Proposition 1 holds under both formulations.
Southern industrial development and FDI
An important objective of this paper is to understand how a strengthening of IPR protection in the South alters the distribution of production across the two regions as well as between Northern multinationals and Southern imitators. How Southern IPR protection affects the global allocation of production depends on its effects on Southern imitation and the North-South flow of FDI. To see the effect of an increase in k on the international allocation of production, first note that n_S/n = φ/(φ + g). Since φ increases in k and g is exogenously fixed, we must have: Corollary 1 (Inter-regional Reallocation of Production): A strengthening of Southern IPR protection increases the South's share of the total basket of goods produced in the global economy (n_S/n): d(n_S/n)/dk > 0. Given that n_I/n_M = μ/g decreases with k, we can state the following result regarding the allocation of production within the South between multinationals and Southern firms: Corollary 2 (Intra-regional Reallocation of Production): A strengthening of Southern IPR protection increases the share of Southern production undertaken by Northern multinational firms (n_M/n_S): d(n_M/n_S)/dk > 0. It is straightforward to show that the total value of multinational sales relative to those of Southern imitators has a simple expression that is decreasing in μ/g. Since the rate of imitation (μ) falls with an increase in the degree of Southern IPR protection, a strengthening of Southern IPR protection leads to an increase in the aggregate sales of multinational firms relative to those of Southern imitators.
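The composition results in Corollaries 1 and 2 follow mechanically from the steady-state ratios n_S/n_N = φ/g and n_I/n_M = μ/g. A minimal numerical sketch (all parameter values are hypothetical, chosen only to mimic the direction of Proposition 1):

```python
def steady_state_shares(g, mu, phi):
    """Steady-state product shares implied by n_S/n_N = phi/g and
    n_I/n_M = mu/g (imitation targets only multinationals)."""
    n_N = g / (g + phi)           # share of goods produced in the North
    n_S = phi / (g + phi)         # share of goods produced in the South
    n_M = n_S * g / (g + mu)      # Southern share held by multinationals
    n_I = n_S * mu / (g + mu)     # Southern share held by imitators
    return n_N, n_S, n_M, n_I

# Proposition 1: stronger Southern IPRs lower mu and raise phi.
before = steady_state_shares(g=0.03, mu=0.05, phi=0.04)
after = steady_state_shares(g=0.03, mu=0.03, phi=0.06)
print(after[1] > before[1])                          # True: Corollary 1
print(after[2] / after[1] > before[2] / before[1])   # True: Corollary 2
```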
Now consider a comparison of total multinational sales relative to those of firms producing in the North: n_M p_M x_M/(n_N p_N x_N). Since n_M/n_N = φ/(g + μ), equation (33) implies that a typical multinational must have higher relative sales compared to a Northern firm (i.e. the ratio p_M x_M/(p_N x_N) must exceed 1). Intuitively, since imitation only targets multinational firms, for a typical multinational to earn the same rate of return as a Northern firm producing in the North, the multinational must have a higher relative profit flow. However, with a decline in the rate of imitation, this relative profit flow actually has to shrink in order to ensure multinationals and Northern firms earn the same rate of return. In other words, a strengthening of Southern IPR protection decreases the sales of a typical multinational firm relative to those of a Northern firm.
In this context, one further subtlety that arises from general equilibrium considerations is worth noting: a decrease in the rate of imitation increases the relative Southern wage and therefore the cost of production of multinationals relative to Northern firms. However, since prices of both types of firms are mark-ups over their respective marginal costs, this cost increase has a proportional effect on prices of multinationals relative to those of Northern firms. In other words, by increasing the South's relative wage, IPR reform increases the prices charged by multinationals relative to those of Northern firms, and this translates into lower relative sales for a typical multinational.
Real wages and the aggregate price index
What are the effects of a strengthening of IPR protection in the South on real wages in the two regions? By definition, the real wage effects of such a policy change depend upon nominal wages in the two regions and the prices of goods produced by the three types of firms: firms located in the North, multinationals producing in the South, and Southern imitators. Recall that p_N = w_N/α, p_M = λw_S/α, and p_S = w_S/α, which allows us to write Northern real wages in terms of the three types of goods. In other words, the Northern real wage in terms of goods produced by Northern firms is unaffected by Southern IPR protection, whereas in terms of the other two goods it moves in the same direction as the Northern relative wage w_R. We already know that the Northern relative wage decreases as a result of a strengthening of Southern IPR protection, since the rate of imitation falls with such a policy change. This decline in the Northern relative wage w_R implies that a strengthening of Southern IPR protection decreases real wages in the North.
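Written out, the three Northern real wages implied by the mark-up prices are:

\[
\frac{w_N}{p_N} = \alpha, \qquad \frac{w_N}{p_M} = \frac{\alpha\, w_R}{\lambda}, \qquad \frac{w_N}{p_S} = \alpha\, w_R .
\]

Only the last two depend on w_R, which is why real wages fall in the North exactly when w_R falls.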
Consider now the effect on Southern real wages. We have w_S/p_N = α/w_R, w_S/p_M = α/λ, and w_S/p_S = α. In other words, the only effect on Southern real wages of a change in its IPR policy is in terms of goods produced in the North. However, since w_R decreases with such a policy change, it follows that a strengthening of Southern IPR protection increases real wages in the South. The general equilibrium nature of this result deserves emphasis. A common argument in favor of weaker IPR protection in the South is that Southern imitation lowers prices and therefore benefits consumers. Since prices of Southern imitators are lower than those of Northern multinationals, this channel is operative in our model as well. However, the story does not end there: international production shifting that results from a reduction in the rate of imitation also has labor market effects. In our model, a strengthening of Southern IPR protection leads to a higher Southern relative wage, since the resulting decline in imitation risk makes the South a more attractive location for Northern multinationals. Indeed, changes in prices are dominated by the change in the Southern relative wage, so that the purchasing power of Southern workers in terms of goods produced in the North increases whereas there is no change in their ability to purchase goods produced in the South. Despite an increase in real wages, Southern welfare does not necessarily increase because the flow of utility equals the log of real spending (log u = log E − log P) and a reduction in profits of Southern imitators lowers Southern income and can adversely impact Southern spending. While a complete welfare analysis along the lines of Helpman (1993) is beyond the scope of the paper, it is useful to consider how a strengthening of Southern IPR protection affects the aggregate price index P. By definition, P^(1−ε) is the integral of p(j)^(1−ε) over all goods, which can be rewritten as P^(1−ε) = n_N p_N^(1−ε) + n_M p_M^(1−ε) + n_I p_S^(1−ε). While goods produced by multinationals are cheaper than those produced by Northern firms (p_M < p_N), it is the Southern imitators that produce the cheapest goods (p_S < p_M). Recall that n_I/n_M = μ/g decreases with the degree of Southern IPR protection (k), since imitation slows down while innovation increases. This implies that (n_I/n)/(n_M/n) = μ/g decreases with k, i.e., the share of global production that is in the hands of multinational firms increases. Furthermore, recall that a strengthening of Southern IPR protection shifts production away from the North and towards the South (inter-regional reallocation). Since p_M < p_N, the inter-regional reallocation of production from the North to the South helps lower the overall price index. However, since p_M > p_S, the intra-regional reallocation of Southern production in favor of Northern multinationals and away from Southern imitators has the opposite effect. This implies that if the inter-regional reallocation of production is substantial, Southern imitation has the potential to partially benefit Northern consumers by lowering the aggregate price index P. Indeed, this is the key reason why Helpman (1993) finds that some amount of imitation is in the interest of the North. However, in our model, since FDI also offers the potential for lowering prices, imitation is not as crucial for welfare purposes. This is worth explaining in some detail. Unlike us, Helpman (1993) assumes that the risk of imitation applies equally to Northern firms and multinationals. As a result, multinationals and Northern producers can coexist in equilibrium only if the two regions have equal wages. 14
Under such wage equalization, FDI offers no reduction in costs of production and therefore has no price effects. By contrast, in our model, both FDI and imitation imply cost savings, and the allocation of production across regions as well as within the South has implications for the aggregate price index.
We next study the effects of a strengthening of Southern IPR protection when innovation is endogenous. 14 Our model would yield the same result if the rate of imitation faced by multinationals over and above that faced by Northern producers were zero (i.e. μ = 0) and multinationals did not face any frictions that hamper their ability to be as effective in production as local Southern firms (i.e. λ = 1).
The model with endogenous innovation
Note first that when the rate of innovation is endogenously determined, the results obtained under the assumption of exogenous innovation (i.e. Proposition 1 and Corollaries 1-2) continue to hold so long as a strengthening of Southern IPR protection does not decrease the Northern rate of innovation g. In what follows, we show that an increase in the Southern IPR protection index k actually increases the rate of innovation (Proposition 2). In addition, we also show that an increase in the R&D productivity of the North increases both innovation and North-South FDI (Proposition 3) and that a policy of attracting multinational firms through a reduction in the profit tax imposed on them has effects quite similar to those of a strengthening of Southern IPR protection.
Costly innovation
When innovation is endogenous and there is free entry into it, the value of a Northern firm must exactly equal the cost of innovation: v_N = w_N a_N/n, where a_N is the unit labor requirement in innovation and w_N a_N/n measures the up-front cost of product development. This formulation assumes that the cost of designing new products falls with the number of products (n) that have been invented. In other words, knowledge spillovers from innovation sustain further innovation. This assumption is standard in the literature (see Grossman and Helpman, 1991a and 1991b, and Romer, 1990) and in its absence growth cannot be sustained in the variety expansion model with fixed resources. This is because the flow profit of a successful innovator declines with the number of products invented, and incentives for innovation disappear in the long run if the cost of innovation does not also fall with an increase in the number of products.
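Combining free entry into innovation with the mark-up pricing rule gives a reconstruction of the Northern firm's output level referenced below (the same caveat on assumed notation applies):

\[
v_N = \frac{w_N a_N}{n} \;\Longrightarrow\; \frac{(1-\alpha)\, p_N x_N}{\rho+g} = \frac{w_N a_N}{n}
\;\Longrightarrow\; x_N = \frac{\alpha}{1-\alpha}\, \frac{a_N (\rho+g)}{n} .
\]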
Substituting from equation (10) into (34) gives the output level of a Northern firm. From equations (34) and (23) we obtain an arbitrage condition between innovation and imitation; utilizing the definitions of firm values and profits allows us to rewrite this condition, and using equations (9) and (21) together with (16) and (17) gives us an equilibrium relationship between the three endogenous variables g, μ, and φ and the exogenous parameters of the model; call it equation (39). Intuitively, this condition follows from the assumption of free entry into imitation and innovation, and it ensures that neither activity leads to excess profits for firms that are successful in such activities. Solving equation (39) for the FDI flow in terms of the other two endogenous variables (g and μ) gives equation (40), which expresses φ as a function of μ, g, a_N, ka_I, and A(μ, g).
Observe immediately from (40) that, holding μ constant, the denominator of the right-hand side increases with g: this is because μ/g falls with g whereas A(μ, g) increases (Lemma 1). This implies the following result: Remark 2: Holding constant the rate of imitation (μ), factors that increase the North-South flow of FDI (φ) must also increase the rate of Northern innovation (g).
As noted earlier, the strong positive correlation between innovation and FDI documented in the empirical literature is so pervasive that it has become a cornerstone of the modern theory of the multinational firm (Markusen, 1995). Furthermore, since A(μ, g) decreases with μ, we have: Remark 3: Holding constant the rate of innovation (g), factors that decrease the Southern rate of imitation (μ) must also increase the North-South flow of FDI (φ). An important point to note is that since our model exhibits a negative feedback between FDI and imitation and a positive feedback between FDI and innovation, it necessarily implies a negative feedback between innovation and imitation. This is an important property of the model, which differentiates it from the results of Grossman and Helpman (1991b) and aligns it with those of Lai (1998).
Consider now the direct effect of Southern IPR protection on the North-South flow of FDI. From (40), directly observe that the denominator in the formula for φ(μ, g) decreases with k, so that we have: Remark 4: Holding constant the rates of imitation (μ) and innovation (g), the flow of FDI (φ) to the South increases with a strengthening of Southern IPR protection (i.e. an increase in k).
The intuition for this result comes from equation (38), which requires the rates of return on innovation and imitation to equal each other. Since the right-hand side of this equation always equals 1, an increase in the IPR index k must be counterbalanced by an increase in the share of production (n_S/n = φ/(φ + g)) that occurs in the South for the cost of imitation not to increase relative to the cost of innovation, which in turn requires that the flow of FDI increase with the degree of IPR protection k. It is well known that multinational firms conduct a large share of global research and development (R&D). Indeed, a generation of empirical studies have documented the positive correlation between FDI flows and R&D investment (Markusen, 1995). Given this, it is worth noting from equation (40) that, holding constant the rates of innovation and imitation, an increase in the R&D productivity of Northern firms (as measured by a decrease in a_N) implies a faster North-South flow of FDI. We later discuss the general equilibrium response of FDI to an increase in Northern R&D productivity, taking into account its effects on the rates of imitation and innovation.
Southern IPR protection under endogenous innovation
Assuming the rate of imitation is exogenously given, Lai (1998) has shown that a decline in this rate increases Northern innovation and the rate of production shifting to the South. 15 A crucial question is whether this important result holds when both imitation and innovation are endogenous and the underlying exogenous variable is the degree of IPR protection (i.e. the parameter k). Under endogenous innovation, the Southern labor market equilibrium condition (30) remains unaltered, whereas in the North we now need to account for resources allocated to innovation: L_N = a_N g + n_N x_N. Substituting into this resource constraint from the market measure equations (16), (17), and (35) yields equation (42). Equations (30), (39), and (42) define the steady state equilibrium of the model in terms of the three endogenous variables: the rate of innovation g, the rate of imitation μ, and the rate of FDI φ. All of the effects of increased IPR protection in the South (i.e. an increase in k) are derived from the effects on these endogenous variables. Using the equilibrium flow of FDI and the two resource constraints, we can derive a system of two equations in two unknowns that helps provide a graphical illustration of the consequences of stronger IPR protection in the South.
Recall that the Southern labor market constraint is independent of the flow of FDI φ. As before, let L_d^S measure aggregate labor demand in the South (given by the LHS of equation (30)). Recall that ∂L_d^S/∂μ > 0: holding constant the rate of innovation g, aggregate labor demand in the South increases with the rate of imitation μ. 15 In the appendix, we show how our model relates to Lai (1998). Similarly, holding constant the rate of imitation, demand for Southern labor increases with the rate of innovation, ∂L_d^S/∂g > 0, where we have assumed a mild restriction on the parameters. Thus, the Southern labor market constraint (i.e. the SS curve) is downward sloping in the (g, μ) space. In other words, since the South has only a fixed amount of labor resources, an increase in the Southern rate of imitation implies that the rate of innovation g that can be supported by the global economy must be lower. Also, we have ∂L_N/∂μ > 0, i.e. the higher the rate of imitation μ, the higher the demand for Northern labor. The logic for this is as follows. Since FDI is endogenously determined, a higher rate of imitation makes FDI less attractive to Northern firms. For a fixed rate of innovation, the demand for Northern workers is inversely related to the flow of FDI. Next consider how an increase in the rate of innovation affects aggregate labor demand in the North. Recall that demand for Northern labor comes from innovation (L_n^N = a_N g) and from production (L_p^N = n_N x_N). It is obvious that an increase in g raises labor demand in innovation (L_n^N). On the production side, labor demand can be written as L_p^N = (n_N/n)(n x_N), which immediately implies that if n_N/n were to increase in g, then L_p^N (and therefore aggregate labor demand) in the North would increase in g. Further note from above that if φ were independent of g, it would immediately follow that n_N/n = g/(g + φ) increases in g. This thought experiment is useful for highlighting the role of the flow of FDI in our model: if the flow of FDI were invariant to the rate of innovation, labor demand in the North would necessarily increase with the rate of innovation. However, Remark 2 notes that the flow of FDI and the rate of innovation are positively related. This raises the possibility that n_N/n might decrease with g. Intuitively, such a situation could arise since the elasticity of the flow of FDI with respect to the rate of innovation exceeds unity. Despite this, we show in the appendix that labor demand in the North necessarily increases with the rate of innovation: ∂L_N/∂g > 0. As a result, like the Southern labor market constraint, the Northern labor market constraint (i.e. the NN curve) is also downward sloping in the (g, μ) space. It is worth emphasizing the role FDI plays in this context: in the absence of FDI, in a variety expansion product cycle model such as Grossman and Helpman (1991b), the Northern labor market constraint is actually upward sloping in the (g, μ) space. This is because when imitation is the only channel via which production is reallocated internationally, an increase in the rate of imitation frees up Northern labor for use in innovation, thereby generating a positive feedback between imitation and innovation. By contrast, in our model imitation targets production by multinationals, and by slowing down FDI, an increase in the rate of imitation actually pulls Northern resources out of innovation and into production.
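The slopes of the two constraints follow from the implicit function theorem together with the signs just established:

\[
\left.\frac{d\mu}{dg}\right|_{SS} = -\,\frac{\partial L_d^S/\partial g}{\partial L_d^S/\partial \mu} < 0,
\qquad
\left.\frac{d\mu}{dg}\right|_{NN} = -\,\frac{\partial L^N/\partial g}{\partial L^N/\partial \mu} < 0 .
\]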
For a unique steady state equilibrium to exist, the SS curve and the NN curve must have a unique intersection in the (g, μ) space. We have already noted that both curves are downward sloping. Neither curve intersects the vertical axis, and we show in the appendix that under minor conditions the horizontal intercept (g_s) of the SS curve is larger than that (g_n) of the NN curve. The latter property means that when the rate of imitation is near zero, the rate of innovation required for the Southern labor market to be in equilibrium is greater than the rate of innovation required for the Northern labor market to be in equilibrium. This is quite intuitive: when the rate of imitation is zero, Southern resources are utilized only by multinationals for their production activities, whereas Northern resources are used up in both innovation and production. As a result, when imitation is non-existent, labor market equilibrium in the South calls for a greater rate of innovation than that in the North, since the only activity generating Southern labor demand, i.e. production by multinationals arriving via FDI, is positively related to the rate of innovation.
Given these properties of the two curves, any intersection of the two curves will be unique if the NN curve is steeper than the SS curve, i.e. if the ratio r of the slope of the NN curve to that of the SS curve exceeds 1. We can show that r > 1 if and only if a_R ≡ a_N/a_I exceeds some threshold ā_R, where ā_R is a function of exogenous parameters and the rates of imitation and innovation. Furthermore, as g approaches zero, ā_R can be shown to be decreasing in the rate of imitation μ. In other words, for g close to zero, the required threshold ā_R is the highest (and therefore the most difficult to meet) at μ = 0. Next, it can be shown that at μ = g = 0, ā_R decreases in λ, and at the lowest feasible value of λ (which is 1/α), the condition a_R > ā_R is necessarily satisfied for all feasible parameter values. Thus, we proceed with the scenario where the NN curve is steeper than the SS curve and the two curves have a unique intersection that pins down the equilibrium of the global economy.
As was already noted, holding constant the rates of imitation (μ) and innovation (g), an increase in the degree of Southern IPR protection (k) increases labor demand in the South in all three activities (i.e. local imitation, production by Southern firms, and production by multinationals). This is equivalent to an inward shift in the Southern labor market constraint in the (g, μ) space. Further note that, holding constant g and μ, an increase in k affects the Northern labor market constraint via its effect on the flow of FDI φ. Given that the flow of FDI increases in the Southern IPR index k, it follows that labor demand in the North L_N(μ, g) (i.e. the left-hand side of equation (42)) decreases with k. The effect of a strengthening of IPR protection in the South on equilibrium rates of imitation and innovation can now be derived. As IPR protection in the South increases, the Southern labor market constraint (i.e. the SS curve) shifts down while the Northern constraint (i.e. the NN curve) shifts up. These shifts in the two constraints deliver one of our key results: Proposition 2: A strengthening of IPR protection in the South decreases the Southern rate of imitation (μ) while it increases the Northern rate of innovation (g): dμ/dk < 0 < dg/dk. The NN curve illustrates the Northern resource constraint whereas the SS curve denotes the Southern one. In Figure 2, the NN curve is relatively steeper because, while the rate of innovation is determined primarily by the size of the Northern economy (since only the North innovates), the rate of imitation is determined primarily by the size of the Southern one (since only the South imitates). Of course, the North-South flow of FDI is what links the two resource constraints to each other.
Point A denotes the initial steady state equilibrium. Now suppose that Southern IPR protection is strengthened (i.e. k increases). In Figure 2, this implies an inward shift in the Southern resource constraint and an outward shift in the Northern constraint. Why the Southern constraint shifts has already been explained: all three activities in the South become more resource intensive, and this effectively reduces the resource base. The Northern constraint shifts out because of the FDI response: as the flow of North-South FDI increases, more Northern resources become available for innovation. The outward shift in the Northern constraint is relatively smaller because the North is affected via a single, indirect channel (i.e. through the response of the North-South flow of FDI) whereas the effect on the South is a more direct one and it occurs via all three activities that take place there. As shown in Figure 2, these shifts in the two resource constraints caused by a strengthening of IPR protection in the South imply that in the new steady state equilibrium B the Southern rate of imitation is significantly lower than that at A while the Northern rate of innovation is higher. 17 Thus, from the perspective of the North, stronger Southern IPR enforcement in our model generates a rather classical trade-off between a static welfare loss and a dynamic welfare gain: the static loss being the decrease in real wages (or in its terms of trade, since the relative price of Northern exports is determined by the relative wage) and the dynamic gain being the increase in the rate of innovation. What is noteworthy, however, is that the trade-off in the North results from changes in the IPR policy of the South.
We should emphasize that the properties of the model noted in Remarks 2 and 3 are quite crucial, since these establish a positive feedback between FDI and innovation and a negative feedback between these two variables and the rate of imitation. As long as a strengthening of Southern IPR protection discourages imitation, its positive effects on innovation and FDI are implied by Remark 3. For innovation and FDI to be affected negatively by a strengthening of Southern IPR protection, our model would need to have the somewhat strange property that an increase in the resource requirement for imitation (as measured by ka_I) increases the rate of Southern imitation. Due to the complexity of the fully endogenous model, we cannot provide an analytical proof that rules out this unlikely possibility; however, we have not been able to find any sets of parameter values under which it arises. Now briefly consider the case where a Southern imitator's flow profit from imitation equals π_S^k = (1 − k)π_S, where k determines the degree of IPR protection and 0 ≤ k ≤ 1. Under such a formulation, the Northern labor market equilibrium condition is unaltered whereas the other two equilibrium conditions are slightly modified. In equation (39) we simply need to replace 1/k by (1 − k), whereas in equation (30) the same substitution is needed in the second and third terms of the LHS; in the first term of the same equation, k needs simply to be replaced by 1. It is straightforward to show that the results obtained under our cost based formulation of IPR protection continue to hold under this profit-tax formulation.
Finally, we note how an improvement in R&D productivity (i.e. a decrease in a_N) affects the North-South flow of FDI as well as the global allocation of production, once the effects on innovation and imitation are taken into account. First note that a decrease in a_N has no direct effect on the SS constraint, whereas the effect on the NN constraint is essentially the same as that of an increase in the Northern labor supply, i.e. in Figure 1 the NN curve shifts out. This immediately implies that with an increase in Northern R&D productivity, the rate of imitation decreases whereas the rate of innovation increases. Relying on arguments similar to those used to derive the effects of Southern IPR protection, we directly state the following: Proposition 3: With an increase in the R&D productivity of Northern firms (i.e. a decrease in a_N), the rate of innovation, the North-South flow of FDI, the share of Southern production in the hands of Northern multinationals, and the sales of multinationals relative to other firms all increase, whereas the rate of imitation decreases.
Effects of FDI policies
Many countries implement policies designed to attract FDI, perhaps with the hope of spurring local industrial development (see UNCTAD, 2003). Quite often such policies take the form of fiscal incentives under which multinationals that invest locally are offered reduced tax rates. Are such policies justifiable? To address this question, suppose that the South undertakes a policy of offering an incentive to Northern multinationals by lowering the profit tax t that it imposes on them. What are the consequences of such a policy? First note that when such a profit tax is in place, a typical Northern multinational's after-tax profit equals (1 − t)π_M. It is straightforward to show that when a multinational's profit is (1 − t)π_M (as opposed to π_M), the Northern relative wage w_R^t also depends on t, and it is immediate that a reduction in the profit tax t on multinationals (which is the same as a tax incentive for FDI) increases the Southern relative wage. The intuition is simple: the use of FDI incentives makes the South a more attractive production location and shifts labor demand away from the North in favor of the South.
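A sketch of the modified no-arbitrage condition; the closed form for w_R^t is a reconstruction under the assumed notation, not equation (43) verbatim:

\[
\pi_M^t = (1-t)\,\pi_M, \qquad
v_N = v_M \;\Longrightarrow\; \frac{(1-t)\,\pi_M}{\rho+g+\mu} = \frac{\pi_N}{\rho+g}
\;\Longrightarrow\; w_R^t = \lambda \left( \frac{1 + \mu/(\rho+g)}{1-t} \right)^{\frac{1}{\varepsilon-1}} .
\]

A lower t thus lowers w_R^t and raises the Southern relative wage 1/w_R^t, consistent with Corollary 3.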
Under a tax on multinationals, the North-South flow of FDI is given by an analogue of (40) in which A(μ, g) is replaced by a function A_t(μ, g). Since A_t(μ, g) decreases in t, it is clear that, holding constant the rates of innovation (g) and imitation (μ), the North-South flow of FDI (φ) increases with a decrease in the FDI tax rate t. Of course, how the equilibrium flow of FDI responds to the use of an FDI incentive depends on how the rates of innovation (g) and imitation (μ) respond to such a policy. Note that the Southern resource constraint is unaffected by the FDI tax rate t, whereas the Northern constraint is affected via the North-South flow of FDI. But since this flow is inversely related to the tax rate t, a reduction in t results in an outward shift in the NN curve in Figure 1 without having any effect on the SS curve. This implies that a reduction in the Southern tax rate t on multinationals increases the rate of innovation (g) and the North-South flow of FDI (φ) whereas it decreases the Southern rate of imitation (μ).
In the presence of the FDI tax, equation (38) (which follows from free entry into innovation and imitation) is modified by a term involving t. Since the term in square brackets increases with t, it must be that n_S/n decreases with t. In other words, a Southern policy of attracting FDI via a reduction in the tax rate t increases the share of the global basket of goods that is produced in the South. Furthermore, since n_I/n_M = μ/g, such a policy change towards FDI also shifts production in favor of Northern multinationals and away from Southern imitators. Finally, consider the labor market consequences of such a policy. It follows immediately from the formula for the North-South relative wage (see (43)) that a reduction in t increases the South's relative wage (1/w_R^t). Furthermore, such a policy change lowers real wages in the North while increasing them in the South.
Corollary 3: A reduction in the Southern tax rate on multinationals increases the South's wage relative to the North as well as the real wages of Southern workers.
Finally, note that the price effects of a reduction in the Southern tax rate on multinationals are quite like those of a strengthening of its IPR protection: both types of policies lower the prices of those goods whose production shifts from the North to the South while increasing the prices of those goods whose production stays in the hands of multinationals as opposed to Southern imitators.
Conclusion
Opinions regarding the strengthening of IPR regimes in developing countries required under the TRIPS agreement of the WTO vary remarkably across individuals and nations. While the issue is multi-faceted and complex, the following statement broadly captures the disparity in views regarding TRIPS: developing countries have tended to argue that stronger IPR regimes in their markets will have adverse effects on prices without having much of a positive impact on innovation, whereas developed countries have stressed that not only innovation but also FDI flows would respond strongly to such reforms. In principle, an increase in FDI has the potential to offer two major sources of welfare gains. One, it can lower prices by shifting production to lower cost locations. Two, FDI has the potential to encourage Southern industrial development by introducing new technologies into the South. In this paper, we have presented a general equilibrium North-South product cycle model with a degree of endogeneity that allows us to assess these arguments in a unified framework.
The major results of our core model are as follows. First, we find that a strengthening of IPR protection in the South discourages imitation. Second, it increases FDI to such a degree that the Southern production base actually expands, i.e. the decline in Southern imitative activity is more than offset by the increase in the production activity of Northern multinationals, who are drawn to the South because local IPR reform renders it a more attractive production location by reducing the risk of imitation. Third, while prices of those goods that are reallocated from firms producing in the North to multinationals fall, prices of goods that are reallocated from potential imitators to Northern multinationals increase. In other words, IPR reform in the South has conflicting effects on consumer welfare when viewed solely through the price channel. However, what actually matters for consumer welfare is purchasing power. And from this viewpoint, Southern IPR reform benefits the South since it increases not only the South's wage relative to the North but also the purchasing power of Southern consumers. By contrast, not only does the Northern relative wage decline, the real income of Northern workers also falters. It is worth emphasizing that only a general equilibrium model such as ours can help assess the full impact of the price changes that result from IPR reforms, since these can be offset (or dominated) by the accompanying changes in wages. Finally, when innovation is endogenous, a strengthening of IPR protection in the South increases its rate. We should note that while the paper does not provide a full-fledged welfare analysis along the lines of Helpman (1993), the clarity with which the various channels that affect welfare emerge in the model does shed new light on a rather complex set of issues.
Appendix
In this appendix, we first provide some derivations omitted from the text and then discuss the relationship of our model to Lai (1998).
Rate of imitation with g exogenous
When g is exogenous, the equilibrium rate of imitation solves L_d^S(μ, k) = L_S. Collecting the terms that correspond to imitation, production by multinationals, and production by Southern imitators and rearranging, the condition can be written as Bg + Cμ = L_S(g + μ), where B and C are positive bundles of the exogenous parameters, both proportional to ka_I, with 0 < B < C. Solving for μ yields μ = g(L_S − B)/(C − L_S). It is straightforward to show that μ increases in L_S whereas it decreases in the degree of Southern IPR protection k.
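A numerical check of the closed form μ = g(L_S − B)/(C − L_S); the values of b0 and c0 below are hypothetical stand-ins for the parameter bundles B and C, which are both proportional to k:

```python
def imitation_rate(L_S, k, g=0.03, b0=0.4, c0=2.0):
    """Closed-form equilibrium imitation rate mu = g(L_S - B)/(C - L_S),
    with B = k*b0 and C = k*c0 (both proportional to k, B < C)."""
    B, C = k * b0, k * c0
    assert B < L_S < C, "an interior equilibrium requires B < L_S < C"
    return g * (L_S - B) / (C - L_S)

base = imitation_rate(L_S=1.0, k=1.0)
print(imitation_rate(L_S=1.2, k=1.0) > base)  # True: mu increases in L_S
print(imitation_rate(L_S=1.0, k=1.5) < base)  # True: mu decreases in k
```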
Slope of the NN curve
We already noted in the main text that ∂L_N(μ, g)/∂μ > 0. Direct calculations yield an explicit expression for ∂L_N(μ, g)/∂g, from which it follows that a sufficient condition for ∂L_N(μ, g)/∂g > 0 is that the ratio a_N/a_I be large enough; the required bound exploits the facts that A(μ, g) < 1, α < 1, a_N ≥ a_I, and ε > 1. Moreover, this sufficient condition is satisfied for all feasible parameter values: since a_N ≥ a_I, evaluating the condition at the lowest feasible value of a_N reduces it to an inequality in ε, α, and λ that necessarily holds given λ > 1/α.
Horizontal intercepts of the two curves
It is trivial to observe that neither curve can intersect the vertical axis, since labor demand in each country approaches zero as the growth rate approaches zero. The NN curve intersects the horizontal axis at g_n, which is proportional to the Northern labor supply L_N, while the SS curve intersects the horizontal axis at g_s, which is proportional to the Southern labor supply L_S. It follows that g_s > g_n if and only if L_S > L̄_S, where L̄_S is a threshold that depends on L_N, a_N, a_I, and the remaining parameters. We assume that L_S > L̄_S.
Relationship to Lai's model
Our model differs from Lai's in one key respect: imitation is endogenous in our model whereas it is exogenous in his. Setting λ = 1 and assuming that the rate of imitation μ is exogenous simplifies our model down to Lai's. In that case, the two endogenous variables (i.e. g and φ) must satisfy the two labor market equilibrium conditions. The following result is proved in Lai (1998): a strengthening of Southern IPR protection (i.e. a decrease in the rate of imitation μ) increases the Northern rate of innovation g. The proof proceeds in a straightforward fashion: the implicit function theorem is applied to the above equations to determine the sign of dg/dμ.
"year": 2009,
"sha1": "92fd6de529ee41f815ded619c957b84ec02f9d7d",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Intellectual_Property_Rights_Foreign_Direct_Investment_and_Industrial_Development/6571199/1/files/12056060.pdf",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "613d72567bc012017f7a6d9964f29ed2aa0869cf",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Brief Communication: A Heterologous Oncolytic Bacteria-Virus Prime-Boost Approach for Anticancer Vaccination in Mice
Supplemental Digital Content is available in the text.
Oncolytic virotherapy targets cancers in several ways. 1 An important part of the treatment is mediated by direct oncolysis and relies on the specific replication of the oncolytic virus (OV) in tumor cells. The induction of antitumor immunity by the virus is also believed to be an important facet of OV therapy that confers long-term benefits. 2 Following the same idea, anticancer vaccines have shown great success in various clinical trials, notably in several anti-idiotype vaccine studies for patients with B-cell non-Hodgkin lymphoma, 3 synthetic human papilloma virus type 16 E6-E7 long peptide immunization for gynecologic cancers, 4 as well as gp100 vaccines with IL-2 administration for metastatic melanoma patients, 5 all of which demonstrate the tremendous potential of this strategy. To further improve on this aspect, OVs encoding tumor antigens can be used in vaccination strategies. 6 Optimal results were obtained by using 2 different viruses to prime and boost immunity. This strategy is currently being tested using an adenovirus (Ad) and Maraba virus (MRB) encoding the tumor antigen MAGE-A3 in patients with solid tumors (NCT02285816 and NCT02879760). Although this approach provides protection in mouse tumor models, 7,8 Ad is only effective at priming the immune response when administered intramuscularly and has no direct oncolytic effects. In contrast, MRB kills cancer cells in addition to boosting antitumor immunity. We sought to determine whether an alternative priming agent that can be administered intratumorally and trigger inflammation locally would improve efficacy. We tested the bacterium Listeria monocytogenes (LM) based on its well-established vaccination potential 9,10 as well as previous reports of the bacteria directly infecting tumor cells. 11 We found the magnitude of the immune response induced by both prime-boost combinations to be comparable, but the therapeutic benefits provided by the LM-MRB strategy to be improved compared with Ad-MRB, with the mice showing smaller tumors and prolonged survival.
LM, Ad, and MRB Propagation
LM was cultured in brain-heart infusion media; Ad and MRB were expanded on HEK 293T and Vero cells, respectively. The MRB virus used in this study is the double mutant MG1 and has been described previously. 12
Flow Cytometry
Splenocytes were stimulated for 6 hours with the Ova peptide SIINFEKL (Biomer Technology), and GolgiPlug (BD Biosciences) was added after 1 hour. Antibodies were purchased from BD Biosciences. The samples were analyzed using an LSR Fortessa flow cytometer.
Enzyme-Linked Immunospot (ELISPOT)
Splenocytes were seeded into IFNγ enzyme-linked immunospot (ELISPOT) plates (Mabtech) and the assay was performed following the manufacturer's protocol.
Histologic Analysis
Tumors were fixed in formalin and special stains were performed by the University of Ottawa Pathology core. For caspase-3 staining (Cell Signaling Technology), the samples were rehydrated through graded alcohol. Heat-mediated antigen retrieval was performed using citrate buffer (10 mM sodium citrate, pH 6).
High Mobility Group Box 1 Protein (HMGB1) and Lactate Dehydrogenase (LDH) Assays
Serum was collected 48 hours after treatment by saphenous bleed. The HMGB1 and LDH concentrations were determined using a mouse enzyme-linked immunosorbent assay kit (Antibodies-online) and an LDH assay (Abcam), respectively, following the manufacturers' protocols.
In Vivo Studies
All experiments were performed in accordance with the institutional guidelines of animal care and veterinary services. Tumor volume = (length × width²)/2.
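For reference, the stated formula as a small helper; the function name is ours, and the units are whatever the calipers report (typically mm, giving mm³):

```python
def tumor_volume(length, width):
    """Ellipsoid approximation used in the text: (length x width^2) / 2."""
    return (length * width ** 2) / 2

print(tumor_volume(10.0, 8.0))  # 320.0
```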
LM and Ad Have Comparable Immune Priming Activity
In this study, we used LM and Ad variants encoding ovalbumin (Ova) to compare the vaccination potential of both agents in our heterologous prime-boost setting. First, we compared the anti-Ova immune response induced by Ad and LM in an ELISPOT assay following the treatment regimen illustrated in Figure 1A. Our results show the induction of an important antigen-specific response 7 days after vaccination using both priming agents (Supplemental Fig. 1A, Supplemental Digital Content 1, http://links.lww.com/JIT/A488). In contrast, vaccination with empty LM did not induce Ova-specific immunity. Flow cytometry analysis confirmed that 10%-15% of the cytotoxic T cells were responsive to Ova (Fig. 1B), and a significant proportion of these cells produced both IFNγ and TNFα (Fig. 1C). We next tested LM as a priming agent in the prime-boost setting and found that LM-Ova priming could efficiently be combined with MRB-Ova boosting (Supplemental Fig. 1B, Supplemental Digital Content 1, http://links.lww.com/JIT/A488). Impressively, we observed that 30% of the cytotoxic T cells from the LM-MRB group were responsive to Ova upon vaccination (Figs. 1D, E). Consistent with previous reports, 8 MRB was not able to induce an antigen-specific immune response in the absence of prior priming (LM + MRB-Ova group). Importantly, we observed no significant difference in the response to vaccination using LM-Ova or Ad-Ova. Taken together, these results show that LM is as efficient as Ad at priming antitumor immunity in the heterologous prime-boost setting.
The LM-MRB Prime-Boost Is an Improved Therapeutic Strategy
We next wanted to confirm that LM could replicate in our tumor models. To do so, we performed a histologic analysis of B16F10-Ova melanoma tumors 24 hours after treatment. As expected, we were able to observe the bacteria in treated tumors by Gram staining (Fig. 2A). Interestingly, our hematoxylin and eosin staining revealed that most of the tumor surface was necrotic and hemorrhagic upon LM treatment (Fig. 2B). To further assess the cytotoxic effect of intratumoral LM treatment, we performed immunohistochemical staining for cleaved caspase-3 on tumor sections 48 hours after treatment. Consistent with the hematoxylin and eosin staining (Fig. 2B), most of the tumor surface stained positive for caspase-3. Furthermore, the levels of LDH (Fig. 2D) and HMGB1 (Fig. 2E), two markers of necrosis, were elevated in the serum of LM-injected, tumor-bearing mice. Taken together, these results indicate that the bacterial treatment indeed contributes to tumor killing.
To determine whether our LM-MRB prime-boost could efficiently control tumor growth, we treated and measured B16F10-Ova tumors as depicted in Figure 1A. Our results clearly show that while single MRB or LM treatments only slightly affect tumor growth, the LM-MRB prime-boost was very efficient at controlling most of the tumors (Fig. 3A). We then compared our bacteria-virus prime-boost approach to the current Ad-MRB vaccination strategy and found that the LM-MRB prime-boost provided improved therapeutic benefits, with the mice showing smaller tumors and prolonged survival compared with the Ad-MRB group (Figs. 3B, C). To determine whether the LM-MRB prime-boost could provide long-term protection, we rechallenged animals that had previously been cured of B16F10-Ova tumors for 123-314 days. Animals were challenged with E0771 cells, to which they were naïve, or B16F10-Ova cells. Our results show that while all long-term survivors displayed E0771 tumors 14 days post tumor challenge, all of the B16F10-Ova tumors were rejected. Taken together, our results show that the LM-MRB prime-boost efficiently controls established tumors and provides long-term protection.
To determine whether our approach caused adverse effects, we monitored the animals from each group. None of the animals displayed any sign of discomfort over the course of the experiment and we found no drop in body weight, indicating that our LM-MRB combination was well tolerated (Fig. 3D).
DISCUSSION
In this study, we developed a novel oncolytic bacteria-virus prime-boost approach for anticancer vaccination. Our results clearly show the induction of antitumor immunity to levels that are comparable with those induced by the current Ad-MRB prime-boost approach. As we expected, the therapeutic benefits provided by our treatment strategy on established tumors are superior to those of the Ad-MRB prime-boost. We believe that this improved efficacy is the result of the direct killing of tumor cells by the bacteria, as well as the local inflammation triggered in the tumor microenvironment upon injection. Moreover, LM-based vaccines induce both cellular and humoral immunity, 13 and although we did not investigate this possibility, the generation of Ova-specific antibodies might have contributed to the improved tumor control observed for the LM-MRB group.
Our study provides proof of concept for the combination of bacteria and viruses in vaccination approaches. Although we focused our work on LM, other bacteria like Salmonella have previously been described as good immune priming agents and are also able to replicate and kill tumor cells directly. 14,15 Alternatively, an OV that could efficiently prime the immune response against a tumor antigen could be a suitable candidate. For instance, Newcastle disease virus, Herpes simplex virus, and vaccinia virus are all OVs that have been shown to be efficient anticancer vaccination agents 6 and could therefore be used in heterologous virus prime-boost regimens.
Given that both LM and the Ad-MRB anticancer vaccination strategies are already being evaluated clinically, our LM-MRB prime-boost approach has the potential for rapid clinical translation.
"year": 2018,
"sha1": "731f9f869327f5bedbcb90e5f90940b96364fca6",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc5895163?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "731f9f869327f5bedbcb90e5f90940b96364fca6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of altered N-glycan structures of Cryptococcus neoformans mannoproteins, MP98 (Cda2) and MP84 (Cda3), on interaction with host cells
Cryptococcus neoformans is an opportunistic human fungal pathogen causing lethal meningoencephalitis. It has several cell wall mannoproteins (MPs) identified as immunoreactive antigens. To investigate the structure and function of N-glycans assembled on cryptococcal cell wall MPs in host cell interactions, we purified MP98 (Cda2) and MP84 (Cda3) expressed in wild-type (WT) and N-glycosylation-defective alg3 mutant (alg3Δ) strains. HPLC and MALDI-TOF analysis of the MP proteins from the WT revealed protein-specific glycan structures with different extents of hypermannosylation and xylose/xylose phosphate addition. In alg3Δ, MP98 and MP84 had truncated core N-glycans, containing mostly five and seven mannoses (M5 and M7 forms), respectively. In vitro adhesion and uptake assays indicated that the altered core N-glycans did not affect adhesion affinities to host cells, although the capacity to induce an immune response from bone marrow-derived dendritic cells (BMDCs) decreased. Intriguingly, the removal of all N-glycosylation sites on MP84 increased adhesion to host cells and enhanced the induction of cytokine secretion from BMDCs compared with MP84 carrying WT N-glycans. Therefore, these structure-dependent effects suggest that N-glycans play complex roles in modulating the interaction of MPs with host cells, preventing nonspecific adherence to host cells and hyperactivation of the host immune response.
Results
MP98 and MP84, encoded by the CDA2 and CDA3 genes, respectively, possess the highly conserved polysaccharide deacetylase domain Pfam01522 31 and function as chitin deacetylases that convert chitin into chitosan, which is essential for the cell wall integrity of C. neoformans 32 .
As GPI-anchored proteins, MP98 (Cda2) and MP84 (Cda3) are predicted to traverse the plasma membrane and/or attach to the cell wall to deacetylate chitin in the cell wall 33 . In the present study, the GPI anchor motif was exchanged with a polyhistidine tag to achieve the secretory expression of MP98 and MP84 as extracellular proteins. The expression of His-tagged, GPI-anchorless MP98 and MP84 was initially directed by their native promoters. However, the MP84 expression level in recombinant C. neoformans cultivated in YPD medium was too low for the subsequent purification procedure, mainly because of the extremely low transcription activity of its native promoter (Supplementary Fig. 1). The MP84 expression level is also 15% lower than that of MP98 in C. neoformans cells isolated directly from infected human patients 34 . Thus, MP84 was expressed under the control of the histone H3 promoter instead of its native promoter (Fig. 1A). The culture medium of C. neoformans is highly viscous because of the presence of shed capsule components, such as glucuronoxylomannan, which interferes with the concentration process via centrifugal filters. In this study, MP proteins were therefore expressed in the C. neoformans cap59Δ mutant background, in which CAP59, which is involved in the extracellular trafficking of capsular glucuronoxylomannan 35 , was disrupted to facilitate the purification of MP proteins from the culture supernatant.
The MP proteins were expressed in the wild-type (WT) strain and the alg3Δ mutant strain, which is defective in core N-glycan biosynthesis 20 , and purified from the cell culture supernatant. In SDS-PAGE analysis, the protein bands of MP98 and MP84 secreted from the alg3Δ cells (alg3ΔMP98 and alg3ΔMP84) migrated substantially faster than those from the WT cells, indicating a decrease in size due to the attachment of truncated N-glycans (Fig. 1B). As expected, N-glycan profiling based on high-performance liquid chromatography (HPLC) confirmed that the overall lengths of N-glycans assembled on alg3ΔMP98 and alg3ΔMP84 decreased compared with those from WT cells (Fig. 1C). In MP98 from the WT cells, the M8 (Man8GlcNAc2) peak was detected as the major N-glycan species. Conversely, the N-glycans assembled on MP84 were heavily hypermannosylated, indicating different extents of mannosylation depending on the protein. In the WT cells, substantial portions of the N-glycans attached to MP84 were detected as acidic N-glycans containing xylose phosphate residues 20 . In the alg3Δ strain, MP98 and MP84 had truncated core N-glycans, with mostly M5 and M7 forms as the major species, respectively. Moreover, the acidic N-glycans detected in MP84 secreted from the WT strain (indicated by * in Fig. 1C) were not detected in alg3ΔMP84. Our previous glycomic analysis of whole cell wall proteins indicated that the truncated core N-glycans generated in the alg3Δ strain do not possess xylosylphosphotransferase sites 20 ; this might explain the disappearance of acidic glycans in alg3ΔMP84.
Structural characterization of N-linked glycans attached to MP98 and MP84. To obtain more detailed information on the structure of N-glycans attached to MP98 and MP84, we performed matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) analysis of neutral and acidic N-glycans separately collected from HPLC fractionation (Fig. 2). As predicted from the HPLC-based glycan profiles, the MALDI-TOF profiles confirmed that the major N-glycan species of MP98 in the WT strain is M8, composed of eight mannose residues without xylose (Man8GlcNAc2); in the alg3Δ strain, the major glycan is M5 (Man5GlcNAc2), carrying five mannose residues (Fig. 2A). In contrast to the low abundance of xylose-containing N-glycans in MP98, the neutral N-glycans of MP84 in the WT strain were mostly hypermannosylated with xylose addition (Xyl1-2Man9-18GlcNAc2), as revealed by MALDI-TOF analysis. In the alg3Δ strain, the major glycan of MP84 became M7 (Man7GlcNAc2), along with M4 to M11 forms as minor species that mostly lacked xylose residues (Fig. 2B). The MALDI-TOF profiles of acidic N-glycans attached to MP84 confirmed that the acidic glycans (Pho1Xyl1-2Man9-18GlcNAc2) were generated by the further addition of a xylose phosphate residue (Fig. 2C). Sequential α-1,2-mannosidase and α-1,6-mannosidase treatments converted most N-glycans of MP98 to M5 in the WT strain and to M3 in the alg3Δ strain, respectively (Supplementary Fig. 2A). However, the sequential treatment with α-1,2- and α-1,6-mannosidases shifted most N-glycans of MP84 to X1M5 (Xyl1Man5GlcNAc2) and X1M8 (Xyl1Man8GlcNAc2) peaks in the WT strain (Supplementary Fig. 2C). In our previous study on the structural analysis of N-glycans assembled on total cell wall MPs from C. neoformans WT cells of the serotype A H99 strain, the X1M8 glycan was speculated to be an incompletely digested product derived from a fraction of glycans lacking α-1,6-mannose extension, based on its conversion to X1M5 by additional prolonged digestion with α-1,2-mannosidase 36 . This indicates that the presence of xylose may inhibit efficient mannose trimming by α-1,2-mannosidases in some glycan structures. In the alg3Δ strain, owing to the dramatic decrease in xylose addition, most N-glycans of MP84 were converted to M3 after the sequential treatment with α-1,2- and α-1,6-mannosidases (Supplementary Fig. 2B,C). Therefore, the results revealed that the N-glycans of MP98 and MP84 exhibited different extents of mannosylation with differential addition of xylose and xylose phosphate in the WT cells, indicating protein-specific glycan structures (Fig. 2D). In the alg3Δ strain, which generates truncated core N-glycans, the structure of N-glycans was no longer protein specific because of the extremely low efficiency of xylose addition and the lack of mannose residues for xylose phosphate attachment.
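As a quick arithmetic cross-check on the peak assignments above, the short Python sketch below computes theoretical monoisotopic masses for the glycan compositions named in the text (M5, M7, M8, and xylosylated/phosphorylated species). The residue and adduct masses are standard monoisotopic values supplied here by us, not taken from the paper, so treat this as an illustrative aid rather than the authors' calibration.

HEX, HEXNAC, PENT, PHOS, H2O = 162.0528, 203.0794, 132.0423, 79.9663, 18.0106
NA_ADDUCT, PROTON = 22.9898, 1.0073

def glycan_mass(man, glcnac=2, xyl=0, phos=0):
    """Neutral monoisotopic mass of a Xyl(xyl) Man(man) GlcNAc(glcnac) [+ phosphate] glycan."""
    return man * HEX + glcnac * HEXNAC + xyl * PENT + phos * PHOS + H2O

compositions = {
    "M5":     dict(man=5),                # major alg3Δ MP98 species
    "M7":     dict(man=7),                # major alg3Δ MP84 species
    "M8":     dict(man=8),                # major WT MP98 species
    "X1M9":   dict(man=9, xyl=1),         # example hypermannosylated WT MP84 species
    "P1X1M9": dict(man=9, xyl=1, phos=1), # example acidic (xylose phosphate) species
}
for name, comp in compositions.items():
    m = glycan_mass(**comp)
    print(f"{name:7s} M = {m:8.2f}  [M+Na]+ = {m + NA_ADDUCT:8.2f}  [M-H]- = {m - PROTON:8.2f}")

For example, the script gives a neutral mass of about 1234.4 Da for Man5GlcNAc2, a value that can be compared directly against observed MALDI-TOF peaks under the assumed adducts.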
Effect of N-glycan truncation on the adhesion of MPs to host cells and uptake by dendritic cells.
In the initial phase of respiratory infection, C. neoformans cells are exposed to macrophages and lung epithelial cells. To examine the effect of N-glycan truncation on adhesion to host cells, we performed an in vitro adhesion assay by incubating purified MP98 and MP84 with J774A.1 macrophage-like cells and A549 lung epithelial cells for 80 min (Fig. 3A). Consistent with a previous report identifying MP84 as an adhesion molecule of C. neoformans for A549 lung epithelial cells 4 , our study showed that MP84 attached to lung epithelial cells considerably more strongly than MP98. The adhesion of MP84 to lung epithelial cells was also evidently higher than that to macrophages, indicating some degree of cell-specific adherence of MP84. However, host cell adherence did not significantly differ between MP84 carrying the WT N-glycans and that carrying the truncated core N-glycans (Fig. 3A).
We further analyzed the uptake of MP98 and MP84 by bone marrow-derived dendritic cells (BMDCs; Fig. 3B) by incubating Alexa Fluor 647-labeled MPs with BMDCs. From early incubation up to 1 h, the uptake of both MPs by BMDCs increased with similar kinetics regardless of the alteration of N-glycan structures. alg3ΔMP98 and alg3ΔMP84, carrying the truncated core N-glycans, were captured by BMDCs as efficiently as those carrying the WT N-glycans. This result indicated no apparent effect of the trimmed N-glycan structure of the MPs on adhesion. The efficient internalization of Alexa Fluor 647-labeled MPs into BMDCs was verified through confocal microscopy, which showed that the labeled MPs localized mostly to a perinuclear compartment in BMDCs after 15 min of co-incubation (Fig. 3C). Treatment with methyl-α-D-mannopyranoside, a competitive inhibitor of mannose receptors, effectively inhibited the uptake of MPs. This result demonstrated that recognition by mannose receptors on the surface of BMDCs was involved in the uptake of MP98 and MP84. In addition to being shorter, the truncated core N-glycans generated by ALG3 deletion lack xylose and xylose phosphate residues. Thus, short N-glycans containing up to five mannose residues might be sufficient for the efficient binding of MP98 and MP84 to mannose receptors and their internalization. Considering a previous report on the inhibitory effect of core β-1,2-xylosylation on glycoprotein recognition by murine C-type lectin receptors 37 , we speculated that the absence of xylose and xylose phosphate residues in alg3ΔMP98 and alg3ΔMP84 might compensate for the compromised binding activity of truncated N-glycans to the mannose receptor of BMDCs.
Effect of N-glycan truncation on the immune response of BMDCs. The purified MPs were co-incubated with BMDCs for 24 h to investigate the effect of altered N-glycan structure on the immune response of murine BMDCs, and the cytokines secreted by BMDCs into the culture supernatants were analyzed. The secretion of pro-inflammatory cytokines, such as TNF-α and IL-6, noticeably increased in the culture supernatant of BMDCs co-incubated with MP98 carrying the WT N-glycans. However, alg3ΔMP98 showed significantly decreased activity in inducing cytokine secretion (Fig. 4A). In contrast to co-incubation with MP98, co-incubation with MP84 induced only TNF-α, without detectable induction of IL-6. Moreover, the level of TNF-α secreted by MP84-co-incubated BMDCs was much lower than that by MP98-co-incubated BMDCs (Fig. 4B), implying that MP98 displays higher immune stimulation activity. Although the difference in immune stimulation activity between MP84 and alg3ΔMP84 was not as pronounced as that between MP98 and alg3ΔMP98, the induced level of TNF-α was also decreased in BMDCs co-incubated with alg3ΔMP84 (Fig. 4B). Therefore, N-glycan structures might play a role in inducing cytokine secretion from BMDCs through interactions with MP98 and MP84.
Effect of the absence of N-glycans on the interaction of MP84 with host cells. The DNA construct encoding MP84 lacking all N-glycosylation sites but retaining most O-glycosylation sites (NΔMP84) was generated via protein engineering combined with gene synthesis to maximize the alteration of the N-glycan structure (Fig. 5A). The N-glycosylation sites (N-X-S/T motifs) of MP84 were predicted to be located at the 61st, 149th, 279th, and 293rd amino acid residues. Asn codons (AAC or AAT) were changed to Ala codons (GCC or GCT) via fusion PCR and gene synthesis to remove the N-glycosylation sites of MP84 (Supplementary Fig. 3A,B). The NΔMP84 construct, containing a C-terminal polyhistidine tag, was expressed under the control of the H3 promoter in the acapsular strain of C. neoformans. SDS-PAGE analysis revealed that the protein showed dramatically increased mobility, indicating a reduced molecular mass compared with that of MP84 expressed in the WT and alg3Δ strains. The decreased molecular mass was due to the complete absence of N-glycans attached to MP84, which was confirmed by PNGase F treatment (Fig. 5B). The size of MP84 secreted from the WT and alg3Δ strains decreased after PNGase F treatment, whereas the size of NΔMP84 did not change, demonstrating that all N-linked glycosylation sites on MP84 had been successfully removed. Notably, after PNGase F treatment, alg3ΔMP84 migrated slightly faster than MP84. A similarly increased electrophoretic migration of MP98 was also observed in our previous study on the C. neoformans alg3 mutation 20 , which might derive from a reduced extent of occupancy at N-glycosylation sites in the alg3 mutant strains due to the inefficient transfer of truncated N-glycans by oligosaccharyltransferases 38 . After cleavage by PNGase F, the Asn residues to which N-glycans were attached are converted to aspartic acid (Asp) residues. Considering the slightly larger molecular weight (MW) of Asp compared with Asn, lower N-glycan site occupancy in the alg3Δ mutant strain would produce a decrease in the MWs of MP84 and MP98, resulting in faster migration in electrophoresis. Western blot analysis of the culture supernatants (extracellular fraction) and soluble cell lysates (intracellular fraction) of the C. neoformans cells expressing His-tagged MP84 indicated that the lack of N-glycosylation did not affect the secretion efficiency of MP84 (Supplementary Fig. 3C,D). Although the predicted MW of MP84 was 44.6 kDa based on its amino acid sequence, the apparent MW of NΔMP84 on the gel appeared larger than the expected size. This finding indicated the presence of post-translational modifications other than N-glycosylation, such as O-mannosylation, which was confirmed by lectin blotting with Galanthus nivalis lectin (GNA)-alkaline phosphatase (AP), which recognizes mannose residues. Even after PNGase F treatment, protein bands were detected for all MP84 variants, indicating that they were heavily O-mannosylated. We then analyzed the in vitro adhesion of NΔMP84 to macrophages and lung epithelial cells. We unexpectedly found that the abolishment of N-glycosylation increased the adhesion of the MP84 protein not only to lung epithelial cells, to which MP84 has preferential adherence, but also to macrophages, to which MP84 has relatively lower adherence (Fig. 5C). Moreover, NΔMP84 induced significantly higher secretion of pro-inflammatory cytokines from BMDCs than MP84 carrying WT N-glycans (Fig. 5D).
In particular, NΔMP84 induced remarkably high levels of IL-6 secretion, which was hardly detected in BMDCs co-incubated with MP84 carrying WT N-glycans. Considering that enforced N-glycosylation is generally regarded as an approach to enhance the immunogenicity of non-glycosylated protein antigens, we suggest that the effect of the abolishment of N-glycans on the interaction of MP84 with host cells is complex because of the combined effects exerted by several other factors.
Discussion
Glycans assembled on the cell surface glycoproteins of fungal pathogens modulate the efficiency of pathogen adhesion to and interaction with host cells during infection 39,40 . The cell wall proteins of various infectious fungi are mostly hypermannosylated and are expected to function as immunoreactive molecules that induce host immune responses 21 . The N-glycosylation pathway of C. neoformans is evolutionarily conserved, but several unique features of C. neoformans N-glycans have been described in terms of structure and biosynthesis 36 . C. neoformans has serotype-specific high-mannose-type N-glycans with or without a β-1,2-xylose residue attached to the trimannosyl core of N-glycans. Moreover, the acidic N-glycans of C. neoformans contain xylose phosphates attached to mannose residues in the N-glycan core and outer mannose chains. Although the outer chains of N-glycans are dispensable for the virulence of C. neoformans, the intact core N-glycan structure was shown to be required for C. neoformans pathogenicity in the systematic analysis of alg3Δ, alg9Δ, and alg12Δ mutant strains that have defects in lipid-linked N-glycan assembly 20 . In the present study, we investigated the effect of the altered structure of N-glycans assembled on the cryptococcal cell surface proteins MP98 and MP84, known as a T-cell antigen and an adhesion molecule for host lung epithelial cells, respectively 4,26 , by analyzing the interaction of the purified fungal MPs with host cells. Intriguingly, our N-glycan structure analysis of MP98 and MP84 revealed significant differences in their N-glycan patterns, indicating protein-specific N-glycan structures with different extents of mannosylation and addition of xylose and xylose phosphate residues (Figs. 1 and 2). In the WT strain, whereas MP98 had N-glycans carrying eight mannoses (M8) as the major species, MP84 had long hypermannosylated glycans decorated not only with xylose but also with xylose phosphate residues, generating acidic glycans. As functional chitin deacetylases, MP98 (Cda2) and MP84 (Cda3) show very similar structural organization, possessing a cleavable N-terminal signal sequence, a C-terminal omega site for GPI anchoring, and the highly conserved polysaccharide deacetylase domain Pfam01522. Although MP98 and MP84 have similar predicted MWs, they share only 30% identity in their overall amino acid sequences (Supplementary Fig. 1A) and show notable differences in N-glycosylation sites: MP98 has 10 predicted N-glycosylation sites, whereas MP84 has only four. Considering that the further modification of core N-glycans, such as the addition of extra mannose, xylose, and xylose phosphate residues, is mediated by Golgi-resident enzymes, we speculated that the different three-dimensional structures and trafficking rates of the two MPs during secretion might lead to differential modification by processing enzymes in the Golgi; consequently, protein-specific N-glycan structures were formed.
The recombinant alg3ΔMP98 and alg3ΔMP84, carrying truncated core N-glycans, had M5 and M7 forms lacking xylose and xylose phosphate residues as the major N-glycan species, respectively (Fig. 2). In comparison with the MPs carrying the WT N-glycans, alg3ΔMP98 and alg3ΔMP84 retained almost equivalent adhesion affinities to host cells, indicating that the altered core N-glycan structure did not affect the adhesion ability of the MPs (Fig. 3A). This result was consistent with our previous observation that adhesion efficiency to lung epithelial cells did not differ significantly between WT cells and alg3Δ mutant cells 20 . alg3ΔMP98 and alg3ΔMP84 also showed comparable uptake kinetics by BMDCs, which rapidly captured the fluorescently labeled MPs (Fig. 3B,C). This result implied that the truncated core N-glycans containing five or more mannose residues are still efficiently recognized by the mannose receptor of dendritic cells 41,42 . Considering that MP98 and MP84 have 30 O-mannosylation sites (Fig. 1), we speculate that the presence of short-chain O-linked terminal mannose residues at the serine/threonine-rich C-terminus could contribute to the retention of binding capacity to mannose receptors. A previous study reported that multiple lectin receptors on dendritic cells recognize C. neoformans MPs, which may potentially contribute to glycoprotein-specific T cell immunity 43 . However, alg3ΔMP98 showed significantly decreased activity in inducing immune responses from BMDCs, although the decrease was less evident for alg3ΔMP84, which has low immune stimulation activity (Fig. 4), consistent with our previous report on the decreased TNF-α level in BMDCs infected with the alg3Δ strain 20 . This finding confirmed that truncated core N-glycans were less efficient in activating host immune responses than N-glycans carrying the WT structure.
We further investigated the effect of the complete absence of N-glycans by generating the NΔMP84 protein through the removal of all four predicted N-glycosylation sites. We replaced the Asn codons in the predicted sites with Ala codons to minimize the effect of amino acid substitution on the structure of MP84. Glutamine (Gln) shares more similar chemical properties with Asn in bearing an amide side chain. However, reflecting its larger size than Asn due to the presence of an extra methylene group, the in silico 3D structure of MP84 with Gln substitution was predicted to be bulkier, with a disorganized C-terminus, compared with those of the original MP84 protein and Ala-substituted MP84 (Supplementary Fig. 3A). Previous work reported no functional differences in endothelial lipases in which the N-glycosylation sites were mutated to either Ala or Gln 44 . In addition, a study by Valliere-Douglass et al. identified the presence of a glutamine-linked protein glycosylation motif in human recombinant IgG proteins 45 . Considering that Ala substitution caused less change in the 3D structure (Supplementary Fig. 3A) and could avoid possible ectopic glycosylation at Gln, Ala substitution was chosen to generate the NΔMP84 protein. In contrast to the effect of the truncated core N-glycans, the complete abolishment of N-glycans exerted quite unexpected effects: the adhesion affinity to host cells dramatically and nonspecifically increased, and the induced cytokine secretion from BMDCs was higher than that of MP84 carrying WT N-glycans (Fig. 5). These results strongly indicated that the MP84 protein itself has a high adhesion capacity and a strong immune-stimulating activity. Recombinant C. neoformans Cda2 (MP98) and Cda3 (MP84) proteins expressed in Escherichia coli were shown to induce protective immunity in mice against C. neoformans infection, indicating that recombinant aglycosylated MP98 and MP84 can inherently modulate host immune responses 46 . Another recent study reported the efficient protection of mice against experimental cryptococcosis by chemically synthesized peptides 47 . After the cell adhesion activity unexpectedly increased upon abolishing N-glycans in MP84 (NΔMP84), MP98 lacking N-glycans was prepared via PNGase F treatment under non-denaturing conditions (Supplementary Fig. 4). The PNGase F-treated MP98 was purified and analyzed for its adhesion activity by co-incubation with host cells. Similar to the observations with NΔMP84, the N-glycan-removed MP98 showed approximately tenfold increased adhesion to A549 cells and about 60-fold increased adhesion to J774A.1 macrophage cells compared with MP98 carrying WT N-glycans (Supplementary Fig. 4). In contrast to MP98 carrying WT N-glycans, which showed low adhesion activity to host cells, the N-glycan-removed MP98 became highly adhesive to lung epithelial cells and macrophage cells without cell-type specificity. This indicated that the presence of N-glycans prevents MP98 from attaching nonspecifically to host cells. A previous study on two plasma proteins, α1-acid glycoprotein (AGP) and haptoglobin (Hp), also revealed such varying effects of altered N-glycan structures on the interactions of plasma proteins 49 : increased antennae branching and terminal fucosylation of N-glycans reduce the drug-binding affinity of AGP, whereas fucosylation and N-glycan branching have opposite effects on Hp-Hp interactions 49 .
Several experimental studies have also demonstrated that glycosylation can interfere with antigen processing and presentation 50 .
In conclusion, our data revealed the structure-dependent effects of N-glycans on the function of cryptococcal MPs, and the altered structures of N-glycans attached to MP98 and MP84 exerted complex effects on host cell interactions. Specifically, truncated core N-glycans, composed of five or seven mannose residues without xylose and xylose phosphate residues, did not affect the adherence of MPs to host cells but resulted in decreased immune-stimulating activity. The reduced host immune response in the interaction with alg3ΔMP98 and alg3ΔMP84 was consistent with the general notion that hypermannosylated fungal N-glycan structures are highly immunogenic and induce immune responses in host cells. However, the complete abolishment of N-glycans from MP84 and MP98 remarkably enhanced their activities toward host cells; this result indicated that the increased immune-stimulating activity in the complete absence of N-glycans might be attributed to the increased exposure of protein epitopes or immunoreactive O-glycans of the MPs to host cells. Therefore, recombinant MPs without N-glycans but still carrying heavily mannosylated O-glycans could be better vaccine candidates than endogenous MP98 and MP84 carrying WT N-glycans. However, further studies should be performed to examine the defined roles of the O-glycan structures of MPs in the interaction with host cells and to provide insights into the development of new vaccines against cryptococcosis by modulating the structures of glycans assembled on MPs.
Materials and methods
Yeast strains, culture conditions, plasmids, and primers. Yeast cells were generally cultured in YPD medium (1% yeast extract, 2% bacto peptone, and 2% glucose) at 28 °C. For cell wall stability before the biolistic transformation of C. neoformans, YPD agar medium (1% yeast extract, 2% bacto peptone, 2% glucose, and 2% agar) containing 1 M sorbitol was used. YPD agar medium containing 200 μg/ml G418 (Duchefa, Netherlands) was used for C. neoformans transformant selection. E. coli TOP10 (Invitrogen, USA) was used to clone recombinant DNA and was cultured in LB medium (0.5% yeast extract, 1% bacto tryptone, and 1% NaCl) containing 100 μg/ml ampicillin (Duchefa, Netherlands). Yeast strains and plasmids are listed in Table 1.
6His-tagged MP84. The DNA fragment containing the N-truncated MP84 ORF with the 6His codons as a C-terminal histidine tag and without the GPI anchor (1.3 kb) was PCR amplified with the primers Cn_01239_O_F_Xho and Cn_01239_O_B_EcoRV (Table S1). The PCR product of the truncated MP84 ORF was digested with XhoI/EcoRV and ligated into XhoI/EcoRV-digested pJAFS1_CNAG_01239Ter, generating pJAFS1-CNAG01239His. Subsequently, the promoter of histone H3 (CNAG 06745), PCR amplified using the primer set H3 promoter_F and Infusion MP84_R, was inserted in front of the MP84 ORF in pJAFS1-CNAG01239His via In-Fusion cloning (Takara Bio, USA), yielding pJAFS1pH3CNAG012396His. For the expression of MP84 without N-glycans, the asparagine (Asn, N) codons at the four predicted N-glycosylation sites were replaced with alanine (Ala, A) codons in the MP84 ORF via site-directed mutagenesis and a synthetic DNA fragment (Supplementary Fig. 4). The mutated MP84 ORF carrying the N(61, 149, 279, 293)A replacements was expressed under the control of the H3 promoter in the pJAFS1 backbone, generating pJAFS1pH3CNAG01239(Ndel)His. The resultant vectors pJAFS1pH3CNAG01239His and pJAFS1pH3CNAG01239(Ndel)His were digested with NsiI and delivered into cap59Δ and alg3Δ cap59Δ cells (Table 1) via a biolistic particle delivery system for integration into the chromosomal H3 promoter region through homologous recombination. Transformants were selected on a YPD plate containing 100 μg/ml neomycin, screened through PCR, and analyzed by western blot using an anti-His antibody to detect His-tagged MP84.
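To illustrate the sequon-removal design described above, the following minimal Python sketch locates N-X-S/T motifs (X ≠ Pro) in a protein sequence and maps Asn codons (AAC/AAT) to Ala codons (GCC/GCT) in the same wobble-preserving way as the N(61, 149, 279, 293)A construct. The toy sequence is hypothetical; the actual MP84 ORF and primers are those given in the paper's supplementary material.

import re

def find_sequons(protein: str):
    """Return 1-based positions of predicted N-glycosylation sequons (N-X-S/T, X != P)."""
    return [m.start() + 1 for m in re.finditer(r"N(?=[^P][ST])", protein)]

# Keep the wobble base of the source Asn codon when swapping to Ala.
ASN_TO_ALA = {"AAC": "GCC", "AAT": "GCT"}

protein = "MKLNVSAQNGTWP"  # hypothetical toy sequence, not the real MP84
print(find_sequons(protein))   # -> [4, 9]
print(ASN_TO_ALA["AAT"])       # -> GCT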
Protein purification and western blotting. Recombinant C. neoformans cells expressing His-tagged secretory MP98 or MP84 were pre-cultured in 2 ml of YPD at 28 °C overnight. They were then inoculated into 200 ml of YPD at an OD600 of 0.5 and grown in a shaking incubator (220 rpm) at 28 °C for 24 h. A cell-free supernatant was isolated through centrifugation (4000 rpm, 4 °C, 10 min) and then filtered. For western blotting, proteins were analyzed on 8% polyacrylamide gels and transferred to polyvinylidene fluoride (PVDF) membranes. The membranes were incubated with anti-His mouse antibody (1:1000 dilution, Santa Cruz Biotechnology, USA) at 4 °C overnight. The washed membranes were then incubated with anti-mouse IgG antibody conjugated with alkaline phosphatase (1:10,000 dilution, Santa Cruz Biotechnology, USA) at RT for 1 h. Afterward, an AP conjugate substrate kit (Bio-Rad) was used to develop the signal.
For MALDI-TOF analysis, N-glycans fractionated via HPLC were collected and dried. The matrix solution (a 1:1 mixture of 6-aza-2-thiothymine solution and 2,5-dihydroxybenzoic acid solution) was mixed with an equal volume of the N-glycan sample. The sample was spotted on an MSP 96 polished-steel target (Bruker, Germany), and the crystallized sample was analyzed via MALDI-TOF (Bruker, Germany) in linear negative mode.
Cultivation of A549 and J774A.1 cells. A549 (human lung epithelial) and J774A.1 (BALB/c mouse macrophage) cells were acquired from the Korean Cell Line Bank of Seoul National University. Each cell line was cultured in cell culture medium (RPMI 1640 complete medium; HyClone, USA) supplemented with 10% fetal bovine serum (FBS, Access Biologicals, USA) and 1% penicillin/streptomycin (Gibco, USA) in a humidified incubator with 5% CO2 at 37 °C. The medium was changed every 1-2 days. A trypsin-EDTA solution (Welgene, Taiwan) and a scraper were used to detach A549 and J774A.1 cells, respectively.
In vitro adhesion assay of MP98 and MP84 in lung epithelial cells. The cultured mammalian cells were seeded in a 96-well plate at densities of 6.0 × 10⁴ A549 cells per well and 1.25 × 10⁴ J774A.1 cells per well in the cell culture medium for 1 day. After serum starvation, 1 μg of the purified and quantified MPs was added to the cells in FBS-free RPMI and incubated in a 5% CO2 incubator at 37 °C for 80 min. The cells were washed with PBS-T four times and fixed with 4% paraformaldehyde in PBS for 30 min. After each well was washed, the cells were blocked with 0.5% casein in PBS-T at 37 °C for 1 h and washed with PBS-T. After being treated with mouse anti-His monoclonal antibody (1:3000 dilution, Santa Cruz Biotechnology, USA) in PBS-T containing 0.5% casein at RT for 1 h, the cells were washed and incubated with HRP-conjugated anti-mouse IgG antibody (1:2000 dilution, Santa Cruz Biotechnology, USA) at RT for 1 h. After each well was washed, 100 μl of TMB blotting substrate solution (Sigma-Aldrich, USA) was added to the cells, which were then incubated for 15 min to develop a color change. Then, 100 μl of the stop solution was added to stop the reaction. The absorbance of each well was measured using a UVM 340 microplate reader at 450 nm.
Analysis of in vitro protein uptake by bone marrow dendritic cells.
BMDCs were isolated from murine femurs and tibiae (C57BL/6J, 7 weeks old, male) 20 . Progenitor cells were incubated in culture medium with 20 ng/ml GM-CSF (PeproTech, USA) at 37 °C for 6 days to obtain differentiated BMDCs. Fresh DC differentiation medium was added 3 days after the bone marrow cells began differentiating into dendritic cells and floating. An Alexa Fluor 647 protein labeling kit (Invitrogen, USA) was used to prepare the labeled MP proteins in accordance with the manufacturer's instructions. Briefly, MP proteins were mixed with Alexa Fluor 647 dye and incubated with stirring at RT for 1 h. The labeled proteins were purified using a purification column. Afterward, 20 μg/ml labeled MPs in RPMI medium containing 10% HI-FBS was added to the differentiated BMDCs in a 24-well plate at a density of 6 × 10⁵ cells per well and incubated in the absence or presence of 50 mM methyl-α-D-mannopyranoside (Sigma-Aldrich, USA) for 60 min. After uptake at the indicated time points, the cells were washed and analyzed with an Attune NxT flow cytometer with acoustic-assisted hydrodynamic focusing (Thermo Fisher Scientific, USA). Uptake was expressed as the geometric mean fluorescence of an electronically gated live-cell population.
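As a small illustration of the readout just described, the sketch below computes the geometric mean fluorescence of a gated population; the intensity values are made up for demonstration, and the actual gating was done in the flow cytometer software.

import numpy as np

def geo_mean_fluorescence(intensities):
    """Geometric mean of positive fluorescence intensities from a gated population."""
    x = np.asarray(intensities, dtype=float)
    x = x[x > 0]  # geometric mean is defined only for positive values
    return float(np.exp(np.log(x).mean()))

gated_af647 = [120.0, 300.0, 95.0, 410.0, 220.0]  # hypothetical AF647 intensities
print(f"geometric mean fluorescence: {geo_mean_fluorescence(gated_af647):.1f}")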
Analysis of cytokine secretion from dendritic cells. The differentiated BMDCs were seeded in a 96-well plate (U-bottom) at a density of 1 × 10⁵ cells per well in RPMI medium containing 10% HI-FBS and incubated for 1 h. Each purified MP protein (20 μg/ml) in RPMI medium containing 10% HI-FBS was added to the BMDCs, which were then incubated for 24 h. After the supernatant was collected, a bead-based immunoassay was performed using a LEGENDplex cytokine capture bead kit (BioLegend, USA) to measure the cytokine profile in accordance with the manufacturer's instructions.
Data availability
All data generated or analyzed during this study are included in this published article and its supplementary information file.
"year": 2023,
"sha1": "0976898711c800d5b3a3c691a394700272b04c06",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a7b46f1445b0b0081f80f504123f0002fe5bb578",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subcutaneously implantable electromagnetic biosensor system for continuous glucose monitoring
Continuous glucose monitoring systems (CGMS) are becoming increasingly popular in diabetes management compared with conventional self-blood glucose monitoring systems. They help in understanding physiological responses to nutrition intake and physical activity in everyday life and in glucose control. Commercially available CGMS are of two types based on their working principle: needle-type systems with a lifespan of a few weeks (e.g., the enzyme-based FreeStyle Libre) and implant-type systems with a lifespan of a few months (e.g., the fluorescence-based Senseonics). As an alternative to both working methods, we propose herein an electromagnetic-based sensor that can be subcutaneously implanted and is capable of tracking minute changes in dielectric permittivity owing to changes in blood glucose level (BGL). The proof of concept of the proposed electromagnetic-based implant sensor was validated in intravenous glucose tolerance tests (IVGTT) conducted on a swine and a beagle in a controlled environment. Sensor interface modules, mobile applications, and glucose mapping algorithms were also developed for continuous measurement in a freely moving beagle during an oral glucose tolerance test (OGTT). The results of the short-term (1 h, IVGTT) and long-term (52 h, OGTT) tests are summarized in this work. A close trend is observed between sensor frequency and BGL during the GTT experiments on both animal species.
Enzyme-based blood glucose detection has been extensively reviewed in the literature with respect to improving sensor lifespan and CGMS accuracy 13,14 . Optical methods that use light-emitting diodes (LEDs) of different wavelengths 15-17 have also been extensively studied for blood glucose detection. Usually, the sensor is a photosensitive detector that can detect variations in optical intensity owing to variations in blood glucose level. Near-infrared optical frequencies show glucose-dependent intensity variations due to spectral absorption when illuminated, and the reflectance changes with blood glucose variations. However, the strong absorption and weak intensity of the reflected signal affect the accuracy. To overcome this, improved optical methods using fluorescent materials with glucose selectivity have also been investigated 18,19 . This method uses a glucose-binding molecule that causes a change in fluorescent activity depending on the glucose level in blood. However, it is limited by the degradation of the fluorescent material itself over time. As a result, the fluorescence reflectance diminishes gradually and, as time passes, becomes barely detectable 20,21 . The development of electromagnetic-based glucose sensors has been studied in various lab experiments for the non-invasive measurement of blood glucose level 22 . From a recent study it is understood that the mechanism of glucose metabolism is not straightforward but rather a sophisticated cascade of biochemical reactions. However, the dielectric response of water is directly affected by glucose and is a relevant marker for the indirect measurement of glucose level in vivo 23 . The fundamental working principle of EM-based glucose sensors is to sense glucose-dependent changes in dielectric permittivity. The glucose-dependent dielectric permittivity of blood has been characterized over a wide frequency range 24 , and it has been observed that the dielectric permittivity decreases with increasing glucose level 25 . In general, glucose-dependent permittivity changes are reflected as a change in the resonance frequency of these EM sensors. Relevant published works using electromagnetic-based glucose sensors include in vitro measurement techniques 26,27 and wearable types that measure in vivo from outside the body 28 . EM-based permittivity-sensing biosensors have already proven effective for the detection of tumors and malignant cells in the body, and the dielectric properties of various organs and tissues in living bodies have been previously characterized over a wide spectrum 29,30 . The resonance frequency of the proposed EM-based implant sensor depends on the permittivity of the surrounding environment: the sensor frequency changes inversely with changes in the dielectric permittivity of the material in which the sensor is embedded. The dielectric permittivity of blood changes as the glucose level changes, and these glucose-dependent permittivity changes are reflected in the sensor resonance frequency. Using a regression model, the sensor frequency can be mapped to BGL. EM-based sensors for BG measurement have already been attempted, and encouraging results have been reported in several studies 31,32 .
EM resonators can be designed with different shapes and sizes and optimized for different frequencies of operation. Measurable parameters of an EM sensor that are indicative of the glucose level can be either reflection-based (S11) or transmission-based (S21), considering both magnitude and phase characteristics. Non-invasive measurement of glucose from outside the body faces several challenges owing to the high reflection of signals from the upper skin layer and the low penetration depth of the signal 33 . The electromagnetic signal from an external transmitter experiences multiple reflections and strong attenuation in the upper tissue layers; only marginal power reaches the subcutaneous layer, resulting in weak sensing performance.
A minimally invasive, implant-type, EM-based sensor is presented as a possible biosensor for real-time BGL tracking. The proposed sensor is capable of detecting and tracking minute changes in the dielectric permittivity of the interstitial fluid as glucose varies 34 (Fig. 1a). The sensor was designed and simulated using the full-wave electromagnetic simulator CST Microwave Studio and was optimized for maximum sensitivity in the ISM band, considering a bio-environment similar to that of muscle and fat. Here, we define sensitivity as the frequency variation for a small permittivity change of the material surrounding the sensor. The present sensor cannot detect glucose directly; its frequency changes according to small permittivity changes, and in the modeling of the sensor in the CST simulator we accordingly considered permittivity variations of the material surrounding the sensor. We did not simulate the effect of glucose level on the permittivity of the bio-tissue environment, as this is not supported by the simulator; however, we performed an in vitro experiment to check the effect of glucose level variations on the permittivity of aqueous glucose solutions (supplementary file, Method 1, Sect. 2). The proposed implantable sensor is illustrated in Fig. 1b (see also supplementary material, Method 1, Sect. 1). The size of the sensor is compared against that of a coin in Fig. 1c; the sensor diameter is only 4 mm, compact enough for subcutaneous implantation. Figure 1d shows the sensor frequency variation with the BGL trend. The sensor was subcutaneously inserted into the animal body, and we performed both an intravenous glucose tolerance test (IVGTT) and an oral glucose tolerance test (OGTT) with the sensor implanted in the animal (farm pig and beagle). In addition, we developed a standalone sensor interface circuit board and a mobile application that can continuously measure the sensor resonance frequency for the long-term evaluation of the sensor, using the complete sensor system (EM sensor, interface circuit, and Android mobile application).
In this manuscript, the short-term test refers to the one-time IVGTT performed on both the swine and the beagle, while the long-term test refers to a 52-h (17 h monitoring, 35 h measuring) OGTT experiment performed on a beagle. The long-term test on the beagle was intended to verify the continuous operation of the sensor in tracking BGL while the animal was free to move around inside the experimental facility. Data processing algorithms, such as linear regression and a Kalman filter, were used to remove fluctuations and high-frequency noise in the sensor reading. The mean absolute relative difference (MARD) and regression correlation coefficients were calculated to validate the sensor's ability to track real-time BGL.
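A minimal sketch of the kind of one-dimensional Kalman smoothing mentioned above is given below, assuming a simple random-walk state model; the process- and measurement-noise values (q, r) are hypothetical tuning parameters, as the actual filter settings are not reported here.

import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2):
    """Smooth a noisy scalar series z with a random-walk state model.

    q: process-noise variance (how fast the true frequency may drift).
    r: measurement-noise variance (how noisy each frequency sample is).
    """
    x, p = float(z[0]), 1.0          # initial state estimate and its variance
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p += q                       # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain
        x += k * (zi - x)            # update toward the new measurement
        p *= 1.0 - k
        out[i] = x
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 3, 200)
raw_ghz = 2.39 + 0.003 * np.sin(t) + 0.001 * rng.standard_normal(t.size)
smooth_ghz = kalman_1d(raw_ghz)      # smoothed trace, ready for frequency-to-BGL mapping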
Results
In vivo short-term IVGTT on a swine. In the first in vivo experiment, we evaluated the sensor response to real-time blood glucose variations in a middle-sized farm pig. The sensor was subcutaneously implanted by a veterinary surgeon, and an IVGTT was conducted on the swine. At the implant site, we observed body fluids, ISF, and a small trace of blood, which were cleaned carefully before sensor implantation. After inserting the sensor and suturing the skin at the implant site, we continuously observed the sensor resonance frequency for approximately three hours before injecting glucose intravenously. We also injected phosphate-buffered saline (PBS) without any glucose and continuously recorded any change in the sensor resonance frequency; however, we did not observe any noticeable change. The total resonance frequency variation was significantly less than the frequency variation observed upon glucose injection.
After sensor implantation, the bio-tissue surrounding the sensor changes slowly, and its adhesion to the sensor improves over time. These slow tissue changes around the sensor cause the sensor frequency to drift; however, after some time the process saturates and the sensor reference frequency settles. It should be noted that during the short-term IVGTT experiment, the animal was under controlled anesthesia; hence, sudden body movements, which can disturb the sensor reference frequency, did not occur. Such disturbances were observed during the OGTT experiment on the freely moving beagle. Another factor is that animals have loose skin that cannot hold the sensor firmly in one location, which also disturbs the sensor reference frequency. Figure 2a shows the initial sensor resonance frequency behavior after surgery and insertion: the frequency drifted continuously downward for about 1 h and then slowly settled. We performed the IVGTT by injecting glucose solution into the back leg vein. The sensor resonance frequency was recorded continuously with a high-sampling-point setting on the vector network analyzer (VNA). We used two different methods, a commercial blood glucose meter (BGM) and the standard Yellow Springs Instrument (YSI 2500), to check the BGL at 5-min intervals during the IVGTT experiment. We observed an error margin within 10% between the two measured values, with slightly higher readings on the BGM at higher BGL ranges. Blood was drawn from the leg vein using a syringe and separated into plasma and red blood cells using a centrifuge, and all glucose concentrations were measured from the blood plasma. The BGM (CareSens-N, i-SENS, Korea) readings were recorded as the 'measured BG level' and the readings from the YSI 2500 as the 'reference BG level'. The BG level reached a maximum of 376 mg/dL in a short period of time and thereafter continued to decrease owing to the natural insulin action of the body. The sensor resonance followed a similar trend, albeit with a time delay, because the diffusion of glucose from the blood vessels to the ISF takes approximately 5-30 min depending on the metabolic rate of the living body 35 . A time delay of approximately 12 min was recorded between the peak BG level and the peak sensor resonance. The sensor frequency shifted by approximately 33 MHz, that is, from 2.362 GHz (at the lowest BG level, 61 mg/dL) to 2.395 GHz (at the highest BG level, 376 mg/dL) (Fig. 2b, upper graph). The sensitivity is approximately 104 kHz per mg/dL (0.104 MHz/(mg/dL)) based on the peak BG variation. The body temperature of the swine was in the range of 36.4 ± 0.3 °C, similar to that of a human (Fig. 2b, lower graph).
The proposed EM sensor resonance data (S-parameters S11 and S22) were continuously observed and recorded. Figure 2c,d show the resonance frequency points according to the BG level. The minima were tracked, and the trends are shown in Fig. 2b. The sensor resonance was observed around 2.4 GHz. The change in the resonance frequency was caused by the change in peripheral permittivity owing to the change in glucose level: as the glucose level increased, the resonant frequency shifted to a higher frequency band. The characteristics of S11 and S22 were slightly different but showed a similar trend.
In vivo short-term IVGTT on a beagle. In the second experiment, a short-term IVGTT was conducted on a healthy beagle to verify the sensor frequency sensitivity to glucose level changes. Here, the bio-environment in which the sensor was embedded differed from the swine case. The beagle had a very thin subcutaneous layer compared with the swine; therefore, it was difficult to implant the sensor in the subcutaneous fat, and it was instead implanted between the muscle and the skin layer. In general, ISF is present and covers all tissue. Owing to the higher permittivity of muscle tissue compared with subcutaneous fat, the sensor after implantation in the beagle had a lower reference resonance frequency than in the swine case. Muscle has a higher dielectric permittivity (εr ≈ 40) than subcutaneous fat (εr ≈ 8). As a result, even with the same sensor, the reference resonance frequency shifted down by about 300 MHz in the beagle compared with the swine. In addition, the temperature effect on sensor behavior should not be neglected, since dielectric permittivity is also affected by temperature variations. During the experiments, the core body temperature of the animal subject was measured using a rectal temperature sensor, and the experiments were conducted inside a temperature-controlled surgery room. During the first IVGTT experiment on the swine, the body temperature dropped after surgery, anesthesia, and glucose injection; however, the sensor frequency was unaffected and followed the BGL trend. In the second IVGTT experiment on the beagle, heating pads were used to keep the animal warm and maintain its body temperature during the experiment. The total variation in measured body temperature was 0.6 °C for the swine and 0.1 °C for the beagle.
After inserting the sensor into the beagle, the resonance frequency was measured continuously for approximately 2 h until it reached a stable condition without variation. Thereafter, PBS without glucose was administered to the beagle, and the sensor frequency was continuously observed for 1 h; this mimicked a sham test and verified the sensor behavior. Then, the IVGTT was performed, with continuous sensor resonance measurement and BG level measurement at regular 5-min intervals. As the BG changed, the sensor frequency also changed, and the sensor frequency trend in the beagle varied in a similar way to the swine case. As soon as the glucose solution was injected, the BG level increased rapidly compared with the sensor frequency (Fig. 3a, upper). The delay between the BG level and the sensor frequency was approximately 10 min, owing to the glucose diffusion process from the blood to the ISF.
When the BG level changed from 450 to 129 mg/dL (ΔGl = 321 mg/dL), the resonant frequency shifted from 2.0836 GHz to 2.0732 GHz (Δf = 10.34 MHz). As a result, the sensitivity with respect to BG was approximately 32 kHz per mg/dL of change in BG level (Δf/ΔGl = 10.34 MHz/321 mg/dL). The variations in rectal temperature were recorded using a precision thermistor sensor inserted into the body; the temperature remained within 35.4-35.5 °C during the experiment (Fig. 3a, lower). With an increase in the BG level, the resonance frequency also increased, and when the BG level decreased, the resonance frequency shifted to a lower frequency (Fig. 3b,c).
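The two reported sensitivities can be reproduced directly from the quoted peak-to-trough values; the short sketch below does this arithmetic (values from the text; the helper function name is ours).

def sensitivity_khz_per_mgdl(f_a_ghz, f_b_ghz, bgl_hi, bgl_lo):
    """|Δf| / ΔBGL in kHz per mg/dL, from peak-to-trough values."""
    delta_f_khz = abs(f_a_ghz - f_b_ghz) * 1e6   # 1 GHz = 1e6 kHz
    return delta_f_khz / (bgl_hi - bgl_lo)

print(sensitivity_khz_per_mgdl(2.395, 2.362, 376, 61))    # swine: ~104.8 kHz/(mg/dL)
print(sensitivity_khz_per_mgdl(2.0836, 2.0732, 450, 129)) # beagle: ~32.4 kHz/(mg/dL)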
In vivo long-term OGTT on a beagle. After the initial in vivo IVGTT experiments on the swine and the beagle, the sensor resonance frequency and operational bandwidth were analyzed for the development of the sensor interface circuit board. During the IVGTTs, the animals were anesthetized, and a network analyzer (E5071C, Keysight) attached to long RF cables was used to measure the sensor frequency. However, this setup was not suitable for long-term tests, in which the animal was not under anesthesia; moreover, it would have been dangerous to keep the animal anesthetized for a long time. Therefore, the sensor was implanted and connected to the interface board for continuous monitoring. The portable glucose monitoring system consisted of the sensor and the interface board (Fig. 4a). The sensor was inserted under the skin, and the interface board was taped outside the body. To supply power to the sensor and interface board, a high-capacity battery was attached to a supporting jacket. The jacket held the battery firmly so that the sensor connections were not affected by the movements of the beagle. The interface board can continuously measure the sensor resonance frequency without interruption and send the data to the Android mobile application through a Bluetooth link (supplementary materials, Method 2, Sect. 4). After implanting the sensor, the beagle was kept inside a cage and allowed to move freely outside the cage. The animal was kept overnight in normal conditions and given food and water until the next morning; during this period, no glucose administration or BG level monitoring was performed. The sensor frequency and BG level were measured the next morning during the OGTT (Fig. 4b). The measurement plot is divided into four zones (Fig. 4c). Zone (i) shows the measurement results when the beagle was fed, and Zone (ii) shows the oral glucose feeding and the corresponding sensor frequency variation; a small change in BG and a similar change in the sensor frequency can be seen. Instead of being injected intravenously, the glucose solution was administered orally. Oral glucose administration during the OGTT did not increase the BG level in the beagle as quickly as the IVGTT, because glucose spreads to the blood through the digestive tract and then from the blood to the ISF, which is a slower process. After reaching the highest point in Zone (iii), the sensor frequency decreased following the BG level in Zone (iv). Strong fluctuations in the sensor reference frequency were observed, attributable to the body movements of the freely moving animal.
Frequency to BGL regression and analysis. During all in vivo IVGTT and OGTT experiments, the BGL was observed to change after intravenous or oral glucose administration: it rose, reached a peak level, and then decreased naturally. The sensor frequency in each case followed the BGL trend, and the proposed sensor can track the BGL trend from a reference point. A sensor frequency-to-BGL mapping can be derived to calculate the BGL from a given sensor frequency. The linear regression and correlation between the BGL and sensor frequency are shown in Fig. 5a; the black line is the linear fit for the swine and the red line the linear fit for the beagle. The proposed sensor shows good linearity with BGL in both cases. Even though the two linear regression models are not the same for the two animals, the proposed sensor behaved in a similar manner; the y-intercept of each regression is the reference frequency of the sensor in the individual case, and there is an offset between the reference frequencies in the two cases. Figure 5b shows the difference in glucose level between the reference measurement and the value calculated by each linear regression, arranged from lower to higher BGL. The calibration coefficients of the linear models for the beagle and the pig are similar but not identical. To assess the generalized performance of the sensor, we inverse-calculated the sensor frequency for the swine using the beagle calibration equation with the beagle BGL data, and vice versa; since the sensor reference frequencies for the swine and the beagle differ, a frequency offset was added. From Fig. 5c, it can be seen that the original calibration line and the inverse-calculated frequency follow a similar trend, although a larger difference is observed at higher BGL values. This is mainly due to the limited availability of BGL data points for deriving the original calibration line and can be improved with more experiments and more data collection. The Clarke error grid analysis (EGA) is shown in Fig. 5d; the data points are mostly distributed in Zone A and Zone B. The accuracy in each BG level range was ascertained using the MARD value. Accuracy could also be checked through the EGA; however, it was difficult to determine in which ranges the accuracy was high or low because of the limited reference blood glucose data. Therefore, the MARD values were classified and analyzed according to range (Fig. 5e and supplementary material, Method 3, Sect. 7). The mean absolute deviation (MAD) results (Fig. 5f) were likewise separated into five ranges according to BG level.
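A minimal sketch of the frequency-to-BGL calibration and accuracy metrics described above follows; the (frequency, BGL) pairs are illustrative placeholders, not the measured data, and a least-squares fit stands in for the authors' regression.

import numpy as np

# Illustrative (frequency, BGL) pairs only -- not the measured data.
f_mhz   = np.array([2362.0, 2370.0, 2381.0, 2390.0, 2395.0])
bgl_ref = np.array([61.0, 120.0, 210.0, 300.0, 376.0])

slope, intercept = np.polyfit(f_mhz, bgl_ref, 1)   # least-squares linear calibration
bgl_pred = slope * f_mhz + intercept

mard = 100.0 * np.mean(np.abs(bgl_pred - bgl_ref) / bgl_ref)  # MARD in %
mad  = np.mean(np.abs(bgl_pred - bgl_ref))                    # MAD in mg/dL
print(f"BGL = {slope:.2f} * f_MHz + {intercept:.1f}")
print(f"MARD = {mard:.1f}%  MAD = {mad:.1f} mg/dL")

The intercept of the fitted line plays the role of the per-animal reference frequency discussed above, which is why a frequency offset is needed when a calibration from one animal is applied to another.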
Conclusion
The present work is an effort toward the realization of an implantable electromagnetic sensor that can be an alternative to enzyme-based or optical glucose sensors. The sensor does not detect or track glucose molecules directly in blood or ISF; rather, its resonance frequency changes with changes in the dielectric permittivity of the ISF, which in turn changes with BGL. In this way, it can track the blood glucose trend from a reference or calibration point. Once-a-day calibration with SBGM can be used to measure blood glucose and set the reference frequency point of the sensor. A linear regression model between sensor frequency and BGL was also developed for frequency-to-BGL mapping. Initial proof-of-concept in vivo experiments were performed with the sensor implanted in a swine and a beagle in a controlled environment. A good correlation between sensor frequency and BGL was seen during the in vivo IVGTT and OGTT experiments on the swine and the beagle. In addition, the developed sensor interface module is capable of continuous measurement, and the real-time sensor data can be visualized using the Android mobile application. Our proposed sensor and system are still at an early stage of development. However, the proof-of-concept in vivo results show a promising correlation between BGL and the sensor frequency response, and the sensor demonstrates the ability to track the BGL trend. For actual sensor implantation, biocompatible packaging and foreign body reactions (FBR) must be considered for long-term applications. In addition, an improved sensor interface system is under development.
Experimental methods
The study and experimental procedure were carried out in accordance with the ARRIVE guidelines. All experiments were performed according to the relevant guidelines and recommendations. Experiments were performed at the Animal Testing Center of the KBIO Health Medical Device Development Center (Osong Medical Innovation Foundation, Osong, Chungbuk, Korea) following approval by the Institutional Animal Care and Use Committee (KBIO-IACUC-2020-172). We performed animal experiments for the in vivo evaluation of our sensor on a swine (n = 1, Cornex, Korea) and a beagle (n = 1, Orient Bio, Korea). The experiments were conducted on healthy animals after an acclimation period of 1 week. For the short-term test, anesthesia was administered by the veterinarian: an intramuscular injection of tiletamine-zolazepam 5 mg/kg (Zoletil®, Virbac, South Korea) and xylazine 2 mg/kg (Rompun®, Bayer, South Korea) was performed, followed by endotracheal intubation. Anesthesia was maintained with 1-1.5% isoflurane (Terrell™, Piramal Critical Care, USA) using an anesthesia machine (Fabius GS Premium, Dräger, Germany) during the experiment.
Biosensor based on an EM resonator.
Depending on their far- and near-field characteristics, EM resonators are widely used as communication antennas and as sensors, respectively. A stronger far field leads to a radiative nature with excellent transmission and reception in wireless communication systems, but such resonators are not suitable for sensing applications. Near-field EM properties, by contrast, are highly suitable for sensing and detection applications ranging from EM sensors to actuators. The proposed sensor utilizes a high-quality-factor (the ratio of stored energy to radiated energy) resonance with a strong oscillating near field. Through this inherent interaction mechanism, EM-based sensors can be designed to be significantly sensitive in detecting dielectric permittivity changes. Suitably designing an EM resonator with tailored near-field energy confinement in a thin structure (compared with the guided wavelength) is the key behind these sensors, as it minimizes the radiative energy from the sensor.
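For reference, the quality factor invoked in the parenthesis above follows the standard textbook definition (this is the conventional form, not a formula stated by the authors):

$$Q = 2\pi\,\frac{\text{energy stored}}{\text{energy dissipated per cycle}} \approx \frac{f_0}{\Delta f_{3\,\mathrm{dB}}},$$

where $f_0$ is the resonance frequency and $\Delta f_{3\,\mathrm{dB}}$ is the half-power bandwidth; a high Q implies strong near-field energy storage and weak radiation.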
Metamaterials have been the most popular choice over the past few decades for designing electromagnetic resonators and structures for various applications. The planar (2D) versions of these materials are called metasurfaces (MTS). MTS are periodic in nature, formed by repeating a structure called a unit cell, and the EM properties of the MTS are controlled by suitably designing the unit cell. The proposed sensor retains these advantages and can be realized using a truncated MTS. The regular periodic nature of metamaterials limits their benefits in realizing compact sensors; however, in the present case, a special two-port coupled excitation was designed and tuned to achieve the required resonance in a compact, truncated structure.
Short-term IVGTT for in-vivo studies of a swine. The first in vivo test was conducted on a swine, which has BG variation similar to that of humans; it was therefore reasonable to implant our sensor and perform an IVGTT to evaluate the sensor behavior. The swine used was a farm swine (Cornex, Republic of Korea). The four-month-old female swine weighed approximately 40 kg and had a body temperature of 37.2 °C. The swine was under fasting conditions for 24 h before the experiment. The sensor was inserted into the subcutaneous fat of the back flank. The subcutaneous layer of swine is considerably thick compared with other animals, and the muscle and subcutaneous fat layers are easily distinguished, making it straightforward to insert the sensor. Bleeding was observed when the skin was incised at the sensor implantation site, so a heated knife was used to incise the skin and avoid local bleeding. Small blood traces were carefully removed and wiped away, as blood interferes with the sensor characteristics. A heating pad (Blanketrol-II, Gentherm, USA) was used to maintain the subject's body temperature during the insertion procedure and experiments. After the sensor was inserted into the subject's body, an IVGTT was conducted [36]. A 20% glucose solution was injected through the ear vein. A commercial blood glucose meter and a YSI2500 were used to measure BGL from blood sampled from a leg vein at regular intervals. The sensor frequency and the measured BGL were used for frequency-to-BGL mapping.
Short- and long-term GTTs for in-vivo studies of a beagle.
In biomedical research, beagles are commonly used for proof-of-concept evaluation and device performance measurements during the development of various biomedical sensors. The beagle was therefore a suitable candidate for both the short-term and long-term measurements of the proposed sensor [37]. The beagle was a 22-month-old female weighing 12.5 kg. In the short-term experiment, the sensor was inserted under the skin and connected to a network analyzer for measurements, and the change in the sensor resonance frequency and the corresponding BG level were recorded. For long-term measurements, we used a customized compact measurement circuit board that detects the sensor frequency and transmits the data to a mobile phone over a Bluetooth link; an Android app running on the phone displays the information in a graphical view. The IVGTT was conducted by injecting a 20% glucose solution into a leg vein, and blood was collected from the opposite leg. For long-term measurements, the subject cannot be kept under anesthesia for a long time; hence, the sensor interface board was developed. The long-term test lasted 52 h in total (17 h of monitoring and 35 h of measuring). After the sensor implantation surgery, the animal needed recovery time for its body condition to normalize, such as restoring normal temperature and cessation of bleeding. The sensor interface board is compact and can be programmed to track the sensor's resonance point. Blood was collected at intervals to confirm the glucose trend. In the long-term experiment, a 20% glucose solution was used in the OGTT.
Data processing. The data were recorded continuously during the short-term IVGTT experiments using a vector network analyzer and during the long-term sensor measurements using the in-house-developed sensor interface circuit board. We observed more noise in the measured frequency data during the long-term experiment. First, sudden changes in the frequency were motion-induced spikes caused by the beagle's movement. Another source of noise was introduced by the measurement circuit itself, owing to the resolution of the analog-to-digital conversion (ADC). Since a network analyzer was used to measure the sensor resonance frequency during the short-term experiment, those data were much cleaner than the long-term measurements. We used a moving-average filter to remove sudden large changes and fluctuations in the measured frequency data. The memory requirement changes with the window size: a larger window requires more memory to store previous time-series data but gives smoother filtering than a smaller window. A larger window may also remove small but valid trends in the data; it smooths the frequency trend but introduces a time lag that grows with the filtering window size (supplementary materials Fig. S5c). An alternative is the Kalman filter, which requires only the previous time step's data to calculate/predict the filtered value. The long-term test exhibited sudden, high-rate changes in frequency, so we applied Kalman filtering to discard those spikes. The noise in the measured data also arose from various factors, e.g., motion-induced fluctuations (due to the movement of the beagle), body fluid accumulation around the sensor, and changes in the tissue contact surrounding the sensor over time. Owing to these factors, the resonance frequency was not as clean as when the sensor was suspended in air.
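The two filtering approaches discussed above can be sketched as follows in Python. The window size and the noise variances are illustrative tuning assumptions; the authors' actual filter parameters are not stated.

```python
import numpy as np

def moving_average(z, window=10):
    """Moving-average filter: larger windows smooth more but add lag
    and need more memory for the stored samples."""
    kernel = np.ones(window) / window
    return np.convolve(z, kernel, mode="valid")

def kalman_1d(z, q=1e-4, r=1e-1):
    """Minimal 1-D Kalman filter with a random-walk state model;
    it needs only the previous step's estimate, not a sample buffer.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    z = np.asarray(z, dtype=float)
    x, p = z[0], 1.0                  # initial state estimate and its variance
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                     # predict: uncertainty grows over time
        k = p / (p + r)               # Kalman gain
        x = x + k * (zi - x)          # update toward the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out
```

A small gain k (large r relative to p) makes the filter reject sudden motion-induced spikes while still tracking the slow glucose-driven drift.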
Considering the time required for each full-range frequency sweep and for wireless transmission of the measured sensor frequency data, 12 samples per minute were possible and were recorded. A higher sampling rate over a narrow frequency span around the sensor resonance point could reduce noise significantly; however, the memory requirement and processing time were limiting factors. The Kalman filter was applied to find a clear trend in the noisy measurement environment [38], suppressing sudden fluctuations better than a regular averaging filter. Linear regression modeling was also performed on the filtered data to calculate the correlation between sensor frequency and BG level [39] (supplementary material Method 3, Sect. 6). The glucose level corresponding to a given sensor frequency can be calculated (predicted) using the linear regression equation. The MAD (mean absolute deviation) and MARD (mean absolute relative difference) were calculated by comparing the predicted BG level with the reference BG level; MAD and MARD are defined in Eqs. (1) and (2).
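MAD and MARD are conventionally defined as follows; these standard forms are stated here as assumptions consistent with the variable description below:

$$\mathrm{MAD}=\frac{1}{n}\sum_{i=1}^{n}\left|BG_{\mathrm{predicted}(i)}-BG_{\mathrm{Ref}(i)}\right| \qquad (1)$$

$$\mathrm{MARD}=\frac{100\%}{n}\sum_{i=1}^{n}\frac{\left|BG_{\mathrm{predicted}(i)}-BG_{\mathrm{Ref}(i)}\right|}{BG_{\mathrm{Ref}(i)}} \qquad (2)$$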
BG_predicted(i) is the glucose concentration value converted from frequency to glucose level; that value was derived through the linear regression. BG_Ref(i) was measured using the YSI2500, and n represents the number of measurements. MAD indicates the absolute error between BG_predicted(i) and BG_Ref(i), while MARD indicates the relative error between the predicted and reference glucose levels. Generally, a BG meter has an accuracy of 15% MARD [40] (supplementary material Method 3, Sect. 7). From the EGA, it can be seen that the sensor data are distributed uniformly from the hypoglycemic to the hyperglycemic range [41], which indicates that the sensor is capable of tracking BGL variations with high confidence. | 2022-10-18T13:45:39.733Z | 2022-10-17T00:00:00.000 | {
"year": 2022,
"sha1": "8a0675f97a6ab2bfff33311a7d9ef4d2afb9dd8b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8a0675f97a6ab2bfff33311a7d9ef4d2afb9dd8b",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125756585 | pes2o/s2orc | v3-fos-license | The Role of Jordanian Public Universities in Promoting International Educational Principles from the Perspective of their Faculty Members
The present study aimed at exploring the role of Jordanian public universities in promoting international educational principles from the perspective of their faculty members. In order to meet the study's goals, a descriptive approach was adopted and a questionnaire was developed. The questionnaire consists of twenty-five (25) items. The validity and reliability of the questionnaire were measured. The study's sample consists of three hundred (300) faculty members. They were selected from three Jordanian public universities (i.e. the University of Jordan, Yarmouk University and Mu'tah University). The researchers concluded the following results: 1) The Jordanian public universities play a moderate role in promoting international educational principles from the perspective of the faculty members. 2) There is no statistically significant difference between the respondents' attitudes towards the role of Jordanian public universities in promoting international educational principles which can be attributed to gender. However, there are statistically significant differences between the respondents' attitudes in this regard which can be attributed to their academic rank. The latter differences are in favor of the associate professors. There are also statistically significant differences between the respondents' attitudes in this regard which can be attributed to type of faculty. The latter differences are in favor of those who work in scientific faculties. The researchers of the present study recommend exerting more efforts by the administrations of Jordanian public universities to promote international educational principles. They also recommend providing more attention to international education in Jordanian universities. That should be done through holding seminars for faculty members to attend and hold discussions about international educational principles and concepts. Such seminars shall enrich the knowledge that faculty members have in this regard. The researchers also recommend promoting awareness among faculty members about the significance of addressing international education-related issues in their lectures.
Several changes have been witnessed, technological developments and knowledge have been increasing, and borders have been broken down between countries. Thus, the world has become a small village. In the light of such changes, it has become necessary to promote awareness among nations about the significance of building a common future that unites them together. Building such a future has become necessary in the light of the contemporary challenges. There has been a growing exchange of ideas, services, products and knowledge. Cultural and intellectual diversity have been increasing. Openness to others and interaction with them have been increasing too. People's ways of thinking have been changing. Knowledge has been exchanged due to the spread of technology (Al Eid & AlZboon, 2018). Borders have been broken down and news can be known on the spot. People influence others and others influence them. In the light of such changes, it has become necessary to promote new international educational principles and master foreign languages. It has also become necessary to have greater knowledge about various cultures (Yas, 2002).
University education is a source of power for any community. For instance, education provides people with knowledge, skills, values, orientations and capabilities. That shall enable people to address current and future challenges. It should be noted that development and advancement can be achieved in any community if it has good intellectual outcomes. Therefore, countries should improve the quality of their university education. In addition, universities should provide international education through their educational systems. They should also integrate international educational principles into their educational programs and curricula. That should be done because human capital is the real wealth in the contemporary age. Such wealth can increase through empowering students, promoting knowledge among them about the contemporary changes, and enabling them to participate in these changes (Azab, 2011).
International education involves several principles. For instance, it involves promotion of cooperation and integration between nations. That can be done through promoting awareness about the fact that people share the same origin. Therefore, one must show respect to others and avoid showing any act of discrimination. International educational principles also involve promotion of equality and justice between humans in terms of rights, duties, jobs, treatment and opportunities. Such a promotion is considered a global and regional demand. International educational principles also involve promotion of respect for human rights. Such rights include: one's right to receive education, own things, live, and benefit from things. Such rights include one's right to usufruct things and express opinions. Such rights also include freedom of religion and respect for minorities.
In other words, international educational principles involve promoting respect for human rights and for humans regardless of one's race, beliefs and thoughts. Such principles also involve promotion of respect for the other's religion and acceptance of sectarian diversity. Such principles also involve promotion of peaceful coexistence and toleration. That can be done through eliminating latent grudges within people (Jaydoory, 2012).
International education seeks to promote many principles and values. For instance, international education aims to promote toleration, peace and peaceful coexistence. It also aims to promote a sense of creativity and acceptance of multiculturalism. It also aims to promote valuing of heritage. In addition, it seeks to provide one with adequate skills and capabilities that enable him to use modern technology efficiently. The latter education also seeks to provide one with adequate skills and capabilities that enable him to produce and distribute information in a manner that shall benefit humanity (Abed Al-Hay, 2013).
There is a variety of international education fields. For instance, there is a field of international education that is concerned with peace and security. The latter education shall promote toleration and rejection of violence. Another field is the international education that seeks to disarm people. The latter field seeks to promote feelings of rejection of war among those who consider it a means of settling disputes. Another field is the international education that seeks to promote development. The latter field seeks to establish relationships between developed and developing countries and a fair international system. Another field is the international education that seeks to promote citizenship values. The latter field seeks to promote feelings of loyalty and belonging to the homeland. It also seeks to promote awareness among people about their duties and responsibilities towards their society. Another field is international environmental education. The latter field seeks to promote knowledge about nature and the global environment as a common cultural heritage. It also seeks to promote knowledge about how to protect the environment. Another field is the international education that seeks to promote knowledge about cultural, social, economic and political human rights. It seeks to promote respect for such rights in accordance with the Universal Declaration of Human Rights and international conventions. Another field is international multicultural education. It seeks to promote understanding of foreign cultures and of other nations' customs, beliefs, values and orientations (Lasheen and Abed Al-Jawad, 2012).
International education aims to promote educational knowledge and develop positive educational orientations. It aims to promote international respect for the cultures, customs and traditions of every nation and civilization. In addition, it aims to promote respect for the human rights and freedoms stipulated in international conventions, declarations and covenants. International education plays a significant role. For instance, it enables society, regardless of its social, economic and political systems, to become free of violence, decimation and armed dispute. It also serves as a tool that promotes respect for all societies and humans regardless of one's color, gender, beliefs, customs or traditions (Al-Milad, 2013).
International education has been given different names in international conferences and conventions. For instance, it has been called: academic movement, international cooperation, multicultural education, peace education, transcultural education, internationalization of education, globalization of education, and global citizenship education. The latter field of education emerged at the end of the 20th century and the beginning of the 21st century. Other names for international education include education for international understanding, a term used by UNESCO, and global education, a term widely used in the United States of America (Al-Kaltham, 2016).
International education is associated with many values, issues, problems and nations' histories. Many organizations take up international educational causes. Such causes include: international peace, human rights, democracy, toleration, global citizenship, international understanding, cross-cultural education, and environmental causes. It should be noted that international education aims to promote international cooperation, peace, and understanding. It aims to achieve that through encouraging people to launch projects that can achieve such goals (Al-Shal, 2012).
International education has been facing many challenges. Such challenges include scientific advancement. These challenges have increased after witnessing an increase in the number of scientific branches, after connecting scientific fields with one another through modern technological channels, and after witnessing several economic transformations. Such transformations occurred due to the developmental programs that were carried out by developed countries in developing countries.
In addition, these challenges have increased after witnessing several political transformations. Such transformations have turned national problems into international ones. Therefore, it has become necessary to find solutions for international problems because they affect several countries. The challenges facing international education have also been increasing due to several social transformations. Such transformations require keeping up with social developments and acquiring values that are necessary in the modern civil society. These values can be acquired through educational systems (Al-Buhy, 2014).
Similar to other universities, the administrations of Jordanian public universities believe that it is highly important to promote toleration and acceptance of other cultures through education. The latter administrations also aim to promote cultural awareness among students and recognition among students of their cultural identity. They also aim to promote a sense of creativity and innovation among students in accordance with global values. They aim to achieve that through encouraging students to search, explore and investigate things.
The Jordanian public universities aim to develop students' skills and enable them to acquire knowledge. They also aim to enable students to employ their skills in many academic and practical fields. They also aim at promoting international cooperation and encouraging students to participate in solving problems. In addition, Jordanian public universities aim at enabling students to respond to global requirements in a manner that shall benefit their home country (Al-Tarawneh, 2010).
However, the dominant educational culture in Jordanian universities is still a conventional culture. For instance, the latter universities do not employ means for promoting international educational principles; in such universities, these principles are promoted through a limited number of courses. Hence, these universities must benefit from other universities. They must also create an international educational culture through which international educational principles can be promoted. In the light of the aforementioned, the present study aimed to explore the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members.
Statement of the Problem
The present study aimed to provide an answer to the following question: (What is the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members?)
The Study's Objectives and Questions
The present study aimed at identifying the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members. In order to achieve the study's goals, the researchers aimed to provide answers to the following questions: Q.1)-What is the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members? Q.2)-Is there any statistically significant difference between respondents' attitudes which can be attributed to their gender, type of faculty, or academic rank?
The Study's Significance
The researchers of the present study believe that the present study shall be useful for: 1)-The Jordanian Ministry of Higher Education: The present study shall encourage the latter ministry to promote awareness among universities' administrations about the significance of promoting and adopting international educational principles and complying with them.
2)-Universities: The present study shall provide universities with information about the significance of having courses that promote toleration and rejection of violence. It shall encourage universities to increase extracurricular activities that encourage students to adopt international educational principles.
3)-Researchers: The present study shall provide researchers with a theoretical framework about the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members.
Definition of Terms
The researchers provided definitions for the following terms: 1)-Jordanian universities: They refer to educational institutions that are affiliated with the Jordanian Ministry of Higher Education. Jordan includes eleven (11) public universities. Such universities include: the University of Jordan, Yarmouk University and Mu'tah University.
2)-International education: It is an educational field that seeks to promote acceptance of people regardless of their national or regional ideologies. This education involves promoting several international values. It treats the human being as a global citizen. It aims to promote peace, cooperation, and mutual understanding between nations. It also aims to establish good relationships between nations which vary from one another in terms of social and political views and activities. It also aims to promote respect for human rights and freedoms. It also aims to promote knowledge about the rights recognized in the UN Charter and the Universal Declaration of Human Rights (Ismai'l, 2016).
International education (the operational definition): It refers to the education that Jordanian universities seek to provide in order to promote global values, such as peace, toleration, respect for minorities and human rights, cooperation with others, and respect for the culture of dialogue. Jordanian universities seek to provide such education in a way that is in agreement with the cultural identity of students.
3)-The faculty members: This expression refers to those who teach at Jordanian universities and hold a PhD degree. They vary in terms of academic rank. Such ranks include the following: lecturer, professor, associate professor, and assistant professor (Al-Hazaymeh, 2017).
The Study's Limits
These limits are represented in the following: 1)-Human limits: The sample of the present study is restricted to the faculty members who work in three Jordanian public universities; the University of Jordan, Yarmouk University and Mu'tah University.
2)-Temporal limits: The present study was conducted during the academic year of (2018 / 2019).
3)-Spatial limits: These limits refer to the University of Jordan, Yarmouk University and Mu'tah University.
Previous Studies
The researchers of the present study reviewed several relevant Arab and foreign studies. They are listed below and arranged in chronological order from the oldest to the newest:
Arab Studies
Jaydoory (2012) aimed to explore the role of faculty members in promoting global citizenship values among students. The latter study was conducted at Taibah University. A descriptive approach was adopted and a questionnaire was developed to meet the study's goals. The sample of the latter study involves all the faculty members who work at Taibah University. It was found that there is a statistically significant difference between the respondents' attitudes which can be attributed to their major; the latter difference is in favor of those who are specialized in educational sciences. It was also found that there is a statistically significant difference between the respondents' attitudes which can be attributed to their gender; the latter difference is in favor of males. However, no statistically significant difference between the respondents' attitudes was found which can be attributed to their academic rank.
Nasr (2013) aimed to shed light on human communities' characteristics and cultural diversity. She suggests that the nature of the educational system that a community adopts is the result of the challenges faced by that community. She suggests that education is the means for achieving development within any developing community. In addition, she suggests that providing international education shall add value to the educational system. She adds that international education shall participate in generating new educational visions.
Al-Kaltham (2016) aimed to explore the extent of listing global educational concepts in the social and national sciences curricula of the intermediate stage in Saudi schools. A descriptive approach was adopted and a questionnaire was used for meeting the study's goals. The study's population involves all the national and social sciences female teachers of the intermediate stage in Saudi schools in Al Majma'ah. It was found that the extent of availability of global educational concepts in the latter curricula is moderate. In addition, it was found that the extent of promoting knowledge about these concepts is moderate from the perspective of the national and social sciences female teachers.
Ismai'l (2016) aimed at activating the international educational dimensions among the students who obtained a scholarship at King Saud University. Quantitative and qualitative approaches were adopted. A questionnaire was used for meeting the study's goals. The sample consists of experts who work at King Saud University and students enrolled at the latter university. It was found that providing international education shall enable students to address global challenges. It was also found that much attention is provided to the promotion of several international educational values at King Saud University. These values are ranked respectively based on the extent of promoting them as follows: compliance with human rights, interdependence, toleration, peace, and international understanding.
Abu E'laiwa (2017) aimed to shed light on global citizenship. This issue has been receiving much attention globally, and it is highly connected to globalism. The latter researcher aimed to shed much attention on an initiative launched by UNESCO, called 'global citizenship education'. The latter researcher aimed to shed light on several issues related to global citizenship education, such as multiculturalism and accepting cultural diversity, and aimed to distinguish between global and conventional citizenship. She suggests that global citizenship education shall participate in changing one's way of thinking and ideology, in promoting international peace and equality, and in achieving sustainable development.
Shijkazu et al. (2017) aimed at promoting knowledge about the International Association of Teaching and Curriculum, which is headed by Shijkazu. It is a non-governmental association that is affiliated with UNESCO. The latter association perceives the world as a place that is full of conflicts and wars. It claims that wars and conflicts lead to a decline in society; the decline experienced by developing countries supports this point of view.
The latter association encourages its members to learn foreign languages in order to promote international educational principles. That is done for preventing conflicts and disputes and encouraging people to renounce all forms of discrimination and violence. The latter association advocates setting and promoting a common philosophy that involves peaceful coexistence and acceptance of the other. The latter philosophy involves peace, respect for humans, democracy, equality, and international cooperation and understanding. The latter association advocates promoting this philosophy among people of all age categories.
El-Jezawi (2017) aimed to shed light on the change that has occurred to the conventional meaning of citizenship, which is associated with granting several political and civil rights to a citizen. This change occurred due to several global changes. The latter researcher identified the meaning of several concepts related to global citizenship. She advocates the acceptance of multiculturalism and addressing problems from a global perspective. She suggests that Arab institutions must participate in promoting international educational principles. For instance, they should seek to promote international peace, equality, toleration and respect for the culture of dialogue with the other.
Foreign Studies
Acosta (2011) aimed to explore the role of international education in the community colleges located in California from the perspective of executive managers. He also aimed to explore the reality of international education in these colleges and the significance of providing it. In addition, he aimed to identify the impact of international education on the legislative policy in these colleges. Interviews were conducted to collect data. The sample consists of several executive managers who work at the community colleges located in California. It was found that international education is increasingly provided in these colleges, that it has a significant impact on the educational process, and that leadership plays a significant role in improving the quality of the provided international education. The latter researcher concluded that international education should be provided in the colleges of all states.
Jabbar (2012) aimed to shed light on international education at the University of Jordan. A field survey was conducted, and the latter researcher adopted qualitative and quantitative approaches. Interviews were conducted with students and a questionnaire was developed for collecting data. The sample consists of several Asian, American, and European students who are enrolled at the University of Jordan, as well as several faculty members who work at the University of Jordan. It was found that international education plays a significant role in promoting cultural awareness and knowledge about global affairs, and in achieving personal growth and career advancement.
Dwayne (2016) aimed at examining the way in which international education programs have been framed in British Columbia by media institutions and the government. The latter researcher used the content analysis method. For instance, he analyzed several governmental documents obtained from local media sources. Thus, the sample consists of several governmental documents, dealing with international education, that were obtained from local media sources. It was found that there is not adequate cooperation between the government and media institutions in terms of international educational programs.
Weidman (2016) aimed to establish a series of frameworks that participate in enhancing international education and enabling people to understand educational and social changes. The sample consists of several scholars and practitioners who are specialized in the field of international education, as well as finance agents and makers of educational policies.
The latter researcher provided a description of the main trends in international education. Then, he discussed the post-2015 international education-related trends issued by the United Nations agencies. Then, he identified the conceptual foundations of international education and the relationship between these foundations and the historical background of comparative education and international education. It was found that providing international education shall positively affect educational systems.
Yu-Chih (2017) aimed to conduct a comparison between comparative education and international education in terms of theory, practices and methods. In order to conduct this comparison, the latter researcher reviewed the book written by Phillips and Schweisfurth (2014). According to the latter authors, comparative studies usually have an international nature, and international studies are implicitly comparative. In addition, Yu-Chih (2017) discussed the similarities and differences between several cultures based on the views of a French philosopher.
Chrystal (2017) aimed to examine international higher education partnerships through a mutuality lens. He conducted this examination based on management strategies and organizational theories. A qualitative inquiry method was used. The sample consists of the partnerships of a US university. It was found that it is highly significant to establish such partnerships and to promote cultural exchange and stakeholder engagement. It was also found that it is highly significant to choose the right partners.
A Brief Summary for the Previous Studies and the Contribution of the Present Study
The aforementioned studies vary in terms of goals, variables, and spatial dimensions. For instance, Ismai'l (2016) aimed to develop a proposed model for promoting international educational principles among the students enrolled at King Saud University. He also aimed to identify the significance of international education. As for Al-Kaltham (2016), he aimed to explore the extent of listing global educational concepts in the social and national sciences curricula of the intermediate stage in Saudi schools.
It should be noted that the aforementioned studies vary in terms of spatial dimensions. For instance, some studies were conducted in the United States of America (USA), whereas others were conducted in Britain. The aforementioned studies also vary in terms of the adopted approach. For instance, some studies adopted a quantitative approach, whereas other studies adopted a qualitative approach. As for the present study, it adopted a descriptive approach.
It should be noted that the aforementioned studies vary in terms of instrument. For instance, some studies used a questionnaire, whereas other studies used the interview method. Other studies used the observation method. Some previous studies used the content analysis method.
The aforementioned studies were reviewed in order to enrich the theoretical framework of the present study. The present study is one of the first few studies that shed light on the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members.
The Study's Methodology
The present study adopted a descriptive approach.
The Study's Population
The population of the present study involves all the faculty members who work at three Jordanian public universities (i.e. the University of Jordan, Yarmouk University and Mu'tah University). These members were selected during the academic year of 2018/2019. According to the statistics issued by the Jordanian Ministry of Education for the latter academic year, the study's population consists of 10,129 faculty members.
The Study's Sample
A sample was selected using the stratified random sampling method. It consists of 300 faculty members. Table 1 presents information about the sample of the present study.
The Study's Instrument
The present study aimed to explore the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members. In order to achieve the study's goals, the researchers of the present study reviewed the relevant previous studies. Through reviewing the studies conducted by Jabbar (2012) and El-Jezawi (2017), the researchers developed a questionnaire that consists of thirty (30) statements. In addition, the three-point Likert scale was adopted. This scale consists of three categories: high, moderate, and low.
The Instrument's Validity
In order to check the instrument's validity, the instrument was passed to ten (10) experts who are specialized in relevant fields. That was done to obtain those experts' opinions and comments about the instrument's structure, language, and clarity. All the experts' opinions and comments were taken into consideration and several modifications were made in the light of them.
The Instrument's Reliability
In order to measure the instrument's reliability, the overall value of Cronbach's Alpha coefficient was calculated. The latter value is 0.893.
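For readers who wish to reproduce the reliability check, Cronbach's alpha is conventionally computed from a respondents-by-items score matrix as sketched below in Python; this is the standard formula, not the authors' code, and the input data are assumed.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array of shape (respondents, questionnaire items).
    Returns Cronbach's alpha internal-consistency coefficient."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

A value of 0.893, as reported above, comfortably exceeds the 0.7 threshold commonly taken to indicate acceptable reliability.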
The Study's Variables
The variables of the present study are represented in the following: 1)-The independent variable: The role of Jordanian public universities in promoting international educational principles.
2)-The dependent variable: The attitude of the faculty members who work at three Jordanian public universities. 3)-Mediating variables: Gender: Males and females.
Type of faculty: Humanitarian and scientific faculties. Academic rank: Professor, associate professor, and assistant professor.
Statistical Analysis Methods
In order to meet the study's goals, the SPSS program was used for analyzing the study's data. In addition, the relevant statistical methods were used. These methods include the following: 1)-Frequencies and percentages were calculated to describe the respondents' characteristics. 2)-Arithmetic means and standard deviations were calculated to identify the respondents' attitudes towards each item of the questionnaire. 3)-The overall value of Cronbach's Alpha coefficient was calculated to identify the instrument's reliability.
4)-The t-test for independent samples was conducted to identify the statistical significance of the differences between two independent groups. 5)-One-way ANOVA analysis was conducted; it is usually used for identifying the statistical significance of the differences between three or more independent groups.
6)-The least significant difference (LSD) test was conducted.
7)-The three-point Likert scale was adopted to identify respondents' attitudes. Therefore, means were classified based on the following criteria: High: 2.34 or more; Moderate: 1.67-2.33; Low: 1.66 or less (a sketch of steps 4-7 follows this list).
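The classification cutoffs follow the common equal-interval rule for a three-point scale: interval width = (3 - 1) / 3 ≈ 0.67, giving the bands 1.00-1.66 (low), 1.67-2.33 (moderate) and 2.34-3.00 (high). The Python sketch below illustrates steps 4-7 under assumed inputs; the dictionary-based grouping and the uncorrected pairwise t-tests used to approximate the Fisher LSD follow-up are our assumptions, not the authors' SPSS procedure.

```python
from itertools import combinations
from scipy import stats

def classify_mean(m):
    """Three-point Likert band for an item or scale mean (step 7)."""
    return "High" if m >= 2.34 else "Moderate" if m >= 1.67 else "Low"

def analyze(scores_by_gender, scores_by_rank, alpha=0.05):
    """scores_by_gender / scores_by_rank: dicts mapping a group name to a
    sequence of per-respondent scale scores (hypothetical input format)."""
    results = {"lsd": {}}
    # Step 4: independent-samples t-test for the two gender groups
    _, results["gender_p"] = stats.ttest_ind(scores_by_gender["male"],
                                             scores_by_gender["female"])
    # Step 5: one-way ANOVA across the three academic-rank groups
    _, results["rank_p"] = stats.f_oneway(*scores_by_rank.values())
    # Step 6: follow a significant ANOVA with uncorrected pairwise t-tests,
    # which approximates Fisher's least significant difference procedure
    if results["rank_p"] < alpha:
        for a, b in combinations(scores_by_rank, 2):
            _, p = stats.ttest_ind(scores_by_rank[a], scores_by_rank[b])
            results["lsd"][(a, b)] = p
    return results
```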
The Study's Results
Through this part, the researchers presented the results of the present study:
Results Related to the Study's First Question
The study's first question is listed below: Q.1)-What is the role of Jordanian public universities in promoting international educational principles from the perspective of the faculty members?
In order to answer the study's first question, arithmetic means and standard deviations were calculated.
In addition, ranks and levels were identified. These results are presented in Table 2 below. As for the overall mean, it is 2.17, which is considered moderate. That indicates that Jordanian public universities play a moderate role in promoting international educational principles from the perspective of the faculty members.
This result can be attributed to the fact that these universities exchange faculty members with universities located in multicultural countries. Such exchange is carried out in order to share educational expertise with the faculty members working in universities located in developed countries. It is also carried out for establishing good relationships with universities and cooperating with them, and for gaining knowledge about the modern educational means which can effectively promote international educational principles. These principles include acceptance of the other.
As for the result of statement 12, it is attributed to the fact that Jordanian public universities suffer from budget deficiency. Such a deficiency is attributed to the rise in operating expenses, to the poor attention provided to international education, and to the decrease of governmental financial support for university education. The result of the study's first question is consistent with the results concluded by Acosta (2011), Ismai'l (2016), and Al-Kaltham (2016).
In order to answer the study's second question, means were calculated and tests were conducted to identify the statistical significance of the differences between these means. These results are presented in Table 5 below. From Table 5, it can be noticed that there appear to be differences between the respondents' attitudes towards the role of Jordanian public universities in promoting international educational principles which can be attributed to their academic rank. In order to explore the statistical significance of these differences, one-way ANOVA analysis was conducted. The results of the latter analysis are presented in Table 6 below.
Table 6. The results of one-way ANOVA analysis for identifying the statistical significance of the differences between respondents' attitudes which can be attributed to their academic rank.
From Table 6, it can be noticed that the statistical significance value is 0.000, which is less than the statistical significance level of (α = 0.05). That means that there is a statistically significant difference, at the statistical significance level of (α = 0.05), between the respondents' attitudes towards the role of Jordanian public universities in promoting international educational principles which can be attributed to academic rank. In order to identify the rank that the difference is in favor of, the least significant difference (LSD) test was conducted. The results of the latter test are presented in Table 7 below. From Table 7, there is a statistically significant difference, at the statistical significance level of (α = 0.05), between the respondents' attitudes towards the role of Jordanian public universities in promoting international educational principles which can be attributed to academic rank. The latter difference is in favor of the associate professors. That indicates that associate professors have more positive attitudes than professors and assistant professors, and that they aim to promote the culture of dialogue, positive community values, and international educational principles, and to deliver global citizenship education. The latter result is in agreement with the result concluded by Jaydoory (2012).
Recommendations
In the light of the aforementioned results, the researchers of the present study recommend the following: 1)-Exerting more efforts by the administrations of Jordanian public universities to promote international educational principles.
2)-Providing more attention to international education in Jordanian universities. That should be done through holding seminars for faculty members to attend and hold discussions about international educational principles and concepts. Such seminars shall enrich the knowledge that faculty members have in this regard.
3)-Promoting awareness among faculty members about the significance of addressing international education-related issues in their lectures.
Table 1. Distribution of the sample in accordance with the type of faculty, academic rank and gender.
Table 2. Arithmetic means, standard deviations, ranks and levels that concern the first question.
The highest mean is considered a high mean; the corresponding statement states the following: (The university exchanges faculty members with universities located in multicultural countries). The other means are moderate. Statement 12 shows the lowest mean, which is 1.95. The latter statement states the following: (The university allocates money for the research conducted by its faculty members about international education).
Table 5. Results of the t-test for identifying the statistical significance of the differences between means in accordance with the respondents' academic rank.
(Column headings: Source of variance; Sum of squares; Degree of freedom (df); Mean square; F value; Sig.)
Table 7. The results of the least significant difference (LSD) test | 2019-04-22T13:13:16.311Z | 2018-12-15T00:00:00.000 | {
"year": 2018,
"sha1": "de663f5d99197103c16dbbfd77b3994ddd6d2d14",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/mas/article/download/0/0/37861/38289",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "de663f5d99197103c16dbbfd77b3994ddd6d2d14",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256477154 | pes2o/s2orc | v3-fos-license | Social capital in vulnerable urban settings: an analytical framework
Social capital has been identified as crucial to the fostering of resilience in rapidly expanding cities of the Global South. The purpose of this article is to better understand the complexities of urban social interaction and how such interaction can constitute 'capital' in achieving urban resilience. A concept analysis was conducted to establish what constitutes social capital, its relevance to vulnerable urban settings and how it can be measured. Social capital is considered to be constituted of three forms of interaction: bonds, bridges and linkages. The characteristics of these forms of interaction may vary according to the social, political, cultural and economic diversity to be found within vulnerable urban settings. A framework is outlined to explore the complex nature of social capital in urban settings. On the basis of an illustrative case study, indicators are established to demonstrate that measuring social capital requires culturally specific indicators that are sensitive to multiple levels of analysis, together with the development of a multidimensional framework. The framework outlined ought to be adapted to context and validated by future research.
Introduction
Urban areas can be rendered vulnerable due to multiple exacerbating factors such as rapid and unplanned development, environmental degradation, precarious livelihoods and resource pressures. These challenges are likely to grow given that the proportion of the world's population living in urban areas is projected to increase from the current 53% to a projected 70% by 2050 (IDMC & NRC 2014; UNISDR 2014). Over the past 40 years, the urban population in lower income and fragile countries has increased by 326% (UNISDR 2014). Approximately one billion people, or one third of the developing world's urban population, live in slums, mostly in highly vulnerable areas (UN-Habitat 2009; Lall and Deichmann 2012). Among displaced persons, more than half seek safety and opportunity in urban areas, often living alongside the urban poor and other migrants and exposed to the risk of abuse, exploitation and a range of hazards (Global CCCM Cluster 2014).
The 'typical' humanitarian crisis of the future is likely to be urban rather than rural, with all the attendant systemic complexity that cities present (Apraxine et al. 2012; Pantuliano et al. 2012; Pavanello 2012; Parker and Maynard 2015; World Humanitarian Summit Secretariat 2015). This sentiment is echoed in a raft of recent global policy documents that warn of the future urban threat, including the Sendai Framework for Disaster Risk Reduction. While the debate concerning the definition and practice of urban resilience 1 continues, a growing number of academics and policymakers are suggesting that the solutions to humanitarian needs should come from within affected communities, or what is being termed 'localised response' (Gingerich and Cohen 2015; Wall and Hedlund 2016). In this vein, the World Humanitarian Summit (WHS) consultations recommended that humanitarian aid organisations invest in building social capital and strengthening local structures (World Humanitarian Summit Secretariat 2015, p. 57). Similarly, Gibbons et al. (2017, p. 10) advise that recognising the resources and capacities of beneficiaries requires close engagement and the building of a sense of empathy with affected populations that recognises their agency as they bid to recover and return to normality. However, urban contexts, especially in places such as informal settlements and slum areas, are extremely diverse and complex, such that enhancing absorptive, adaptive or transformative capacity requires a deep understanding of the social context. Contemporary approaches and strategies do not fully consider the community institutions and the relationships that shape the quality and quantity of social interactions in urban settings, which ultimately impact the lives and livelihoods of vulnerable urban dwellers (Concern Worldwide and USAID 2014). Such community institutions and relationships emerge and develop under conditions characterised by the increased transience of populations, greater communication possibilities, increased marketisation of relations, and stark economic, social and ethnic heterogeneity (Knox-Clarke and Ramalingam 2012). While a link between social capital and improved individual, household and community welfare in resource-poor settings has been identified, the contribution that social capital makes to resilience is still unclear (Aldrich 2012a, b; Story 2013; Béné et al. 2015; Aldrich and Smith 2015; Pfefferbaum et al. 2015; Béné et al. 2016).
The complexity and diversity of urban social systems combined with the recognition of increasing urban vulnerability provide an ideal testbed for exploring this relationship. Nevertheless, it is important to be cognisant that the measurement of social capital remains problematic. This is especially the case in informal urban settlements where aggregated data can mask stark inequalities within populations which in turn can undermine social capital (Concern Worldwide and USAID 2014). The adaptation and validation of approaches to measuring social capital at various levels of analysis, in particular in the multifaceted complexity of informal urban settlements, is thereby urgently warranted.
Within the current context in which the humanitarian sector has been exhorted to recalibrate its approaches to take into account the urban dimension, Aldrich and Smith (2015, p. 6) identify the need to guide the aid community concerning the form of social capital (bonding, bridging or linking) that should be emphasised according to the particularities of a variety of humanitarian settings. However, the various studies of social capital conducted in urban areas, slums and informal settlements do not adequately provide a theoretical basis and hence lack sufficient presentation and use of a social capital conceptual and analytical framework. Differentiating the forms of social capital is important given the emergence of literature that eschews the unmitigated celebration of social capital and highlights its potential dark side arising from bonding social capital (Portes 1998, 2014). By exploring the complex nature of social capital in vulnerable urban settings, informed recommendations can be provided to the aid community to build on social capital in their programming. This article, by way of concept analysis, advances the debate concerning the building of social capital in vulnerable urban areas, in particular the value of social capital in localising humanitarian response 2 in slums and informal settlements. The aim is to develop a framework to exploit existing potential social capital that can enhance efforts to build resilience in vulnerable urban settings. Therefore, the objectives are twofold: (a) to establish what constitutes social capital, its relevance to vulnerable urban settings and how it can be measured and (b) to develop a multilevel and multidimensional framework to aid in exploring the complex nature of social capital in vulnerable urban settings. In pursuing this latter objective, an illustrative case study is presented that serves to demonstrate the operationalisation of a conceptual/theoretical framework of social capital in vulnerable urban settings. It does so by drawing on preliminary findings from a research project conducted in informal settlements in Nairobi.
Research approach: concept analysis in humanitarian action
Concept analysis as a research method is a process of determining the similarities and contrasts between concepts (Walker and Avant 1995, 2005, 2011; Gibbons 2017). In that manner, it helps to clarify and describe the concepts belonging to the whole, their characteristics and the relations they hold within systems of concepts (Nuopponen 2010). The methods of Walker and Avant (1995, 2005, 2011) and Gibbons (2017) were used as the framework to guide the analysis. The steps involved in the concept analysis were as follows: (1) The analysis of 'social capital' was conducted to clarify the meaning of the concept, to develop an operational definition and to distinguish the relevance of the concept in vulnerable urban contexts. (2) The defining attributes of social capital were determined, including the instrumental and emancipatory rationale for its application in urban settings. (3) Appropriate indicators for the attributes of social capital were developed to establish a framework. (4) An illustrative case of social capital that assigns appropriate and relevant indicators to the attributes in a context-specific vulnerable urban setting is presented.
Although social capital has become increasingly invoked in humanitarian discourse in recent times, the term has been in use for decades. Keeley (2007) indicates that the term 'social capital' may first have appeared in a book authored by Lyda Hanifan, published in 1916. The book discussed how neighbours could work together to oversee schools. The term was used to describe what counts in the daily lives of people: namely good will, fellowship, sympathy and social intercourse (Keeley 2007). Its understanding has evolved over these decades, resulting in a rich body of literature on social capital. In its most generic sense, social capital can be defined as the 'networks of relationships among people who live and work in a particular society, enabling that society to function effectively' (Oxford English Dictionary, 2017 online). However, scholars who popularised the concept, like Bourdieu, Coleman and Putnam, emphasised social capital as a collective asset (Lin 1999a). Bourdieu's (1986) contribution concerned the size and strength of networks, while Coleman (1990) considered social capital as a resource that can be deployed by social actors and transformed into other forms of capital, including human capital. Putnam (1993, 2000), on the other hand, was interested in social organisations, emphasising the importance of features including norms, trust and networks.
More recent definitions focus on the positive links, shared values and understanding of social interaction (Keeley 2007). For instance, Siegler (2014) contends that social capital brings about connections that generate benefits due to tolerance, solidarity and trust. Scrivens and Smith (2013) argue that the term social capital conveys the idea that human relations and norms of behaviour have instrumental value in improving different aspects of people's lives. Such aspects play a significant role in shaping individual as well as collective wellbeing outcomes. However, one should be cognisant that not all social interaction constitutes social capital. Portes (1998, 2014) notes how social interaction can result in 'social liabilities'. Typical examples of such social liabilities include organised crime, nepotistic practices, social stratification and corruption, which can arise from dense networks of social interaction. In bringing together the disparate and rich understandings of social capital, the study recognises the quantity and quality of social interactions, the varying levels of such social interaction (individual, community and locality) and the value of the interaction in achieving individual and collective goals.
Moreover, the urban 3 setting provides a particular type of locality typified by high population density, higher mobility and high interdependency, compared to the relative self-sufficiency of rural areas. The population in vulnerable urban settings, especially in informal settlements and slums, relies on others to fulfil basic needs and access services. There is greater interdependence on existing infrastructure and social, political and economic systems, e.g. access to water, security and microfinance (Parker and Maynard 2015). In rural areas, households and populations may be self-dependent as long as they are food secure and have disposable income.
Social relations in urban settings may be instrumental rather than tangential to achieving basic needs. This leads us to an interesting question of what social capital means in vulnerable urban environments. Therefore, by combining significant contributions on the meaning of social capital and the urban, the authors propose that social capital in vulnerable urban contexts can be understood as: 'the institutions and relationships that shape the quality and quantity of social interactions in vulnerable urban settings, which in the end enhance individual, community and society's capacity to collaborate in the achievement of both individual and collective aims before, during and after a humanitarian crisis.'

Understanding the concept and its relevance in complex urban settings

Walker and Avant (1995, 2011) indicated that establishing the defining attributes of the concept under analysis is the most critical part of concept analysis research. The process of defining attributes involves examining the clusters of elements associated with the concept. Therefore, the starting point for analysing the attributes of social capital is to examine the relationships that shape the quality and quantity of social interaction that is relevant and appropriate in vulnerable contexts. The literature identifies that such relationships are embedded in three forms of social capital: bonding, bridging and linking capitals (Lin 1999b; Narayan 1999; Dasgupta and Ismail 1999; Lofors and Sundquist 2007; Keeley 2007; Ledogar and Fleming 2008; Hawkins and Maurer 2010; Álvarez and Romaní 2017).
Bonding (social) capital
Bonding capital refers to personal relations that are based on a sense of collective identity, such as family, close friendship and the sharing of the same culture or ethnicity. Siegler (2014) considers it to be concerned with whom people know and what they do to establish and maintain their personal relationships. Therefore, it concerns the quality, structure and nature of people's relationships (Scrivens and Smith 2013). These relations influence physical and mental health, economic wellbeing and life satisfaction. Such factors are what Lin (1999a) argued to be the outcomes, or in other words the returns, of social capital due to expressive actions. Furthermore, the urban poor in the developing world rely heavily on bonding capital to help them 'get by' (Woolcock 1998, 2005). It is not self-evident to the outsider that such reliance would remain the same in situations of extreme vulnerability arising from human-made or natural hazards. The following sections present the attributes of bonding capital. These attributes are the quantity of relationships (structure and nature), the quality of relationships (norms of trust and reciprocity) and the degree of social influence in vulnerable urban settings.
Quantity of relationships (structure and nature of relationships)

The nature of relationships mainly refers to the structure and strength of the connections among and between individuals. The social fabric is made up of a network of individuals or groups connected by one or more specific types of interdependency, such as friendship, kinship, common interest, financial exchange, sexual relationships or relationships of beliefs, knowledge or prestige (Wasserman and Faust 1994; Freeman 2004). Drawing on the social network 4 analysis (SNA) technique, one can assess the 'quantity' or frequency of social relationships (interdependence) in terms of both ego-centric analysis, 5 also referred to as personal network analysis, and socio-metric or whole-network analysis 6 (Freeman 2004; Rice and Yoshioka-Maxwell 2015). Fay (2005) indicates that social networks are less stable in urban areas, with relationships more likely to be based on the quality of reciprocal links between and among individuals and friends than on familial obligations. Understanding the value and stability of social networks in the urban context will contribute to the much-needed knowledge required to realise and enhance the localisation of humanitarian response. It will, therefore, be interesting in this study to find out how slum dwellers use social networks (connections and interdependencies) to enhance preparedness and resilience.
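To illustrate how such ego-centric indicators might be operationalised, the sketch below (in Python, using the NetworkX library) computes two simple personal-network measures, network size and density, for a hypothetical respondent; the ties shown are invented for illustration and are not drawn from any survey data.

```python
import networkx as nx

# Hypothetical support network reported by one respondent ("ego"); edges
# represent ties such as kinship, friendship or financial exchange.
g = nx.Graph()
g.add_edges_from([
    ("ego", "sister"), ("ego", "neighbour"), ("ego", "trader"),
    ("sister", "neighbour"),  # alters who also know each other
])

# Ego-centric (personal) network: ego plus its direct alters
ego_net = nx.ego_graph(g, "ego")

size = ego_net.number_of_nodes() - 1  # number of alters
density = nx.density(ego_net)         # closure among ego and alters

print(f"ego-network size: {size}, density: {density:.2f}")
```

Socio-metric (whole-network) measures would instead be computed over the complete graph of ties within a locality rather than over the subgraph surrounding a single respondent.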
Quality of relationships (norms of trust and reciprocity)

Putnam (2000) and Stone (2001) indicate that bonding social capital involves trust and reciprocity in closed networks and helps the process of 'getting by' in life on a daily basis. Siegler (2014) argues that the quality of relationships (trust and values) that are beneficial for society, and therefore constitute capital, can determine how much people in society are willing to cooperate with one another. Also, through social influence, people obtain normative and informative guidance by relating their behaviour to others within the same group or among different groups (Coleman et al. 1957). The terms normative and informative guidance are used neutrally due to the capacity of bonding capital to lend itself towards negative as well as positive outcomes (Scrivens and Smith 2013).
Sako (1992: 69) defines trust as 'a state of mind, an expectation held by one trading partner about another that the other behaves or responds in a predictable and mutually expected manner'. Trust is achieved when there is both commitment and intimacy in the social interaction (Drew et al. 2012), irrespective of the context or domain over which it is conferred (Kramer and Tyler 1996; Hardin 2002; Nooteboom 2002). The literature identifies three types of trust (Cook 2001; Smith 2010; Paliszkiewicz 2011), namely, (a) generalised trust, a kind of trust that is based largely on social learning and developmental processes; (b) particularised trust, the idea that people 'like me' can be trusted, but that other groups may not share my moral values; and (c) strategic trust, the idea that specific others have the appropriate motives and intentions, in the belief that significant others can be relied upon to act in one's interests in specific situations and around specific issues. Stone (2001) likened strategic trust to public trust. An example of this kind of trust is trust in institutions, such as the belief that police officers have appropriate motives towards citizens and are technically competent to protect citizens (Jackson et al. 2011, p. 270). Klinenberg (2002; as cited in Woolcock 2005) contends that even the most isolated individuals are better off if they happen to live in communities with high levels of trust and participation. The World Bank (2003) indicates that in poor urban areas, social fragility due to high ethnic diversity and profound economic inequality creates low-level generalised trust. The relative importance of these three types of trust (generalised, particularised and strategic) needs further investigation within vulnerable urban contexts, notwithstanding that indicators of trust may manifest differently across cultures (Keeley 2007).
On the other hand, reciprocity includes the processes of exchange within a relationship whereby there is an expectation of repayment of 'goods and services' provided (Stone 2001, p. 30). She further argues that reciprocal relations are governed by norms, such that parties to the exchange understand the social contract they have entered and therefore their obligations. Based on the study findings of Robert (1973) conducted in Guatemala City, Woolcock (2005) reports that relationships in urban slums were forged by the quality of reciprocal links between individuals and friends rather than by familial obligations. Siegler (2014) indicates that the support received may be reciprocated with a different resource; e.g. one might receive emotional, practical or financial assistance, advice and guidance in return for unpaid work (or informal volunteering). Nevertheless, the extent of such reciprocal links, and their effect or influence in times of stress in vulnerable urban contexts, requires further investigation.
Bridging (social) capital
Bridging capital concerns people's relations or links that stretch beyond a shared sense of identity, for example to distant friends, colleagues and associates. Drawing on Lin (1999a), it can be argued that bridges give rise to instrumental actions with three possible outcomes: economic (material resources such as wealth), political and social. Kreuter and Lezin (2002) argue that bridging social capital is comparable to institutional infrastructure; therefore, it should be detected at the organisational level, where norms, values and social structures facilitate more macro-connections. These values could be formal or informal and well-articulated. Bridging capital, as opposed to bonding capital, is about 'getting ahead', involving multiple networks which may make resources and opportunities that exist in one network accessible to a member of another network or locality (Stone 2001; Woolcock 2005). Furthermore, Woolcock (2005) argues that the urban poor in the developing world rely heavily on their friends and relatives to help them 'get ahead'. However, it is not clear whether such reliance remains the same in vulnerable urban contexts that are characterised by high mobility and greater interdependencies.
Quality and quantity of meso-level connections
Bridging capital facilitates collective action, civic engagement or citizen participation. Such actions and behaviours contribute positively to the collective life of a locality, community or society (Scrivens and Smith 2013; Siegler 2014). Civic engagement includes activities such as volunteering, political participation and other forms of community action (Grootaert and Bastelaer 2001; Siegler 2014). Evaluation of bridging capital requires examination of aspects of community governance and decision making, identification of community institutions, characterisation of community-institutional relationships and assessment of institutional networks and organisational density (Krishna and Shrader 1999, 2000). This means that bridging capital is mainly assessed at the meso level to achieve a more focused observation of local institutions, collective actions and civic engagement (Grootaert and Bastelaer 2001; Scrivens and Smith 2013; Siegler 2014).
The literature on the measurement of bridging social capital in institutions, especially local associations, includes the examination of their internal heterogeneity and their wider networks, including their horizontal 7 and vertical connections. Examining heterogeneity entails assessing the differences that exist between and among members of a locality/institution regarding gender, religion, ethnicity, wealth and age (Grootaert and Bastelaer 2001; Stone 2001). Socio-structural issues might include organisational density and related characteristics, networks and mutual support mechanisms, exclusion or engagement barriers, collective action including formal and informal groups, and conflict resolution systems. For example, Grootaert (1999), through his study of social capital, household welfare and poverty in Indonesia, found that greater heterogeneity of local associations (along with factors such as education, occupation and economic status) confers the greatest benefits in sharing information and knowledge. However, the effect of homogeneity and heterogeneity on the quality of social relations would also be useful in understanding the impact of bridging social capital on addressing urban vulnerabilities in informal settlements. For example, in the aftermath of the 1995 earthquake in Kobe, Japan, community groups in long-established areas of the city were shifted to temporary shelters. Shaw and Goda (2004) argue that such movement had a negative impact on community links, thereby hampering the rebuilding of interpersonal relationships. It follows that it would be very interesting to determine how community links are developed and used in vulnerable urban settings.
Linking (social) capital
Linking capital concerns the relations of individuals and communities with societal institutions: links to people or groups further up or lower down the social ladder (Keeley 2007, p. 102). Linking social capital is used to define relations that are characterised by power differences, the accumulation of ties with individuals in power and institutions of influence (Titeca and Vervisch 2008; Manzano Nunez 2016).
These linkages can be viewed as hierarchical, reflecting power, wealth and social status. Linking mainly refers to connections between individuals and groups in a community and formal institutions and systems such as education, governance and the economy. It involves social relations with those in authority, often the type of capital used to garner resources or power (Stone 2001). Musinguzi et al. (2017), in research conducted in Uganda on the ability of Village Health Teams to link and connect communities with formal health care, premised their study on three assumptions that serve to outline in greater detail the nature and utility of such relations: firstly, that networks exist that connect vulnerable populations with those in power; secondly, that people have the capacity to engage in vertical connections to access resources; and, finally, that the means of engagement facilitates and promotes linking social capital (ibid.). It has also been argued that the connections between survivors and national and international non-governmental organisations (NGOs) can play a vital role in helping to secure necessities and broader community recovery (Aldrich 2012b, p. 173; Hawkins and Maurer 2010). Again, assessing vertical connections between vulnerable populations and those in power would also help to identify the impact of the expectations the affected population may have during preparedness for, response to and recovery from disasters. Different typologies around expectations of response to a disaster can affect recovery processes, as demonstrated by Chamlee-Wright and Storr (2010), who drew on a case of rebuilding processes by communities in New Orleans in the aftermath of Hurricane Katrina. Linking capital describes the amount of trust between individuals and societal institutions, in our case vulnerable urban societies (Sundquist et al. 2014). The importance of trust in addressing vulnerability to disasters has been highlighted in the disaster literature. Reinhardt (2015), who examined the political drivers of post-disaster resettlement in post-Katrina New Orleans, found that survivors exhibited less trust in public officials to manage disasters than those who were not affected.
Quality and structure of linkages and institutions

North (1990) states that institutions and organisations are different but related terms. Institutions comprise rules, norms of behaviour, conventions and values that bind individuals together: structures that humans impose on their dealings with each other. Organisations, on the other hand, consist of groups of individuals engaged in a purposive activity (North 1990, 1992). Institutions are frequently viewed by sector, such as the economic, social and political environments that shape the social structure (Grootaert and Bastelaer 2001). Such institutions encompass informal [local and horizontal] and formal [hierarchical and vertical] associations and relationships. Societal institutional structures (organisations) include education, health services, the economy, and the political and juridical sectors. Institutional relationships include macro-level governance, the political regime, the rule of law, the court system and civil and political liberties [as part of quality]. Adger (2003) argues that the relations between actors and institutions play a significant role in shaping people's ability to act when collectively adapting to and recovering from natural disasters. Situations of humanitarian crisis require greater openness of institutions to engage with individuals or organisations. An understanding of the existing institutions and organisations and their linkages with individuals in vulnerable urban settings would then provide an opportunity for both national and international aid organisations to engage better with the affected population. Therefore, analysing linking capital in vulnerable urban settings would aid in obtaining not only a better understanding of the existing institutions and organisations but also of the amount of trust people may have in these institutions and of access to the services (e.g. jobs, microfinance, education and justice) they provide in vulnerable urban settings.
Study framework and selection of indicators for vulnerable urban contexts
Scholars agree that obtaining a single or universal means of measuring social capital at the local, national or international level is still challenging (Gallaher et al. 2013; Siegler 2014; Babcicky and Seebauer 2016). This is due to a number of reasons, including the following: (a) most definitions highlight that social capital is a multidimensional concept with different levels of analysis, (b) the nature and form of social capital changes over time, (c) the application of the concept is still essentially in its infancy (Keeley 2007) and (d) a wide range of approaches are taken in defining and measuring social capital 8 (Gallaher et al. 2013; Siegler 2014). Critics also argue that the term social capital is vague, hard to measure and poorly defined, while others challenge whether it can be considered a form of capital at all (ibid.). Against this backdrop, the authors argue that it is even more challenging to try to measure social capital in vulnerable urban contexts such as slums and informal settlements exposed to a wide range of disasters (including those arising from urban violence, natural hazards and extreme poverty) without a well-defined framework. Woolcock and Narayan (2000) focused on measuring membership in informal and formal associations and networks. They examined formal group functioning, contributions to groups, participation in decision making and heterogeneity of membership, interpersonal trust and changes over time. Stone (2001) proposes focusing on the structure of social relations (network types, structure and systems) and the quality of social ties (norms of trust and reciprocity). Grootaert and Bastelaer (2001), while interested in the issue of economic development, 9 recommended focusing on the structural (local and societal institutions) and cognitive dimensions (norms, trust and governance at a higher level) of social capital. More recent studies measure social capital as it relates to economic well-being (Scrivens and Smith 2013 10; Siegler 2014, 2015). Scrivens and Smith proposed a two-by-two measurement framework with two levels of analysis, individual and collective. Siegler (2014) used the same measure to assess how social capital contributes to the well-being of people in the United Kingdom (UK).
This study proposes the design of a broader framework based on an understanding of the forms of social capital as bonds, bridges and linkages as they relate to urban slums. Nevertheless, both qualitative and quantitative indicators need to be developed according to the relevant vulnerable urban setting so that there is an inductive construction of indicators (Béné et al. 2012; Mitchell 2013). Such an approach would enable the establishment of context-specific indicators on the basis of culturally informed indicators that give meaning to the contribution of social capital to the provision of basic needs across several universal domains, such as water, food, education, security and health, in vulnerable urban settings.
Understanding how to measure social capital in the slum areas of, for example, Jakarta, Bogotá and Nairobi for effective humanitarian action requires focusing not only on cognitive or collective actions but also on structural factors such as macro-level governance and the quality of institutions. Figure 1 demonstrates this multilevel and multidimensional conceptualisation of the nature of social capital. The figure shows that social capital ranges from the cognitive to the structural and from the micro to the macro, as defined by Grootaert and Bastelaer (2001), while depicting the multidimensional factors as argued by Stone (2001). It also ranges from the individual, through the collective, to the societal level, reflecting the multilevel nature of the concept (Scrivens and Smith 2013; Siegler 2014). Such an approach will help in understanding how, in a particular historical, cultural, political and economic context (Woolcock 2005), bonds, bridges and linkages are used for greater preparedness and for enhancing resilience to address urban vulnerabilities.
An illustrative case of measuring social capital in vulnerable urban contexts: Concern Worldwide's engagement in Nairobi, Kenya
The purpose of the case study is to illustrate the operationalisation of a conceptual/theoretical framework in diverse, vulnerable urban settings. As indicated in the research approach section, the concept analysis research method requires that a case should assign appropriate and relevant indicators to give meaning to the attributes of the concept under study, in this case, social capital in vulnerable urban settings. This case, therefore, illustrates the attributes of social capital set out in this paper by drawing on preliminary findings from a research project being conducted in informal settlements in Nairobi. While not setting out to measure social capital per se, the Indicator Development for Surveillance of Urban Emergencies 11 (IDSUE) study provides some guidance as to the indicators of social capital in urban slums, especially with respect to vulnerability. The IDSUE study responded to the need for more disaggregated data in informal urban settlements that would allow for the identification of the threshold at which chronic poverty tips into an urban emergency. Conducted in slums and informal urban settlements of Nairobi from 2012 to 2015, the study involved a baseline study and subsequent quarterly surveillance 12 rounds. Approximately 30,000 households were surveyed. In all the surveys and rounds of surveillance, quantitative data were collected about household livelihoods, household income and expenditure, shocks and coping strategies. This illustrative case is based on the quantitative baseline data collected in Nairobi in 2015 (n = 1153) in the slums of Kibera, Kawangware and Eastleigh.

The livelihood context

Estimates of the population size of Nairobi's informal settlements are highly contested. Kibera, which in 2009 had a population of about 170,000 people, 13 is one of the biggest slums and 'informal economies' in Africa. 14 Kawangware, a densely populated, impoverished informal settlement in eastern Nairobi, has a population of around 150,000 to 200,000 people. Eastleigh, regarded as the second largest slum in Kenya, had a population of about 174,000 in 2012 according to UNHCR. 15 In the slums, daily labour and street hawking (casual and temporary work, e.g. scavenging at dumpsites) are among the available livelihood opportunities. The use of negative coping strategies is part of the survival mechanisms in most of the slums. Housing and decent accommodation are common problems. The complexity of land ownership and security plays a significant role in slum development and in the possibilities for upgrading the provision of essential services. Constant threats of eviction by landowners mean that residents, NGOs and government are hesitant to improve or provide services in most of the slums. Water supplied by the city authorities is inadequate or, where available, expensive. Solid waste management (household, medical and other waste), drainage and excreta disposal (toilet facilities) remain a challenge. Preliminary findings have shown that quantitative indicators, where appropriately adapted to context, can provide insights into aspects of the social capital framework. The indicators are captured according to bonding, bridging and linking capital, as gleaned from IDSUE in the Nairobi survey.
Bonding capital
Applying the framework set out in Table 1, several indicators of bonding capital are identified in Table 2.
Two-thirds of residents (67%) in the slums rely on only one breadwinner 16 to provide for the needs of the household. Household size ranges from three members to more than 12 members. This suggests that strong ties are required for the survival of families. Relations such as brothers, brothers-in-law, sisters, sisters-in-law and cousins were evident in the survey, which indicates that residents tend to rely on extended families to survive. At the time of the survey, 50.5% of households reported having a member of the household who had been sick in the previous 2 weeks. The majority of those who fell sick relied on close friends and family members for care and support.
Though only 17% of households indicated that they had removed a child from school, this is significant regarding bonding capital, as the children may have been withdrawn either because of a lack of support or because the child had to help in obtaining food or money to support the family. More than half of the respondents (56%) reported being worried about food when asked to rate their distress about adjustments in food consumption they may have to make due to food insecurity. Even though some residents indicated they would never worry about spending a day or night hungry, those worrying about eating smaller or fewer meals, unwanted food and a limited variety of food were in the majority. This shows the limited existence of social capital, as people with high social support should be able to use their ties to cope with distressful moments. Regarding coping strategies, the most frequently observed coping strategy among inhabitants of the slums of Eastleigh, Kawangware and Kibera was the purchase of food on credit (36.4%), followed by asking for support from neighbours or relatives (28.8%) and removing children from school. This indicates that a high proportion of vulnerable urban dwellers relied on people familiar to them for help and support in times of need. However, several other aspects of bonding capital need to be further explored in vulnerable urban contexts as per the framework, for example, the implications of network size for coping strategies and food security.
Bridging capital
The indicators detailed in Table 3 were found to be relevant for bridging capital.
The findings indicate that at least some of the surveyed slum dwellers participate in local initiatives and are members of merry-go-round associations. Nonetheless, the influence or impact of the internal heterogeneity and decision-making mechanisms of such organisations is not clear. Also, the proportion of slum dwellers who participate in social clubs, youth groups, political parties and other local institutions is not apparent. Further exploration of bridging capital in vulnerable urban contexts is required.
Linking capital
Applying the framework set out in Table 1, the indicators of linking capital detailed in Table 4 were evident.
Concern Worldwide, through the IDSUE study and other programmes it has been implementing in the slums, has managed to establish partnerships and linkages with key stakeholders. The international organisation managed to establish links with the Kenya Red Cross Society (KRCS) and World Vision Kenya, among others.
However, the partnerships and networks of slum residents with national and international organisations need further exploration. Also, linkages between individuals and societal institutions require further examination. Regarding access to education, the preliminary findings of the study indicate that just over half of household heads (56%) had completed primary education. Only 7.3% reported having attended tertiary education. This shows some disconnect between slum dwellers and institutions of higher learning. Some 40.8% of the surveyed slum dwellers reported having a regular source of livelihood, an indication of some access and connections to people with resources, power and influence. The households which did not have any source of livelihood depended on well-wishers and non-governmental organisations for food and other financial support. The majority of residents participate in local and national elections, even though ethnicity and tribal inclinations often influence participation and voting patterns. However, partnerships and networks between individuals and communities and those (individuals and institutions) with power, resources and influence need to be further explored. Moreover, other aspects of linking social capital, particularly those relating to the existence of microfinance, education, health, security, market, religious and justice institutions, merit further exploration.
Implications for future research
Based on certain attributes of bonding, bridging and linking social capital detailed in Tables 1, 2, 3 and 4 above, the IDSUE study findings indicate that some individuals in informal settlements have higher social capital than others. This highlights that quantitative indicators, where appropriately adapted to context, can provide insights into aspects of the social capital framework.
However, quantitative indicators alone are insufficient for applying the comprehensive social capital framework.
Ego-centric networks 17 should be investigated to determine the strength of social ties and the perceived importance of individuals in vulnerable urban settings. Such an assessment demands the deployment of methods that are more interpretive in nature. Further exploration of bridging capital in vulnerable urban contexts through socio-metric network analysis, the profiling of local institutions 18 and qualitative (phenomenological and ethnographic) 19 studies is essential to better understand bridging capital in slums. Similarly, there are many factors at play, including the strength of linkages to socio-economic institutions and the connections of individuals with those with power, resources and influence. Therefore, the mechanisms of engagement between individuals and the people with power, resources and influence need further exploration. Such exploration can take the form of aggregated survey-based responses and qualitative studies. While the IDSUE study is an important step in contributing aggregated data concerning urban emergencies in urban settlements, future research ought to be conducted that draws on the broader social capital framework. Data availability is crucial to obtaining a full picture of social capital. The Preparedness and Resilience to address Urban Vulnerabilities project in which the authors are engaged builds on the IDSUE project and aims to collect data relating to bridging and linking capitals, in addition to bonding capital, that will allow for a better understanding of social capital in urban settings. It will do so by exploring the relevance of such forms of capital not only in absorbing recurrent shocks but also in adapting and transforming in response to such shocks. Comparisons of Nairobi with vulnerable settings in Bogotá and Jakarta, with vastly different cultural, economic, social and political settings, will be facilitated. In so doing, it is recognised that it can be challenging to collect data, for security reasons and because of the high turnover of populations living in such areas. Relationships with key stakeholders need to be established to ensure access to the informal settlements. Once partners rooted in the informal settlements have been identified and engaged, seminars ought to be conducted concerning how the indicators from the IDSUE project can be adapted. In applying the social capital framework, the project has served to highlight the importance of adapting indicators according to the context in order to measure the same attributes. The project has also served to highlight that exploring social capital in vulnerable urban settings requires a holistic approach, the use of mixed methods and the establishment of relationships with key local stakeholders.
Conclusion
Understanding social capital (bonding, bridging and linking capitals) is key to understanding the value of social interaction for the effective localisation of humanitarian response in vulnerable urban contexts. However, what constitutes such capitals, and how they can be measured, needs to be understood on the basis of context-specific issues and indicators. This article has developed an understanding of the attributes of the social capital concept, giving rise to a multilevel and multidimensional framework that ought to be adapted to vulnerable urban contexts. In particular, the indicators for the measurement of bonds, bridges and linkages need to be adapted to each context in which social capital is to be measured. In line with the concept analysis approach, further most likely and least likely cases of social capital in humanitarian action, particularly in urban areas, ought to be studied to further clarify the concept and its relevance to humanitarian action.
Endnotes

1 Urban resilience in this paper is understood as referring to 'the ability of an urban system - and all its constituent socio-ecological and socio-technical networks across temporal and spatial scales - to maintain or rapidly return to desired functions in the face of a disturbance, to adapt to change and to quickly transform systems that limit current or future adaptive capacity' (Meerow et al. 2016, p. 45). Resilience is a measure of households', communities' and societies' ability to address their vulnerabilities by improving their capacities to absorb and adapt to existing and anticipated shocks and stresses, while strengthening their capacities to transform/overcome to a level where these stresses are no longer relevant. Resilience is to be considered a concept that is 'co-created' by all actors in the research process, both researchers and participants.
2 Gibbons et al. (2017) argue that localising humanitarian response demands that humanitarian action should always endeavour to build on the capacities and abilities of affected populations to provide assistance and protection to vulnerable populations in their own communities and societies, with the recognition that such action is subsidiary to the people it purports to support and not simply a tool of aid donors (p. 10).
3 Urban is defined in relation to the capacities and vulnerabilities arising from the interdependencies that characterise highly complex and interlinking social, protection, legal, security and health systems. Urban areas feature high population density, diverse livelihoods and means of production and are often sites of government-provided facilities/infrastructure.

4 Social network is an 'enumeration of the relationships that exist between groups of individuals or organizations (i.e. who knows whom). The structure of these networks and the character of these links between those individuals (or nodes) influence such things as how effectively the network can produce various results, its vulnerability, whether the network is well integrated or balkanized' (Harvard Kennedy School, Harvard University).

5 For more detailed discussion, see, among others, Grund (2014) and Eom and Jo (2014).

6 On this point, see Watts (1999) and Granovetter (1973a, b).

7 The terms vertical and horizontal are used to describe and distinguish institutions and associations based on the motivations that led to their formation and their functions. Governmental/non-governmental initiated macro institutions are in some studies referred to as vertical, and locally initiated, informal associations as horizontal. For further description, see Grootaert and Bastelaer (2001).

12 Surveillance refers to regular monitoring of those areas/households that are particularly vulnerable to potential shocks and stresses.

13 That is according to the Kenya National Bureau of Statistics housing and population census report of 2009. However, other sources indicate the population may vary 'widely from 500,000 to 1,000,000'. See for example Erulkar and Matheka (2007a, b) and Mutisya and Yarime (2011a, b).

14 UN-Habitat (2016) and World Cities Report (2016).

15 UNHCR obtained the population size from the Eastleigh local administrative authorities. However, other sources indicate the population of Eastleigh may have been over 350,000 at that time (see for example Asoka et al. 2013).

16 The data findings also indicate that in most cases (more than 85%), the breadwinner is also the head of the household.

17 The social network analysis (SNA) is intended to elicit the networks and contacts that are deemed essential in the locality for slum and informal settlement dwellers to address vulnerabilities and enhance their resilience. The SNA tool questions focus on personal networks and group networks, referred to as ego-centric networks and socio-metric networks respectively. In-depth interviews with residents can be conducted in each locality of the selected testbeds. Key or influential people could be identified through consultation with the principal local stakeholders involved in various programmes and projects. The questions regarding the organisation influence net-map should not be intended to rank the organisations working in the area but to highlight the organisations that have influence when it comes to addressing urban vulnerabilities, thereby helping residents to be aware of which organisations to contact in times of need.

18 The overall objective of the institutional profile is to delineate the relationships and networks that exist among formal and informal institutions operating in the locality, as a measure of bridging social capital.
Specifically, the profile assesses the organisations' origins and development (historical and locality context, longevity and sustainability), quality of membership (reasons people join, degree of inclusiveness of the organisation), institutional capacity (quality of leadership, participation, organisational culture and organisational capacity) and institutional linkages. Between three and six institutions per locality can be profiled. For more on institutional profiles, see Krishna and Shrader (1999).

19 Qualitative semi-structured questions can be asked during the focus groups and the key informant interviews to get rich data on bonding capital. Issues explored can mainly concern norms of trust (particularised, generalised and strategic trust), norms of reciprocity (receiving and giving help in kind) and network support, among others. Regarding bridging capital, questions to be asked could be around collective actions, civic engagement, association membership and participation, among other attributes of bridging social capital. Likewise, issues of interest for linking capital could mainly concern individual and collective access to opportunities and individual and collective connections to people with power, resources and influence.
Funding
This article has been written as part of the Preparedness and Resilience to address Urban Vulnerabilities (PRUV) project. The project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 691060.
Authors' contributions

DM conducted the literature review and developed the framework. He also developed the initial draft and made changes based on deliberations with co-authors. PG provided the outline and scope of the paper and provided considerable guidance to the lead author in developing the paper. RMD contributed to the writing of the introduction and illustrative case study sections in particular and assisted with the overall writing process. All authors read and approved the final manuscript.
Ethics approval and consent to participate

Ethical approval was obtained for this study and informed consent of participants obtained.
Consent for publication
The study respondents provided consent to use the data for publication purposes. | 2023-02-02T15:16:22.945Z | 2018-04-02T00:00:00.000 | {
"year": 2018,
"sha1": "be25ee369558903f85d576ac80ea3e3703d4c246",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s41018-018-0032-9",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "be25ee369558903f85d576ac80ea3e3703d4c246",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
261315968 | pes2o/s2orc | v3-fos-license | VIDAR-Based Road-Surface-Pothole-Detection Method
This paper presents a VIDAR (a Vision-IMU-based detection and ranging method)-based approach to road-surface pothole detection. Most potholes on the road surface are caused by the further erosion of cracks in the road surface, and the tires, wheels and bearings of vehicles are damaged to some extent as they pass through potholes. To ensure the safety and stability of vehicle driving, we propose a VIDAR-based pothole-detection method. The method combines vision with an IMU to filter, mark and frame potholes on flat pavements using MSER, and to calculate the width, length and depth of potholes. By comparing it with the classical method and using the confusion matrix to judge the accuracy, recall and precision of the method proposed in this paper, it is verified that the proposed method can improve the accuracy of monocular vision in detecting potholes in road surfaces.
Introduction
Road cracks and road-surface potholes are the most common types of road damage. Road cracks are mainly caused by substandard construction techniques and the overloading of vehicles, and cracks evolve into road-surface potholes after being washed by rain, struck by hard objects and crushed by vehicles [1]. Road-surface potholes not only reduce the service life of the road but, because they are depressed downwards, they also lack the three-dimensional height information of a conventional obstacle, which makes them difficult to identify accurately with conventional vision-detection methods. When a vehicle drives over a pothole in the road, the pothole can cause damage to the vehicle's suspension and tires, which in turn can affect the vehicle's performance and make it vulnerable to accidents when braking or changing lanes.
The current common method of road pothole detection relies on technicians to carry out an analytical review of road images and video data collected by road surveillance vehicles, a process that is time-consuming and susceptible to the subjective intentions of inspectors [2]. To address these drawbacks, researchers have progressively used intelligent algorithms for road-surface pothole detection. The existing automatic pothole-detection methods can be broadly classified into three methods: vision-based methods, vibration-based methods and 3D reconstruction-based methods.
The vision-based automatic pothole-detection method is the most cost-effective and common method available. It solves the road pothole-detection problem using image classification, object recognition or semantic segmentation algorithms [3]. The vibration-based automatic pothole-detection method uses the accelerometer of a smartphone to implement the detection function. However, its biggest drawback is that it requires a vehicle carrying a vibration sensor to pass through the pothole in order to complete the detection, which still causes damage to the vehicle [4]. The method based on 3D reconstruction is achieved by means of a laser scanner, which is the most accurate method of inspection and the most expensive of the three methods [5].
For laser scanning-based 3D road-surface reconstruction methods, She et al. proposed that the optimal spacing between two adjacent transverse profiles should be fully considered when reconstructing 3D models using laser technology, and verified that a spacing of 2 mm is optimal for reconstructing 3D models [6]. Feng et al. compared five crack-detection algorithms using terrestrial laser scanner (TLS) point clouds and found that the along-range and cross-range profile-based filtering methods performed best in crack detection [7].
For machine learning methods based on visual sensing, Egaji et al. preprocessed the data with a 2 s non-overlapping window and compared five binary-classification machine learning models (naive Bayes, logistic regression, SVM, KNN and random forest) and found that random forest gave the best results [8]. Lee et al. considered the minimum temperature, relative humidity, precipitation and traffic volume to build a deep learning model based on convolutional neural networks to detect potholes and calculate damage ratios (pixel method), a method that could provide the government with effective budget management and reduce road accidents caused by potholes on certain road sections [9]. Saisree et al. used TensorFlow and Keras to train and test models on pothole images and found that the InceptionResNetV2 and VGG19 models had better detection accuracy than the ResNet50 model [10]. Notably, Bhatia et al. achieved an accuracy of 97.08% for pothole detection using a pothole-detection system based on a CNN-based ResNet model combined with thermal imaging [11].
For the vibration approach using accelerometer sensors, Lee et al. developed a smartphone-based dual-acquisition system that can capture images of pavement anomalies and measure vehicle acceleration when they are detected, exploring the complementary advantages of the two different approaches [12]. Setiawan et al. proposed an improved U-Net architecture with an integrated bi-directional long short-term memory layer for the semantic segmentation of smartphone motion-sensor data for pavement classification [13]. Notably, Pandey et al. proposed a new method for applying accelerometer data to convolutional neural networks, which is more advantageous in terms of detection accuracy and computational complexity [14].
In addition to the above methods, Li et al. fused genetic algorithms with visual sensing to achieve the 3D reconstruction and extraction of the morphological features of potholes through point-cloud processing [15]. Notably, Zhang et al. used an unmanned aerial vehicle (UAV) road-damage database and described a multi-level attention mechanism, called the multilevel attention block (MLAB), to enhance the use of basic features in YOLO v3 (You Only Look Once version 3) [16]. However, the method is only 68.75% accurate and relies on UAVs, which cannot be used directly on smart vehicles.
In particular, Fan et al. distinguished whether a road was damaged or not by transforming a dense disparity map: they estimated the transformation parameters using golden-section search and dynamic programming, extracted undamaged road areas from the transformed disparity map using Otsu's thresholding method, and finally reduced the effect of outliers through random sample consensus, achieving the fusion of 2D road images and 3D pavement modelling under binocular vision; the successful detection accuracy of the system was about 98.7% and the overall pixel-level accuracy was about 99.6% [17]. But this method has some limitations in measuring potholes: its pothole-detection parameters cannot be applied to all situations, and it cannot measure the dimensional information of potholes.
In addition, the EU's autonomous HERON project will develop an integrated automation system: an autonomous ground robotic vehicle, supported by autonomous drones, designed to adequately maintain road infrastructure. The vehicle will use sensors and scanners for 3D mapping, in addition to an AI toolkit, to help coordinate road maintenance and upgrading workflows, with the aim of reducing accidents, lowering maintenance costs and increasing the capacity and efficiency of the road network [18]. VIDAR [19] is a method that uses a monocular camera and an inertial measurement unit as the basic sensing unit. In order to ensure that road potholes can be detected effectively in real time while the vehicle is in motion, and to reduce the damage that road potholes cause to vehicle performance, as well as the occurrence of traffic accidents, this paper proposes a VIDAR-based road-pothole-detection method. The identification process of this method is as follows: first, the rectangular frame of the road pothole is determined by extracting feature points; the width of the road pothole is then calculated using the transformation relationship between pixel coordinates and world coordinates; and finally the depth of the road pothole is calculated by tracking the pothole's feature points and using the same pixel-to-world transformation. It is assumed that, during actual inspection, the vehicle keeps to a straight line on a flat road and that the road pothole is genuinely present, with a certain depth, width and length.
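As a minimal illustration of the pixel-to-world step for width, the sketch below applies the pinhole scaling relation under a flat-road assumption; the focal length and pixel size are placeholder values, not the calibration used in this paper.

```python
def pothole_width(u_left, u_right, range_m, f=0.006, mu=4.2e-6):
    """Approximate ground-plane width (m) of a pothole from its pixel span.

    u_left, u_right: bounding-box column coordinates (px)
    range_m: camera-to-pothole distance (m), e.g. from the ranging step
    f: focal length (m); mu: pixel size (m) -- placeholder values
    """
    # Pinhole model: lateral size scales linearly with range
    return (u_right - u_left) * mu * range_m / f

# Example: a region 120 px wide observed 8 m ahead spans about 0.67 m
print(f"{pothole_width(400, 520, 8.0):.2f} m")
```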
The VIDAR-based road-surface-pothole-detection method proposed in this paper has a faster processing time than the Adaboost cascade detection method. The combination of the MSER-based fast image region-matching method and the VIDAR-based obstacle-detection method can bypass obstacle classification and reduce the time and space complexity of road-environment perception [20]. Meanwhile, the method fuses monocular vision with an IMU to calculate the dimensional information (width, length and depth) of potholes on the road surface using camera imaging principles. The accuracy, recall and precision obtained from the confusion matrix are compared with those of the traditional method, verifying that the method proposed in this paper has a higher accuracy.
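For reference, the evaluation metrics derived from a binary confusion matrix can be computed as in the sketch below; the counts used here are invented for illustration and are not results from the experiments in this paper.

```python
# Hypothetical counts for a binary pothole detector (positive = pothole)
tp, fp, fn, tn = 90, 5, 10, 95  # true/false positives and negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall fraction correct
precision = tp / (tp + fp)                   # detections that are real potholes
recall = tp / (tp + fn)                      # real potholes that are detected

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```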
The remainder of this paper is as follows: Section 2 describes the principles of VIDAR (a Vision-IMU-based detection and ranging method) for obstacle detection. Section 3 details the VIDAR-based road-surface-pothole-detection method and the obstacle feature-point-tracking method, and designs the relevant calculation methods for road-surface pothole width, length and depth, as well as the update method for pothole depth. Section 4 presents simulation experiments and real-vehicle tests of the proposed method and comparison experiments with conventional methods to verify its detection accuracy and speed. Section 5 provides a summary of the proposed methodology and experimental results.
The Principle of VIDAR Detection
The research in this paper is based on the VIDAR detection method for stereo obstacles. VIDAR based on monocular vision has the advantages of a simple process, easy operation and high detection accuracy; it can accurately obtain the position information of the target obstacle ahead in practical detection and can, at the same time, detect unknown obstacles that cannot be identified using machine vision. It is a practical and effective obstacle-detection algorithm and the basis for the subsequent research in this paper.
In the actual obstacle-detection process, VIDAR first uses the MSER (maximally stable extremal regions)-based fast image region-matching method to extract feature points of the obstacle, and image matching is performed on two consecutive frames of the acquired image. Then, the non-obstacle points extracted using the MSER image region-matching method are eliminated by the VIDAR discrimination principle for stereo obstacles, and the obstacles in the detected image are quickly and directly identified.
Static Obstacle Distance Detection
As shown in Figure 1, image acquisition is the process of mapping objects in a 3D space to a 2D image plane, and this process can be simplified as a pinhole camera model.
Let us consider the lowest point P of the MSER connected to the measured area as the intersection of the obstacle and the road plane. For the sake of calculation, assume that the camera optical axis is exactly pointing at a point, as shown in Figure 1. Assume that the effective focal length of the camera is f, the distance from the optical axis of the lens to the ground is h, the pixel size is μ, the pitch angle is θ, and the coordinates of the origin of the image coordinate system are (x0, y0). It is known that the coordinates of the intersection point P of the obstacle in front of the self-vehicle and the road plane in the imaging plane are (x, y). The horizontal distance between the point P and the self-vehicle camera can be expressed as

d = h / tan(θ + arctan((y − y0)μ / f))  (1)

Suppose point A is the imaging point of the obstacle on the y1-axis in the previous frame, and point B is the imaging point of the obstacle on the y2-axis in the next frame, as shown in Figure 2. Point A′ is the point on the road plane corresponding to A, and point B′ is the point on the road plane corresponding to B. d1 is the horizontal distance from the self-vehicle camera at the previous frame position to the point A′, and d2 is the horizontal distance from the self-vehicle camera at the next frame position to the point B′, where d1 and d2 can be obtained by Equation (1). ∆d is the distance the camera moved between the two frames (it is also the distance the car moved in this time period). For a point on the road plane, d1 = d2 + ∆d. In the actual detection, the obstacle is three-dimensional, so d1 − d2 ≠ ∆d when the tracked feature point does not lie on the road plane.
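For concreteness, Equation (1) can be implemented as in the sketch below; the camera height, pitch angle, focal length and pixel size are placeholder values rather than the calibration used in this work, and the image row coordinate is assumed to increase downwards from the principal point, consistent with the geometry above.

```python
import math

def ground_distance(y_pixel, y0=240.0, f=0.006, mu=4.2e-6,
                    h=1.2, theta=math.radians(5.0)):
    """Equation (1): horizontal distance (m) to the road-plane point at row y_pixel.

    y0: principal-point row (px); f: focal length (m); mu: pixel size (m);
    h: camera height above the road (m); theta: camera pitch angle (rad).
    All parameter values are placeholders, not calibration results.
    """
    alpha = math.atan((y_pixel - y0) * mu / f)  # ray angle below the optical axis
    return h / math.tan(theta + alpha)

# A point imaged 60 rows below the principal point lies roughly 9.2 m ahead
print(f"{ground_distance(300):.2f} m")
```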
Dynamic Obstacle Distance Detection
When the obstacle in front of the self-vehicle moves in the horizontal direction (as shown in Figure 3), let s1 be the horizontal distance between the camera and the vertex of the obstacle at the previous moment, and s2 the corresponding distance at the next moment. The specific relationships between d1, d2, s1, s2 and Δd, and, from the side and angle relationships of a right triangle, between hv, h, s1, s2, d1 and d2, can be obtained according to Equations (1) and (2). Thus, by tracking and calculating the positions of feature points, obstacles can be identified in any case where h × s ≠ hv × Δd.
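As a minimal illustration of this discrimination principle, the check below flags a tracked feature point as a stereo obstacle when the road-plane relation d1 = d2 + Δd fails; the tolerance parameter is our addition, not part of the paper.

```python
def is_ground_point(d1, d2, delta_d, tol=0.05):
    """A tracked feature point lies on the road plane only if its two
    plane-projected distances differ by exactly the camera travel,
    d1 = d2 + delta_d; a point violating this relation (beyond the
    tolerance) has height and is treated as a stereo obstacle."""
    return abs(d1 - (d2 + delta_d)) <= tol

print(is_ground_point(d1=6.0, d2=5.5, delta_d=0.5))  # True: road point
print(is_ground_point(d1=6.0, d2=5.8, delta_d=0.5))  # False: obstacle
```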
VIDAR-Based Road-Surface-Pothole-Detection Method
The overall objective of this research is to validate the effectiveness of the proposed VIDAR-based pothole-detection method, then to verify that this method can identify potholes and calculate their sizes in real time by means of camera equipment.
Although road-surface potholes have no height information and are depressed downwards, they are detected in much the same way as real obstacles. When detecting potholes with VIDAR, the fast image region-matching method based on MSER is used to match feature points between two images obtained within a relatively short time and to eliminate redundant feature points.
In image processing, the MSER-based fast image region-matching method first runs the MSER extraction algorithm and fits all the MSERs (maximally stable extremal regions) in the reference image and the target image into elliptical regions to provide more useful information, and then improves the matching accuracy through the feature-point detection method. Finally, the stability of MSERs is used to ignore differences in the position and shape of MSERs between the two images, simplifying the matching process and increasing the matching speed.
Therefore, after generating the MSERs, they are marked with rectangular boxes, and the feature points at the widest and longest parts of each rectangular box are extracted (as shown in Figure 4).
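A minimal sketch of this extraction step follows, assuming OpenCV's MSER implementation stands in for the paper's detector; the choice of the four rectangle-edge midpoints as the "widest and longest" feature points is our reading of Figure 4.

```python
import cv2

def extract_mser_feature_points(gray):
    """Detect MSERs, mark each region with its bounding rectangle, and
    take the midpoints of the rectangle's four edges (its widest and
    longest extents) as the feature points used for frame-to-frame
    matching."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    points = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts)
        points.extend([(x, y + h // 2), (x + w, y + h // 2),
                       (x + w // 2, y), (x + w // 2, y + h)])
    return points

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
print(len(extract_mser_feature_points(gray)))
```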
Calculation of the Width of Road-Surface Potholes
For the calculation of the width of road-surface potholes as shown in Figure 5, let A be the first imaging point of the width of the rectangular frame outside the road-surface pothole, with coordinates (x1, y1); B is the other imaging point of the width of the rectangular box, with coordinates (x2, y1). A′ and B′ are the points opposite A and B on the road plane, respectively. The distances d1 and d2 between points A′ and B′ and the car camera can be calculated using Equation (1). The length w (the width of the road-surface pothole) of the rectangular body is then easily calculated using the principle of small-aperture imaging; after substitution and simplification, the final expression of w is obtained as Equation (5).
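Because the closed form of Equation (5) is unreadable in this copy, the sketch below assumes the slant-range scaling sqrt(d^2 + h^2) suggested by the surviving fragment of the derivation; treat it as a plausible reconstruction rather than the paper's exact formula.

```python
import math

def pothole_width(x1, x2, d, h, f_mm, mu_mm):
    """Width w of the rectangular frame around the pothole.  A(x1, y1)
    and B(x2, y1) share the image row y1, so both project to the same
    ground distance d; the pinhole ratio then scales their pixel
    separation by the slant range sqrt(d^2 + h^2)."""
    slant = math.sqrt(d * d + h * h)
    return mu_mm * abs(x2 - x1) * slant / f_mm

print(f"w = {pothole_width(300, 420, d=5.0, h=1.2, f_mm=6.0, mu_mm=0.00465):.3f} m")
```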
Length Calculation of Road-Surface Potholes
For the calculation of the length of road-surface potholes, as shown in Figure 6, let C be the first imaging point of the length of the rectangular frame outside the road-surface pothole, with coordinates (x3, y3); D is the other imaging point of the length of the rectangular frame, with coordinates (x3, y4). C′ and D′ are the points opposite C and D on the road plane, respectively. The distances d3 and d4 between points C′ and D′ and the car camera can be calculated using Equation (1). The length l_hole (the length of the road-surface pothole) of the rectangular body is then easily calculated using the principle of small-aperture imaging; after substitution and simplification, the final expression of l_hole is obtained.
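A minimal sketch of the length computation follows, under the assumption, suggested by the shared image column x3, that l_hole reduces to the difference of the two ground distances; the paper's final expression is not legible here.

```python
def pothole_length(d3, d4):
    """Length l_hole of the rectangular frame around the pothole.
    C(x3, y3) and D(x3, y4) share the image column x3, so their
    road-plane projections lie along the camera's line of travel and
    the length is the difference of the two ground distances from
    Equation (1)."""
    return abs(d3 - d4)

print(f"l_hole = {pothole_length(5.6, 5.2):.2f} m")
```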
Depth Calculation of Road-Surface Potholes
Since the depth of a pothole has more influence on vehicle driving than its width and length, accurately detecting pothole depth is the key to the method proposed in this paper. Based on the VIDAR principle, the feature points are marked, and the pothole depth is solved by tracking the feature points across two consecutive frames.
For the calculation of the depth of road-surface potholes as shown in Figure 7, let E1(x5, y5) be the imaging point of the lowest point of the road-surface pothole on the y1-axis in the previous frame, and E2(x6, y6) the imaging point of the lowest point on the y2-axis in the latter frame. E′ is the point on the road plane corresponding to E1 and E2, d5 is the distance between E′ and the self-vehicle camera at the moment of the previous frame, d6 is the corresponding distance at the moment of the latter frame, and Δd is the distance the camera moves between the two frames (which is also the distance travelled by the vehicle during that period), with d5 = d6 + Δd. Through small-hole imaging and the similar-triangle principle, the vertical distance H between the lowest point of the road-surface pothole and the self-vehicle camera can be calculated (H = h + Δh); after substitution and simplification, the final expression of Δh is obtained as Equation (9).
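Since Equation (9) itself is not legible in this copy, the sketch below reconstructs the depth from the similar-triangle argument: a point at vertical distance H below the camera projects onto the road plane at apparent distances satisfying d5 − d6 = Δd·h/H. The numeric values are illustrative.

```python
def pothole_depth(d5, d6, delta_d, h):
    """Depth of the pothole's lowest point E.  A point at vertical
    distance H = h + delta_h below the camera projects onto the road
    plane at apparent distances satisfying d5 - d6 = delta_d * h / H,
    so H = h * delta_d / (d5 - d6) and the depth is H - h."""
    H = h * delta_d / (d5 - d6)
    return H - h

# Illustrative values: a 5 cm-deep pothole.
print(f"delta_h = {pothole_depth(d5=5.90, d6=5.42, delta_d=0.50, h=1.2) * 100:.1f} cm")
```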
Depth Update of Road-Surface Potholes
Due to the peculiarity of potholes, their depth cannot be measured as accurately at the outset as their width and length, so this paper proposes a method of updating the depth of road-surface potholes based on the pothole-depth detection described in Section 3.3; its principle is shown in Figure 8, where ① is the first image set, ② is the second image set and ③ is the third image set. Each image set contains two frames. Every two frames form one group. In the first group, VIDAR feature-point marking is performed on the first image frame, and Equation (9) is used to solve the depth Δh1 of the potholes detected in the first group. The second and third groups repeat the process of the first group to obtain the corresponding depths Δh2 and Δh3; the maximum depth of the pothole is determined by comparing Δh1, Δh2 and Δh3.

VIDAR-Based Method for Pavement-Pothole Detection

1. Camera parameter updating based on IMU data

(1) Calibration of initial camera parameters. Calibrate the monocular camera mounted on a stationary vehicle, obtaining the camera focal length f, the mounting height h, the pitch angle ∂, and p (the pixel size of the photosensitive chip).

(2) Continuous inertial data acquisition. Starting at t = 0, continuously acquire inertial data at frequency F using an IMU rigidly connected to the monocular camera.

(3) Camera parameter updating. Calculate Δd in each period Δt according to the inertial data.
2. Image processing and pothole detection
(1) Feature point extraction at t. Segment the image at t, extract the ROIs, and take points on the upper edge of the ROIs as feature points.
(2) Calculation of the horizontal length and width of the pothole at t. Call the camera parameters at t. Assuming the feature points are located on the horizontal plane, calculate the horizontal distance di between each feature point and the camera at t, and calculate the horizontal width w and length l_hole of the pavement pothole at t.
(3) Feature point tracking and calculation of pothole depth. Acquire the road image at t (t = t + Δt), extract the ROI, and track the feature points. Compare H (H = h + Δh) with h (the height of the camera from the ground). If H ≤ h, the object is a real obstacle (one with a certain height); if H > h, the object is a pothole. Calculate the depth of the pothole Δh at t.
(4) Updated calculation of pavement pothole depth. Calculate the pavement pothole depth Δhn at time t + (2n − 1)Δt, compare the magnitudes of the Δhn, and select the maximum value (see the sketch after this list).
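Steps (3) and (4) can be summarized in a few lines; the tolerance-free comparison and the list-based history below are simplifications of the flow in Figure 9, and the numeric values are illustrative.

```python
def classify_and_update(h, H, depth_history):
    """Steps (3) and (4): compare the solved vertical distance
    H = h + delta_h with the camera height h.  H <= h indicates a real
    (raised) obstacle; H > h indicates a pothole, whose new depth
    estimate is recorded and the running maximum reported."""
    if H <= h:
        return "real obstacle", None
    depth_history.append(H - h)
    return "pothole", max(depth_history)

history = []
for H in (1.23, 1.26, 1.25):       # H solved from successive frame pairs
    label, depth = classify_and_update(h=1.2, H=H, depth_history=history)
    print(label, depth)
```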
The flow chart of the VIDAR-based road-surface-pothole-detection method studied in this paper is shown in Figure 9.
Experiment and Analysis
The experiments in this paper comprised two parts: simulation experiments under controlled scenarios and real-vehicle experiments. The experimental results show that the VIDAR-based road-surface-pothole-detection method proposed in this paper can effectively detect potholes in the road environment, and comparison experiments with classical methods show that it achieves high detection accuracy.
Simulation Experiments
On the experimental platform, experimental equipment such as IMUs and cameras were installed. To ensure the effectiveness of the experiment, a foam board with stickers was used as a road, and holes of different sizes were drilled in this foam board to serve as road potholes. The video captured by the camera generated image sequences at a frame rate of 20 fps, on which the road-surface-pothole-detection algorithm was tested. In order to simulate the road environment more realistically, two types of environments were set up: a road environment with only one pothole, and a road environment with multiple potholes of different sizes, as shown in Figure 10. The vehicles in the simulation experiment were car models and the potholes were man-made. The length of the car model was 15 cm, the width was 5.8 cm, the wheel diameter was 2.9 cm, the wheel width was 0.8 cm, and the chassis height was 0.8 cm.
The feature region and feature point extraction process for road-surface potholes using the MSER-based fast image region-matching method is shown in Figure 11.
Figure 11 is divided into four stages. The blue asterisks in the second stage represent the MSER feature points; the block colors in the third stage represent the obstacle contours identified through the feature points; and the yellow rectangular boxes in the fourth stage represent the obstacle frames extracted through the feature points. After the potholes were successfully detected, the images were matched and redundant feature points were removed from two consecutive frames using the fast MSER-based image-region-matching method, as shown in Figure 12. The feature-point-matching results in Figure 12 revealed that, in addition to the studied road-surface pothole feature points, Figure 12a matched the vehicle's edge feature points and Figure 12b matched a wall stain at the rear of the experimental platform. According to the road-surface-pothole determination process in Figure 9, these redundant feature points do not affect the detection of road-surface potholes or the calculation of the length, width and depth of potholes.
The inspection accuracy described in this paper consists of two components: one is the ability to detect potholes; the other is the difference between the dimensional information of the detected potholes and the true measurement (i.e., measurement error).In particular, the ability to detect potholes is judged by the relevant parameters of the confusion matrix.
The accuracy (A), recall (R) and precision (P) of the pothole-detection ability were measured using four quantities: TP, FP, TN and FN. Let a be a pothole correctly identified as a positive example, b a pothole incorrectly identified as a positive example, c a pothole correctly identified as a negative example, and d a pothole incorrectly identified as a negative example. Then TP = Σ ai, FP = Σ bi, TN = Σ ci and FN = Σ di, and the accuracy (A), recall (R) and precision (P) are calculated as shown in Equations (10)-(12):

A = (TP + TN) / (TP + TN + FP + FN)  (10)
R = TP / (TP + FN)  (11)
P = TP / (TP + FP)  (12)
At the same time, for the method proposed in this paper, the dimensional error in pothole detection is calculated as shown in Equation (13).
Here, M is the measurement error; r_w is the actual width of the pothole and w the measured width; r_l is the actual length and l the measured length; and r_p is the actual depth and p the measured depth.
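The metrics of Equations (10)-(12) are standard; Equation (13) is not legible in this copy, so the mean relative error over width, length and depth shown below is an assumed reading rather than the paper's exact formula. The simulation counts from Table 1 reproduce the reported 86.67%/86.67%/100%.

```python
def detection_metrics(tp, fp, tn, fn):
    """Equations (10)-(12): accuracy, recall and precision from the
    confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, recall, precision

def measurement_error(rw, w, rl, l, rp, p):
    """One plausible reading of Equation (13): the mean relative error
    of the measured width, length and depth against the true values
    rw, rl and rp."""
    return (abs(rw - w) / rw + abs(rl - l) / rl + abs(rp - p) / rp) / 3

# The simulation counts (13 detected, 2 missed, no false alarms)
# reproduce the reported A = R = 86.67% and P = 100%:
print(detection_metrics(tp=13, fp=0, tn=0, fn=2))
```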
The simulation experiment collected information from 15 potholes. The test results are shown in Table 1.
The data in Table 1 show that, of the 15 potholes, 13 were correctly identified and 2 were not. Further analysis revealed that these 2 could not be identified because their width and length were too small, so they were rejected by the algorithm during image matching.
Through calculation, the proposed method in this paper was found to have an accuracy (A) of 86.67%, a recall (R) of 86.67% and a precision (P) of 100% in simulated experiments.
The difference between the dimensional information of the detected potholes and the true measurement is shown in Table 2.
Analysis of the data in Table 2 shows that the proposed method had an average error of 4.76% in the detection of pothole length, width and depth in the simulated experiments.
From the results in Tables 1 and 2, it can be concluded that the VIDAR-based road-surface-pothole-detection method proposed in this paper is effective in detecting potholes in simulated experiments.
Experiments on Real Vehicles
In the real-vehicle experiments, an electric vehicle was used as the experimental vehicle (as shown in Figure 13). The relevant equipment was as follows: an MV-VDF300SC industrial digital camera was mounted on the vehicle as a monocular vision sensor; a HEC295 IMU was mounted on the bottom of the experimental vehicle for real-time positioning and reading of the vehicle's movement; GPS was used to pinpoint the vehicle's position; and a computing unit was used for real-time data processing. The digital camera uses a USB 2.0 standard interface and offers high resolution, accuracy and definition; the relevant parameters are shown in Table 3. In the actual calculation process, owing to the complexity of the multi-sensor data sources, fuzzy logic was used to deal with the complex system [21], combining the acquired information in a coordinated way to improve the efficiency of the system and to process the information efficiently in real scenarios.
In this paper, the Zhengyou Zhang calibration method was used to calibrate the camera. The calibration process and calibration procedure are shown in Figure 14.
The aberrations of the camera include three types: radial aberrations, thin-lens aberrations and centrifugal aberrations. The superposition of these three types of aberrations causes a nonlinear distortion, whose model can be represented in the image coordinate system as shown in Equation (14).
Here, s1 and s2 are the centrifugal aberration coefficients of the camera; k1 and k2 are the radial aberration coefficients; and p1 and p2 are the thin-lens aberration coefficients. To simplify the calculation, the centrifugal distortion of the camera is not considered in this paper, so the internal reference matrix of the camera used here can be expressed as Equation (15). The external parameters of the camera were calibrated using the edge object points of the lane line, and the calibration results are shown in Table 4. Throughout the real-vehicle experiments, the IMU obtained the angular acceleration of the self-vehicle, and the camera's pitch angle was obtained and updated by solving the camera pose with the quaternion method. The images were processed by the fast MSER-based image area-matching method, and the self-vehicle acceleration was used to calculate the horizontal distance between the self-vehicle and the road-surface pothole as well as the maximum width and maximum length of the pothole. The depth of the road-surface pothole was solved using small-aperture imaging and the fact that the vertical distance to the obstacle remains constant during real-vehicle movement.
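As an illustration of the calibration step, a minimal OpenCV sketch of Zhang's method is given below; the checkerboard geometry and file paths are assumptions, since the paper's actual calibration target is not specified in this copy.

```python
import glob
import cv2
import numpy as np

# Zhang's method: image a planar checkerboard in several poses, then
# solve jointly for the intrinsic matrix (Equation (15)) and the
# distortion coefficients.  Board geometry and paths are assumed.
pattern = (9, 6)                                  # inner corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("intrinsic matrix K:\n", K)
print("distortion (k1, k2, p1, p2, k3):", dist.ravel())
```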
Based on the above principles, the real-vehicle experiment was evaluated using the evaluation indicators in Section 4.1. Some of the test results are shown in Figure 15, and the experimental results are shown in Tables 5 and 6.
Upon calculation, the proposed method had an accuracy (A) of 91.43%, a recall (R) of 91.43% and a precision (P) of 100% in the real-vehicle experiments.
The data in Table 5 show that, of the 35 potholes, 32 were correctly identified and 3 were not. Further analysis revealed that these 3 could not be detected because the pits were too shallow to be correctly identified as potholes.
In order to better cope with the different sizes and types of potholes in the road surface, we stipulated that one of the lengths or widths of the potholes used in the experiments should be greater than 150 mm, and the depth should be greater than 50 mm.
Analysis of the 35 sets of data in Table 6 shows that the VIDAR-based road-surface-pothole-detection method proposed in this paper can detect the width, length and depth of road-surface potholes fairly accurately in the actual inspection process; the average error over the 35 sets of data was 6.23%.
The experimental pits were categorized into six types based on length, width and depth, as shown in Table 7.
In order to further verify the accuracy and correctness of the method proposed in this paper, the detection results of the Faster-RCNN and YOLO-v5 detection methods were compared with the VIDAR-based pothole-detection method proposed in this paper.
Shaoqing Ren proposed the Faster-RCNN model in 2015 [22]. The model uses a small Region Proposal Network (RPN) instead of the Selective Search algorithm, which substantially reduces the number of proposal boxes, overcomes the slowness of Selective Search in generating proposal windows, and increases the processing speed of images. YOLO (You Only Look Once) is one of the most typical algorithms in the field of target detection. YOLO-v5, proposed by Glenn Jocher in 2020, introduces adaptive anchor-frame calculation and adaptive scaling techniques, and has the advantages of a simple structure, fast speed and high accuracy. For the YOLO-v5 [23] detection method, images of various types of road potholes were divided into a training set and a validation set, and the image sequences described in Section 4.1 of this paper were used as the test set.
In the comparison experiment, the experimental vehicles were in the same position, stationary, and used the VIDAR-based pothole-detection method, the Faster-RCNN method, and the YOLO-v5 method, to detect 80 potholes of different sizes.
The detection results of the three methods are shown in Table 8, and their average errors in detecting the pothole information are shown in Table 9.
Testing method: average error
VIDAR-based road-surface-pothole-detection method: 6.86%
Faster-RCNN-based road-surface-pothole-detection method: 19.31%
YOLO-v5s-based road-surface-pothole-detection method: 15.26%

In conjunction with Table 8, the accuracy (A), recall (R), precision (P) and detection time of the three methods were further analyzed, as shown in Table 10.
Analysis of the experimental results in Tables 9 and 10 reveals that the VIDAR-based pothole-detection method excludes the interference of road obstacles and only marks and calculates the non-road obstacle feature points, so it detects potholes more correctly than Faster-RCNN and YOLO-v5s. From the overall results, because VIDAR solves the camera-pose problem through the IMU, the VIDAR-based pothole-detection method proposed in this paper improved the correct rate by 16.89%, the recall rate by 13.01% and the accuracy by 6.04%, and reduced the detection error by 12.45%, compared with the Faster-RCNN method; compared with the YOLO-v5 method, the correct rate improved by 10.8%, the recall rate by 8.78% and the precision by 2.9%, and the detection error fell by 8.4%. However, since the length, width and depth of each pothole must be detected, and the depth information is obtained from target tracking and data parsing over multiple images, the proposed method does not have a significant advantage in detection time.
Conclusions
This paper focused on detecting potholes and measuring their lengths, widths and depths by selecting and matching the feature points of the potholes and using the small-hole imaging principle of the camera. The research method in this paper achieves real-time detection of potholes on the road surface while the vehicle is moving, thus helping the vehicle to better avoid obstacles. This paper also analyzed the methods for calculating the length, width and depth of potholes in the road surface, with particular emphasis on the method for updating the depth of potholes based on feature-point matching.
As described in Section 3 of this paper, the computational methods for extracting the length, width and depth of road-surface potholes were analyzed, and a depth-updating idea and algorithm that takes the camera's field of view into account was provided.
In Section 4, the VIDAR-based road-surface-pothole-detection method proposed in this paper was simulated in an indoor controlled scenario.The ability of the method to correctly detect potholes and the accuracy of the information related to the detection of potholes were evaluated through a unique evaluation system.
In addition, Section 4 verified, through comparison experiments with the classical Faster-RCNN and YOLO-v5 detection methods, that the VIDAR-based road-surface-pothole algorithm proposed in this paper has a stronger detection capability and higher detection accuracy.
In summary, the VIDAR-based method for detecting potholes on roads proposed in this paper ensures that potholes can be identified accurately and in real time, and that the size and depth of the potholes can be detected during normal vehicle driving. This method avoids the delay of traditional road-maintenance work and has important research value for self-driving vehicles and active safety systems. The VIDAR detection method avoids the disadvantage of machine-learning methods, which can only detect known types of obstacles. Moreover, machine learning cannot detect the exact size of the target obstacle, whereas the VIDAR detection method incorporates an IMU to obtain all the information about the obstacle in front of the vehicle during travel by means of the camera imaging principle and pose analysis. The experimental results show that the method in this paper can effectively identify and detect potholes on the road surface and possesses a stronger detection capability. Compared with the Faster-RCNN method, the correct rate was 16.89% higher, the recall rate was 13.01% higher, the accuracy was 6.04% higher and the detection error was 12.45% lower; compared with the YOLO-v5s method, the correct rate was 10.8% higher, the recall rate was 8.78% higher, the accuracy was 2.9% higher and the detection error was 8.4% lower, demonstrating that the method can meet the requirements of detection in complex environments.
With the increase in vehicles on the road, the automatic detection of potholes on the road surface can effectively prevent road accidents, and at the same time can reduce the maintenance cost of the road.The VIDAR-based pothole-detection method integrates an IMU and a camera, can detect potholes of various sizes and types, and is a reliable and effective detection method.
However, owing to the limitations of the experimental site, we were not able to test more sizes and types of pavement potholes; instead, selected partial data from each type of pothole were examined. In addition, because the vehicle's own attitude was not taken into account, the road surface detected by the method in this paper must be flat and straight, and the pothole must satisfy the minimum-size rule stated above (one of its length or width greater than 150 mm, and its depth greater than 50 mm).
Figure and table captions:

Figure 8. Schematic diagram of road-surface pothole depth update.
Figure 10. Schematic diagram of road-surface potholes under the simulation experiment.
Figure 11. Feature point extraction and rectangular-box marking.
Figure 12. Feature point tracking and image matching. Panels (a,b) show the processing of two consecutive image frames: the block shading marks the position of the obstacle in the second frame, the green crosses mark the feature points of the first frame, the red circles mark the corresponding feature points of the second frame, and the yellow line segments indicate the correspondence.
Figure 14. Camera calibration and alignment process.
Figure 15. Selected results of pothole detection under the real environment.
Table 1. Simulation test results.
Table 2. The difference between the pothole dimensional information detected in the simulation experiment and the real measurements.
Table 3. Some performance parameters of the MV-VDF300SC camera.
Table 4. External parameter calibration of the MV-VDF300SC.
Table 5. Results of experimental tests on real vehicles.
Table 6. The difference between the pothole dimensional information detected in the real-vehicle experiment and the real measurements.
Table 7. Category of potholes.
Table 8. Results of the three methods.
Table 9. Comparison of the average error of three methods for detecting pothole information.
Table 10. Comparative accuracy of the three methods for the detection of the pothole information.
"year": 2023,
"sha1": "28a722bddb8ea1c5b26df7702bf0ead8dd5c1f7d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/17/7468/pdf?version=1693211392",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e36c9889727dd52235bff3fcd332571a6d4d094e",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
5022683 | pes2o/s2orc | v3-fos-license | Water-Extraction by Split-Roots of Sesbania and Pigeon Pea Exposed to Spatially Heterogeneous Distribution of Soil Water
Abstract Previous studies have suggested that the deep roots of sesbania (Sesbania sesban) function less efficiently in water acquisition than those of pigeon pea (Cajanus cajan) despite similar rooting depths. To investigate this phenomenon, both species were grown in a vertically split-root system. The top soil was watered at two-day intervals and the bottom soil was kept wet. Fifty-seven days after sowing, the watering to the top soil was withheld and the water uptake was monitored in both the layers. At any given rate of transpiration, the water influx rate per unit root surface (WIR/RS) was higher in the top soil than in the bottom soil in sesbania, despite the greater availability of water in the latter. By contrast, in pigeon pea, the WIR/RS was higher in the bottom soil than in the top soil. In sesbania, aerenchyma tissue was observed only in the cortex of the roots in the bottom soil. On the other hand, aerenchyma tissue was scarcely observed in pigeon pea roots, suggesting that the presence of aerenchyma tissue led to the reduced WIR/RS of sesbania roots in the bottom soil. Thermal image analysis showed that the stomata of sesbania leaves did not respond to water shortage. Instead, the sesbania leaves were shed in order to avoid desiccation, further reducing the potential to extract water. We therefore conclude that the water-extraction ability of deep roots was lower in sesbania than in pigeon pea as a result of aerenchyma formation and leaf shedding.
The development of deep rooting systems in plants is thought to be a drought avoidance strategy (Kramer and Boyer, 1995; Huang, 2000). In natural environments, soil water potential tends to increase downwards in the soil profile. Although rainfall events temporarily disrupt the soil water potential gradient, it recovers as a consequence of gravimetric water flow, runoff and evapotranspiration. Deep rooting systems might therefore allow plants that are experiencing drought conditions to utilize water sources deep in the soil layers, which are not available to those with shallow rooting systems. Indeed, several studies have shown that genotypes with deeper rooting systems have greater resistance to water deficit in species such as peanut (Arachis hypogaea L.) (Ketring and Reid, 1993), soybean (Glycine max (L.) Merr.) (Hirasawa et al., 1994) and cowpea (Vigna unguiculata (L.) Walp.) (Timsina et al., 1994).
However, gaining access to water sources through the root system does not always improve the water status of a plant. For instance, transpiration rates in rice (Oryza sativa L.) (Ishihara and Saito, 1987), maize (Zea mays L.) (Hirasawa and Hsiao, 1999) and soybean (Huck et al., 1983) are reduced by hydraulic resistance in their root systems even when water is available.
The presence of a partial dry region within a single root system can also cause a reduction of stomatal conductance, even when the rest of the roots are kept wet (Gowing et al., 1990; Davies et al., 2002). The mechanism of this process involves abscisic acid (ABA)-based chemical signals that are produced in the roots in the dry region. This means that deep rooting systems might not be able to fully utilize a water source if such a partial drying effect induces substantial stomatal closure.
Previously, we found that deep roots were not always successful in water acquisition (Sekiya and Yano, 2002). We attempted to collect xylem sap exudation from sesbania (Sesbania sesban L.) and pigeon pea (Cajanus cajan Druce) grown in semi-arid Zambia during the dry season. Although both species had root systems that were able to reach groundwater at a depth of approximately 2 m, only pigeon pea exuded xylem sap. Furthermore, hydrogen stable isotope analysis revealed that pigeon pea obtained groundwater and supplied it to neighboring maize plants through hydraulic lift. However, there was no evidence of this process in sesbania (Sekiya and Yano, 2004). On the basis of these results, we suppose that the deep rooting systems in the two species differ in their water acquisition properties.
The present study was designed to test this hypothesis. Sesbania and pigeon pea plants were grown in a vertically split-root system, in which the top compartment was dried and the bottom compartment was kept wet to imitate the field environment. The rate of water influx into the roots was measured in both compartments. Furthermore, thermal image analysis of canopies was carried out to determine the stomatal response of the two species to this heterogeneous soil water environment. The main aim of the study was to determine the water-extracting ability of roots in two soil layers with different moisture levels. In addition, we also investigated the potential causes of interspecific differences in water acquisition.
Experiment 1 (1) Culture system and growth condition
A split-root culture system was used in this experiment (Fig. 1). Polyvinyl chloride tubes (15 cm height, 5 cm diameter) were prepared so that their bottom ends were covered with 0.15 mm nylon mesh in order to prevent soil erosion while allowing root penetration. Each tube was filled with 400 g of loamy sand. Two tubes were then joined together using masking tape to tightly seal the junction between them. There was an air gap of approximately 8 mm between the two soils, which prevented the movement of water except through evaporation. The connected tubes were placed in a 1.0 L semi-transparent container along with 0.5 L of water. Measurements of water uptake and leaf area were carried out on four different dates, with four replications of each, so a total of 16 individual plants were required for each species. All plants were grown in a greenhouse at Nagoya University, Japan, until the measurements were made.
Sesbania (Sesbania sesban L., provided by the International Center for Research in Agroforestry: ICRAF) and pigeon pea (Cajanus cajan Druce, purchased from Snow Brand Seed Co. Ltd.) were used. Two germinated seeds were sown in the top soil of each culture system on 22 August 2001, and the seedlings were thinned to one stand when the third leaf emerged. The soil in the top tube of each culture system received 80 mL of tap water at two-day intervals and 100 mL of 1/500 Hyponex solution (5 : 10 : 5) at four-day intervals. It was estimated that the soil water content in the top tube fluctuated from 8.5 to 30% in pigeon pea and from 12 to 30% in sesbania throughout the experimental period. The soil in the bottom tube of each culture system was kept wet by maintaining the initial volume of water in the semi-transparent container. Consequently, all the roots that developed in this tube were uniformly exposed to approximately 31% soil water content and hence assumed to experience similar oxygen levels.
(2) Measurement of water influx and leaf area

Fifty-seven days after sowing (17 October 2001), the top soil of each culture system was supplied with 100 mL of water and the semi-transparent container at the bottom was filled with water to maintain the initial volume. The surfaces of the top tube and the semi-transparent container were then wrapped with plastic sheets to prevent all moisture loss other than through transpiration. Twenty-four hours after watering, the volumetric soil water content in the top tube was measured using a soil moisture meter (HydroSense, Campbell Scientific, Australia). The soil water content in the bottom tube was measured by tracing the water surface level in the semi-transparent container. These measurements determined the initial soil water content in each tube. On the same day, all of the plants were transferred into a growth chamber, which was illuminated with natural light (30/25°C; 70% RH; 14/10 h day/night).
After 24 h (19 October 2001), the first measurements of water uptake and leaf area were made. The shoots were cut at their bases. The leaves were then removed from the stems and spread on a transparent sheet, without overlap. Digitised images were taken using a scanner with a resolution of 200 dpi and an output format of 256 grey-scales. Leaf areas were calculated from the images using the NIH Image version 1.60 image-analysis software.
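A minimal sketch of the pixel-counting idea behind this measurement follows, assuming dark leaves on a light scanner background; the threshold value and file name are assumptions, not parameters reported in the paper.

```python
import numpy as np
from PIL import Image

def leaf_area_cm2(path, dpi=200, threshold=200):
    """Leaf area from a 256-grey-scale scan: count pixels darker than
    the background threshold and convert the count to cm^2 using the
    scan resolution (200 dpi, as in the paper)."""
    grey = np.asarray(Image.open(path).convert("L"))
    leaf_pixels = int((grey < threshold).sum())
    cm_per_pixel = 2.54 / dpi
    return leaf_pixels * cm_per_pixel ** 2

print(f"{leaf_area_cm2('leaves.png'):.1f} cm^2")
```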
Immediately after removal of the shoots, the water content of the soil in each tube was measured using the methods described above. The amount of water that was lost from the soil through transpiration was calculated for each tube by subtracting the soil water content from the initial values. Each of these values was then divided by the number of days after withholding water, in order to determine the water influx rate into the split-roots in each tube. The measurements of water uptake and leaf area were repeated every two days in order to assess the water influx rate into the split-roots, the amount of transpiration per plant, the leaf area and the transpiration rate per unit leaf area.
(3) Measurement of root length, root surface area, root weight per unit root volume and water influx rate

Root length and width were measured for the plants sampled on the final day of the experiment. The roots were obtained separately from the top and bottom tubes by removing the soils on a sieve (212 µm). Each root sample was divided into two sub-samples of similar fresh weight. One was preserved in FAA (formalin : acetic acid : 70% ethanol = 1 : 1 : 18 by volume) until the morphological measurements, and the other was dried at 80°C for 48 hours to determine the dry weight. Each root sample in FAA was then rinsed with water and spread on a transparent sheet, without overlap. Digitised images were produced using the methods described above. Root length was determined by diameter class, using a macro-program developed by Kimura et al. (1999) on the NIH Image version 1.60 software. Root surface area and root volume were estimated from the root length in each diameter class, assuming that the roots of a given diameter were cylindrical. Root weight per unit root volume (RW/RV) was then calculated as the root dry weight divided by the estimated root volume. The root surface area was divided by 65 days (the period between sowing and the final day) to determine the rate of increase of root surface area, from which the root surface area on each sampling date was estimated. The water influx rate per unit root surface (WIR/RS) was then calculated as the water influx rate into the split-roots divided by the root surface area for each tube.
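A minimal sketch of the cylinder-based root geometry and the WIR/RS calculation described above; the diameter classes, lengths and influx value are illustrative, not measured data from the paper.

```python
import math

def root_geometry(length_by_diameter_cm):
    """Surface area (cm^2) and volume (cm^3) of a root sample from the
    total length measured in each diameter class, treating roots of a
    given diameter d as cylinders: surface = pi*d*L, volume =
    pi*(d/2)^2*L."""
    surface = sum(math.pi * d * L for d, L in length_by_diameter_cm.items())
    volume = sum(math.pi * (d / 2) ** 2 * L
                 for d, L in length_by_diameter_cm.items())
    return surface, volume

# Illustrative diameter classes (cm) and lengths (cm), not measured data:
surface, volume = root_geometry({0.05: 1500.0, 0.10: 400.0, 0.20: 60.0})
daily_rs = surface / 65          # rate of increase of root surface area
wir_rs = 12.0 / surface          # water influx (mL/day) per unit surface
print(f"RS = {surface:.0f} cm^2, RV = {volume:.1f} cm^3, "
      f"WIR/RS = {wir_rs:.4f} mL/(cm^2*day)")
```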
(4) Observation of root cross sections
Anatomical examinations were carried out on cross sections of the root samples preserved in FAA. First-order lateral roots were randomly collected from the top and bottom soil, and cross sections were prepared by hand under a stereomicroscope. The sections were then observed under an inverted microscope (IX 70, Olympus, Japan) equipped with a digital camera, and the images were produced using image-editing software (Cool SNAP, Roper Scientific, USA).
(5) Statistical analyses
The data were analysed using analysis of variance (ANOVA). The mean separation between treatments was then determined using Fisher's protected least-significant difference (PLSD) method for all analyses. In addition, simple regression analysis was performed between WIR/RS and the transpiration rate per plant.
Experiment 2 (1) Culture system and growth condition
The same polyvinyl chloride tubes as those used in Experiment 1 were prepared so that their bottom ends were sealed with polyvinyl chloride disks. Each tube was filled with 350 g of loamy sand. Two water regimes (wet treatment and dry treatment) were tested with five replications of each, so a total of ten individual plants were required for each species. All plants were grown under controlled conditions in a growth chamber (30/25°C; 65% RH; 12/12 h day/night; light intensity 300 µmol m-2 s-1). Two germinated seeds were sown in each tube and thinned to one stand when the third leaf emerged. Each tube was weighed and watered daily to maintain 20% soil water content, and 1/500 Hyponex solution (5 : 10 : 5) was applied at five-day intervals.
(2) Water regime and thermal image analysis
Thirty-nine days after sowing, watering was withheld from one half of the tubes of each species (dry treatment); the remaining tubes received water daily to maintain 20% soil water content (wet treatment).
To compare leaf temperatures between treatments, thermal images of the leaves from both treatment groups were captured using a thermal image analyser (TVS-200, Avionics, USA) immediately before the wet-treatment group received water. Emissivity was set at 1, as an absolute measurement of leaf temperature was not required. Images were analysed using thermographic software (PicEd AVIO, METZGER EDV, Germany).
Results
Experiment 1
(1) Root length, root surface area and root weight per unit root volume
The mean length, surface area and weight per unit volume of sesbania and pigeon pea roots in the top and bottom soils are shown in Table 1. Sesbania developed significantly longer roots with a larger root surface area in the bottom soil compared with those in the top soil. By contrast, the length and surface area of pigeon pea roots did not significantly differ between the two soils. In the bottom soil, the surface area of the sesbania roots was significantly larger than that of the pigeon pea roots, although there was no statistically significant difference in root length between the two species.
Root weight per unit root volume (RW/RV) varied with the species and the soils. In each species, the roots in the bottom soil had significantly lower RW/RV than those in the top soil. In the bottom soil, the RW/RV of sesbania roots was significantly lower than that of pigeon pea roots.
(2) Changes over time in leaf area and transpiration rate per unit leaf area and per plant
Transpiration rate per plant is a function of the leaf area and the transpiration rate per unit leaf area. The values of these three parameters throughout the period during which water was withheld are shown in Fig. 2. In sesbania, the leaf area remained relatively stable after withholding water until day 3 (Fig. 2A). It had significantly decreased by day 5, owing to falling leaves. This trend continued until day 7, by which time the leaf area had been reduced to approximately 56% of the value recorded on day 1. In pigeon pea, after withholding water, some older leaves were lost between day 1 and day 3, which resulted in a significant reduction in leaf area (Fig. 2A). However, new leaves subsequently emerged and leaf area had significantly increased by day 5, reaching a stable level by day 7 (Fig. 2A). Leaf area on day 7 was approximately 79% of that recorded on day 1 in pigeon pea. The transpiration rate per unit leaf area of sesbania did not significantly change throughout the period that water was withheld (Fig. 2B). In pigeon pea, the transpiration rate per unit leaf area decreased significantly between day 1 and day 5, then showed a slight increase by day 7 (Fig. 2B).
In sesbania, the transpiration rate per plant began to decrease on day 5 and remained low until day 7 (Fig. 2C). This trend was similar to that observed in leaf area (Fig. 2A), but was the opposite of the non-significant trend observed in transpiration rate per unit leaf area. The transpiration rate per plant for pigeon pea was significantly reduced by day 3 (Fig. 2C) as a result of a reduction in both leaf area (Fig. 2A) and transpiration rate per unit leaf area (Fig. 2B). However, the transpiration rate per plant did not significantly increase until day 7 (Fig. 2C), despite the recovery of leaf area on day 5 (Fig. 2A). This might have been a result of the reduction in transpiration rate per unit leaf area on day 5 and its slight recovery on day 7 (Fig. 2B). It therefore seems likely that water efflux in sesbania is predominantly determined by leaf area, whereas water efflux in pigeon pea is controlled by a combination of both leaf area and transpiration rate.
(3) Water influx rate into split-roots and water influx rate per unit root surface
The rate of water influx into the split-roots of both species during the period that water was withheld is shown in Fig. 3. In sesbania, the water influx rate into split-roots in the top soil was significantly reduced on day 3 and subsequently remained at this level owing to the depletion of water (Fig. 3A). In the bottom soil, the water influx rate had significantly increased by day 3; however, it decreased on day 5 and had not recovered by day 7, probably as a result of the reduced leaf area (Fig. 2A).
In pigeon pea, the water influx rate into split-roots in the top soil continuously decreased until day 5 as a result of water depletion (Fig. 3B). In the bottom soil, the rate was reduced on day 3, but subsequently increased. This increase in the water influx rate into split-roots in the bottom soil can be explained by concomitant increases in both leaf area (Fig. 2A) and transpiration rate per unit leaf area (Fig. 2B). Fig. 4 shows the changes over time in WIR/RS in the top and bottom soils during the period that water was withheld. In sesbania, WIR/RS on day 1 was significantly higher in the top soil than in the bottom soil; however, this difference was not observed between day 3 and day 7 (Fig. 4A). In pigeon pea, the significant difference in WIR/RS between the two soils on day 1 likewise disappeared between day 3 and day 5; however, WIR/RS in the bottom soil increased significantly on day 7 (Fig. 4B). Fig. 5 shows the correlation between transpiration rate per plant and WIR/RS. In sesbania, at any given transpiration rate per plant, WIR/RS was higher in the top soil than in the bottom soil. By contrast, in pigeon pea, at any given transpiration rate per plant, WIR/RS in the bottom soil was higher than that in the top soil. We predicted that WIR/RS at a given transpiration rate per plant would be higher in the bottom soil than in the top soil, as water depletion progressed in the top soil and more water was available in the bottom soil. This was the case in pigeon pea, but not in sesbania. These results suggest that sesbania roots in the bottom soil had some resistance against water influx that was not present in the roots in the top soil.
(4) Root cross sections
To investigate the cause of the differences in water influx rate per unit root surface at a given transpiration rate per plant, root cross-sections were examined (Fig. 6). The most significant anatomical difference between the two species was the presence of aerenchyma in the cortex of sesbania roots from the bottom soil; these tissues were not observed in pigeon pea roots. The low RW/RV of sesbania roots in the bottom soil may be attributed to the presence of aerenchyma tissues in the cortex.
Experiment 2
Fig. 7 shows thermal images of the canopies of sesbania and pigeon pea plants from the wet and dry treatment groups. The soil water content for the dry treatment group corresponds to approximately -170 kPa, according to the soil water retention curve calculated previously. Significant differences in leaf temperature between the two treatments were observed in pigeon pea, but not in sesbania. Differences in leaf temperature measured under given atmospheric conditions can be attributed to differences in transpiration rate (Hashimoto et al., 1984; Merlot et al., 2002). As plants from the dry treatment group experienced dry conditions for only a two-day period, differences in leaf surface morphology were not expected between treatments. It is therefore possible that the differences in leaf temperature were caused by stomatal behaviour. These results suggest that pigeon pea reduced its stomatal apertures in response to the dry treatment, whereas the stomata of sesbania remained open even at a soil water potential of approximately -170 kPa.
Discussion
The split-root system used in the present study allowed us to evaluate the water-extraction ability of roots in different soil layers without the need for isotope labels. As a result, we have revealed that the deep rooting system of sesbania is unable to fully exploit water sources.
We propose that the presence of aerenchyma tissues in the root cortex was the major cause of the reduced WIR/RS in the deeper roots of sesbania (Fig. 6). Aerenchyma formation is often associated with dense cellular packing, suberin deposits and lignification in root cells, which reduces radial oxygen loss (Visser et al., 2000; McDonald et al., 2002; Colmer, 2003). All of these factors are also thought to reduce the water influx rate through increased hydraulic resistance in roots (Kramer and Boyer, 1995). In addition, Miyamoto et al. (2001) suggested that water shortage in well-watered rice was caused by the presence of barriers in the peripheral layers and the endodermis as a result of aerenchyma formation. These studies support our theory that aerenchyma formation in the deeper roots of sesbania increases the hydraulic resistance and lowers the WIR/RS. Stomatal behaviour also contributed to the interspecific difference in the water-extraction ability of the deeper roots. Interestingly, the stomata of sesbania lacked the predicted response to water shortage (Fig. 7). This might have caused sesbania to shed its leaves in order to avoid desiccation (Fig. 2), as water depletion progressed in the top soil and the water supply from the bottom soil remained stagnant, probably owing to the high hydraulic resistance in the deeper roots (Fig. 3). As a consequence, there was a decrease in the driving force of water acquisition, which lowered the WIR/RS in the bottom soil (Fig. 5).
Fig. 7. Thermal images of sesbania and pigeon pea plants from the dry and wet treatment groups. The values above and below the images indicate the temperature and soil water content, respectively.
It is well known that stomatal behaviour is regulated by ABA, which accumulates around the guard cells in response to drought stress (Assmann and Shimazaki, 1999; Schroeder et al., 2001). ABA is also known to be involved in leaf senescence, and so, by promoting senescence, it might indirectly increase the formation of ethylene, which stimulates leaf shedding (Taiz and Zeiger, 1998). In our study, the leaves of sesbania turned yellow before they fell, which is a typical symptom of leaf senescence, suggesting the accumulation of ABA in the leaves. However, leaf temperature showed little response to drought stress in sesbania. Therefore, we suggest that the guard cells of sesbania might be insensitive to ABA signalling. Further research is required to clarify this issue.
Our study highlights that simple measurements of root length density are not always sufficient to estimate the water-extraction ability of roots, especially those in deeper soil layers (Table 1). Several previous authors have reported that root length density in the deeper soil layers is positively correlated with water relation parameters in plants under drought conditions, and have attributed this correlation to water acquisition by the deep roots (Ketring and Reid, 1993; Hirasawa et al., 1994; Timsina et al., 1994). In addition, some investigators have attempted to compare the relative abilities of different species competing for soil water within a plant community by measuring root length densities (Lehmann et al., 1998; Smith et al., 1999; Livesley et al., 2000). However, as shown in the present study, the water-extraction ability of the deeper roots might differ from that predicted on the basis of root length, as a result of structural differences both within and between species.
Aerenchyma formation is recognized as an adaptive response to oxygen deficiency in a wide range of wetland species (Armstrong and Drew, 2002). Rice plants have been shown to develop aerenchyma tissue even under upland conditions. Several studies have suggested that deeper rooting systems should be targeted as an important primary trait for upland rice production under drought conditions (Fukai and Cooper, 1995; Kondo et al., 2003). However, our results suggest that improvements in the anatomical response to soil water environments should also be considered.
In conclusion, the deep roots of sesbania have a relatively low water-extraction ability compared with those of pigeon pea. We propose that aerenchyma formation is a primary factor in the low water-extraction ability of the deeper roots of sesbania. Furthermore, leaf shedding in response to water shortage also reduces the water influx into the deep roots of sesbania. The leaf shedding in this species appears to be induced by the lack of a stomatal response to water shortage.
"year": 2006,
"sha1": "5c9366067b1d1cc02d813d0385a5835d5cd746c1",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1626/pps.9.191",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "26a463d8a371434af91de2ab47b479b85e1116bc",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Messenger RNA sequencing and pathway analysis provide novel insights into the biological basis of chickens’ feed efficiency
Abstract
Background
Advanced selection technologies have been developed and continually optimized to improve traits of agricultural importance; however, these methods have been applied primarily without knowledge of the underlying biological changes that may be induced by selection. This study aims to characterize the biological basis of differences between chickens with low and high feed efficiency (FE), with a long-term goal of improving the ability to select for FE.
Results
High-throughput RNA sequencing was performed on 23 breast muscle samples from commercial broiler chickens with extremely high (n = 10) and low (n = 13) FE. An average of 34 million paired-end reads (75 bp) were produced for each sample, 80% of which were properly mapped to the chicken reference genome (Ensembl Galgal4). Differential expression analysis identified 1,059 genes (FDR < 0.05) that were significantly differentially expressed in breast muscle between the high- and low-FE chickens. Gene function analysis revealed that genes involved in muscle remodeling, inflammatory response and free radical scavenging were mostly up-regulated in the high-FE birds. Additionally, growth hormone and IGFs/PI3K/Akt signaling pathways were enriched in differentially expressed genes, which might contribute to the high breast muscle yield in high-FE birds and partly explain the FE advantage of high-FE chickens.
Conclusions
This study provides novel insights into transcriptional differences in breast muscle between high- and low-FE broiler chickens. Our results show that feed efficiency is associated with breast muscle growth in these birds; furthermore, some physiological changes, e.g., inflammatory response and oxidative stress, may occur in the breast muscle of the high-FE chickens, which may be of concern for continued selection for both of these traits together in modern broiler chickens. Electronic supplementary material: The online version of this article (doi:10.1186/s12864-015-1364-0) contains supplementary material, which is available to authorized users.
Background
Genetic selection has tremendously improved livestock and plant production over the past 50 years [1,2]. Advanced selection technologies have been developed and continually optimized to genetically improve traits of agricultural importance [1,3,4]. However, these methods have been applied primarily without knowledge of the underlying biological changes that may be induced by selection [5,6]. Previous studies reported a possible association between selection for improved performance and an increased rate of physiological and metabolic disorders in modern breeds [7-9]. For example, chickens and turkeys selected for high growth rate have shown increased incidence of muscle disorders, heart failure syndrome and ascites [10-12]. A detailed characterization of traits of breeding interest may help to anticipate unfavorable consequences of long-term selection programs and to adjust or perhaps redefine breeding objectives accordingly.
One of the most important traits in broiler chicken production is feed efficiency (FE), which describes the chicken's ability to convert feed into body weight. As feed cost represents nearly 70% of the total cost of poultry production, improving FE has been an important goal in broiler chicken breeding programs [13]. Selection for FE in broiler chickens can be accomplished using different measurements and procedures. A widely used measure of FE in broiler chickens is residual feed consumption (RFC), defined as the difference between an animal's actual feed intake and its expected feed intake on the basis of body weight and growth [13]. Although a moderate heritability for RFC, ranging from 0.42 to 0.45, has been reported in a previous study using more diverse chicken populations [14], to our knowledge this trait exhibits lower heritability (~0.2) in commercial pure lines, possibly explaining the relatively slow progress in improving FE in commercial breeding programs. Insights into the biological basis of differences in chicken FE are required to develop more efficient and sustainable selection strategies.
Previous studies have revealed a link between mitochondrial function and FE in broiler chickens. Lower electron transport chain coupling and greater hydrogen peroxide (H2O2) production were observed in mitochondria of low-FE birds [15]. A microarray gene expression analysis of breast muscle samples from high- and low-FE broiler chickens identified 782 differentially expressed genes [16,17]. Most of the genes up-regulated in high-FE chickens were related to anabolic metabolism, whereas genes up-regulated in low-FE chickens were associated with muscle fiber development, muscle function, cytoskeletal organization and stress response [16]. With the rapid development of next-generation sequencing technologies, RNA sequencing (RNA-seq) has been replacing microarray technology for transcriptome-wide gene expression analysis. Avoiding technical issues inherent to microarrays, such as cross-hybridization and narrow ranges of signal detection, RNA-seq can provide more accurate and comprehensive information regarding changes in gene expression between different conditions or different phenotypes [18-21]. Therefore, a global gene expression study using RNA-seq is required for a better understanding of the molecular basis of FE in broiler chickens.
The objective of this study is to characterize the biological basis of differences between high- and low-FE chickens through breast muscle RNA-seq analysis. Using tissue samples from extreme high- and low-FE broiler chickens, the present study identifies genes and pathways differentially regulated in breast muscle between these two groups of chickens, providing important information toward understanding the biological basis of variation in FE in broiler chickens.
Methods
Animals and sample collection
Six groups of 400 male commercial broiler chickens from a cross between three commercial broiler pure lines were sampled at 29 days of age from the field in the Delmarva region of the United States and transferred into individual cages for feed efficiency measurement. The cages were arranged in rows at two levels, top or bottom, and each row had 100 cages. The birds were individually weighed at the beginning (29 days of age, BW29) and end of the FE test (46 days of age, BW46) and fed ad libitum until 47 days of age. At 47 days of age, chickens were euthanized by cervical dislocation. Breast muscle samples (~1-2 g) were obtained from high- and low-FE birds, immediately flash frozen in liquid nitrogen and kept at -80°C until further processing. Body weight post-euthanization (BW47) and breast muscle weight (BMW47) were also recorded and used for estimating the percentage of breast muscle [(BMW47/BW47) × 100]. The total feed consumption of each bird was measured by subtracting the total amount of feed left at the end of the test (46 days of age) from the total amount of feed provided to each bird at the beginning of the test (29 days of age). To measure each broiler's FE, residual feed consumption (RFC) was calculated using the following equation:

RFC = FC − [c + Level + Row(Level) + b1 × BW29 + b2 × BW46]

where FC represents the feed consumption of each bird; Level represents the fixed effect of row location (top or bottom level) on FC; Row(Level) represents the fixed effect of row nested within row location; BW29 is the initial (29-day) body weight; BW46 is the ending (46-day) body weight; c is the intercept; and b1 and b2 are the partial regression coefficients of FC on BW29 and BW46.
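In practical terms, RFC is the residual from a linear model of feed consumption. The sketch below shows one way such a calculation could be implemented; it is not the authors' code, and the input file and column names (fc, level, row, bw29, bw46) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

birds = pd.read_csv("feed_trial.csv")  # hypothetical per-bird records

# Feed consumption modeled on cage level, row nested within level
# (expressed here as the level-by-row interaction), and body weights
# at 29 and 46 days of age.
model = smf.ols("fc ~ C(level) + C(level):C(row) + bw29 + bw46",
                data=birds).fit()

# Residual feed consumption: actual feed intake minus expected intake.
birds["rfc"] = birds["fc"] - model.fittedvalues

# Extreme residuals define the efficiency groups; under the usual
# convention, lower (more negative) RFC indicates higher feed efficiency.
birds = birds.sort_values("rfc")
```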
After excluding outliers and erroneous data (3.3%) and data from birds with defects (1.2%; leg and wing problems, etc.), samples from clinically healthy chickens exhibiting the highest (n = 12) and lowest (n = 13) RFC across the six groups of 400 birds were selected for cDNA library preparation. Two samples from the high-FE group did not yield sufficient cDNA library material, so samples from 23 birds, 10 high- and 13 low-FE, were used for further analysis. The protocols were submitted to, and the use of the collected data and samples for research was approved by, the University of Delaware Agricultural Animal Care and Use Committee.
RNA isolation
The frozen breast muscle samples were pulverized by hammering, and the pulverized tissues were stored at -80°C until RNA extraction. Total RNA was isolated from 70-100 mg of fragmented breast muscle tissue using a mirVana™ miRNA isolation kit (Ambion®; Austin, TX), according to the manufacturer's protocol. RNA quantity and quality were assessed using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies; Wilmington, DE) and an Agilent 2100 bioanalyzer (Agilent Technologies; Santa Clara, CA). The RNA integrity numbers of all the RNA samples were above 8.0.
RNA-seq library preparation and sequencing
In total, 23 cDNA libraries were constructed using an Illumina TruSeq stranded RNA sample preparation kit following the manufacturer's instructions (Illumina Inc.; San Diego, CA). Briefly, polyA-containing mRNA molecules were purified by oligo (dT) magnetic beads and subsequently fragmented. The purified RNA fragments were reverse transcribed into first-strand cDNA using SuperScript II reverse transcriptase (Invitrogen™; Austin, TX). The second-strand cDNA was synthesized using dUTP instead of dTTP; as a result, the second strand was not amplified during PCR, because the polymerase cannot read through dUTP. The double-stranded cDNA was adenylated at the 3' end and ligated to the Illumina indexing adapters. After PCR enrichment, cDNA quantity and quality were assessed using a NanoDrop ND-1000 spectrophotometer and Agilent 2100 bioanalyzer. The average size of the synthesized cDNA fragments was approximately 260 bp. cDNA libraries were normalized to 10 nM for each sample and then pooled together and sequenced on four lanes of an Illumina HiSeq 2000 sequencer at the Delaware Biotechnology Institute, University of Delaware. Approximately 68 million fragments per sample were sequenced with 75-bp reads from both ends.
Mapping reads to the chicken reference genome
Before read alignment, the quality of raw sequence reads was checked using the FastQC program, and nucleotide calls with a quality score of 28 or higher were considered very good quality [22]. Sequencing reads from each sample were mapped to the chicken reference genome [Ensembl Galgal4 (GCA_000002315.2)] using the TopHat program [23]. Because only the strand generated during first-strand synthesis was sequenced, "--library-type fr-firststrand" was applied as one of the parameters in our read alignment using TopHat. Only one alignment for a given read was allowed in our analysis (i.e., -g 1), and both reads from a single sequence fragment were required to be mapped to the reference genome in a concordant manner (i.e., --no-discordant and --no-mixed). To summarize the alignment statistics, the resulting alignment files (SAM files) were analyzed using SAMtools [24].
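A minimal sketch of how these settings might be assembled into a TopHat invocation is shown below; the index name, output directory and FASTQ file names are hypothetical, and this is not the authors' exact command.

```python
import subprocess

cmd = [
    "tophat",
    "--library-type", "fr-firststrand",  # only the first-strand cDNA was sequenced
    "-g", "1",                           # report at most one alignment per read
    "--no-discordant",                   # require concordantly mapped pairs
    "--no-mixed",                        # require both mates of a pair to map
    "-o", "tophat_out/sample01",         # hypothetical output directory
    "galgal4_index",                     # Bowtie index of Ensembl Galgal4
    "sample01_R1.fastq.gz",
    "sample01_R2.fastq.gz",
]
subprocess.run(cmd, check=True)
```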
Differential expression analysis
Cuffdiff, a companion tool of Cufflinks (v 2.1.1), was used to quantify the gene expression levels and to perform a differential expression test [25]. The fragment counts were normalized via a geometric method, as described previously [26]. Genes with a false discovery rate of less than 5% (i.e., FDR < 0.05) were considered significant.
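Assuming Cuffdiff's standard tab-delimited output (gene_exp.diff), the significance filter can be applied as in the sketch below; the file handling here is illustrative rather than the authors' pipeline.

```python
import pandas as pd

diff = pd.read_csv("gene_exp.diff", sep="\t")

# Keep genes that passed Cuffdiff's test and are significant at FDR < 0.05.
sig = diff[(diff["status"] == "OK") & (diff["q_value"] < 0.05)]

# The sign of log2(fold_change) depends on the order in which the two
# conditions were passed to Cuffdiff.
up = sig[sig["log2(fold_change)"] > 0]
down = sig[sig["log2(fold_change)"] < 0]
print(len(sig), len(up), len(down))
```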
NanoString nCounter® gene expression assay
The gene expression data were verified with NanoString nCounter® technology, as described previously [27]. Briefly, 23 RNA samples were submitted to NanoString, Inc. (Seattle, WA, USA) for the gene expression assay. In addition to 12 housekeeping genes, 192 endogenous transcripts were selected across multiple ongoing RNA-seq projects in our laboratory as target sequences to be measured. Designs of specific probes for the target sequences were provided by NanoString [27] and were screened to avoid areas of high SNP density. A total of 100 ng of each RNA sample was hybridized to the CodeSet®, which was composed of both capture and reporter probes [27]. After 16 hours of incubation, the samples were transferred to the nCounter® Prep Station and Digital Analyzer for transcript quantification. Positive control normalization factors and reference genes were used to normalize the raw data for biological analysis [27]. Log2 ratios of gene expression levels between the high- and low-FE groups were calculated for comparison with the corresponding log2 ratio values from the RNA-seq analysis.
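One common two-step nCounter normalization, assumed here for illustration (the exact pipeline used is not specified beyond positive controls and reference genes), scales each sample by its positive-control geometric mean and then by the geometric mean of the housekeeping genes; all file and gene names below are placeholders.

```python
import numpy as np
import pandas as pd

counts = pd.read_csv("ncounter_raw_counts.csv", index_col=0)  # genes x samples, hypothetical
pos_ctrl = [g for g in counts.index if g.startswith("POS_")]  # spike-in positive controls
housekeeping = ["HK01", "HK02", "HK03"]                       # placeholders for the 12 reference genes

def geo_mean(x):
    return np.exp(np.log(x).mean())

# Step 1: positive-control normalization factor per sample.
pos_gm = counts.loc[pos_ctrl].apply(geo_mean)
counts = counts * (pos_gm.mean() / pos_gm)

# Step 2: content normalization using the housekeeping genes.
hk_gm = counts.loc[housekeeping].apply(geo_mean)
norm = counts * (hk_gm.mean() / hk_gm)

# Log2 ratio between group means, comparable to the RNA-seq fold changes;
# sample labels starting with "H"/"L" are a hypothetical naming scheme.
high_fe_cols = [c for c in norm.columns if c.startswith("H")]
low_fe_cols = [c for c in norm.columns if c.startswith("L")]
log2_ratio = np.log2(norm[high_fe_cols].mean(axis=1) / norm[low_fe_cols].mean(axis=1))
```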
Ingenuity pathway analysis
Genes differentially expressed (FDR < 0.05) between high- and low-FE birds were included in pathway and function analysis using Ingenuity Pathway Analysis (IPA; Ingenuity® Systems, http://www.ingenuity.com). The functional and canonical pathway analysis was used to identify significant biological functions and pathways. Functions and pathways with P-value < 0.05 (Fisher's exact test) were considered statistically significant. IPA's upstream regulator analysis function was used to identify potential transcriptional regulators that could explain the observed changes in gene expression between high- and low-FE chickens. The activation z-score was calculated to predict activation or inhibition of transcriptional regulators based on published findings accessible through the Ingenuity knowledge base. Regulators with a z-score greater than 2 or less than -2 were considered significantly activated or inhibited.
Results and discussion
Phenotype measurements
A summary of the phenotype measurements from the 23 high-FE (n = 10) and low-FE (n = 13) chickens is presented in Table 1. Although the initial bird weights (BW29) are not significantly different between these two groups (P = 0.661), the mean body weight of high-FE birds is significantly heavier than that of low-FE birds at the end of the test (P < 0.05), and the high-FE chickens consumed significantly less feed than low-FE birds during the test (P < 0.01). Consequently, the difference in mean RFC values between high- and low-FE chickens is highly significant (P < 0.001). The mean breast muscle weight and breast muscle percentage of the high-FE birds are significantly higher than those of low-FE birds (P < 0.05).
Transcriptional profile of chicken breast muscle
A total of 23 cDNA libraries were constructed using RNA samples of breast muscle tissues from 10 high- and 13 low-FE chickens and sequenced for 75 cycles from both ends on four lanes. In total, about 1.573 billion 75-base sequence reads are obtained, with an average of 393 million raw reads per lane. No significant difference in the number of reads between these four lanes is observed. The total number of reads for one sample ranges from 50 million to 88 million, with an average of 68 million reads per sample. Based on quality check reports, the average quality score of the sequence reads is high, approximately 38, with the average GC content ranging from 49% to 51%. On average, 80% of the paired-end reads are properly mapped to the chicken reference genome (Ensembl Galgal4). The summary of alignment for all samples is shown in Additional file 1. The relative expression of a gene is normalized as fragments per kilobase of transcript per million mapped fragments (FPKM), which is proportional to the number of cDNA fragments originating from the gene transcript. The lower limit of gene expression is set at 0.1 FPKM in at least one of the 23 samples. According to this limit, 14,148 genes are identified as being expressed in the breast muscle tissues. To assess the consistency of the gene expression levels between different samples, Pearson's correlation coefficient was calculated for each pairwise combination of samples [28]. The average pairwise correlation coefficient between samples is 0.794, reflecting fairly consistent gene expression profiles.
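The expression filter and consistency check can be summarized in a few lines, as in this illustrative sketch (the input file name and layout are assumptions):

```python
import pandas as pd

fpkm = pd.read_csv("genes.fpkm_table.tsv", sep="\t", index_col=0)  # genes x 23 samples

# Keep genes with at least 0.1 FPKM in at least one sample.
expressed = fpkm[(fpkm >= 0.1).any(axis=1)]
print(f"{len(expressed)} genes expressed")  # ~14,148 in the study

# Pairwise Pearson correlation between samples; the average of the
# off-diagonal entries summarizes between-sample consistency.
corr = expressed.corr(method="pearson")
n = corr.shape[0]
mean_pairwise = (corr.values.sum() - n) / (n * (n - 1))
print(f"mean pairwise r = {mean_pairwise:.3f}")
```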
Gene differential expression analysis
Differentially expressed genes were detected by Cuffdiff, an internal program of Cufflinks. Of the 17,107 genes in the Ensembl database (Ensembl Galgal4), 1,059 were identified as having significantly different expression levels between high- and low-FE chickens (q-value < 0.05) (Additional files 2 and 3). All 1,059 of these genes have a fold change greater than 1.3, and 642 genes (60.6%) have a fold change above 1.5. Among the 1,059 differentially expressed genes, 327 and 732 genes are down- and up-regulated in high-FE birds, respectively (Additional file 4). This relative imbalance in the number of down- and up-regulated genes is likely due to the increased breast muscle regeneration and inflammatory response in the high-FE chickens (discussed below). Since muscle development and inflammatory response require higher levels of activators such as growth factors, hormones and cytokines, gene expression may be positively regulated by these activators in the breast muscle of the high-FE birds.
Confirmation of RNA-seq data
To verify the gene expression data obtained from RNA-seq analysis, we selected 192 target genes (71 significant and 121 non-significant) along with 12 housekeeping genes for the NanoString nCounter® assay. Comparison of the normalized counts from NanoString with FPKM values derived from RNA-seq shows high concordance, with pairwise Pearson's correlation coefficients ranging from 0.70 to 0.98. The Pearson's correlation coefficients of fold change in gene expression levels [log2 ratio (high-FE/low-FE)] between NanoString and RNA-seq results are also high: 0.7532 for all genes and 0.8332 for the 71 significantly differentially expressed genes (Figure 1). The correlation of log2 fold-change between the two platforms is notably affected by lowly expressed genes, and it increases when genes with low expression levels are excluded (Figure 2). This explains why the significantly differentially expressed genes show greater correlation compared with all the selected genes: the FPKM values of the significant target genes are equal to or greater than 0.4, whereas 23 of the 121 non-significant target genes have an FPKM value less than 0.4.
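The cutoff analysis behind Figure 2 can be expressed as a short loop; the sketch below uses hypothetical input columns and cutoffs and is meant only to illustrate the procedure.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table: one row per target gene, with its minimum FPKM across
# samples and its log2 fold-change from each platform.
df = pd.read_csv("platform_comparison.csv")

for cutoff in [0.0, 0.1, 0.2, 0.4, 0.8, 1.6]:
    sub = df[df["min_fpkm"] >= cutoff]
    r, _ = pearsonr(sub["log2fc_rnaseq"], sub["log2fc_nanostring"])
    print(f"FPKM >= {cutoff}: r = {r:.3f} (n = {len(sub)})")
```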
Overview of IPA analysis
To fully interpret the biological implications of the results from the differential expression analysis, all significant genes with their respective log2 fold-changes were submitted for Ingenuity® Pathway Analysis. The top 10 up-regulated and top 10 down-regulated genes in high-FE chickens are listed in Additional file 5. A summary of the IPA analysis, including the top five biological functions and canonical pathways, is presented in Table 2. Generally, most of the differentially expressed genes are related to immune response and metabolic processes. Genes up-regulated in high-FE birds are associated with cellular function, movement, growth and proliferation, cell-to-cell signaling and interaction, and cell death and survival. In contrast, genes down-regulated in high-FE birds are associated with metabolic processes including lipid, nucleic acid and carbohydrate metabolism, molecular transport and small molecule biochemistry (Table 3). Differing from the results of earlier work on chicken FE conducted using a 44K oligo microarray [16,17], genes involved in muscle fiber development, cytoskeletal organization and stress response are found to be up-regulated in the high-FE chickens (rather than in the low-FE chickens) in the current study. The discrepancy probably stems from the different genetic composition of the broiler chickens in the two studies. Birds in the previous studies were from a male broiler pure line in which greater reactive oxygen species (ROS) production was observed in the low-FE chickens than in the high-FE birds [15], the reverse of our findings, which indicate that ROS production is increased in the high-FE birds. Since ROS can act as a second messenger and mediate gene expression in the cell through signal transduction, differential gene expression is likely driven, in part, by inherent differences that are modulated by ROS-mediated mechanisms. A further comparison of the broiler chickens used in our study with the pure line chickens used in these previous FE studies is given below.
Upstream regulator analysis through IPA predicted the cascade of upstream transcriptional regulators that can potentially explain the differences in gene expression profile between high- and low-FE chickens. A summary of the upstream regulators identified by IPA is presented in Additional file 6. A total of 27 transcriptional regulators are predicted to be activated or inhibited (24 activated and 3 inhibited) in the high-FE chickens, of which 24 regulators are considered significant with P-value < 0.05 (21 activated and 3 inhibited).
Figure 1. Correlation of log2 fold-change between RNA-seq and NanoString for significantly differentially expressed target genes. Pearson's correlation of log2 fold-changes in gene expression levels, i.e., "log2 ratio (high-FE/low-FE)", between results from RNA-seq and NanoString nCounter® technology for the 71 significantly differentially expressed target genes.
Increased muscle growth and remodeling in high-FE chickens
Of all differentially expressed genes, 32 are associated with muscle development (Additional file 7), supporting the increased breast muscle weight in the high-FE birds. Among them, both hepatocyte growth factor (HGF) and insulin-like growth factor 2 (IGF2) encode key growth factors that have autocrine or paracrine effects on chicken skeletal muscle development and regeneration [29]. HGF not only activates the proliferation of quiescent muscle satellite cells, it can also induce the migration of activated satellite cells to injured sites [30]. IGF2 acts as a crucial regulator in muscle regeneration by stimulating muscle cell differentiation as well as inducing muscle cell hypertrophy [31]. Other muscle growth-related genes that are up-regulated in the high-FE chickens include myogenin (MYOG), cysteine and glycine-rich protein 3 (CSRP3), myoferlin (MYOF), glypican 1 (GPC1), protein tyrosine phosphatase, receptor type, A (PTPRA) and gap junction protein (GJA1). As a member of the myogenic regulatory factors (MRFs), MYOG is essential for the fusion of myoblasts into myotubes during muscle growth and regeneration [32]. The CSRP3 gene encodes muscle LIM protein, which is able to increase the activity of MRFs and plays a critical role in enhancing myogenesis [33]. The MYOF-encoded protein is a fundamental modulator of myoblast fusion, highly expressed during muscle repair and regeneration [34]. The stimulatory effects of GPC1 on muscle satellite cell differentiation and myotube formation were reported in turkeys [35]. The protein encoded by the PTPRA gene is a signaling molecule that was found to increase myogenesis of rat muscle L6 cells [36]. The GJA1-encoded protein is a major component of gap junctions and is required for myogenesis and regeneration [37]. Collectively, the up-regulation of genes that positively regulate muscle growth indicates that muscle growth and development are elevated in the high-FE chickens.
Figure 2. The correlation of log2 fold-change between RNA-seq and NanoString increased with gene expression levels. Pearson's correlation of log2 fold-changes, i.e., "log2 ratio (high-FE/low-FE)", between RNA-seq and NanoString is computed for different sets of target genes that are selected based on RNA-seq gene expression levels (FPKM value). The x-axis represents the minimum FPKM cutoff used for gene filtering. The y-axis represents the Pearson's correlation of log2 fold-change between RNA-seq and NanoString.
In addition to genes involved in muscle development, genes associated with muscle hypertrophy, including F-box protein 32 (FBXO32; fold change = −1.879), F-box protein 40 (FBXO40; fold change = −1.879), F-box protein 9 (FBXO9; fold change = −1.347), forkhead box O3 (FOXO3; fold change = −1.540) and myotrophin (MTPN; fold change = 1.426), are found differentially expressed in the breast muscle of chickens with high versus low FE. MTPN, a well-known positive growth factor in promoting muscle growth [38], is up-regulated in high-FE chickens. The increased MTPN expression may indicate that myocyte growth and protein synthesis are augmented in the breast muscle of high-FE birds, accordingly contributing to breast muscle hypertrophy in these chickens. Furthermore, the down-regulation of FOXO3 and F-box family proteins in high-FE chickens further explains muscle growth differences between high- and low-FE chickens. The protein encoded by FOXO3 is a master regulator of both the autophagy-lysosome and ubiquitin-proteasomal pathways, promoting protein degradation and thereby contributing to muscle atrophy [39]. Proteins from the F-box family mediate the interaction between substrates and ubiquitin-conjugating enzymes, which facilitates proteolysis in diverse tissues [40]. Of them, the FBXO32-encoded protein, known as atrogin 1, is a well-recognized muscle-specific ubiquitin ligase leading to muscle atrophy in a wide range of diseases [41-43]. The FBXO40-encoded protein has also been proposed to play a role in muscle atrophy in mammals [44]. Thus, the decreased expression of atrophy-related genes in breast muscle of high-FE birds suggests that muscle protein loss is reduced in high-FE chickens in contrast to low-FE birds. The transcription of these genes is regulated by the PI3K/Akt signaling pathway, which will be discussed later. Taken together, the up-regulation of MTPN and down-regulation of FOXO3 and FBXO family genes in the high-FE chickens suggest that birds with high FE may have elevated protein synthesis and decreased protein degradation in their breast muscle.
Genes associated with extracellular matrix (ECM) remodeling are also up-regulated in the high-FE birds. The ECM of skeletal muscle serves as a scaffold for maintaining the structure of muscle and guiding new fiber formation [45]. Muscle regeneration is frequently accompanied by the degradation of ECM because this facilitates satellite cell migration to specific sites for proliferation and fusion into myotubes [32,46]. Therefore, the up-regulation of genes involved in ECM remodeling implies that muscle remodeling is increased in the breast muscle of high-FE chickens. Matrix metalloproteinases (MMPs) are the main endopeptidases responsible for degrading all kinds of ECM and consequently play an important role in mediating muscle cell migration and regeneration [47,48]. As presented in Additional file 8, six genes from the MMP family are differentially expressed in our study, all of which are up-regulated in high-FE birds. Of the proteins encoded by these genes, MMP1 and MMP13 belong to the MMP collagenases, which are capable of cleaving interstitial collagen types I, II and III [49]. In an in vitro wound-healing assay, MMP1 was able to promote myoblast migration and differentiation by increasing the expression of N-cadherin and β-catenin or pre-MMP-2 and TIMP [50]. MMP13 also plays a role in muscle regeneration; it is expressed in all muscle cells during muscle regeneration, and its expression level is positively correlated with the extent of muscle damage [51]. MMP9 is a gelatinase that also relates to muscle regeneration [49]. Evidence showed that the expression levels of MMP9 were greatly increased in response to inflammation and the activation of satellite cells in injured muscle [52-54]. However, contrary to the positive function of MMP1 in muscle regeneration, a recent study revealed that MMP9 could lead to muscle cell necrosis, inflammation and fibrosis [55]. Collectively, the up-regulation of MMPs in the high-FE birds suggests augmented muscle remodeling in these birds compared with the low-FE chickens.
The IPA upstream regulator analysis provides additional support for our conclusions regarding muscle development and remodeling in the high-FE birds. According to the predictions from IPA, several transcription factors involved in muscle development are activated in high-FE chickens. As a regulator of postnatal muscle growth, JunB is predicted to be activated in the breast muscle of the high-FE birds (P-value = 1.2E-03, z-score = 2.200). JunB can stimulate myosin expression to elevate protein synthesis, thereby maintaining muscle mass and promoting muscle hypertrophy [56]. Additionally, JunB can suppress the transcription of FOXO3 and thereby inhibit protein degradation in muscle [57]; therefore, activation of JunB in the high-FE birds may be one of the causes of the down-regulation of FOXO3, as well as of FBXO32, in breast muscle of these birds. In addition, through the IPA analysis, JunB is predicted to activate the transcription of MMP1, MMP9, MMP13, fibronectin 1 (FN1), heme oxygenase 1 (HMOX1), neutrophil cytosolic factor 2 (NCF2) and interleukin 1 receptor-like 1 (IL1RL1) (Figure 3A). As discussed above, FN1, MMP1 and MMP13 are all positively correlated with muscle satellite cell proliferation. Thus, JunB may also have increased myogenesis through activating the transcription of these genes in the high-FE birds.
Apart from JunB, a main transcription factor in the formation of mature sarcomeres, myocyte-specific enhancer factor 2C (MEF2C), is predicted to be activated in breast muscle of high-FE birds [58,59]. The protein encoded by MEF2C is a member of the myocyte enhancer factor 2 (MEF2) family, which directly cooperates with MRFs and enhances skeletal muscle development [60]. In the present study, MEF2C is predicted to be an activated upstream regulator that increases the transcription of GJA1, MMP13, MYOG, myozenin 2 (MYOZ2; fold change = 2.400) and ATPase, Ca2+ transporting, cardiac muscle, slow twitch 2 (ATP2A2; fold change = 1.390) (Figure 3B). GJA1, MMP13 and MYOG are all involved in myogenesis and exert positive effects on skeletal muscle growth and regeneration, as discussed above. Thus, MEF2C's activation may augment muscle development in high-FE birds. Moreover, MYOZ2 is also predicted to be up-regulated by MEF2C. The MYOZ2-encoded protein belongs to a family of calcineurin-interacting proteins that modulates specific skeletal muscle signaling pathways through suppressing calcineurin [61]. It has been shown that MYOZ2 plays a role in regulating myocyte differentiation and promoting the growth of slow-oxidative fibers [62]. Collectively, MYOZ2 may be more active in breast muscle of high-FE chickens and, consequently, mediate some biological pathways and lead to muscle remodeling in these birds.
Growth hormone (GH) and IGFs/PI3K/Akt signaling pathways over-represented in the differentially expressed genes
Through the IPA canonical pathway analysis, several pathways critical to the regulation of body and muscle growth are over-represented among the differentially expressed genes. One of these pathways is the GH signaling pathway, enriched by 10 genes in our dataset (P-value = 3.25E-04; ratio = 1.32E-01) (Figure 4). As a key mediator of body size, GH has an anabolic effect on skeletal muscle development [63]. Through binding to growth hormone receptor (GHR) in muscle, biologically active GH can activate the nuclear receptor STAT5 and thereby induce the synthesis and secretion of IGF-1 as well as IGF-2 (fold change = 1.657) [64]. Furthermore, GH can stimulate signaling molecules including PI3K [phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit beta (PIK3CB; fold change = 1.663); phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit delta (PIK3CD; fold change = 1.709); phosphoinositide-3-kinase, regulatory subunit 5 (PIK3R5; fold change = 1.564)] and protein kinase C (PKC) [protein kinase C delta type (PRKCD; fold change = 1.558)], leading to the activation of the Akt/PKB signaling pathway and STAT5 [65]. Both the PI3K/Akt/PKB pathway and IGFs are crucial contributors to muscle hypertrophy, which will be discussed later. Most of the mapped genes (8 of 10) are up-regulated in the high-FE chickens, indicating that the GH signaling pathway is more activated in the breast muscle of the high-FE birds compared with the low-FE birds. The two down-regulated genes [GHR and insulin-like growth factor binding protein 3 (IGFBP3)] also fit this assumption. Evidence from the literature indicates that the expression of GHR is inversely correlated with the concentration of GH [66,67]. Thus, the down-regulation of GHR gene expression in the high-FE birds may be due to relatively high circulating GH levels in these birds. In spite of the stimulatory effects of GH on IGFBP3 transcription, there may be other modulators inhibiting the expression of IGFBP3 in high-FE chickens, consequently exerting an inhibitory effect on IGF-1 function [68].
Another important finding is that the IGFs/PI3K/Akt signaling pathway is over-represented by the differentially expressed genes. The IGFs/PI3K/Akt signaling pathway plays a key role in the regulation of muscle growth and muscle hypertrophy in a variety of organisms [69-71]. Nine differentially expressed genes are mapped to the IGFs/PI3K/Akt pathway (Figure 5). Of these, PIK3CD (fold change = 1.709), PIK3CB (fold change = 1.663) and PIK3R5 (fold change = 1.564) are up-regulated in the high-FE chickens, implying that the PI3K complex is more active in the breast muscle of these birds. The up-regulated members of the PI3K complex are predicted by IPA to increase PI3K-Akt cascade activity in the high-FE birds (Figure 5). Activated PI3K can induce the phosphorylation of phosphatidylinositol 4,5-bisphosphate (PIP2) to generate phosphatidylinositol-3,4,5-trisphosphate (PIP3). PIP3 acts as a docking site for phosphoinositide-dependent kinase 1 (PDK1) and Akt and subsequently contributes to the activation of Akt [72,73]. On the basis of the IPA prediction, the activated Akt translocates into the nucleus and then inhibits the transcription of the forkhead box O (FOXO) family, which is consistent with the down-regulation of the forkhead box O3 (FOXO3) gene (fold change = −1.540) in our results [69]. As mentioned above, FOXO3 can promote protein degradation and muscle atrophy [39]. Hence, considering the expression profile of the mapped genes, the protein degradation process is predicted to be reduced in the breast muscle of the high-FE chickens as a result of PI3K/Akt pathway activation. In addition, because activation of Akt can up-regulate the transcription of ATP citrate lyase (ACLY) through suppressing the activity of glycogen synthase kinase (Gsk3), the increased expression of ACLY in the high-FE birds (fold change = 1.345) lends more support to the assumption that the PI3K/Akt pathway is activated in the high-FE chickens. Apart from repressing protein degradation, the activated PI3K/Akt pathway can promote protein synthesis in muscle via inhibiting Gsk3 [69], which is also inferred in our analysis. Therefore, the up-regulated IGFs/PI3K/Akt pathway suggests increased protein synthesis as well as decreased protein degradation in the breast muscle of the high-FE birds, explaining in part why high-FE chickens have more breast muscle than do low-FE birds. Previous studies on chicken FE also found that the gene encoding PI3K was up-regulated in high-FE chickens, along with a list of differentially expressed genes associated with the Akt/mTOR pathways, which strengthens our conclusion [17,74].
Inflammatory response in the breast muscle of high-FE chickens
In the present study, a large number of the differentially expressed genes (136 genes) are involved in inflammatory response. Most of these genes (124 genes), including the genes encoding interleukin 8 (IL-8) and chemokine (C-X-C motif) ligand 14 (CXCL14), are expressed at higher levels in the high-FE chickens. Although the cellular source of IL-8 and CXCL14 remains unknown in the current study, both not only exert direct effects on immune cell recruitment but also act as paracrine or endocrine factors in skeletal muscle. IL-8 has recently been classified as a myokine that can promote angiogenesis within the muscle [75,76]. CXCL14 is encoded by an obesity-induced gene in mice that inhibits insulin-induced glucose uptake in cultured myocytes [77]. In addition, the gene encoding corticotropin-releasing hormone (CRH) is also up-regulated in the high-FE birds (fold change = 2.824). Previous studies have demonstrated that CRH is secreted from nerve terminals and epithelial cells at inflammation sites and has a local proinflammatory effect on resident immune cells [78]. Therefore, it is likely that the elevated transcription of CRH functions to augment an immune response in the breast muscle of high-FE chickens. Apart from its immunomodulatory role, an increase in CRH may have a positive impact on thermogenesis of skeletal muscle in high-FE birds [79].
A series of genes encoding cytokine receptors are also up-regulated in the high-FE chickens, further indicating an augmented immune response in the breast muscle of the high-FE chickens. These genes include chemokine (C-C motif) receptor 2 (CCR2), chemokine (C-C motif) receptor 5 (CCR5), interleukin 17 receptor A (IL17RA), interleukin 18 receptor 1 (IL18R1), interleukin 1 receptor, type I (IL1R1), interleukin 1 receptor, type II (IL1R2), interleukin 1 receptor-like 1 (IL1RL1), interleukin 2 receptor, gamma (IL2RG) and interleukin 5 receptor, alpha (IL5RA). Among them, CCR2 was found to be expressed in infiltrating macrophages and to play a crucial role in muscle regeneration [80]. CCR2 can recruit macrophages to injured muscle, which then produce a high level of IGF-I to promote muscle regeneration [81]. Therefore, the up-regulation of CCR2 suggests that, compared with the low-FE chickens, macrophage infiltration and muscle regeneration are increased in the breast muscle of the high-FE birds.
The IPA canonical pathway analysis also supports our hypothesis regarding an augmented immune response and active recruitment of immune cells to the breast muscle of the high-FE chickens. Several over-represented pathways involved in inflammatory response are identified in our analysis (Additional file 9). Given that nearly all genes mapped to these pathways are up-regulated in the high-FE chickens, we conclude that these immune-related pathways are activated in the breast muscle of the high-FE chickens. According to the predictions from IPA, a number of transcription factors associated with inflammatory response are also activated in the breast muscle of the high-FE chickens: v-ets erythroblastosis virus E26 oncogene homolog 1 (ETS1), spleen focus forming virus (SFFV) proviral integration oncogene spi1 (SPI1), X-box binding protein 1 (XBP1) and runt-related transcription factor 1 (RUNX1).
Figure 4. Growth hormone signaling pathway analysis using Ingenuity Molecule Activity Predictor (MAP). MAP predicts the upstream and downstream effects of the mapped genes on the growth hormone signaling pathway and hypothesizes the overall state of this pathway. Red and green symbols indicate genes up- and down-regulated in high-FE chickens, respectively. Orange and blue nodes indicate genes predicted to be activated or inhibited in the high-FE chickens, respectively. The color intensity is proportional to the degree of fold change. Edges between the nodes are colored orange when leading to the activation of downstream genes, blue when inhibiting downstream genes. Yellow edges indicate that the states of downstream genes are inconsistent with the prediction based on previous findings.
Muscle inflammation is a key step in muscle remodeling, which can clear disrupted muscle cells and promote muscle regeneration by releasing growth factors [82]. A variety of circumstances (e.g., muscle injury, exercise and obesity) can activate the transcription factors NF-kB and c-Jun/FOS in muscle cells, resulting in the expression and secretion of several factors [83]. These factors, including cytokines and other non-protein mediators, can either directly attract circulating immune cells to the muscle or activate resident immune cells, providing chemotactic signals for recruitment [84]. As a consequence, a number of immune cells are recruited to the muscle, phagocytizing the cellular debris and producing cytokines that affect muscle cells [83,85]. The IPA upstream regulator analysis predicts that the transcription factors JUN and FOS are activated in the breast muscle of the high-FE birds (Figure 6). The proteins encoded by JUN and FOS are components of activator protein 1 (AP-1), an important transcription factor responding to various physiological and pathological stimuli [86]. Overall, our results suggest that, compared with the low-FE birds, the high-FE birds experienced more intense muscle restructuring and higher inflammatory responses in the breast muscle.
Figure 5. IGFs/PI3K/Akt signaling pathway analysis using Ingenuity Molecule Activity Predictor (MAP). MAP predicts the upstream and downstream effects of the mapped genes on the IGFs/PI3K/Akt signaling pathway and hypothesizes the overall state of this pathway. Red and green symbols indicate genes up- and down-regulated in high-FE chickens, respectively. Orange and blue nodes indicate genes predicted to be activated or inhibited in the high-FE chickens, respectively. The color intensity is proportional to the degree of fold change. Edges between the nodes are colored orange when leading to the activation of downstream genes, blue when inhibiting downstream genes. Yellow edges indicate that the states of downstream genes are inconsistent with the prediction based on previous findings.
Free radical scavenging enriched in the differentially expressed genes between high- and low-FE chickens
Several differentially expressed genes in our dataset are involved in the production of ROS. Genes encoding ROS-generating enzymes, including cytochrome b-245, beta polypeptide (CYBB) [fold change = 2.08] and NADPH oxidase organizer 1 (NOXO1) [fold change = 2.38], are all up-regulated in the high-FE birds, suggesting that ROS production is increased in the breast muscle of these birds compared with the low-FE birds. CYBB, also known as NADPH oxidase 2 (NOX2), is a major enzyme responsible for superoxide production in the sarcoplasmic reticulum [87]. NOXO1, a positive mediator of NOX1 and NOX3, initiates the activity of NOX1 and NOX3 for generating ROS [88]. Moreover, the down-regulation of uncoupling protein 3 (UCP3) [fold change = −1.67] in the high-FE birds may indicate that mitochondria from the breast muscle of the high-FE chickens have higher electron transport chain coupling compared with those from low-FE chickens. This assumption is consistent with previous findings [15,89]. Because UCP3-mediated uncoupling can attenuate the production of ROS [90], the down-regulation of UCP3 in the high-FE birds may also suggest a higher production of ROS from the mitochondria of the breast muscle of these birds. Collectively, our data suggest that, compared with the low-FE birds, ROS is produced at a higher level in the breast muscle of the high-FE chickens.
The IPA downstream effect analysis supports our hypothesis regarding increased ROS production in the breast muscle of high-FE chickens. Processes including metabolism of reactive oxygen species (P-value = 5.77E-07), synthesis of reactive oxygen species (P-value = 1.75E-06), production of reactive oxygen species (P-value = 4.92E-06) and production of superoxide (P-value = 2.05E-03) are predicted to be increased in the high-FE chickens. In addition, the NRF2-mediated oxidative stress response pathway is over-represented among the differentially expressed genes, with 17 genes (P-value = 6.96E-04; ratio = 0.089) mapped to this pathway (Figure 7A). Nuclear factor (erythroid-derived 2)-like 2 (NRF2), also known as NFE2L2, is a key transcription factor that responds to a range of oxidative and xenobiotic stresses [91]. Upon exposure of cells to various stimuli such as ROS and electrophilic compounds, quiescent NRF2 in the cytoplasm is activated through the phosphatidylinositol 3-kinase (PI3K), RAS and PKC signaling pathways [92]. By phosphorylation or binding to actin, activated NRF2 translocates into the nucleus and binds to the antioxidant response elements, initiating the transcription of a number of genes encoding antioxidants and ROS-detoxifying enzymes [93]. The protein encoded by SOD3 is an extracellular protective enzyme against not only ROS but also inflammation, thus playing a role in tissue recovery [94]. HMOX1 is increased under conditions of oxidative stress and protects cells against ROS and inflammation [95]. The TXN-encoded protein is involved in a range of redox reactions and can decrease the quantity of ROS [96]. The up-regulation of TXN, SOD3 and HMOX1 indicates that an NRF2-mediated antioxidant response is activated in the breast muscle of the high-FE chickens. Additionally, three members of the glutathione S-transferase (GST) group, encoded by GSTO1, GSTA1 and MGST1, are all up-regulated in the high-FE birds. GST is known for its function in the detoxification of xenobiotics as well as endogenous metabolites [97]. The increased expression of the GST superfamily also suggests that responses to oxidative stress are elevated in the breast muscle of the high-FE chickens. Although a few genes in the NRF2 signaling pathway, including AOX1, DNAJA1 and DNAJC1, are down-regulated in the high-FE chickens, overall there are 14 up-regulated genes mapped to this pathway, indicating that the NRF2-mediated oxidative stress response is augmented in the breast muscle of the high-FE birds. Moreover, NRF2 (NFE2L2) is predicted to be an activated transcription factor in the high-FE chickens (P-value = 1.94E-05; z-score = 2.036) (Figure 7B). Taken together, our results suggest a higher level of ROS generated in the breast muscle of high-FE chickens.
However, in contrast to our findings, Bottje et al. (2002) reported higher amounts of ROS produced in the breast muscle of their low-FE birds [98]. This inconsistency is likely caused by differences in the broiler chickens used in the two studies. Male breeders, presumably with relatively low breast muscle yield, were studied by Bottje et al. (2002) [15], whereas we studied broiler chickens from a commercial cross with high breast muscle yield. The ancestors of this cross have been intensively selected for the disproportionate growth of breast muscle, and the resulting higher levels of variation in breast muscle in the broiler cross may be responsible for a significant part of the variation in FE in this cross compared to the male breeder strain in the study by Bottje et al. [15]. In the broiler chickens of the current study, an intensive inflammatory response is possibly a major source of increased ROS in the breast muscle of the high-FE chickens. ROS-generating enzymes, such as NOX in muscle cells, can be stimulated by extracellular inflammatory cytokines including interleukin (IL)-1, IL-6 and IL-8 in a ligand-induced pattern [99,100]. Furthermore, the implied infiltration of immune cells in the breast muscle of high-FE birds may be another cause of increased ROS. It is well known that immune cells produce a large amount of ROS to support their functions during inflammation [101]. Hence, in our study, the strong indications of elevated ROS production in the breast muscle of the high-FE chickens are likely due to an augmented inflammatory response, whereas the higher level of ROS observed in the study by Bottje et al. (2002) possibly originated from the mitochondria of breast muscle cells. Further study of genes associated with free radical scavenging may support our assumption. Indeed, in our study, a large proportion of these genes (Additional file 10) are also related to inflammatory response (P-values ranging from 1.03E-23 to 5.15E-06), suggesting that the production of ROS in the high-FE birds is closely associated with an increased immune response in the breast muscle.
Notably, growth factors including HGF, IGF-1 and fibroblast growth factor (FGF)-2 have also been found to induce intracellular generation of ROS in different types of cells [99]. As mentioned above, the breast muscle of high-FE birds has higher expression of HGF and IGF-2, which may play a role in stimulating ROS production in these birds. Moreover, ROS generated in this way exerts insulin-mimicking effects on the insulin/IGF signaling pathway and has been shown to act as a second messenger in insulin/IGF signal transduction under physiological conditions [102]. Therefore, in the breast muscle of the high-FE birds, the insulin/IGF receptor signaling pathway may be activated, in part, because of increased ROS production.
Higher ROS production may also lead to an increase in intracellular calcium concentration. It has been found that ROS mediates the influx of extracellular Ca2+ and the mobilization of intracellular Ca2+ stores [103][104][105]. In the present study, genes involved in calcium transport [solute carrier family 8, member B1 (SLC8B1), phospholipase C, beta 2 (PLCB2) and ATPase, Ca++ transporting, cardiac muscle, slow twitch 2 (ATP2A2)] are all up-regulated in the high-FE birds, indicating increased calcium mobilization in the breast muscle of these birds. ATP2A2 encodes sarcoplasmic reticulum Ca2+-ATPase isoform 2 (SERCA2), an important pump responsible for muscle relaxation through transporting Ca2+ from the cytosol into the sarcoplasmic reticulum lumen in muscle cells [106]. Because more SERCA2 is needed to maintain calcium homeostasis when cytosolic Ca2+ levels are high, the up-regulation of ATP2A2 in the high-FE birds may imply a high level of cytosolic Ca2+ in the breast muscle of these chickens compared with the low-FE birds.
Transcriptional regulation of hypoxia-inducible factor-1α (HIF1α)

Hypoxia-inducible factor-1α (HIF1α) is a key transcription factor that mediates cell adaptation to hypoxia through the regulation of a variety of genes [107]. Although HIF1α mRNA is constantly expressed in cells under both normoxic and hypoxic conditions, the HIF1α protein has a very short half-life in normoxia because of degradation through the ubiquitin-proteasome system [107]. During hypoxia, HIF1α degradation is repressed. As a result, HIF1α translocates into the nucleus and activates downstream genes in response to low O2 tension [107]. In our study, HIF1α mRNA, aryl-hydrocarbon receptor nuclear translocator 2 (HIF2β) mRNA, and the mRNA of the HIF1α inhibitor hypoxia inducible factor 1, alpha subunit inhibitor (HIF1AN) are differentially expressed in the breast muscle between high- and low-FE chickens. HIF1α and HIF2β are up-regulated in the high-FE birds (fold change = 1.341 and 1.42, respectively), whereas the HIF1α inhibitor HIF1AN is down-regulated in these birds (fold change = −1.343).

Figure 6 Upstream regulators JUN and FOS. A. Transcription factor JUN is predicted to be activated in the high-FE chickens with P-value = 1.70E-08 and z-score = 2.923. B. FOS is predicted to be activated in the high-FE chickens with P-value = 7.64E-07 and z-score = 2.277. Edges connecting the nodes are colored orange when upstream regulators have activating effects on their target genes and blue when upstream regulators inhibit their downstream genes. Yellow edges indicate that the states of downstream genes are inconsistent with the prediction based on previous findings.

Although the up-regulated HIF1α and HIF2β mRNA cannot represent increased amounts of stabilized HIF1α protein in the breast muscle of the high-FE chickens, the decreased expression of HIF1AN may imply that HIF1α activity is increased in the breast muscle of the high-FE chickens compared with the low-FE birds. This assumption is supported by the expression of HIF1α downstream genes. As a transcription factor, HIF1α is predicted to be activated in the high-FE birds by the IPA upstream regulator analysis (P-value = 3.85E-06; z-score = 2.332; Figure 8). Indeed, a large number of the HIF1α target genes are up-regulated in the high-FE birds, indicating the activation of HIF1α in these birds.
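The activation z-scores reported for these upstream regulators reflect the directional consistency of target-gene changes. IPA's actual statistic weights each literature finding; the unweighted simplification below, with hypothetical counts, conveys the idea:

```python
from math import sqrt

def activation_z_score(n_consistent: int, n_inconsistent: int) -> float:
    """Simplified, unweighted upstream-regulator activation z-score:
    (consistent - inconsistent) findings over sqrt(total findings).
    Positive values support activation; commonly |z| >= 2 is taken as
    a significant prediction.
    """
    n_total = n_consistent + n_inconsistent
    return (n_consistent - n_inconsistent) / sqrt(n_total)

# Hypothetical example: 25 target genes change in the direction expected
# under HIF1α activation versus 7 changing the opposite way.
print(round(activation_z_score(25, 7), 3))  # ≈ 3.182
```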
Moreover, the HIF1α signaling pathway is over-represented among the significantly differentially expressed genes (P-value = 7.58E-04; ratio = 1.39E-01). In response to hypoxia, or to a variety of peptide stimulators under normoxic conditions, the PI3K/Akt and MAPK signaling pathways are activated to induce the accumulation of HIF1α in human cells [108,109]. Consequently, the accumulated HIF1α is translocated to the nucleus to modulate the transcription of genes involved in angiogenesis, glucose metabolism, matrix metabolism, erythropoiesis and apoptosis [107]. In our study, with increased expression of PIK3CB, PIK3CD, PIK3R5 and muscle RAS oncogene homolog (MRAS), both the Akt/PI3K and MAPK signaling pathways are predicted to be activated in the high-FE chickens. The activated Akt/PI3K and MAPK signaling pathways may stimulate the induction of HIF1α, as reflected by the up-regulation of HIF1α and its downstream genes [glucose transporter type 3 (GLUT3), glucose transporter-like protein 5 (GLUT5), matrix metallopeptidase 1 (MMP1), matrix metallopeptidase 7 (MMP7), matrix metallopeptidase 9 (MMP9), matrix metallopeptidase 13 (MMP13), matrix metallopeptidase 27 (MMP27), matrix metallopeptidase 28 (MMP28) and lactate dehydrogenase B (LDHB)] in the high-FE birds. Based on the gene expression profile, we conclude that, compared with the low-FE birds, the activity of the HIF1α signaling pathway is increased in the breast muscle of the high-FE birds.

Figure 7 NRF2-mediated oxidative stress response. A. The Keap1-NRF2 pathway from IPA software. Canonical pathway analysis identified the Keap1-NRF2 pathway as statistically significant with P-value = 6.96E-04. Red and green symbols indicate genes up- and down-regulated in the high-FE chickens, respectively. The color intensity is proportional to the degree of fold change. B. NRF2 (NFE2L2) is predicted to be activated in the high-FE chickens by Ingenuity Upstream Regulator Analysis.

Figure 8 Upstream regulator HIF1α and its target genes. Transcription factor HIF1α is predicted to be activated in the high-FE chickens by Ingenuity Upstream Regulator Analysis.
Although it is unclear from our results whether hypoxia and/or mediators such as IGFs induced HIF1α activation in the breast muscle of the high-FE birds, we would like to speculate here about potential mechanisms underlying this activation. It is widely accepted that inflammation and hypoxia are closely interdependent in a wide array of physiological and pathological conditions [110][111][112][113][114]. Inflammation is frequently accompanied by hypoxia because of the high oxygen consumption of infiltrating immune cells [112]. Assuming an increased inflammatory response in the breast muscle of the high-FE birds, we speculate that the up-regulation of HIF1α is partly caused by inflammation-induced hypoxia. Alternatively, the up-regulation of HIF1α may be caused by excessive muscle remodeling, which may be the result of selection for breast muscle proportion. Elevated muscle growth and rearrangements may have led to the reconstruction of vasculature, consequently reducing blood flow and resulting in oxygen deficiency in the breast muscle of the high-FE birds [115]. Furthermore, insulin and IGFs have been shown to be modulators of HIF1α induction during both normoxia and hypoxia [109]. Given that IGF2 is up-regulated in the breast muscle of the high-FE birds, this growth factor may also have contributed to the activation of HIF1α.
Finally, the activation of HIF1α may also be partly due to a higher production of ROS in the breast muscle of the high-FE chickens. Studies have found that ROS are essential for the stabilization of HIF1-DNA binding, thereby triggering HIF1α-induced transcription [116,117]. It has also been proposed that cellular ROS-producing proteins could sense changes in cellular oxygen concentration [118]. Evidence indicates that low oxygen tension inhibits mitochondrial electron transport and therefore increases ROS production; the generated ROS then act as second messengers that contribute to HIF1α activation [119]. Thus, ROS production may have been increased in the breast muscle of the high-FE chickens partly because of a relatively low oxygen concentration within this tissue, which in turn may have played a role in HIF1α activation.
Conclusions
The current study provides a global view of gene expression differences in the breast muscle of broiler chickens with extremely high and low FE from a population of a modern commercial high-meat-yield broiler cross. To our knowledge, this study reports for the first time an RNA-seq analysis of a trait of selection and breeding importance in chickens. We identified 1,059 genes significantly differentially expressed in the breast muscle between high- and low-FE chickens based on the RNA-seq experiment. Furthermore, we achieved a large-scale validation of our RNA-seq experiment by quantifying the expression of a large number of target genes (192 transcripts + 12 house-keeping genes) using a highly sensitive non-PCR-based method, i.e. NanoString nCounter® Technology [27]. Function and pathway analysis of the differentially expressed genes sheds light on some of the underlying mechanisms that regulate chicken FE. Birds with high FE exhibit higher expression of genes involved in muscle growth, development and remodeling, which may explain why these birds have more breast muscle than the low-FE chickens. Pathway analysis shows that anabolic pathways, including the growth hormone signaling and IGFs/PI3K/Akt signaling pathways, are more activated in the high-FE birds, which may not only have led to the increased muscle growth in the high-FE chickens but also contributed to the feed conversion advantages of these birds. Our results also suggest that the transcription factors JunB and MEF2C play crucial roles in regulating muscle growth and remodeling in high-FE chickens.
Furthermore, most of the genes up-regulated in the high-FE birds are associated with inflammatory response and oxidative stress, suggesting augmented inflammation and oxidative stress in the breast muscle of these birds. Our results also show increased activity of HIF1α, which may be caused by a lower-oxygen environment in the breast muscle of high-FE chickens. Although no clinical symptoms of sickness or muscle damage were observed in the birds used in the current study, some of the molecular changes in the high-FE chickens may be hypothesized to contribute to recently reported muscle quality issues in modern broiler chickens, such as white striping and wooden breast [120][121][122]. These disorders have been reported to be more frequent in birds with high breast muscle weight and high FE, suggesting that the susceptibility may be primarily induced by breeding for these traits. Further investigation (e.g., histological and protein analysis) would be helpful for examining inflammation and oxidative stress in the breast muscle of high-FE and high-breast-muscle-yield birds.
Availability of supporting data
The data supporting the results of this article are included within the article and its additional files. Readers may contact the corresponding author for additional information.
Additional files
Additional file 1: Figure S1. The number of properly mapped, improperly mapped and unmapped reads is shown for each sample. Unmapped reads are reads not mapped onto the reference genome; properly mapped reads are paired-end reads mapped to the reference genome and complying with the parameters (-g 1 -r 110 --no-discordant --no-mixed); improperly mapped reads are reads mapped to the reference genome but not complying with the parameters. | 2016-05-12T22:15:10.714Z | 2015-03-17T00:00:00.000 | {
"year": 2015,
"sha1": "472cd67cc3a6af030b1ee5b65bc7e2500e97d332",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-015-1364-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "472cd67cc3a6af030b1ee5b65bc7e2500e97d332",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
225051155 | pes2o/s2orc | v3-fos-license | Development and validation of prognostic nomogram for lung cancer patients below the age of 45 years
This study aimed to establish a nomogram for the prognostic prediction of patients with early-onset lung cancer (EOLC) in terms of both overall survival (OS) and cancer-specific survival (CSS). EOLC patients diagnosed between 2004 and 2015 were retrieved from the Surveillance, Epidemiology, and End Results (SEER) database and randomly divided into training and validation sets. Prognostic nomograms for predicting 3-, 5- and 10-year OS and CSS were established based on the clinical variables identified by multivariate Cox analysis. Furthermore, the predictive performance of the nomograms was assessed by the concordance index (C-index), calibration curves, receiver operating characteristic (ROC) curves and decision curve analysis (DCA) curves. A total of 1,822 EOLC patients were selected and randomized into a training cohort (1,275, 70%) and a validation cohort (547, 30%). The nomograms were established based on the statistical results of the Cox analysis. In the training set, the C-indexes for OS and CSS prediction were 0.797 (95% confidence interval [CI]: 0.773-0.818) and 0.794 (95% CI: 0.771-0.816). Significant agreement was noticed in the calibration curves of the nomogram models. The results of ROC and DCA indicated that the nomograms possessed better predictive performance than TNM stage and SEER stage. The area under the curve (AUC) of the nomograms for OS and CSS prediction in the ROC analysis was 0.766 (95% CI: 0.745-0.787) and 0.782 (95% CI: 0.760-0.804), respectively. The prognostic nomograms provided an accurate prediction of 3-, 5-, and 10-year OS and CSS of EOLC patients, which may help clinicians optimize individualized treatment plans.
INTRODUCTION
The cutoff point in this study was December 31, 2016.

Construction and validation of nomogram

Kaplan-Meier curves and the log-rank test were used to investigate the OS and CSS of EOLC patients. Univariate and multivariate regression analyses were used to evaluate the prognostic factors in patients with EOLC. The Cox proportional hazards model was used as the basis for the construction and verification of the nomograms, which were established with R software version 3.5.1 (http://www.R-project.org). The concordance index (C-index) and calibration curves were used to evaluate the performance and accuracy of the nomograms. The C-index ranges from 0.50 to 1.00 and is positively correlated with the predictive performance of the model; a value of 1.00 indicates perfect discrimination ability. Moreover, for a perfectly calibrated model, the predictions on the calibration curve fall on the 45° diagonal of the figure.
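As an illustration of the discrimination measure described above, Harrell's C-index can be computed from observed times, event indicators and predicted risks as in the simplified sketch below (pairs tied in observed time are skipped); this is not the exact implementation used by the R-based analysis here:

```python
from itertools import combinations

def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair is comparable when the earlier observed time is an event; it is
    concordant when the subject with the shorter survival time has the
    higher predicted risk. Ties in predicted risk count as 0.5.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so that subject a has the shorter observed time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if times[a] == times[b] or not events[a]:
            continue  # not comparable: tie in time, or earlier time censored
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1.0
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5
    return concordant / comparable

# Toy example: higher risk scores pair with shorter survival times
print(harrell_c_index(times=[5, 10, 3, 8], events=[1, 0, 1, 1],
                      risk_scores=[0.7, 0.2, 0.9, 0.4]))  # 1.0
```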
In addition, receiver operating characteristic (ROC) curves and decision curve analysis (DCA) were conducted to assess the predictive performance of the nomograms, TNM stage, and SEER stage. The Statistical Package for the Social Sciences (SPSS, version 20.0; Chicago, USA) was used for all statistical analyses. Results were considered statistically significant at a two-sided P-value < 0.05.
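The quantity plotted in a DCA is the net benefit of the model at each threshold probability. A minimal sketch for a binary outcome follows; the data are hypothetical, and survival DCA, as used in this study, would instead use Kaplan-Meier estimates of event probability:

```python
def net_benefit(y_true, y_prob, threshold: float) -> float:
    """Net benefit of a prediction model at a given threshold probability:
        NB = TP/n - FP/n * pt / (1 - pt)
    The 'treat none' reference is 0 everywhere; 'treat all' follows from the
    same formula by classifying every subject as positive.
    """
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)

y = [1, 0, 1, 1, 0, 0, 0, 1]
p = [0.8, 0.3, 0.6, 0.9, 0.2, 0.5, 0.1, 0.7]
print(round(net_benefit(y, p, threshold=0.4), 3))  # 0.417
```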
Demographic and pathologic characteristics
The flow process diagram for retrieving patients is shown in Figure 1. Among all 1,822 patients, there were 1,068 males (58.6%), 943 patients (51.8%) aged >43 years, and 1,381 white patients (75.8%). In addition, the majority of patients were in N0 stage (1,087; 59.7%), whereas 1,548 (85.0%) were in M0 stage, according to laboratory examinations and postoperative pathological results. Non-small cell LC (NSCLC) was the most prevalent pathological type in patients with EOLC, accounting for 67.9% (1,237) of patients. The most common primary tumor site in eligible EOLC patients was the upper lobe (925; 50.8%), followed by the lower lobe (578; 31.7%). The treatment protocol for patients included chemotherapy (874; 48.0%) and radiotherapy (508; 27.9%). The demographic and pathologic characteristics of the patients with EOLC are shown in Table 1.
Identification of prognostic factors of OS and CSS
Univariate and multivariate regression analyses were performed to investigate the independent prognostic factors for OS and CSS in patients with EOLC. According to the univariate analysis, gender, age, race, grade, TNM stage, tumor primary site, SEER stage, chemotherapy, and radiotherapy were prognostic factors for both OS and CSS. The traditional TNM staging system alone, however, has limitations and is deficient in predicting prognosis accurately.
The nomogram has gained acceptance in the last decade as a unique, reliable method for predicting tumor prognosis [7]. It has been applied in the prognosis prediction of many cancers, including gastric cancer, breast cancer, and testicular cancer [8][9][10][11]. As a prognostic model, the nomogram incorporates significant related risk factors into the prediction. Specifically, the nomogram can produce accurate predictions of overall survival (OS) and cancer-specific survival (CSS) in patients, owing to the multiple clinical variables included in the calculation. In this study, we utilized nomograms to predict 3-, 5-, and 10-year OS and CSS in patients with EOLC. Further exclusion criteria were (IV) unknown TNM stage and (V) patients without surgery. All the eligible EOLC patients included in this study were randomly assigned into the training and validation sets. Local ethics approval or statements were not required because the clinical data used in this study were obtained from the public-access SEER database and thus, the requirement for informed consent was waived.
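The random 70/30 assignment into training and validation sets can be sketched as below; the fixed seed is an added assumption for reproducibility and is not taken from the study:

```python
import random

def train_validation_split(patient_ids, train_fraction=0.7, seed=42):
    """Randomly partition a patient cohort into training and validation
    sets, mirroring the 70/30 split used here (1,275 vs. 547 of 1,822)."""
    ids = list(patient_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    n_train = round(len(ids) * train_fraction)
    return ids[:n_train], ids[n_train:]

train, validation = train_validation_split(range(1822))
print(len(train), len(validation))  # 1275 547
```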
Study variables
Clinical variables included in this study comprised gender, age, race, grade, TNM stage (AJCC, 7th ed.), tumor primary site, SEER stage, chemotherapy and radiotherapy. The age of eligible EOLC patients was divided into three groups (<35, 35-43 and >43; Fig. S1) according to the optimal cut-off values calculated by X-tile software version 3.6.1 (Yale University School of Medicine, US). The tumor primary site comprised the following six sites: main bronchus (C34.0), upper lobe (C34.1), middle lobe (C34.2), lower lobe (C34.3), overlapping lesion of lung (C34.8) and not otherwise specified (NOS; C34.9). Moreover, SEER stage comprises three categories: localized, regional, and distant. OS was defined as the time from diagnosis to death from any cause, or to the date on which data were censored. The CSS analyzed in this study was the survival time from diagnosis to death associated with cancer, excluding other causes.

The multivariate analysis further showed that three of these variables (gender, chemotherapy, and radiotherapy) were excluded from the prognostic factors for OS (Table 2). Moreover, the results of the multivariate analysis also indicated that age, race, grade, TNM stage, tumor primary site, and SEER stage were independent prognostic factors impacting CSS in patients with EOLC (Table 3). In addition, we further analyzed prognostic factors in EOLC patients with NSCLC, as this was the largest histological type. The results of the multivariate analysis indicated that age, race, grade, TNM stage, and chemotherapy were prognostic factors for OS in EOLC patients with NSCLC, whereas chemotherapy was not retained for CSS (Table S1).
Construction and verification of nomograms
The clinical variables included in the construction of the nomograms were based on the multivariate Cox regression results. The prognostic nomogram for 3-, 5-, and 10-year OS (Figure 2A) comprised age, race, grade, TNM stage, tumor primary site, and SEER stage as independent prognostic factors, and each variable corresponded to a point score according to its hazard ratio. Moreover, the prognostic nomogram for CSS (Figure 2B) included age, race, grade, TNM stage, and tumor primary site as the variables. Simultaneously, the prognostic nomograms for OS (Figure S2A) and CSS (Figure S2B) of EOLC patients with NSCLC were established according to the Cox regression results. The time-dependent ROC curves for OS and CSS were plotted to assess the predictive performance of the nomograms in the different sets. In the training set, the AUC of the nomograms for OS (Figure 3A) and CSS (Figure 3B) was 0.766 (95% CI: 0.745-0.787) and 0.782 (95% CI: 0.760-0.804), respectively (Table 4), values significantly larger than those for TNM stage and SEER stage. The results in the validation set supported the same conclusion; the AUC of the nomograms was 0.768 (95% CI: 0.738-0.798) for OS (Figure 3C) and 0.780 (95% CI: 0.748-0.812) for CSS (Figure 3D). Simultaneously, DCA was applied to verify the clinical utility of the nomograms. The results indicated that the nomograms showed clinical applicability for predicting OS and CSS comparable to TNM stage and SEER stage, not only in the training set (Figure 4A and B) but also in the validation set (Figure 4C and D).
In addition, the concordance index (C-index) was calculated in this study to verify the nomograms. There were significant differences among the nomogram, TNM stage, and SEER stage for OS and CSS (Table 5). We then used the calibration curve method to compare the nomograms with perfect calibration. The results show that the 3-, 5-, and 10-year OS (Figure 5A, C, and E) and CSS (Figure 5B, D, and F) nomograms in the training set possessed excellent consistency with actual observations, which was also found in the validation set (Figure S3). These results indicate good agreement between the predictions of the nomograms and the actual observations in both the training set and the validation set.
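Calibration curves of the kind described above plot observed outcome frequencies against mean predicted risk within bins of predicted risk. The sketch below uses a binary outcome for simplicity; the curves in this study would instead use observed 3-, 5- and 10-year survival from the follow-up data:

```python
def calibration_points(y_observed, y_predicted, n_bins=10):
    """Group subjects into bins of predicted risk and return
    (mean predicted, observed proportion) pairs; a well-calibrated model
    yields points near the 45-degree diagonal."""
    pairs = sorted(zip(y_predicted, y_observed))
    size = max(1, len(pairs) // n_bins)
    points = []
    for start in range(0, len(pairs), size):
        chunk = pairs[start:start + size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        points.append((mean_pred, obs_rate))
    return points

# Toy illustration with hypothetical predictions and outcomes
print(calibration_points([1, 0, 1, 0, 1, 1, 0, 0],
                         [0.9, 0.2, 0.7, 0.4, 0.8, 0.6, 0.3, 0.1], n_bins=4))
```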
DISCUSSION
At present, research on patients with EOLC (aged <45 years) has attracted widespread attention due to the rapid increase in LC morbidity and mortality worldwide. It has been strongly suggested that genomic mutation is an important predisposing factor for EOLC [12]. Patients with EOLC usually have poor survival outcomes and a higher proportion of family history of other types of cancers [13,14]. In practice, accurately predicting the prognosis of patients with EOLC and formulating individualized treatments are conducive to improving the survival rate. However, the current pathological staging of tumors based on imaging examinations does not meet the requirements for accurate prognosis prediction of patients with EOLC. There is an urgent need for a reliable system that comprehensively considers multiple prognostic factors in patients with EOLC to accurately predict survival time.
This study focused on prognosis prediction for patients with EOLC based on the construction of nomograms. First, we established prognostic nomograms for 3-, 5-, and 10-year OS and CSS in patients with EOLC. The clinical variables included in their establishment were determined by the results of Cox regression and comprised age, race, grade, TNM stage, tumor primary site, and SEER stage. In addition, the clinical utility and predictive performance of the nomograms were verified by ROC curves, DCA curves, and the C-index, indicating that their efficacy was better than that of TNM stage. Furthermore, the accuracy of predicting 3-, 5-, and 10-year OS and CSS was evaluated by calibration curves, which showed excellent agreement between the nomograms and the actual observations. In practice, the AUC in the ROC analysis and the C-index were generally higher than 0.760 and 0.790, respectively, for all nomograms, confirming the promising predictive ability of the nomograms. The results of the DCA curves also supported the good practical clinical value of the nomograms. Nomograms integrate biological results into a mathematical model that comprehensively considers the various clinical characteristics and pathological variables of patients with cancer and then graphically displays the probability of clinical outcomes. Nomograms have been reported to be more accurate than existing models in predicting patient prognosis [15]. Recently, an increasing number of nomograms comprising various clinical variables have been used to predict the prognosis of patients with LC [16][17][18][19]. Liang et al. [18] analyzed NSCLC patient data from multiple clinical centers and established a nomogram for postoperative survival prediction. As a multicenter study, it provided patients with resected NSCLC with an accurate individualized prediction of OS and assisted clinicians in decision making. Similarly, Zheng et al. [16] developed a nomogram for predicting prognoses in LC with bone metastasis and comprehensively analyzed the independent prognostic factors, which included age, gender, histological type, grade, and others.
In this study, the clinical variables age, race, grade, TNM stage, tumor primary site, and SEER stage were the independent risk factors that impacted the prognosis of patients with EOLC. Many studies have reported age and race as risk factors for the prognosis of various cancers [20,21]. Genetic differences among races have also been widely recognized as a significant risk factor for tumor prognosis [22,23]. Michele et al. [24] found that first-degree relatives of black patients with EOLC were more susceptible to developing LC, which indicates significant differences among races. The grade, primary site, and metastasis of the tumor also significantly affect the prognosis of patients [25]. The pathological grade of a tumor is positively correlated with its degree of malignancy and invasion [26]. It has been suggested that cancer cells in high-grade tumors are insensitive to treatment [27], which adversely affects the prognosis of patients. The tumor site is an equally important factor affecting the prognosis of patients [28,29]. For patients with LC, primary tumors in the right and left lower lobes or in the right middle and left lingular lobes are more susceptible to mediastinal lymph node metastasis [30]. Moreover, lymph node metastasis or distant tumor metastasis signifies a poor prognosis and short survival time for patients. In our study, the same findings were supported by the statistical analysis.
Currently, TNM stage, determined by laboratory results and postoperative pathological examination, is the most widely accepted tumor staging system. In practice, clinicians judge TNM stage based on the individual characteristics of the tumor (T), node (N), and metastasis (M) [6]. Chen et al. [31] verified the prognostic value of the 8th edition of the TNM Classification of Malignant Tumours staging system for patients with LC and found that recurrence-free survival could also be predicted through TNM stage. However, the TNM stage has limitations and cannot provide clinicians with individualized prognosis prediction. As shown in this study, patient prognosis was also closely related to a variety of clinical variables beyond TNM staging, and accurate prediction relies on the comprehensive consideration of all independent risk factors. We successfully established an effective nomogram based on age, race, grade, TNM stage, tumor primary site, and SEER stage, which proved to be a better predictive tool than TNM stage alone. The construction of nomograms would be useful in helping to develop personalized treatment for patients with EOLC.
There are still some limitations to our study. First, as a retrospective database, the SEER database includes biases in data collection due to manual recording and other reasons. Second, the clinical data were incomplete; for example, the SEER database does not record the genetic changes in the patients. Third, the analyzed data may not represent other regions and require external verification. Therefore, it is necessary to conduct multicenter prospective clinical trials to verify the accuracy of the nomograms.
CONCLUSIONS
We established prognostic nomograms for 3-, 5-, and 10-year OS and CSS in EOLC patients based on a large amount of clinical data, and these prognostic nomograms showed good predictive ability. The models could help clinicians prepare personalized treatment for patients with EOLC.
"year": 2021,
"sha1": "b61091e0af03291be38d3c34f992742c3483090a",
"oa_license": "CCBY",
"oa_url": "https://www.bjbms.org/ojs/index.php/bjbms/article/download/5079/2124",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8748c9bcf64c8a5ef755a9d0bfee57ce9c0ef6c9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1507972 | pes2o/s2orc | v3-fos-license | Association between bone mineral density and type 2 diabetes mellitus: a meta-analysis of observational studies
Type 2 diabetes mellitus (T2DM) influences bone metabolism, but the relation of T2DM with bone mineral density (BMD) remains inconsistent across studies. The objective of this study was to perform a meta-analysis and meta-regression of the literature to estimate the difference in BMD (g/cm2) between diabetic and non-diabetic populations, and to investigate potential underlying mechanisms. A literature search was performed in PubMed and Ovid extracting data from articles prior to May 2010. Eligible studies were those where the association between T2DM and BMD measured by dual energy X-ray absorptiometry was evaluated using a cross-sectional, cohort or case–control design, including both healthy controls and subjects with T2DM. The analysis was done on 15 observational studies (3,437 diabetics and 19,139 controls). Meta-analysis showed that BMD in diabetics was significantly higher, with pooled mean differences of 0.04 (95% CI: 0.02, 0.05) at the femoral neck, 0.06 (95% CI: 0.04, 0.08) at the hip and 0.06 (95% CI: 0.04, 0.07) at the spine. The differences for forearm BMD were not significantly different between diabetics and non-diabetics. Sex-stratified analyses showed similar results in both genders. Substantial heterogeneity was found to originate from differences in study design and possibly diabetes definition. Also, by applying meta-regression we could establish that younger age, male gender, higher body mass index and higher HbA1C were positively associated with higher BMD levels in diabetic individuals. We conclude that individuals with T2DM from both genders have higher BMD levels, but that multiple factors influence BMD in individuals with T2DM.
Introduction
Osteoporosis and diabetes are both common human diseases. Albright and Reifenstein [1] reported their coexistence in 1948, but hitherto the association between them remains unclear. Due to the different pathogenesis of type 1 and type 2 diabetes mellitus (T2DM), it is not surprising that there is no uniform entity of diabetic bone disease as such. While decreased bone mineral density (BMD) has consistently been observed in type 1 diabetes mellitus patients [2,3], studies on BMD investigated in T2DM showed contradictory results with higher, lower or similar values in comparison with healthy control subjects [4][5][6][7].
These inconsistent findings may be related to vast differences in study design, BMD measurement technology, differences in site of BMD examination, selection of patients, and presence or absence of complications.
It is well known that advanced age is a risk factor for bone loss and osteoporosis [8,9]. Some of the attributed mechanisms include increased production of inflammatory cytokines and cellular components, increased generation of osteoclast precursors, and decreased bone preservation due to gonadal failure resulting in lower tissue production of sex steroids [10]. Advanced age is also associated with increased fall frequency, lack of exercise, and the use of drugs that negatively influence bone metabolism and renal function, such as drugs prescribed for diabetes and hypertension.
Gender also appears to have an important effect on the relation between BMD and T2DM. Barrett-Connor [11] found that older women with T2DM had higher BMD levels at all sites compared to those with normal glucose tolerance, but this effect was not observed in men. It has also been suggested that obesity and hyperinsulinemia can lead to lower bone turnover in diabetic women [7,12], so that the adverse effects of estrogen deficiency on bone mass are attenuated and delayed after menopause.
Many studies have shown a difference in population characteristics between type 2 diabetic patients and healthy controls [6,11,13,14]. Diabetic study participants tend to have a higher body mass index (BMI) or weight, increased insulin levels, less physical exercise, higher alcohol consumption and they usually smoke more. The use of diuretics is more common in diabetes. These characteristics might influence bone metabolism independently of diabetes. Paradoxically, an increased risk of osteoporotic fracture in T2DM has been repeatedly demonstrated and this was independent of BMD [13,15]. This association with fracture adds uncertainty around the actual association between diabetes mellitus and BMD.
The aim of our study was to perform a meta-analysis of published articles exploring differences in BMD levels, measured at four anatomical sites, between type 2 diabetics and healthy individuals. In addition, we performed a meta-regression on factors influencing BMD variation, such as sex, age, BMI and glucose control (HbA1c levels), to evaluate potential mechanisms by which T2DM influences BMD variation.
Search strategy
A systematic search of all literature published in May 2010 or earlier was performed using PubMed and Ovid online (1950 to present with daily update). The search used the MeSH terms "diabetes mellitus" AND ("osteoporosis" OR "bone density" OR "bone mass").
Study selection
Studies were considered eligible for the meta-analysis if (1) they evaluated the association between T2DM and BMD, (2) they were of a cross-sectional, cohort or case-control design, (3) they included healthy subjects without DM as controls, (4) they reported gender-stratified statistics on both individuals with and without T2DM, (5) BMD was measured by dual energy X-ray absorptiometry (DXA) and (6) BMD measurements were expressed as an absolute value in g/cm2. In cases where more than one article presented data from the same study population, the study with the more complete reporting of data was selected.
Studies in nonhuman populations, review articles, experimental studies, case reports, studies that lacked controls, studies on type 1 or other types of DM, studies without a clear definition of T2DM, and studies in which BMD was measured by computed tomography, ultrasound or single X-ray absorptiometry were all regarded as ineligible.
Only published results were used, and papers in all languages were considered. We supplemented electronic searches by hand-searching the reference lists of relevant articles and reviews. The abstracts and titles of the initial collection were first screened and all observational studies were extracted. Potentially relevant articles were then assessed in duplicate. Disagreements were resolved by discussion between at least two reviewers.
Data
Quality-scoring varies in meta-analyses of observational studies and no criteria have been internationally accepted to date. Consequently, we appraised each article included in this analysis with the guidelines of the MOOSE group [16]. Some key points were: clear definition of study population, clear and internationally accepted criteria of diagnosing diabetes, description of the coefficient of variation for BMD measurements, consecutive selection of cases, random selection of controls and identification of important confounders. We required that at least 2 studies per site-specific BMD outcome should be available to perform a meta-analysis.
The mean and standard deviation (SD) of BMD measurements at the calcaneus, femoral neck, total hip, spine and forearm in both diabetics and non-diabetics were extracted to estimate the pooled mean difference. If repeated measurements were available in cohort studies, we extracted only the measurements at baseline (or the earliest available measurement), thereby treating the study as cross-sectional. The means and standard deviations had to be unadjusted because the adjustment factors varied widely between studies. If there were statistically significant age differences between patients and controls and age-adjusted means and deviations could be found, these data were used; if they could not be found, the study was excluded. In addition, we performed meta-analysis including the maximally adjusted estimates from studies where available. If the sample size of either group in a comparison was less than 30, it was not used in our analysis. Gender was considered a determinant for subgroup analysis.
If studies lacked SD estimates but provided a P value, standard error (SE) or confidence interval (CI) related to the mean difference, we estimated SDs using the following methods [17]:

1. From SE to SD: the following formula was used: SD = SE / √(1/Ncase + 1/Ncontrol), where SD is the average of the SDs of the case and control arms and Ncase and Ncontrol are the sample sizes.
2. From CI to SD: SE = (upper limit - lower limit)/3.92 (for a 95% CI), then SE was replaced in the formula above.
3. From P value to SD: the t-value corresponding to the P value was obtained from a table of the t-distribution with degrees of freedom given by Ncase + Ncontrol - 2; then, assuming SE = MD/t (where MD is the mean difference between cases and controls), SE was replaced in the formula above.
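These conversions translate directly into code. The sketch below mirrors methods 1-3, except that a normal approximation to the t-distribution is used for the P-value conversion (adequate for the large degrees of freedom in these studies); the example numbers are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def sd_from_se(se: float, n_case: int, n_control: int) -> float:
    """Average SD of the two arms implied by the SE of a mean difference."""
    return se / sqrt(1 / n_case + 1 / n_control)

def se_from_ci(lower: float, upper: float) -> float:
    """SE recovered from a 95% confidence interval of the mean difference."""
    return (upper - lower) / 3.92

def se_from_p(p: float, mean_diff: float) -> float:
    """SE recovered from a two-sided P value, using a normal approximation
    to the t-distribution in place of a t-table lookup."""
    t = NormalDist().inv_cdf(1 - p / 2)
    return abs(mean_diff) / t

# Example: SE of 0.015 g/cm2 with 120 diabetics and 240 controls
print(round(sd_from_se(0.015, 120, 240), 4))  # ≈ 0.1342
```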
Analyses
The weighted mean difference estimates of BMD in g/cm2 comparing diabetics with controls were calculated as DerSimonian and Laird estimators using random effects models. As secondary analyses, inverse variance fixed effect models were applied. Publication bias was assessed using funnel plots. Tests for heterogeneity were performed by applying the Cochran Q test and estimating the inconsistency index (I2) [18]. Sources of heterogeneity were investigated by sensitivity analyses stratifying on study design and by excluding studies on Asian populations, studies presenting large differences in BMI between cases and controls, and/or studies in which BMD measurements were assessed by different densitometers. All analyses were conducted with the use of Review Manager, version 5.0 (Revman, The Cochrane Collaboration; Oxford, UK) and Comprehensive Meta-analysis version 2 (Biostat, Inc., Englewood, USA).
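The DerSimonian and Laird estimator, Cochran's Q and the I2 index described above can be computed as in the sketch below; the input effects and variances are hypothetical, not study values:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling of mean differences with the
    DerSimonian-Laird estimator, plus Cochran's Q and the I^2 index."""
    w = [1 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

effects = [0.05, 0.03, 0.08, 0.04]          # hypothetical mean differences, g/cm2
variances = [0.0004, 0.0002, 0.0009, 0.0003]
print(dersimonian_laird(effects, variances))
```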
To estimate the effects of gender, age, BMI and HbA1c on the BMD measured at the different sites, a meta-regression analysis was performed using STATA 11.0 (StataCorp LP, USA).

Results

Figure 1 shows a flow diagram describing the study selection process. The initial search yielded 1,161 research reports, of which 222 were excluded for having the same title or authors; 788 were excluded due to ineligible study design (including non-human studies, review articles, case reports, comments, letters, experimental studies, and/or fracture-only outcomes). An additional 109 studies were found irrelevant to the original research question and excluded because the disease of interest was either type 1 or gestational DM (81 studies), or because bone mass was not measured using DXA, i.e. measured by single X-ray absorptiometry, CT or ultrasound (28 studies). Of the 42 remaining studies, 11 either lacked non-diabetic controls altogether or did not report means and standard deviations in non-diabetic controls [19][20][21][22][23][24][25][26][27][28][29], and six had a small sample size (n < 30) in either comparison group [30][31][32][33][34][35]. The study populations of two studies were used in follow-up reports [4,36]. In three studies there was a large age difference between individuals with diabetes and those without diabetes, but the investigators did not adjust for it [37][38][39]. One study matched cases and controls by age and BMI and presented data only post-matching [40]. The original articles of four reports could not be retrieved [41][42][43][44]. All of these aforementioned studies were excluded. One study cited as a reference in one of the research reports was traced and satisfied the inclusion criteria [45]. In one research report the results of gender-specific BMD analyses were mentioned, but not listed in detail [14]; we contacted the researchers and were able to retrieve this information. The study of Perez et al. [46] found a significantly increased calcaneal BMD in female but not in male subjects with diabetes. No meta-analysis was attempted for this site, since this was the only study that evaluated BMD at the calcaneus. Since no SDs for the male comparison groups could be retrieved from the paper by Barrett-Connor et al., we were not able to include these results for men. As we extracted only a single measurement and did not examine repeated measurements, cohort studies were analyzed as cross-sectional using the baseline or earliest available measurement. A total of 15 observational studies (9 case-control, 6 cross-sectional) were included in our meta-analysis (3,437 diabetics and 19,139 controls) [5-7, 11, 12, 14, 45, 47-54]. Table 1 indicates the quality evaluation of all studies. We did not observe indications of publication bias on the funnel plots (data not shown), with the effect magnitudes of larger studies being closer to the summary estimate and those of smaller studies being largely equally distributed on both sides of it. Table 2 shows study population characteristics and the reported effects of covariates on the association between BMD and T2DM. Of the five studies performed in the US, one included Mexican-American women [6] and one had white and black participants [51]. One study was done in Eastern Asia [7] and another two in Eastern Europe [53,54]. The remaining eight studies collected data in Western Europe and Oceania. Participants in all study populations were aged 25 years and over, and approximately 70% were middle-aged or older.
In addition, Table 2 shows that the most common covariates considered by the studies were BMI or weight, cigarette smoking, alcohol use, physical activity, diuretic use, calcium intake, estrogen use (women), menopause status (women), age at menarche (women), insulin level, HbA1c and alkaline phosphatase. Table 3 shows the population characteristics of the source studies by gender. Table 4 presents BMD levels in diabetics and non-diabetics at four skeletal sites across the different studies, also including subgroup analysis by gender. At the femoral neck, all studies except for Yaturu et al. [5] and Majima [7] found a higher BMD in subjects with diabetes. At the total hip, all referred studies showed significantly higher BMD in diabetics. At the lumbar spine, almost all of the studies reported a higher BMD in diabetics. These differences were statistically significant in the vast majority. At the forearm there were no significant differences between diabetics and non-diabetics in all analyses. No major differences between genders were found.
Some reports concluded that the association remained significant even though the effect size decreased remarkably after correcting for the aforementioned covariates [6,11,12,14,48,54]. In others, the association disappeared or even shifted in the opposite direction after adjustment for covariates, particularly in the case of BMI or weight [5,49,51,52]. We performed meta-analysis with the maximally adjusted estimates where available, which did not significantly alter the previously calculated mean differences. Nearly all studies found that BMI was positively correlated with BMD. There was some evidence suggesting that other factors, such as insulin levels, also had a positive correlation with BMD [7]. In contrast, HbA1c levels had a positive [7], negative [51] or no correlation [50] with BMD. In a follow-up study, Schwartz [51] found that, after adjustment for covariates, white women with T2DM lost on average more BMD per year than those without DM.
The heterogeneity (Q) tests showed significant differences between individual studies (P \ 0.01) at all sites in the total group and sex-specific analyses (Table 5). Still, point estimates and statistical significance from fixed effects models were very similar to those derived from random effects models. We further performed sensitivity analyses to identify potential sources of the observed heterogeneity. Subgroup analyses per study design (casecontrol/cross-sectional) showed that case-control studies had effect estimates with larger variation around the pooled estimate thereby increasing the heterogeneity. For the femoral neck BMD analysis the largest source of heterogeneity was traced back to one study by Yaturu et al. [5]. This study include only men and observed a positive relation with lumbar spine and a negative one for femoral neck; after removing this study the I 2 statistic dropped from 81 to 57 %. Another study in Asians also displayed estimates in the opposite direction for different outcomes though not significant [7]. Removing seven studies with significantly different BMI between diabetes and non-diabetes [5,12,14,47,50,51,54] or six studies that did not use a densitometer manufactured by Hologic incorporation (USA) [5,12,14,48,50] from the analyses showed no significant influence on the observed heterogeneity, except for the femoral neck BMD analysis, but this was largely attributable to the large heterogeneity brought in by the Yaturu et al. study [5].
The results of a meta-regression on BMD by sex, age, BMI and glucose control (HbA 1c levels) is presented in Table 6 for individuals from the diabetic group of the studies. Being a woman was associated with significantly lower BMD levels at all four anatomical sites, as compared to men. Age was negatively associated with BMD at hip but positively at the lumbar spine. Higher BMI was a strong determinant of higher BMD at the femoral neck and lumbar spine, with no apparent effect on forearm BMD. Higher HbA 1C levels (reflecting lesser glucose control) resulted in higher BMD at the femoral neck and total hip.
Discussion
Our study provides insights into the inconsistently reported relationship between T2DM and BMD. In line with what is suggested by the majority of reviewed studies our metaanalysis concluded that overall individuals with T2DM have about 25-50 % SD higher BMD compared to nondiabetic control subjects.
In this study we found no strong evidence for skeletal site specificity of this association. Subjects with T2DM had elevated BMD at the femoral neck, hip, and spine. No major differences in BMD at the forearm were seen but there are no obvious biological reasons we can attribute to them. This lack of association with forearm BMD may be the consequence of limited sample size. We also found no strong evidence suggesting there is sex-specificity in the observed BMD differences between diabetics and nondiabetics. BMD differences seem larger in women than in men but power limitations can also play a role. We did find considerable heterogeneity influencing the association as reflected by a high I 2 statistic. This large heterogeneity could most probably stem from a large variation in types of study design, diagnostic definitions and individual characteristics that were not considered by each study. We did sensitivity analyses trying to find sources of heterogeneity and concluded that study design and Asian ethnicity are a likely, but not sufficient sources to explain the observed heterogeneity. In contrast, differences in DXA manufacturers and levels or correction for BMI do not seem to be an important source of heterogeneity.
Our study has limitations. We procured including all eligible studies to the best of our capacities but at least four SD written as NA if neither exact P value, SE or CI was available a Using the formula from P value to SD b Using the formula from SE to SD c Using the formula from CI to SD studies were not able to be traced back. Sensitivity analyses considering such studies did not essentially change our results or conclusions. Variation in the definition of T2DM was present across studies with some combining selfreports and blood glucose tests, while others only used blood glucose tests. Studies which relied either on selfreports, population screening or which used register data will be subject to potential disease misclassification bias. Similarly, differences in mode of diagnosis can affect the prevalence of disease across studies and, hence, influence the power for detecting BMD differences. Disease duration can also be an important confounder, but uniform assessment for this co-variable was not possible across studies. Another drawback is that not all studies reported on or adjusted for covariates. Yet another potential source for heterogeneity that we could not control for are differences in glucose control and prevalence of diabetic complications. Nevertheless, the meta-regression done for BMD on the group of diabetic individuals across studies shows that in addition to BMI, HbA 1C levels also has a significant positive effect on BMD measured at any site.
Since May 2010 about 134 articles have been published on the topic of which we could identify two that would have met our inclusion criteria [55,56]. These were studies based on Chinese populations showing opposite results with one concluding type 2 diabetics had higher BMD [55] while the other [56] concluded diabetics had lower BMD and higher risk of osteoporosis.
Mechanisms that might account for an association between T2DM and increasing BMD are plentiful and largely unclear. We discuss below from a clinical perspective the most important factors which can influence the relationship between T2DM and BMD.
Obesity
Historically, overweight and hyperinsulinemia have been postulated as two important features of T2DM which are positively correlated with BMD. Yet, we saw that in a considerable number of the included studies the correction for BMI did not essentially modify the association. There are several complex pathways by which obesity may influence the relation between diabetes and BMD. Body fatness may have an impact on the accuracy of DXA-based BMD measures as demonstrated in obese diabetic patients [57]. Yet, such measurement error should be negligible considering that this phenomenon can either under or overestimate the values and have been shown to have low impact on the accuracy of the BMD measurement [58]. On the other hand, adipose tissue releases a wide variety of adipokines that have been implicated either directly or indirectly in the regulation of bone remodeling [59]. Plasma leptin concentrations have been shown to be higher in diabetic men than in healthy controls [60]. Leptin induces bone growth by stimulating osteoblast proliferation and differentiation in vitro [61][62][63] and it has also been shown to inhibit osteoclastogenesis through reducing RANK/RANKligand production and increasing osteoprotegerin [64,65]. Other adipokines such as adiponectin and resistin are also expressed in osteoblasts and osteoclasts [66,67]. The effects of these adipokines on bone metabolism remain largely ambiguous but differentiation from mesenchymal progenitor cells to osteo-or adipocytes may play a role [67][68][69][70]. Some reports indicate that circulating adiponectin [71] and resistin levels [72] are reduced in diabetes in line with a recent report demonstrating that higher adiponectin levels are associated with lower BMD [73].
Hyperinsulinemia
Some of the reviewed studies indicated that insulin levels could mediate in part a positive association between T2DM and elevated BMD. Individuals with T2DM usually have an excess of insulin. Physiologically, insulin has an anabolic effect on bone due to its structural homology to IGF-1 by interacting with the IGF-1 receptor which is present on osteoblasts [74]. The IGF-1 signaling pathway is crucial for bone acquisition [75]: both human and mouse studies have demonstrated a significant positive association between IGF-1 and BMD [76,77]. From this perspective it can be hypothesized that hyperinsulinemia could have a mitogenic effect on osteoblasts and their differentiation by stimulating the IGF-1 signaling pathway. Some indirect influences of insulin on bone formation could possibly be mediated by osteogenic factors such as amylin, osteoprotegerin, sex steroids and sex hormone-binding globulin (SHBG).
Medication use
Thiazide use which is expected to be higher in diabetic individuals has also been associated with higher BMD at different skeletal sites [78,79]. Similarly, statin use (also more prevalent in diabetics) is also associated with higher BMD [80,81]. Nevertheless, several of the included studies controlled for medication use, and thus it is unlikely that this alone can explain the observed associations. On the other hand medication use can well be a source of the large heterogeneity observed in the meta-analysis.
Paradoxically increased fracture risk For many of the aforementioned mechanisms resulting in higher BMD it is rather difficult to fit their role in the paradoxically increased fracture risk. It has been well established that diabetic patients have impaired bone healing after fracture [82]. This probably indicates a compromise of both osteoclastic [82] and osteoblastic cell lineages [83], and possibly also on bone remodeling. Indeed, a recent study by Burghardt et al. [84] using highresolution peripheral quantitative computed tomography (HR-pQCT) reported up to twice the cortical porosity observed in type 2 diabetes patients as compared to controls. The results of this pilot investigation provide a potential explanation for the inability of standard BMD measures to explain the elevated fracture incidence in patients with T2DM presenting with higher BMD levels. Specifically, the findings suggest that T2DM may be associated with an inefficient redistribution of bone mass and insufficient compensation for increased body mass, which may result in impaired bending strength. In addition, bone strength might be compromised through different mechanisms, such as increased production of non-enzymatic cross-links within collagen fibers, accumulation of advanced glycation end products [85], higher serum glucose levels that can negatively influence bone matrix properties [86] or indirectly as a consequence of sarcopenia [87]. Finally, patients with diabetes have increased fall risk, which can arise as a consequence of sarcopenia, retinopathy and/or neuropathy. Very recently, it has been shown how Type 2 diabetes underestimates the risk of fracture at a given BMD level [88], reason why the diabetic status is needed to be considered in risk fracture algorithms [89,90].
Conclusion
Our meta-analysis showed that diabetic individuals have higher BMD levels than non-diabetics, independent of the skeletal site of measurement, gender, age, BMI or medication use. In addition, by applying a meta-regression we could establish that younger age, male gender, higher BMI and higher HbA1c are positively associated with higher BMD levels in diabetic individuals. The potential mechanisms underlying these associations remain complex, suggesting that several influential factors need to be considered when interpreting the association between T2DM and BMD. Large prospective studies are needed to establish the mechanisms underlying this association and, most importantly, the relationship with fracture risk, the most adverse consequence of osteoporosis. Values are regression coefficients ± SEM; * P value < 0.05 | 2016-05-12T22:15:10.714Z | 2012-03-27T00:00:00.000 | {
"year": 2012,
"sha1": "c1529312d3ff7da940d4e40d11d6ea5b1267dcca",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10654-012-9674-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1529312d3ff7da940d4e40d11d6ea5b1267dcca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17048533 | pes2o/s2orc | v3-fos-license | Nucleotide sequence analyses of the MRP1 gene in four populations suggest negative selection on its coding region
Background The MRP1 gene encodes the 190 kDa multidrug resistance-associated protein 1 (MRP1/ABCC1) and effluxes diverse drugs and xenobiotics. Sequence variations within this gene might account for differences in drug response in different individuals. To facilitate association studies of this gene with diseases and/or drug response, exons and flanking introns of MRP1 were screened for polymorphisms in 142 DNA samples from four different populations. Results Seventy-one polymorphisms, including 60 biallelic single nucleotide polymorphisms (SNPs), ten insertions/deletions (indels) and one short tandem repeat (STR), were identified. Thirty-four of these polymorphisms have not been previously reported. Interestingly, the STR polymorphism at the 5' untranslated region (5'UTR) occurs at high but different frequencies in the different populations. Frequencies of common polymorphisms in our populations were comparable to those of similar populations in HapMap or Perlegen. Nucleotide diversity indices indicated that the coding region of MRP1 may have undergone negative selection or recent population expansion. SNPs E10/1299 G>T (R433S) and E16/2012 G>T (G671V), which occur at low frequency in only one or two of the four populations examined, were predicted to be functionally deleterious and hence are likely to be under negative selection. Conclusion Through in silico approaches, we identified two rare SNPs that are potentially negatively selected. These SNPs may be useful for studies associating this gene with rare events including adverse drug reactions.
Background
The development of drug resistance poses a serious limitation to the effective treatment of cancer. Although several different drug resistance mechanisms have been described, members of the ABC transporter superfamily have generated great interest because of their contribution to the multidrug resistance of tumors [1,2]. The 170 kDa P-glycoprotein, encoded by the MDR1 gene, was the first member of this family to be described [3]. Subsequently, the 190-kDa multidrug resistance-associated protein-1 (MRP1/ABCC1) was isolated from a multidrug-resistant lung cancer cell line that does not express MDR1 [4]. Both these transporters have been implicated in the resistance of various cancers to chemotherapy. Although MRP1 is only 18% identical to MDR1 at the amino acid level, it transports several of the same drugs as MDR1, including doxorubicin, vincristine and colchicine. However, while drugs transported by MDR1 are usually neutral or cationic, drugs effluxed by MRP1 are anionic, frequently conjugated with glutathione and other anions, or are co-transported with glutathione [2]. MRP1 has also been implicated to play important roles in cellular anti-oxidative defense and inflammation [5,6].
Genetic polymorphisms in MDR1 have been associated with differences in MDR1 expression and function as well as drug response and disease susceptibilities [8][9][10]. SNPs within MDR1 that have been associated with functional differences were found to demonstrate evidence of recent positive selection [11]. However, less is known about the polymorphisms within MRP1. Although numerous SNPs have been identified within this gene [12][13][14][15][16][17], most of these studies were performed on a single population, primarily Chinese, Japanese or Caucasian in origin. Thus far, no associations have been observed between the few SNPs at MRP1 and functional differences [12,[17][18][19], possibly because neither the functionally important SNP nor SNPs in LD with the functional SNP were examined. These studies, which examined only a few of the many SNPs within MRP1 without knowledge of the functional SNP or of the LD or haplotype profile in that population, may not have been powerful enough to identify any association. Recently, we found evidence of genomic signatures of recent positive selection in a SNP at the 5' flanking region (5'FR) of MRP1 in a Caucasian population and demonstrated that this SNP altered MRP1 promoter activity [20].
In the present study, we sequenced all the exons as well as the 5' and 3' flanking regions of MRP1 to comprehensively scan for polymorphisms in 142 DNA samples from four different populations, namely the Chinese, Malays, Indians and Caucasians. Nucleotide diversity of the exonic polymorphisms was determined, and the functional effects of the non-synonymous SNPs were predicted using three programs: SIFT, PolyPhen and PANTHER. We found that SNPs E10/1299 G>T, which results in an arginine-serine substitution at amino acid position 433 (R433S), and E16/2012 G>T, which results in a glycine-valine substitution at amino acid position 671 (G671V), may potentially adversely affect the function of MRP1. While these two SNPs, which have low minor allele frequencies (<3%), may not be useful for studies associating this gene with common diseases/drug response, they may, nonetheless, be useful for studies associating this gene with rare events, including adverse drug reactions (ADRs).
Profile of polymorphisms within MRP1 in the different populations
De novo sequencing of approximately 18 kb of genomic DNA at MRP1, including all 31 exons as well as flanking regions, was performed in 142 healthy individuals from four different populations to identify polymorphisms at MRP1 in the different populations. A total of 71 polymorphisms were identified, including 60 bi-allelic SNPs, ten indels and one short tandem repeat (STR) (Figure 1, Tables 2, 3, 4). An examination of currently reported SNPs in the dbSNP Build 125 database [21] and published reports [12][13][14][15][16][17][22] revealed that 26 SNPs and 8 indels were not previously reported and hence represent novel polymorphisms.
Nineteen of the 60 SNPs identified were found in all four populations, while 28 of these SNPs were population-specific, with 22 of these population-specific SNPs occurring only once out of the 284 chromosomes examined (singletons). While the STR and two indels were polymorphic in all the populations examined, the other seven indels were population-specific, of which six were singletons.
None of the indels or the STR identified occurred within exons (Fig. 1, Tables 3 and 4). Eighteen of the 60 bi-allelic SNPs were found in exonic regions, six of which resulted in non-synonymous changes (Fig. 1, Table 2). These results suggest that polymorphisms at MRP1 are largely conservative, since less than 10% of these polymorphisms (6/71) presented as non-synonymous changes that are potentially capable of disrupting the MRP1 protein structure/function. Nonetheless, it is possible for synonymous or intronic SNPs to affect MRP1 expression or function through alteration of mRNA transcript stability or folding [23], thereby affecting downstream splicing [24,25], processing [26], translational control [27] or regulation [28]. Additionally, polymorphisms at the 5'UTR/promoter and 3'UTR may influence promoter activity, and hence gene expression, or mRNA transcript stability.
Interestingly, although no polymorphisms were identified at the 3'UTR region (exon 31), four polymorphisms, including the STR (Table 4) and one indel (Table 3), were found to reside at the 5'UTR/reported core promoter region [29] of MRP1. Three of these promoter polymorphisms were novel but population-specific, with SNPs 5'UTR/-46C>T and 5'UTR/-51C>T occurring only in the Malay population and the polymorphism 5'UTR/-74 14 bp indel occurring only in the Indian population. Insertion/deletion polymorphisms in promoter regions have been correlated with the modulation of the expression of genes (e.g., the matrix metalloproteinase I gene [30]). It is thus possible that the 14 bp indel polymorphism in the Indian population may influence the promoter activity, and hence the expression, of MRP1.
The STR polymorphism found at the 5'UTR/promoter region of MRP1 is a GCC trinucleotide repeat, and 7-16 such repeats were observed in the four populations (Table 4). Interestingly, while seven GCC repeats occurred at relatively high frequencies in the Indian and Caucasian populations (≥ 12%), this number of repeats was not observed in either the Chinese or Malay population. These observations highlight the differences in the distribution of the number of MRP1 promoter GCC repeats in the different populations, with the Indians and Caucasians being more similar to each other than to the Chinese and Malays. The number of STR repeats residing within or close to promoters has been found to modulate the promoter activity of genes [31][32][33][34]. Interestingly, differences in the CGG and GCC trinucleotide repeats at the 5'UTR/promoter region of the Fragile X mental retardation genes (FMR1 and FMR2, respectively) have been associated with differences in the methylation status of the promoter and expression of the genes [35]. Hence, this common polymorphism at the 5'UTR/promoter region of MRP1, with distinctly different distributions of repeat numbers in the different populations, may have potential functional significance.
Comparison of polymorphisms identified in this study with those reported in the HapMap and Perlegen databases
As shown in Table 5A, only 19 and 14 of the polymorphisms that we identified were also genotyped in the HapMap and Perlegen projects, respectively. Curiously, 25/23 and 1/1 polymorphisms reported in HapMap and Perlegen, respectively, were found to be monomorphic in the similar populations that we examined. Nonetheless, all the SNPs examined in the two databases that did not occur in our populations were found to be either monomorphic or of low frequency (<5%) in the similar populations examined in the two databases (Table 5A). On the other hand, 41/41 and 46/46 polymorphisms that we identified were not examined in the HapMap or the Perlegen project, respectively. While many of these polymorphisms were of low frequency or monomorphic in the two populations that were similar to the HapMap/Perlegen populations, 8 of these polymorphisms were found to be of relatively high frequency (>5%) in at least one of the two populations. Some of the low frequency polymorphisms represent novel SNPs identified in this study.
Polymorphisms in our study that were also genotyped in the HapMap and Perlegen projects were found to have similar frequencies in the corresponding populations (Table 5B).
A paired t-test revealed no significant difference (P > 0.05) between allele frequencies in the respective populations from our study and those from the HapMap or Perlegen database (Table 5C). Interestingly, a significant difference (P < 0.05) was observed between data obtained from the HapMap database and those from the Perlegen database, especially for the Chinese population, probably due to fewer samples being examined in the Perlegen database.
Nucleotide diversity at MRP1
The extent of variation at MRP1 was evaluated using two conventional measures of nucleotide diversity: π, the average heterozygosity per site, and θ, the population mutation parameter [40]. Tajima's D statistic was also calculated to assess deviation from the neutral mutation model [41]. A positive Tajima's D value for a single gene is indicative of heterozygote advantage, while a negative Tajima's D value for an individual gene suggests selection of a specific allele over the alternative allele(s) [42]. However, when a negative Tajima's D value is observed in most of the genes examined in a particular population, it is suggestive of a recent expansion in that population [42].
With all the exonic regions sequenced, the above nucleotide diversity statistics were determined for non-synonymous versus synonymous SNPs at MRP1 (Table 6). The θ value for synonymous SNPs at MRP1 was found to be 16.15 × 10⁻⁴ (Table 6), which was comparable to the mean θ values of other reported genes, including 24 transporter genes ((20.14 ± 4.10) × 10⁻⁴) [43] and 75 candidate genes associated with blood pressure homeostasis ((15.1 ± 3.6) × 10⁻⁴) [44], but slightly higher than the mean θ value of 106 random genes ((10.03 ± 2.52) × 10⁻⁴) [45]. However, the θ value for non-synonymous SNPs (11.73 × 10⁻⁴) at MRP1 was much higher than the mean θ values of the other reports ((3.59 ± 0.90 to 5.7 ± 1.4) × 10⁻⁴) [43][44][45], probably due to the small size of the MRP1 exons in which the non-synonymous SNPs reside. Interestingly, while the π of synonymous SNPs (π_s) at MRP1 (12.62 × 10⁻⁴) was comparable to the reported mean π_s values in other genes ((9.73 ± 4.86 to 10.67 ± 5.07) × 10⁻⁴) [43,45], the π_ns at MRP1 (0.94 × 10⁻⁴) was much lower than the mean reported π_ns values for the other genes ((2.20 ± 1.12 to 2.75 ± 1.31) × 10⁻⁴) [43,45]. This low π_ns at MRP1 was also reported previously [43], with the earlier reported π_ns value (0.15 × 10⁻⁴) being much lower than the present observation (0.94 × 10⁻⁴). Notably, the π_ns/π_s ratio at MRP1 was less than 1 (0.0743 in this study and 0.0110 in the previous study [43]), suggesting that this gene is likely to be under selective pressure. Importantly, the θ values for both synonymous and non-synonymous SNPs were greater than the corresponding π values, resulting in a negative Tajima's D statistic, which suggests that the coding region of MRP1 may have undergone negative selection or population expansion. It is more likely that the MRP1 gene has undergone negative selection, since the average total nucleotide diversity at MRP1 (π_total = 9.25 × 10⁻⁴) was found to be greater than the amino acid diversity (π_ns = 0.94 × 10⁻⁴) [43].
SNPs E10/1299 G>T and E16/2012 G>T are potentially deleterious
As the nucleotide diversity statistics suggest that the coding region of MRP1 may be under negative selection, we further analyzed the exonic SNPs at MRP1 to evaluate whether any of these SNPs may have deleterious effects on MRP1 structure/function.
Exonic SNPs, particularly non-synonymous SNPs, have the potential to alter the secondary/tertiary structure of proteins and/or affect protein function. A total of 18 exonic SNPs were identified at this gene locus, of which five have not been previously reported (Fig. 2A, C). Most of the exonic SNPs occurred at low frequencies (<5%) in only one or two populations. While at least 30% of the synonymous SNPs at MRP1 occurred at greater than 5% frequency in all four populations examined, all of the non-synonymous SNPs occurred at less than 3% in only one or at most two populations (Fig. 2C). This observation highlights the conservation of exonic polymorphisms at MRP1 and suggests that the non-synonymous variants may have deleterious effects and are likely to be selected against, resulting in their low frequencies.
To assess whether any of the non-synonymous SNPs at MRP1 have a potentially damaging effect on the protein structure/function, the locations of these six SNPs were displayed on the MRP1 protein topological image using the SOSUI and TOPO2 programs. As evident in Figure 2B, none of the non-synonymous SNPs reside in the transmembrane regions, although four of these SNPs reside near or within the nucleotide binding domain (NBD) of the MRP1 protein. Nonetheless, SNPe1 (SNP E2/218 C>T) and SNPe6 (SNP E10/1299 G>T) reside near the transmembrane region, while SNPe12 (SNP E16/2012 G>T) resides on a conserved glycine residue near the conserved Walker A consensus motif of the NBD [12], suggesting that these SNPs may have functional significance. SNP E2/218 C>T was found at less than 3% frequency in the Chinese and Malay populations only, while SNP E10/1299 G>T occurred at less than 2% in the Caucasian population only and SNP E16/2012 G>T occurred at less than 3% in the Indian and Caucasian populations (Fig. 2C). The frequencies of SNP E10/1299 G>T and SNP E16/2012 G>T in the Caucasian population were comparable to a previous report [12].
Three different algorithms, SIFT [46], Polymorphism Phenotyping (PolyPhen) [47] and PANTHER [48], were then utilized to predict the functional significance of the six non-synonymous SNPs. SIFT predicts the effect of amino acid substitutions based on the assumption that important amino acids will be conserved within the protein family [46]. PolyPhen predicts the effect of an amino acid variant on the function or structure of the protein based on current knowledge of protein structure, interactions and evolution [47], while the PANTHER program predicts the effect of an amino acid substitution on the protein's function using amino acid substitution scores derived from an alignment of related protein sequences and statistics from hidden Markov models [48].
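To make the conservation logic behind such predictions concrete, the sketch below scores a substitution by how often the substituted residue appears at that alignment position among homologs. This is a toy simplification, not SIFT's actual algorithm (which uses normalized probabilities with pseudocounts); the alignment column and the position label are invented for illustration.

```python
# Toy illustration of conservation-based prediction: a substitution to a
# residue rarely observed at that position among homologs is flagged as
# likely deleterious. The column and cutoff are illustrative only.

def column_tolerance(column, substitute, cutoff=0.05):
    """Return (frequency of `substitute` in the column, deleterious flag)."""
    freq = column.count(substitute) / len(column)
    return freq, freq < cutoff

# Hypothetical column of aligned residues at a G671-like position:
column = "GGGGGGGGGGGGGGGGGGAG"  # glycine strongly conserved
freq, deleterious = column_tolerance(column, "V")
print(f"freq(V) = {freq:.2f}; predicted deleterious: {deleterious}")
```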
Interestingly, SNP E10/1299 G>T, which is located near the transmembrane domain, was predicted to be potentially deleterious by the PANTHER but not the SIFT or PolyPhen algorithms. This SNP was reported to affect the ability of MRP1 to confer drug resistance as well as to transport organic anions [49], suggesting that the PANTHER program may be more accurate in predicting the functional impact of polymorphisms than SIFT or PolyPhen. This observation is similar to a previous report that utilized both bioinformatics and biochemical approaches to compare the accuracy of the PolyPhen and PANTHER programs in predicting functionally deleterious polymorphisms in the ABCA1 gene [50]. That study found that the PANTHER software is significantly (P < 0.05) more accurate in its prediction of the functional consequences of non-synonymous SNPs, correctly predicting the functional impact of greater than 94% of the polymorphisms examined, while PolyPhen was only ~88% accurate [50].
Significantly, all three algorithms predicted that SNP E16/2012 G>T, which resides close to Walker A and results in the G671V substitution, is likely to have a deleterious effect on protein function (Fig. 2C). The significance of this polymorphism has also been demonstrated previously by Conrad et al. [12], who reported that the mRNA expression in peripheral lymphocytes from individuals carrying the SNP E16/2012 G>T polymorphism was lower than the average expression level. The lower expression of the MRP1 G671V transcript is suggestive of greater accumulation of MRP1 drug substrates in cells, which may lead to adverse drug reactions. Curiously, that report also found that the G671V polymorphism did not affect the transport of MRP1 substrates, including leukotriene C4, 17β-estradiol 17β-(D)-glucuronide and estrone sulfate, by membrane vesicles prepared from transiently transfected HEKSV293T cells [12]. Recently, the same group also reported similar MRP1 protein expression levels and transport properties in human embryonic kidney cells transfected with MRP1 constructs carrying either glycine or valine at amino acid position 671 [51]. The observation that the G671V polymorphism did not affect MRP1 protein expression or the transport of some MRP1 substrates in vitro [12,51] does not rule out the possibility of functional significance of this polymorphism in vivo, especially since the same group reported decreased transcript expression in individuals carrying this polymorphism. It is still possible that this polymorphism affects the transport of other MRP1 substrates that have not been examined. It is also possible that, although the SNP E16/2012 G>T polymorphism does not affect MRP1 transport ability, it may affect other yet-to-be-examined functional properties of the protein (e.g., drug resistance capability, cellular anti-oxidative defense or inflammation). It has been reported that an artificial mutation, E1089Q, created in MRP1 markedly affected the ability of the MRP1 protein to confer resistance without affecting its ability to transport organic anions [52]. Hence, the SNP E16/2012 G>T polymorphism warrants further investigation.
Hence, the bioinformatics approach may be useful in facilitating the prediction of potentially functionally significant polymorphisms, so that future research may be directed toward characterizing these polymorphisms.
Functional implications of polymorphisms at MRP1
The current detailed characterization of polymorphisms at MRP1 in four different ethnic populations highlights several characteristics of this gene that may facilitate more rational approaches to studies associating it with functional changes. We have previously reported that the diverse haplotypes and weak LD across MRP1 [20] could perhaps provide an explanation for the failure of previous studies to detect associations between polymorphisms in this gene and functional differences [12,[17][18][19], and they highlight the importance of fully characterizing the LD and haplotype profiles of the gene before embarking on association studies. Its LD and haplotype architecture suggest that it may be necessary to identify alternative approaches for association studies of this gene, as it may not be feasible to utilize tag SNPs. A possible approach is to identify polymorphisms with potential functional significance before performing association studies, possibly by identifying those polymorphisms that may have been subjected to selection pressures.
We recently identified a high frequency SNP at the 5' flanking promoter region of MRP1 that demonstrated evidence of recent positive selection and affected the promoter activity of MRP1 [20]. In this report, through the sequencing of the MRP1 exonic and flanking regions, we identified a GCC-trinucleotide multi-allelic STR polymorphism residing within the 5'UTR/promoter region of MRP1 that was found at relatively high frequencies in all populations examined. Notably, the frequency distributions of the different STR alleles differed between populations (Table 4). Although it was previously reported that the 5'UTR/promoter region contains GCC-triplet repeats that are absent in the rodent sequence, and that 7, 13 and/or 14 of these repeats were observed in different cell lines and PBMC from a single individual [7,53], no reports have yet examined the variation of this polymorphism in different ethnic populations. This STR is approximately 296 bp from the SNP that we previously reported to show evidence of recent positive selection [20]. Given that the selection is recent, it may be expected that the STR would be in strong LD with the positively selected SNP. Since STRs within or near promoters have been implicated in affecting promoter activity and gene expression levels, it would be worthwhile to further examine the effect that this polymorphism, together with the positively selected SNP, has in influencing promoter activity and hence expression of MRP1.
Interestingly, while the promoter region of MRP1 may be under recent positive selective pressure [20], in this study we found that the coding region of this gene may have undergone negative selection, as suggested by nucleotide diversity indices. Two coding SNPs, E16/2012 G>T and E10/1299 G>T, were predicted by either the PANTHER program alone or by all three programs (SIFT, PolyPhen and PANTHER) to have a deleterious effect on the structure/function of the protein. The significance of these SNPs for general association studies may be limited, since they occur at very low frequencies (<3%) in only one or two of the four populations examined. Nonetheless, these SNPs may be associated with rare events, including ADRs.
Conclusion
In summary, based on the "common disease-common variant" hypothesis, the previously reported common polymorphism within the promoter of MRP1 that showed evidence of recent positive selection [20] would be useful for association studies of common diseases/drug response. Nonetheless, the rare exonic SNPs in this gene that we demonstrate here to be likely under negative selection pressure may be useful for studies associating this gene with rare phenotypes, including ADRs, which have been listed among the top five leading causes of death in Western countries [54].
Study population
The populations examined comprised individuals residing in Singapore from the following ethnic groups: 36 Caucasians, 36 Chinese, 35 Malays and 35 Indians (142 individuals in total). Race and ethnic group were declared by the volunteers to be true to three generations. Informed consent from the volunteers and ethical approval from the National University Hospital and the Changi General Hospital Institutional Review Boards were obtained.
PCR and DNA sequencing
The MRP1 genomic DNA sequence (NT_0101393.13) was obtained from GenBank [55] and used as the reference sequence. For the sequencing of all 31 exons of MRP1, 30 pairs of primers (see Table 1) were designed using the Vector NTI 7.0 software and utilized to amplify these exons. The amplicons spanned the entire exon as well as some flanking sequence, to ensure that the splice donor and acceptor sites were also included. The PCR reaction was performed in a 10 µl reaction volume containing 40 ng genomic DNA template from the above-mentioned samples, 5 µl 2× PCR master mix buffer (Qiagen, Valencia, CA, USA), with or without 1 µl Q-solution (depending on the GC content of the amplicon), as well as 0.20 µM of sense and anti-sense primers. PCR was carried out in a GeneAmp® PCR System 9700 (Applied Biosystems, Foster City, CA) with the following thermal cycling conditions: an initial denaturation at 94°C for 15 min, followed by 35 cycles of 94°C for 30 sec, the optimal annealing temperature for each amplicon as specified (Table 1) for 90 sec, and extension at 72°C for 60 sec. This was then followed by a final elongation step at 72°C for 10 min. The PCR products obtained were then treated with exonuclease I and shrimp alkaline phosphatase (SAP, United States Biochemical). Sequencing reactions were performed using the ABI PRISM Big Dye Terminator (V3.0) kit, and the conditions for the sequencing reactions were (for all exons except exon 1): 94°C for 15 min followed by 30 cycles of 96°C for 10 sec, 50°C for 5 sec and 60°C for 4 min. Due to the high GC content of exon 1, the sequencing conditions were modified as follows: 94°C for 15 min followed by 35 cycles of 98°C for 30 sec, 48°C for 10 sec and 60°C for 5 min. The final product was resolved by automated capillary electrophoresis on an ABI PRISM 3700® DNA analyzer (Applied Biosystems). The DNA sequence of each exon obtained experimentally was then aligned against the reference sequence (NT_0101393.13) using the Vector NTI 7.0 software to identify polymorphic sites. Polymorphisms identified were verified through bidirectional re-sequencing of all samples whose chromatograms did not clearly display the polymorphism, as well as randomly selected samples whose chromatograms clearly showed the polymorphism.
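As a minimal illustration of the final comparison step, the sketch below scans an aligned sample sequence against the reference and reports mismatches as candidate polymorphic sites. It is a simplified stand-in for the Vector NTI alignment described above; the sequences and the coordinate offset are hypothetical.

```python
# Simplified stand-in for comparing an experimentally determined exon
# sequence against the reference to flag candidate polymorphic sites.
# Sequences and the genomic offset are hypothetical examples.

REFERENCE = "ATGGCGCTCCGGGGC"   # reference exon fragment
SAMPLE    = "ATGGCGCTTCGGG-C"   # sample read; '-' marks a deletion

def candidate_variants(ref, smp, offset=0):
    """Return (position, ref base, sample base, type) for each mismatch.

    Assumes the sequences are already aligned to equal length; a '-'
    on either side is reported as a candidate indel.
    """
    if len(ref) != len(smp):
        raise ValueError("sequences must be aligned to equal length")
    out = []
    for i, (r, s) in enumerate(zip(ref, smp)):
        if r != s:
            kind = "indel" if "-" in (r, s) else "SNP"
            out.append((offset + i, r, s, kind))
    return out

for pos, r, s, kind in candidate_variants(REFERENCE, SAMPLE, offset=100):
    print(f"{kind} at position {pos}: {r}>{s}")
```

Each candidate site flagged this way would then be confirmed by the bidirectional re-sequencing described above.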
Population genetic parameters
Two common parameters of nucleotide diversity were calculated: the neutral parameter (θ), which is the estimate of the population mutation parameter based on the number of polymorphic sites in the sample [40], and nucleotide diversity (π), which is the direct estimate of heterozygosity per site, or the average proportion of nucleotides that differ between any randomly sampled pair of sequences [40]. Each of these two parameters was calculated for synonymous and non-synonymous SNP sites. Tajima's D statistic was also calculated to assess deviations from the neutral mutation model [41].
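To make these estimators concrete, the sketch below computes π (mean pairwise differences), Watterson's θ, and Tajima's D for a small set of aligned haplotypes, using the standard constants from Tajima's original derivation. The toy haplotypes are hypothetical, and the per-site normalization and the synonymous/non-synonymous partitioning used in the study are omitted for brevity.

```python
from itertools import combinations

def tajima_d(seqs):
    """Return (pi, theta_w, D) for a list of equal-length aligned sequences.

    pi is the mean number of pairwise differences, theta_w is Watterson's
    estimator S/a1, and D follows Tajima (1989). Values here are per
    sequence, not per site; divide by sequence length for per-site values.
    """
    n = len(seqs)
    L = len(seqs[0])
    # Segregating sites: columns with more than one allele.
    S = sum(1 for j in range(L) if len({s[j] for s in seqs}) > 1)
    # Mean pairwise differences.
    pairs = list(combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)
    # Tajima's constants.
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    theta_w = S / a1
    var = e1 * S + e2 * S * (S - 1)
    D = (pi - theta_w) / var**0.5 if var > 0 else 0.0
    return pi, theta_w, D

# Hypothetical haplotypes: rare derived alleles (singletons) drag pi
# below theta_w, yielding a negative D.
haps = ["AAAA", "AAAA", "AAAA", "AAAA", "AAAT", "AACA"]
pi, theta, D = tajima_d(haps)
print(f"pi={pi:.3f} theta_W={theta:.3f} Tajima's D={D:.3f}")
```

An excess of rare variants of this kind is the pattern reported above for the MRP1 coding region, where θ exceeds π for both synonymous and non-synonymous sites.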
In Silico characterization of polymorphisms in exons
The programs Sorting Intolerant From Tolerant (SIFT) [46], Polymorphism Phenotyping (PolyPhen) [47] and PANTHER [48] were utilized to evaluate the potential effect of amino acid substitutions resulting from the polymorphisms. Since the position of the non-synonymous polymorphic amino acid residue on the MRP1 protein may provide important clues with regard to its potential functionality, the SOSUI program [56] was utilized to predict the topology of the MRP1 protein, and the TOPO2 program [57] was used to display the locations of the SNPs on the MRP1 protein topological image.
Authors' contributions
ZW contributed to the design of the experiments, data analysis and the write-up of the manuscript. PHS carried out the sequencing experiments. HA and SR contributed to the critical review of the draft manuscript. SSC and EJDL contributed to the conception and critical review of the draft manuscript. CGL (corresponding author) conceived the study, and contributed to the design of experiments, coordination, critical evaluation of the data and analyses as well as the final writing of the manuscript. All the authors have given final approval of the version to be published. | 2017-06-20T17:18:11.271Z | 2006-05-10T00:00:00.000 | {
"year": 2006,
"sha1": "71b284bda775a91ecdac97982e1e1b1ac922b5b1",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-7-111",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28ba42494e2de3e0e4bca5ba28c4a299f5cadf6a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
263801210 | pes2o/s2orc | v3-fos-license | Mutations that enhance evolvability may open doors to faster adaptation
A recent study demonstrated the existence of mutations that facilitate access to efficient evolutionary solutions. Here I discuss the implications of this finding and the potential to open a new chapter in the study of evolvability.
The concept of evolvability has a relatively recent and complex history. As a general idea, it appears in the scientific literature in the 1990s, peaking and plateauing in mentions in the early 2010s (Fig. 1)1. It is based on the question of whether the capacity to evolve can itself evolve. One of its chief challenges to broader adoption involves its many interpretations, which can confound how it is measured and operationalized2. One definition refers to the general ability of replicators to respond to the force of selection3. Another slightly more detailed definition suggests that evolvability involves the capacity of populations to generate adaptive variation that promotes evolution by natural selection4. Using these and many other framings, evolvability can be the product of the same information as any trait and is then subject to the forces of evolution (e.g., mutation, migration, selection, drift).
Evolvability and its modern controversies
When considered this way, a library of questions surfaces: is evolvability a complex trait that varies (in magnitude or character) across and within taxa? Which sorts of ecologies select for or against evolvability? What is the molecular machinery that underlies it? When we consider the possible answers, the intrigue with the phenomenon becomes clear: evolvability can reframe many aspects of interpreting, measuring, and predicting evolution. This explains why it has been present in discussions surrounding the extended evolutionary synthesis, an attempt to integrate newer theoretical ideas (e.g., plasticity, epigenetics, and others) into the central canon5. Further, it has implications for any field where understanding the capacity to evolve might be relevant, from biomedicine to the study of technological and cultural evolution.
While the concept remains popular among evolutionary theorists, we can fairly ask about its broader relevance. Arguably, evolvability has been trapped in a corner of evolutionary theory, where its potential to improve our understanding is greater than its impact on how evolutionary biologists approach critical questions. Relatedly, what practical problems does its appreciation help us solve or understand? To many in the field, the answers to these questions are frustratingly inadequate and have limited evolvability's inclusion in the center of the canon in evolutionary biology, as was the hope of many who championed its implementation into the extended evolutionary synthesis.
In a new study published in Nature Communications, Andreas Wagner utilizes large data sets and computational tools to offer provocative ideas about the frequency, phenotypic effects, and evolutionary consequences of mutations that facilitate access to beneficial mutations and more efficient searches for fitness peaks6. Specifically, Wagner identifies "evolvability-enhancing" mutations that create a genetic background where subsequent mutations are more likely to be beneficial relative to mutations acquired on an ancestral background, by virtue of their average mutational neighbor being of higher fitness (Fig. 2). In addition, such backgrounds facilitate the search for novel adaptations.
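A toy version of this definition can be written down directly. The sketch below builds a three-locus binary landscape (echoing the scenario in Fig. 2) and flags a mutation as evolvability-enhancing when the mean fitness of one-step mutational neighbors of genotypes carrying it exceeds that of genotypes lacking it. The fitness values are invented for illustration and are not drawn from Wagner's data.

```python
# Toy 3-locus binary fitness landscape; '111' is the global peak.
# Values are illustrative only.
FITNESS = {"000": 1.00, "100": 0.95, "010": 1.02, "001": 0.90,
           "110": 1.10, "011": 1.05, "101": 0.85, "111": 1.30}

def neighbors(g):
    """All genotypes one mutation (one bit flip) away from g."""
    return [g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:]
            for i in range(len(g))]

def mean_neighbor_fitness(genotypes):
    """Mean fitness over the one-step mutational neighborhoods."""
    vals = [FITNESS[n] for g in genotypes for n in neighbors(g)]
    return sum(vals) / len(vals)

# A mutation is flagged evolvability-enhancing when backgrounds carrying
# it have, on average, fitter mutational neighbors than those lacking it.
for locus in range(3):
    carriers = [g for g in FITNESS if g[locus] == "1"]
    others = [g for g in FITNESS if g[locus] == "0"]
    gain = mean_neighbor_fitness(carriers) - mean_neighbor_fitness(others)
    tag = "evolvability-enhancing" if gain > 0 else "not enhancing"
    print(f"locus {locus}: neighborhood fitness gain {gain:+.3f} ({tag})")
```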
The search for mutations that promote evolvability
Wagner analyzed data from two different large data sets: all 20 amino acid variants at each of three sites (roughly 8000 variants) in a protein in the Escherichia coli toxin-antitoxin system, and 4000 variants of a transfer RNA (tRNA) from Saccharomyces cerevisiae across 10 nucleotides. Using two completely different biomolecules is critical because Wagner aimed to identify a true signature associated with increased evolvability rather than one peculiar to a certain information space.
In this way, Wagner was able to address the question of whether biological landscapes contain more evolvability-enhancing mutations than would be expected by chance. He found that both the protein and RNA empirical landscapes had more than their randomized counterparts, suggesting a pattern that is unique to biological systems.
Next, Wagner examined whether adaptive trajectories containing evolvability-enhancing mutations lead to higher fitness sections of fitness landscapes. Using stochastic simulations, Wagner found that adaptive walks that contained evolvability-enhancing mutations were associated with significantly higher fitness gains in both the protein and RNA fitness landscapes relative to those without evolvability-enhancing mutations.
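The flavor of those simulations can be sketched in a few lines that reuse FITNESS and neighbors from the previous example: many stochastic walks are run on the same toy landscape, and final fitness is compared between walks that do and do not acquire the locus-1 mutation. The acceptance rule and walk length here are simplifications, not Wagner's actual protocol.

```python
import random

def stochastic_walk(start, steps=5, rng=random):
    """Random walk with fitness-biased acceptance: a proposed neighbor
    always replaces a less-fit genotype and replaces a fitter one with
    probability f_new / f_current (Metropolis-like rule)."""
    g, path = start, [start]
    for _ in range(steps):
        cand = rng.choice(neighbors(g))
        if FITNESS[cand] >= FITNESS[g] or rng.random() < FITNESS[cand] / FITNESS[g]:
            g = cand
        path.append(g)
    return path

random.seed(1)
with_ee, without_ee = [], []
for _ in range(2000):
    path = stochastic_walk("000")
    # Split walks by whether they ever acquired the locus-1 mutation.
    (with_ee if any(g[1] == "1" for g in path) else without_ee).append(
        FITNESS[path[-1]])

for label, vals in (("walks acquiring the locus-1 mutation", with_ee),
                    ("walks never acquiring it", without_ee)):
    if vals:
        print(f"{label}: mean final fitness "
              f"{sum(vals) / len(vals):.3f} (n={len(vals)})")
```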
What are the implications? One is that these evolvability-enhancing mutations offer a means through which we can consider how evolvability is constructed bit by bit from the mutations that compose fitness landscapes. Another implication is that evolvability isn't only observed at the organismal or population scales but can be measured at the level of individual traits, genes, or mutations.
Furthermore, the study emphasizes the importance of both big data and classical conceptual instruments like the fitness landscape in providing mechanistic nuance to abstract concepts like evolvability. The use of fitness landscapes (an analogy for genotype-phenotype space where the forces of evolution move genotypes up "fitness peaks" and down "fitness valleys"7) as a model for studying evolvability is not new. A seminal study by Ancel and Fontana8 explored RNA sequences to propose principles for how characteristics of high-dimensional spaces were models for basic questions in evolvability. In the last two decades, many others have followed suit. Moreover, while these studies were foundational, helping to build the modern field of evolvability as we know it, many aspects remained underexamined. For example, while studies have demonstrated evidence for how evolvability relates to biophysical features of proteins9, fewer have attempted to formalize differences in evolvability in terms of reproducible computational rules, analytical expressions, or metrics that quantify the evolvability of replicators. Wagner's latest work attempts to modernize the study of evolvability by highlighting how it can be wired into biological information spaces of various kinds. The author acknowledges that while biological fitness landscapes contain more evolvability-enhancing mutations than in silico randomized landscapes, that does not mean that their presence (in proteins and RNA) is driven by adaptive evolution. Said differently, the "enhancing" part of "evolvability-enhancing" should not be interpreted in a teleological or adaptationist sense: there is no evidence that they exist in order to enhance evolvability. Their existence might be an artifact of features of genotype-phenotype space, where epistasis (the nonlinear interaction between the effects of mutations) influences the shape and topography of fitness landscapes and adaptive trajectories7,10.
In one sense, whether these evolvability-enhancing mutations have an adaptive origin or not isn't so significant: even if they are artifacts, we should study them if they play a role in molecular evolution. On the other hand, if they are mere byproducts of some feature of how biomolecules are constructed, then some of the intrigue is diminished: they may be less relevant for the bigger question of how evolvability evolves.
More generally, one can reasonably ask whether the existence of evolvability-enhancing mutations addresses any large gap or conflict in evolutionary biology. It is not so clear. Further, the invocation of a new term and description for a type of mutation in fitness landscapes should (eventually) be formalized in (and reconciled with) the grammar of theoretical population genetics, which contains many decades of work on mutation effects in populations11.
Applications to public health and bioengineering
These issues aside, Wagner's findings offer an important new lens on how genotype-phenotype space is constructed, and by extension, how evolution happens at the molecular level. The observed differences in the frequency and effect of evolvability-enhancing mutations in RNA and protein implore us to search for informational and biophysical explanations: are there features of spaces that facilitate more evolvability-enhancing mutations? Even more, one can ask whether evolvability-enhancing mutations can be engineered into replicators (biological, artificial, or cultural) towards controlling the pace and direction of evolution.
The most proximal application of evolvability-enhancing mutations might reside in the public health realm. For example, surveillance for troublesome mutations should not only include "escape" variants (for vaccines) or other variants of concern12, but also those mutations that facilitate the evolution of other more troublesome variants (as suggested by evolvability-enhancing mutations). These notions have resonance with recent developments in Vibrio cholerae, where certain mutations provide the genotypic context for virulence genes to express their deadly phenotypic effects13. And even closer to evolvability-enhancing mutations is the existence of "epistatic ratchets"14 and "pivot mutations" (the latter discussed in malaria)15. Both are examples of mutations that interact with other mutations (via epistasis) and constrain evolution. In light of this, the existence of evolvability-enhancing mutations can contribute to the growing movement to describe the effects of disease-associated mutations with respect to their performance across varied contexts, the phenotypic (clinical) outcomes they can foster, and the evolutionary consequences they facilitate. Further, this knowledge can be applied to bioengineering efforts to build biological systems to be more or less evolvable.
Fig. 2 | A simplified conceptual depiction of an example of evolvability-enhancing mutations. In this scenario, with binary representation, [0] and [1] correspond to the absence and presence of a mutation. In a standard hypergraph description of a combinatorially complete set of mutations, an evolvability-enhancing mutation is present in high-fitness genotypes (large grey circles). The mutation at the second locus (1*) is an evolvability-enhancing mutation because all subsequent mutations on a genetic background that contains it are relatively high-fitness alleles, including the fitness peak (111). Weighted arrows correspond to accessible trajectories to the peak. Note that evolvability-enhancing mutations need not increase fitness on their own, but rather provide access to high-fitness sections of a fitness landscape.
We should be encouraged by attempts to add more mechanistic detail to the study of evolvability, starting with the individual mutations that may give it a boost. This study highlights how large data and new technologies, in the context of theoretical insight, may walk us toward a more rigorous look at the many engines that drive how adaptive evolution happens in the manner that it does.
Fig. 1 | "Evolvability" in the literature through time.The Web of Science database was searched on June 2, 2023, using the search term "evolvability."The initial search returned 3172 results.Results were then limited to articles, review articles, book chapters, book reviews, and books, reducing the number to 2562.The results were then downloaded using the "Analyze Results" feature of the Web of Science, which summarized the number of studies published yearly since 1988. | 2023-10-11T06:17:43.320Z | 2023-10-09T00:00:00.000 | {
"year": 2023,
"sha1": "60158f37df8ff6092441cbe9acd48cbacbecc21e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-41914-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6369e197e65665900ce4dfab67eea20b38dbe6b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221980483 | pes2o/s2orc | v3-fos-license | Pyroclastic Density Current Hazard Assessment and Modeling Uncertainties for Fuego Volcano, Guatemala
On 3 June 2018, Fuego volcano experienced a VEI = 3 eruption, which produced a pyroclastic density current (PDC) that devastated the La Réunion resort and the community of Los Lotes, resulting in over 100 deaths. To evaluate the potential hazard to the population centers surrounding Fuego associated with future PDC emplacement, we used an integrated remote sensing and flow modeling-based approach. The predominant PDC travel direction over the past 15 years was investigated using thermal infrared (TIR) data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument, validated with ground reports from the National Institute of Seismology, Volcanology, Meteorology, and Hydrology (INSIVUMEH), the government agency responsible for monitoring. Two different ASTER-derived digital elevation model (DEM) products with varying levels of noise were also used to assess the uncertainty in the VolcFlow model results. Our findings indicate that the recent historical PDC travel direction is dominantly toward the south and southwest. Population centers in this region of Fuego that are within ~2 km of one of the volcano's radial barrancas are at the highest risk during future large eruptions that produce PDCs. The ASTER global DEM (GDEM) product has the least random noise, and its use with the VolcFlow model significantly improved the model's accuracy. Results produced longer flow runout distances and therefore convey a more accurate perception of risk. Different PDC volumes were then modeled using the GDEM and VolcFlow to determine potential inundation areas in relation to local communities as a function of flow volume. The PDCs tend to follow the topographic lows (barrancas) with only minor deviations. Population centers that could possibly be impacted by future larger PDC events are identified. The inundation PDC hazard map indicates that both Los Lotes and La Réunion were at risk from a PDC with a volume similar to that estimated for 3 June 2018.
Introduction
Explosive volcanic eruptions commonly produce one or more products (e.g., tephra fallout, pyroclastic flows, lava flows/domes, etc.) that pose risks to the lives and livelihoods of local populations. These products can threaten thousands of lives in highly populated regions [1]. For example, pyroclastic density currents (PDCs) have caused ~90,000 fatalities historically, the most of any volcanic hazard [2]. PDCs are high-temperature (200-800 °C), high-velocity (80-750 km/h) mixtures of rocks, ash and gas that typically have runout distances between 10 and 15 km, but in some cases as long as hundreds of kilometers [3,4]. PDCs are comprised of a dense and a dilute component. The latter is not confined by topography and is known as either a pyroclastic surge (PS) or a dilute PDC [5,6]. The dense portion, also known as a pyroclastic flow (PF) or dense PDC, tends to follow existing topographic lows. The formation of PDCs commonly occurs as the result of either the collapse of a large convecting eruptive column, a "boiling-over" of PDC material over the crater rim, or the presence of a lava dome that decompresses from gravitational collapse or serves to increase the magnitude of an explosive eruption following its emplacement.
Because of the extreme hazard potential of PDCs, it is critical to investigate volcanoes that have a history of producing these events; one recent example is Fuego volcano in Guatemala. The June 2018 eruption of Fuego produced a large PDC, resulting in a mass casualty event. Approximately 50,000 people live within 10 km and ~1,000,000 within 30 km of the summit of Fuego [7]. Numerous communities (e.g., El Rodeo, La Reina, Los Lotes) are located within ~5 km and are at the highest risk. However, any individuals living within ~8 km are at risk from a PDC due to the long runout distances at this volcano resulting from its volcanic geomorphology [8]. Historically, PDCs and other eruptive products have been confined to Fuego's large drainage ravines (barrancas) that are distributed radially around the summit in all but the northerly direction, as shown in Figure 1 [9]. However, if these flows leave the confines of the barrancas (as happened in 2018), the threat increases significantly to a larger area and more people.
Our study uses data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument. ASTER is one of the five instruments on the National Aeronautics and Space Administration (NASA) Terra satellite [16]. These data have been used to monitor and Over the past five decades, the quantity of data available on volcanic eruptions via satellite-based remote sensing has increased exponentially [10]. Multiple studies have made use of these data to investigate specific events or styles of activity (e.g., [11][12][13]). These studies have either examined volcanic activity globally or focused on a specific geographic region or individual volcano. Several more recent studies (e.g., [7,14,15]) have used remote sensing to investigate the eruptions, emissions, and eruptive history at Fuego, specifically, utilizing a variety of satellite remote sensing instruments, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), the Infrared Atmospheric Sounding Interferometer (IASI), and CubeSats. These have helped to fill a knowledge gap that existed for this volcano; specifically, the increase of volcanic activity since 1999.
Our study uses data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument. ASTER is one of the five instruments on the National Aeronautics and Space Administration (NASA) Terra satellite [16]. These data have been used to monitor and assess volcanic activity during numerous past eruptions (e.g., [11,12,17]). ASTER has a 15 m per pixel spatial resolution in the visible/near infrared (VNIR), which is also used to create digital elevation models (DEMs), and a 90 m resolution in the thermal infrared (TIR). The TIR sensor has a noise equivalent delta temperature of ~0.15 °C, allowing derivation of accurate temperatures following atmospheric correction and temperature/emissivity separation of the radiance-at-sensor data (e.g., [18]). These data saturate at pixel-integrated brightness temperatures in excess of 100 °C.
The ASTER data are critical to assessing the directional tendency of PDCs over the past 15 years. The volcanic process and product causing each thermal anomaly detected by ASTER were determined and then validated by comparison with ground-based reports. This allowed the number and direction of all PDC events to be determined. The ASTER DEM data also provided the pre-flow topography for the subsequent modeling using VolcFlow. The ultimate goal of this study was not to refine a flow model that perfectly matched the 2018 deposit, but rather to assess how such freely available numerical flow models perform using free, globally available DEMs of varying quality. This provides a basis from which future studies at other volcanoes could be assessed. We performed modeling at Fuego on two different ASTER DEM products to quantify the DEM quality errors, as well as the most common direction of future events. The historical TIR data analysis, therefore, directly informed the flow modeling of future PDCs. The combination of remote sensing with a DEM uncertainty analysis is unique and important for the creation of a PDC-specific hazard map, which shows that the greatest threat is to the populations nearest the summit and those closest to the barrancas.
3 June 2018 Eruption
The 3 June 2018 eruption of Fuego produced a large explosive phase resulting in a column collapse and the generation of a PDC, which also incorporated material from a partial collapse of the SE flank. Unlike previous PDCs, which commonly remain confined to the barrancas, the volume of this PDC caused it to leave the confines of the Las Lajas barranca in two locations (white stars in Figure 1): the first ~5 km from the summit, and the second ~3.5 km further downslope. The first exit point devastated the La Réunion Resort (Figure 2). There were no reported fatalities at this location because of the successful evacuation of the resort staff and guests prior to the PDC emplacement. The second exit point was ~1.5 km to the north of the village of Los Lotes (Figure 3). Unlike the La Réunion Resort, Los Lotes was not evacuated and experienced the majority of the casualties. Official incident reports issued by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) determined that 113 people were killed in the event (Table 1). News reports following the incident placed the death toll closer to 200. As of the final OCHA report, covering 19-27 June 2018, there were still 197 people missing. However, the most recent study documents more than 400 deaths [19]. Table 1. Information from reports issued by OCHA, which were produced in collaboration with information provided by the National Coordinator for Disaster Reduction (CONRED), INSIVUMEH, the UN Emergency Technical Team (UNETE), the Red Cross, and other Non-Governmental Organizations. The first report was issued less than 24 h after the eruption.
Background
Fuego volcano is located in central Guatemala and has a well-defined summit crater at an elevation of 3763 m. It forms half of the Fuego-Acatenango volcanic complex and is located within the second of eight segments of the Central American Volcanic Arc [20][21][22]. Fuego has experienced five VEI = 4 eruptions, in 1581-2, 1717, 1880, 1932, and 1974 [8,23,24]. During its history, Fuego has also had more than 60 subplinian eruptions and longer periods (months to years) of low-level strombolian volcanism [20]. This more common activity occurs many times daily and has little to no effect on the local people living in close proximity to the volcano. Since 1999, a new eruptive style began that consists of persistent strombolian activity, lava fountaining, and explosions [20]. It is punctuated by the occasional paroxysmal eruption of greater energy that can produce lava flows and PDCs [8]. These paroxysmal eruptions typically progress through three phases: (i) effusion of lava flows and an increase in summit activity, (ii) an intense paroxysmal eruption lasting 24-48 h that produces a sustained eruptive column and PDCs, followed by (iii) a decrease in activity [7]. Our study focuses on the PDCs that formed during one of these paroxysmal eruptions on 3 June 2018.
Methods
A combination of orbital ASTER TIR data from 31 January 2003 to 30 April 2018 and weekly reports from the National Institute of Seismology, Volcanology, Meteorology, and Hydrology (INSIVUMEH) were used to first assess the predominant direction of PDC travel. Thermal activity seen in the ASTER data was validated using the INSIVUMEH reports, wherein anything described as a PDC, a pyroclastic flow, or an incandescent avalanche was classified as a PDC for this study. Following this analysis, the VolcFlow numerical model [25,26] was paired with different ASTER DEM products to assess the resulting uncertainty in the model's results. In order to judge this uncertainty in the modeling outcomes, both the ASTER global DEM (GDEM) product and an ASTER single-scene DEM were used as the base for the VolcFlow modeling. The GDEM product is an average of all single-scene DEMs produced over the region from 2000 to 2011 [27]. The GDEM is cloud-free, and the pixel-to-pixel noise present in the single-scene DEMs is greatly reduced. However, any topographic change occurring from 2000-2011 is averaged over the study period and potentially lost. Therefore, any single-scene DEM post-2011 will capture all significant topographic changes that may have occurred since the GDEM creation. Which flow model, and which DEM to use with that model, is a critical choice for any future hazard study of another volcano.
INSIVUMEH Reports
The National Institute of Seismology, Volcanology, Meteorology and Hydrology (INSIVUMEH) is the official scientific monitoring agency for Guatemala. It operates two volcano observatories for Fuego, both located on the SW flank, approximately 9 km and 10 km from the summit (Figure 1). INSIVUMEH has trained local observers who generate reports on the volcanic activity [28]. Guatemala is the only country in Central America that has such a monitoring program. These reports are vital not only to local authorities for accurate hazard assessment and mitigation, but also to international scientific investigators studying these volcanoes. An observer records the state of the volcano three times per day and sends that information to INSIVUMEH headquarters. Formal reports, compiled from these daily observations, also include the seismic activity over that period, any change in the daily eruptive style/frequency, as well as the intensity and direction of any volcanic events. Reports of Fuego's activity may also note whether events traveled down a specific barranca and any evacuations that were necessary, but they do not record the lengths or temperatures of any of the emplaced flow products.
ASTER Data Analysis
The ASTER surface kinetic temperature data product (AST_08) from 31 January 2003 to 30 April 2018 was used in this study. This dataset is produced by the ASTER temperature emissivity separation (TES) algorithm, which operates on the atmospherically-corrected ASTER TIR radiance-at-surface data product (AST_09T). The AST_08 data are accurate to ±1-2 °C and are at the same 90-m spatial resolution as all ASTER TIR data [18,29].
Initially, each scene was examined for the presence of regions that were thermally-elevated over the average temperature of the scene's background. The flow-scale morphology and temperature distribution of these regions were also noted (e.g., short, hotter, linear lava flows versus longer, lower temperature, more sinuous PDCs). Lava flows from Fuego tend to travel at most a few kilometers, have multiple saturated pixels (pixels with an integrated brightness temperature > 100 °C), have many more pixels with high (near 100 °C) temperatures, and travel in a relatively linear manner, confined to the upper barrancas. PDCs have longer runout distances, lower and more diffuse temperature profiles, and travel sinuously following the length of the barrancas. We limited our analysis to nighttime overpasses only to reduce complications commonly encountered in daytime TIR data, such as residual solar heating, topographic effects, and the increased chance of higher cloud percentages [11]. If a scene had what appeared to be a flow, the direction of travel, length, and maximum temperature were all noted for later comparison to the INSIVUMEH reports. Similarly, scenes that had thermally-elevated pixels at the summit were also compared to these reports. The reports were then used to confirm the activity type derived from the ASTER data that occurred on that date, the direction of travel (if a flow), and specifically, which barranca was occupied by that flow. With this compiled dataset, all PDC flow directions were determined over the last 15 years. In cases where an ASTER scene had cloud cover partially obscuring the volcanic activity, priority was given to the INSIVUMEH report.
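The screening logic described above can be summarized in a short sketch (the thresholds, parameter names, and the classify_feature function below are illustrative assumptions for exposition, not values calibrated in this study):

    # Coarse screening of a thermally-elevated feature detected in a
    # nighttime AST_08 scene; all thresholds are illustrative assumptions.
    def classify_feature(max_temp_c, n_saturated, length_km, sinuosity):
        if length_km < 0.5:
            return "summit hot spot"
        # Lava flows: short, hot, near-linear, often with saturated (>100 C) pixels
        if max_temp_c >= 90 and n_saturated > 0 and length_km <= 3 and sinuosity < 1.2:
            return "lava flow"
        # PDCs: longer runout, cooler/diffuse, sinuous paths along the barrancas
        if length_km > 3 and sinuosity >= 1.2:
            return "possible PDC (validate against INSIVUMEH report)"
        return "ambiguous"

    # Example: classify_feature(65, 0, 5.2, 1.4) -> "possible PDC (...)"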
The other ASTER data product used in this study is the DEM, which can be produced from any daytime VNIR scene acquired. ASTER is able to acquire along-track stereo pair images of each individual VNIR scene by way of a nadir-viewing and a backward-viewing telescope. The horizontal and vertical resolution of the individual scene DEMs is 30 m and ~25 m, respectively [30]. The vertical accuracy is highly dependent on errors present in the scene. These individual scene DEMs can contain random pixel and scan-line noise that can result in topographic errors of up to ~30 m [31], but averaging ~10 m [32]. Thick clouds, when present, can completely obscure a study area or add false topographic highs well outside the norm for the region. However, the single-scene DEM is extremely useful for capturing the most recent topography of a target, including any dynamic changes that may have recently occurred (e.g., landslides, lava dome formation, slope collapse). Global elevation datasets that are either captured at one time, like the Shuttle Radar Topography Mission (SRTM), or created from a time average of data, like the ASTER GDEM, will miss or average out any topographic changes that occur outside the time window of the data acquisition. The ASTER GDEM version 2 product does, however, reduce or eliminate most of the random noise errors associated with single-scene DEMs by averaging the cloud-free portions of individual ASTER scenes from 2000-2011. Version 2 also fills holes and other errors that were present in version 1 [33,34]. The vertical accuracy of the GDEM is estimated to improve to ~20 m [34], and in many locations the accuracy is better than 10 m [34]. In order to assess the VolcFlow model's uncertainty with respect to the two ASTER DEM products, the same modeling was performed using the GDEM, as well as a relatively cloud-free, single-scene DEM from 24 January 2018.
VolcFlow Model
Several computational models have been developed to investigate PDCs, including VolcFlow [25,26], Flow3D [35], and Titan2D [36,37]. For our investigation, we chose VolcFlow, which runs inside the MATLAB programming environment, because of its extensive history in volcanic flow modeling. The VolcFlow model is freely available for download online, while MATLAB requires a license to operate. VolcFlow has been used to model debris avalanches at Socompa, Chile [25], as well as PDCs at Tungurahua, Ecuador [26], and Merapi, Indonesia [38], to name a few.
VolcFlow solves depth-averaged equations of mass and momentum using a topography-linked coordinate system in time and space, similar to how Titan2D operates [39]. It balances the mass and momentum conservation equations against retarding stresses (e.g., viscous, frictional, and turbulent stresses) that slow down the flow, together with gravity, rheology, and the local topography (obtained from the user-input DEM). The model starts a flow simulation at a fixed location and with an initial velocity of zero. The specific equations and mathematical methods used in VolcFlow are described in Kelfoun and Druitt (2005) and Kelfoun et al. (2009). The present version of VolcFlow models only the dense portion of PDCs, which is confined to a channel and, therefore, most influenced by the topography [35]. Because the PDC that occurred on 3 June 2018 is best classified as a dense PDC, the use of the VolcFlow model was deemed appropriate.
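In a simplified one-dimensional form (a schematic sketch following the general structure of the depth-averaged equations given by Kelfoun and Druitt (2005); the symbols are generic and not taken from the VolcFlow code itself), the model evolves the flow thickness h and depth-averaged velocity u as:

    \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0

    \frac{\partial (hu)}{\partial t} + \frac{\partial (hu^2)}{\partial x} = gh\sin\alpha - \frac{1}{2}\,k\,\frac{\partial}{\partial x}\left(gh^2\cos\alpha\right) - \frac{T}{\rho}

where \alpha is the local slope taken from the DEM, k is an earth-pressure coefficient, T is the retarding stress (e.g., a constant yield stress for a plastic rheology), and \rho is the bulk density.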
The modeling of the PDC is performed in two stages. The first runs the VolcFlow model to determine the PDC travel paths. The second runs the model in an iterative manner, with each iteration modeling a different eruptive volume, thereby creating a PDC hazard map as a function of eruption magnitude/erupted volume. For this study, volumes of 20, 30, and 40 × 10⁶ m³ were used to create an approximate hazard map for the volcano. These values are based on a range of potential volumes reported for Fuego [7,8,40]. Naismith et al. (2019) estimated that the 3 June 2018 pyroclastic eruptive volume travelling down Las Lajas was between 20 and 30 × 10⁶ m³, which is also consistent with the past eruptive history of Fuego [8]. We, therefore, used 30 × 10⁶ m³ for the single VolcFlow model run. The other variables required for the model are shown in Table 2 and were kept consistent with previous investigations of PDC flows using the model [25,26]. Several model iterations were performed that varied the input parameters seen in Table 2. We also compared the modeled flow travel direction to the ASTER data and historical trends, as well as to the approximate runout distance of the 3 June 2018 PDC.
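The iterative stage reduces to a simple loop over candidate volumes. VolcFlow itself runs in MATLAB, so the Python sketch below only outlines the bookkeeping; run_volcflow is a hypothetical placeholder for the actual model call, and the 0.1 m thickness cutoff is an assumption:

    import numpy as np

    def run_volcflow(dem, volume_m3):
        """Placeholder for one VolcFlow run; should return the final
        deposit thickness (m) on the DEM grid."""
        raise NotImplementedError  # stands in for the MATLAB model

    def hazard_map(dem, volumes_m3, min_thickness=0.1):
        # Record, per cell, the smallest volume whose modeled deposit
        # exceeds the cutoff; 0 marks cells never inundated.
        hazard = np.zeros(dem.shape)
        for vol in sorted(volumes_m3, reverse=True):
            deposit = run_volcflow(dem, vol)
            hazard[deposit > min_thickness] = vol
        return hazard

    # hmap = hazard_map(gdem, [20e6, 30e6, 40e6])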
Results
For the study period of 31 January 2003 to 30 April 2018, 193 nighttime ASTER TIR scenes were found. Of those, ~57% (109) had thermally-elevated regions above the scene's background temperature, whereas the others either had cloud cover that obscured the summit or had no detectable activity. Within the subset that had detected activity, 47 were single- to multi-pixel summit-confined hot spots, and 62 showed evidence of a flow. Figure 4 shows examples of four volcanic events: a lava flow, a summit thermal anomaly, and two positively identified PDCs confirmed using INSIVUMEH reports.
After the analysis of each scene and comparison with the INSIVUMEH reports, a total of 54 PDCs were confirmed during the ~15-year study period. The average flow length measured from the ASTER data was ~5 km. Their directional distribution is dominantly southward, as might be expected based on the topography of the Fuego-Acatenango complex (Figure 5). The largest number of events (16) traveled southwest down the Taniluya and Cenizas barrancas. Due to the spatial resolution of the ASTER TIR data, it is not always possible to differentiate between these two barrancas, in which case the INSIVUMEH reports were used for validation. The remaining PDCs were nearly equally distributed to the northwest, south, and southeast, with very little activity in the Honda barranca to the northeast.
The VolcFlow model runs correlate with the historical reports and findings of previous studies, which indicate that PDC events generally remain inside the barrancas in which they were emplaced [9]. This is expected because PDCs at Fuego are generally more PF-dominated (dense pyroclastic flow) as opposed to PS-dominated (dilute pyroclastic surge). The observation that most PDCs from Fuego are dominated by dense material is also supported by the lack of PS deposits seen in the remote sensing data or reported by INSIVUMEH.
VolcFlow was first run using the 30 × 10⁶ m³ volume estimate of the 3 June 2018 PDC and using the ASTER GDEM product as the topographic base. A starting location/elevation was chosen to coincide with the head of the Las Lajas barranca, rather than directly at the summit. The PDC flow coverage for this initial result is shorter and distributed across more barrancas than what was observed following the June 2018 eruption (Figure 6). Furthermore, the model was unable to replicate the PDC leaving the barranca at the two locations during that event.
A single-scene DEM was chosen to assess how the higher amounts of pixel-to-pixel noise (compared to the GDEM) would affect the model results. By the ASTER daytime overpass (~10:45 am local time), higher percentages of clouds are commonly encountered over the mountainous regions of Guatemala, even during the winter dry season; the formation of these orographic clouds typically begins about an hour earlier. Therefore, it is difficult to find scenes that are cloud-free. The scene closest to the eruption date with the least cloud cover was acquired on 24 January 2018 (Figure 6b). The DEM generated from this scene has the typical minor topographic errors caused by the pixel-to-pixel noise, as well as dense clouds north of the summit and much smaller clouds elsewhere (Figure 7). The presence of dense clouds to the north creates a large false topographic high over Acatenango, which did not affect the VolcFlow results. The pixel noise, however, did result in modeled flow paths and lengths that were inconsistent compared to the GDEM model run. Use of this DEM with VolcFlow resulted in ~28% less area covered by the deposit and flow lengths that were ~54% shorter than the results using the GDEM.
To quantify the impact of this DEM topographic error on the model, the single-scene DEM data were subtracted from the GDEM data (Figure 7). The minor cloud cover around the summit created positive topographic errors as high as 475 m above the baseline, whereas shadows from these clouds caused negative errors as low as 238 m below. To determine the average difference between the GDEM and the single-scene DEM, four 1 km² regions of interest (ROI) were chosen in areas of relatively consistent terrain that were free of clouds and cloud shadows. The results are shown in Table 3.

Table 3. Elevation values extracted for each of the ROIs shown in Figure 7.

The pixel/scan-line noise present in the single-scene DEM did cause artificial elevation changes that resulted in modeled flow paths that were shorter and with different routings. Although the average elevation change in the ROIs is within the estimated error, the GDEM contains less random noise and more clearly defines the barrancas. The minor amount of cloud cover in the single-scene DEM, however, did not have a significant impact on the individual model run, despite the clouds/shadows producing the extremes in the difference image, which are significantly outside the ASTER DEM error estimates [30].
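A minimal sketch of this differencing and ROI analysis, assuming both DEMs have been co-registered to the same 30 m grid (the rasterio calls and file names are illustrative, not the study's actual processing chain):

    import numpy as np
    import rasterio

    with rasterio.open("aster_gdem_v2.tif") as src:       # illustrative name
        gdem = src.read(1).astype(float)
    with rasterio.open("aster_dem_20180124.tif") as src:  # illustrative name
        single = src.read(1).astype(float)

    diff = gdem - single  # positive where the single-scene DEM is lower

    def roi_stats(arr, r0, r1, c0, c1):
        """Mean and standard deviation inside a rectangular ROI."""
        roi = arr[r0:r1, c0:c1]
        return float(np.nanmean(roi)), float(np.nanstd(roi))

    # A ~1 km x 1 km ROI is about 33 x 33 pixels at 30 m/pixel:
    # mean_m, std_m = roi_stats(diff, 100, 133, 200, 233)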
VolcFlow was next run using the GDEM for three increasing volumes to produce a generalized hazard map (Figure 8). For this facet of the analysis, the summit location and elevation were used as the model starting point. The hazard map tends to follow topographic lows, as expected. Slight deviations occur where the forward momentum of the modeled flow approaches zero and/or where the flows extend beyond the confines of a topographic low. Model results with higher eruptive volumes do increase the potential risk to communities further from the summit of Fuego, as would be expected. The maximum modeled runout distance for a 40 × 10⁶ m³ volume flow is ~10 km from the summit, for example. However, even the smallest volume produces modeled flows that reach the two population centers (Los Lotes and the La Réunion resort) that were directly impacted by the 3 June 2018 eruption, confirming the usefulness of this simplified approach (Figure 8).
Discussion
Satellite-based remote sensing is an effective method for observing synoptic volcanic activity, including eruptions and eruptive precursors (e.g., [12,17,41]). However, sensors with the higher spatial resolution needed to resolve smaller-scale details commonly have poorer temporal resolution, which can result in events being missed. In contrast, those with the higher temporal cadence needed for capturing all phases of an eruption, such as MODIS or the Advanced Very-High-Resolution Radiometer (AVHRR), have much lower spatial resolution that precludes resolving details such as accurate flow dimensions and direction. Direct ground-based observations (where possible) of volcanic deposits following an eruption do provide a much greater understanding of the event (e.g., [42,43]). However, where continuous ground observations are not feasible, reliance on local monitoring groups/agencies and activity reports is helpful. Having ground reports available from INSIVUMEH to validate the remote sensing data used here, for example, was important for this study. However, both the ASTER data and the INSIVUMEH reports have limitations.
ASTER has an average temporal resolution of ~8 days for Fuego, which is better than the nominal value of ~16 days [16]. The higher volume of ASTER TIR data was possible because of the ASTER Urgent Request Protocol (URP) program [44,45]. The URP integrates ASTER into a sensor-web architecture, being triggered by thermal detections from MODIS. These triggers, which happen continuously during a larger eruption, produce dozens more scheduling requests for ASTER data. By providing higher temporal, high spatial resolution TIR data, the URP program becomes an important tool for monitoring volcanic trends. However, even with this improved temporal cadence, volcanic events can still be missed entirely and/or only partially captured if an eruptive product (e.g., a lava flow or PDC deposit) cools to become undetectable in the TIR wavelength region. We believe this somewhat influenced our measured PDC lengths, resulting in slightly shorter average values.
The ability to remotely observe these volcanic products in TIR data at a high spatial resolution is critical to this study. Without the high spatial resolution and accurate radiometric resolution of the ASTER TIR data, it would be impossible to determine the flow direction and deposit type produced by each eruption. The only other publicly available TIR data near this spatial scale and acquired over a long time period are the data from the Landsat satellites. In the past, the Landsat TIR data were not routinely acquired at night, although this is becoming more common for volcanic and fire activity. Unlike ASTER TIR, the Landsat TIR instruments do not have the ability to point off-nadir, a capability that decreases the time between observations. Furthermore, the limit of one or two TIR spectral bands, depending on the Landsat TIR instrument design and/or the health of that instrument, fundamentally reduces the accuracy of temperatures derived from those data. Despite those limitations, a long-term TIR study such as this one, conducted at other volcanoes, could be augmented by Landsat data over the same time period studied, or extended earlier in time using Landsat TIR data acquired before 2000.
The Naismith et al. (2019) study concluded that, starting in 2015, Fuego entered a new period of increased activity that produces more lava flows and PDCs [7]. This was partially concluded using the MIROVA monitoring system, an automatic volcanic hotspot detection system based on the 1 km TIR data from MODIS [46]. Based on our analysis of the ASTER archive, we see a ~50% rise in the number of scenes containing elevated thermal data (e.g., lava flows, PDCs, and lahars) beginning in 2015, which supports the findings of Naismith et al. (2019).
Satellite remote sensing of volcanic activity is not without limitations and, whenever possible, should be corroborated with ground reports. However, the INSIVUMEH reports also have limitations. Nine events identified as potential lava flows or PDCs were observed with ASTER but had no corresponding INSIVUMEH report. This could be due to a number of factors, such as the summit being obscured by clouds or the event being too small for ground observers to see. Furthermore, on the morning of the 2018 eruption, authorities announced that the volcano "looked fine". These human-induced limitations emphasize the usefulness of data synthesis between satellite and human monitoring at Fuego, each at different temporal and spatial scales, to provide a much more accurate chronology of the volcanic activity over the study period. The nearly 15 years of ASTER TIR data of Fuego volcano, compared to and validated by the INSIVUMEH ground reports, indicate that the most common PDC flow direction is to the south, closely followed by flows to the southwest and southeast. This information, combined with continued TIR data and DEM collections, can be used in future developments of disaster response and mitigation plans by the local monitoring agencies.
VolcFlow can model PDCs, avalanches, and lava flows, assuming the material property values (e.g., density and cohesion) are available and accurate. These properties are difficult to derive directly from remote sensing data. For this study, values from prior VolcFlow modeling studies of PDCs were used; however, these could be refined with laboratory analysis of samples from the 3 June flow. The estimated volume of the PDC used in this study came from Naismith et al. (2019) and is consistent with previous eruptions of Fuego [8]. However, a thorough field campaign and a more accurate pre-eruption DEM would provide better constraints on the volume of material erupted. A second important model input for VolcFlow is the pre-flow DEM [35]. For this study, two versions of DEMs generated by the ASTER instrument were used. The stated vertical accuracy of the ASTER GDEM product is 20 m, better than the single-scene DEM accuracy of 30 m. The GDEM achieves this improved accuracy and reduction in pixel-to-pixel noise by averaging large numbers of single-scene DEMs acquired over time. This averaging, however, can create constraints for flow modeling. First, version 2 of the GDEM uses scenes from 2000-2011; therefore, any topographic changes within this time period are averaged out and potentially lost. However, if no major topographic changes have occurred, as with Fuego, its use is appropriate. A version 3 GDEM, which uses data from 2000-2013, was recently released. It appeared too late to be incorporated into this study; however, the differences between it and version 2 at Fuego were assessed to be negligible, as version 3 contains only minor fixes, such as the filling of holes. Second, to account for any significant topographic change, a model requires the most recent (and most accurate) DEM. The use of a single-scene DEM closest in time to an event is potentially more appropriate than using the GDEM. However, as seen in our analysis of the single-scene DEM, the amount of noise and topographic anomalies can adversely impact the modeling in certain locations (Figure 6b).
The ultimate goal of this study was not to create a model output that precisely matched the June 2018 deposit, nor to assess whether VolcFlow is the appropriate flow model for this particular PDC emplacement. To achieve a more accurate fit with the VolcFlow model, its input parameters could be varied iteratively and the subsequent output assessed. Rather, we wanted to examine the accuracy of the different ASTER-derived DEMs and their appropriateness for future modeling efforts. For this purpose, the VolcFlow model was chosen and run with a set of standard input parameters used in prior studies, and the results were compared using the two DEM products.
VolcFlow modeling using the GDEM was able to replicate the flow length; however, there were notable differences. These are likely due to the constraints of the model in fully replicating all components of a PDC, the chosen starting location for the modeling, as well as the resolution of the DEM itself. For example, the model results show the deposit being emplaced into several barrancas, yet nearly all of the material was concentrated into one barranca during the 2018 eruption. This higher volume emplaced into one barranca is likely the reason that the PDC overran the barranca walls at two locations downslope. Progressive infilling of the barranca by PDC material changed its depth in near real time, which was not resolved in the DEM. In addition, initiating the model at the summit (with a starting velocity of 0) is likely not accurate if the PDC initiated at a higher elevation from partial column collapse; the modeled results are therefore shorter (as we see in our results). Model results using the single-scene DEM presented other problems. Noise variations in the data resulted in minor random elevation anomalies from one pixel to the next that directly impacted the flow's progression in the modeling. This produced results showing the PDC material dispersed laterally over a larger area higher on the slopes before becoming confined to multiple barrancas. The modeled PDCs, therefore, did not travel as far as the population centers affected by the 3 June 2018 eruption. Although not a factor for this study, more extreme topographic errors caused by the presence of cloud and cloud shadow in a scene would affect the model output even more and should be considered when using individual ASTER scene DEMs for future studies. If, for example, there have been significant topographic changes at the volcano under study, the GDEM would not be an appropriate choice for a flow model. Although not employed here, some of the random noise present in these DEMs can be lessened with image smoothing and noise removal algorithms, making the single-scene DEM potentially more useful.
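As a minimal sketch of such noise suppression (the 3 x 3 median kernel is an assumption, not a size tested in this study), a median filter removes single-pixel and scan-line spikes while largely preserving the barranca edges:

    from scipy.ndimage import median_filter

    def denoise_dem(dem, size=3):
        """Median-filter a single-scene DEM to suppress pixel/scan-line
        noise; larger kernels smooth more but blur narrow barrancas."""
        return median_filter(dem, size=size)

    # smoothed = denoise_dem(single_scene_dem)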
Volcano-wide hazard modeling of PDC-forming eruptions shows that population centers in close proximity to the barrancas to the southeast of Fuego (e.g., Los Lotes, San Jose, and El Rodeo) are at higher risk during larger paroxysmal eruptions. During the 3 June 2018 eruption, a PDC formed, flowed down the Las Lajas barranca, and later exited it at two locations, destroying both the La Réunion resort and the community of Los Lotes. These two locations are within the modeled flow hazard map for an eruption similar in size to the 3 June 2018 event. Because of the barranca-exiting risk of future large PDCs, any population center within 2 km of a barranca has a higher risk during eruptions. However, it is outside the current ability of the VolcFlow model to recreate these more complex barranca-departure events. The most likely scenario is that these barrancas become infilled down-valley, either by ongoing erosion during the rainy seasons or by the deposition of new material during the early stages of an eruption, thus changing the depth profile. Modeling future hazards at Fuego will benefit from more accurate and more frequently updated DEMs. For example, dedicated field campaigns following eruptions and flow emplacement, using either drone-based LIDAR or structure-from-motion imagery, could create updated DEMs over the areas of change, which could be used to augment the larger ASTER GDEM for more accurate modeling of future flow hazards.
Conclusions
Orbital ASTER TIR data, refined using summary reports from INSIVUMEH, provided detailed information on the recent PDC travel directions at Fuego volcano. Although the reports provided insights into the deposit type and the specific barranca occupied, it was the resolution of the ASTER data over 15 years that resolved the activity and specific flow events. This study would have been less accurate without either the reports or the high spatial resolution TIR data, acquired with the improved temporal frequency enabled by the ASTER URP Program.
Despite the June 2018 PDC-forming eruption at Fuego occurring seven years after the data used to create the GDEM, the GDEM was found to be more appropriate and accurate for modeling the general trends of the flow produced by that eruption. Any topographic changes that did occur from 2011 to 2018 were small in comparison to the increased noise present in the single-scene DEM. However, this may not be the case at other volcanoes, especially those with significant topographic changes since 2011. A similar approach is easily adaptable to other volcanoes, where observer reporting may or may not be available, and/or by incorporating TIR data from Landsat. For any modeling effort integrated into those studies, the choice of flow model and DEM are important factors. The ability of ASTER to provide several DEM products, globally and over time, makes it the best option for most cases. However, the differences in the DEM products (e.g., the single-scene DEM versus the GDEM) and the impact on the flow model's uncertainty must be considered.
Despite the lack of a precise match between the model output and the June 2018 deposit, the historical tendency of flow directions seen in the ASTER data analysis supports the VolcFlow model results, which identify the risk to the surrounding population centers. The conditions leading to the two barranca-departure events were beyond the capabilities of the modeling and the resolution of the DEM. These events caused the majority of the fatalities and therefore require further analysis with more sophisticated modeling and progressively updated, higher-resolution DEMs. Despite this limitation, the generalized PDC hazard map with a PDC volume similar to that of June 2018 identifies the population centers of La Réunion and Los Lotes as at-risk areas. With Fuego possibly now in a period of increased volcanic activity, the results of this study are important to potentially help mitigate future PDC disasters. As ASTER ages, similarly-conducted studies will benefit from a continuation of future TIR data with specifications equal to or improved from those of ASTER.
"year": 2020,
"sha1": "c6383e5d743fa6bd66c08e8adeee9867c1aeb0c8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/12/17/2790/pdf?version=1598600128",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fca429bac0d3df32d31cb382011bc2f6e20084b8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
Maternal Pain in Miracle Scenes as a Part of Catholic Propaganda in the Late Medieval and Early Modern Period
This paper analyses the iconography of the scenes of pain of non-Biblical mothers for their deceased children, which was used as one of the mechanisms of Catholic propaganda in the struggle against the Schism and infidels, exercised through sermons and the propagation of faith in the power of the Eucharist. Local examples (from Split, Dubrovnik and Kotor) are examined in a broader European context. Firstly, an example from the cathedral of Split is analysed. The Miracle in the fiery furnace (better known as The Miracle of the Jewish boy of Bourges) was painted around 1635-1640 by the Venetian painter Matteo Ponzone as a part of a larger cycle composed of ten canvases, which is now placed above the main altar. It is a depiction of a medieval legend, which can be encountered in the collections of Exempla, didactic tales for use in sermons, and which has thus had several literary and visual interpretations that are analysed in this paper. The second example refers to the miracles of Saint Vincent Ferrer. While preparing for the canonization, Pietro Ranzano of Palermo wrote the vita of Vincent Ferrer, the centerpiece of which is occupied by two consecutive and psychologically powerful stories. The first story deals with a mother who had killed, chopped up and cooked her own child (a version of the story of Mary of Bethezuba by Flavius Josephus), thereby appalling the child's father upon his return from Ferrer's sermon. The topic of the second miracle is a grieving mother who is carrying her dead child in her arms and seeking help from Ferrer. In both cases the powerful Dominican preacher restored the children to life. Their stories became instantly popular and received their iconographic expression with an explicit depiction of parental pain.
Miracle in the fiery furnace or the Miracle of the Jewish boy (from Bourges) in the cathedral of Split
The first example is a painting known in Croatian historiography as the Čudo u užarenoj peći (Miracle in the fiery furnace), 1 painted by the Venetian painter Matteo Ponzone (Venice, 1583-1663/1675) around 1635-1640 as a part of a cycle of ten canvases, which can currently be found on the vault above the main altar of the cathedral in Split. 2 The left side of the painting features an open, vaulted furnace in which a fire is blazing. In it, a boy wearing a dark green tunic is sitting calmly, while the lower part of his body is covered by a blue mantle. His hands are crossed on his chest, with the Holy Eucharist painted in the middle of it. The right half of the painting is occupied by a heterogeneous group of people expressing amazement and astonishment by means of vigorous gestures and commenting on the event amongst themselves. The foreground is dominated by a kneeling woman wearing a white shirt and a bright red dress, staring at the child in horror with both of her arms open (fig. 1).
It is a depiction of a medieval legend which was rather popular at the time, and which generated several literary and, hence, also painted versions of the event, whose various renderings often bore local features. Even though the legend could have originated earlier, the first known and preserved version was the one recorded by Evagrius Scholasticus (around 536-600) in his Historia Ecclesiastica, 3 while the first Latin version of the story, based on Evagrius', was written by St Gregory of Tours (Clermont, November 30, 538 - Tours, November 17, 594) in his book De gloria martyrum. 4 While Evagrius Scholasticus sets the narrative in Constantinople, Gregory of Tours is not specific about the setting, placing it somewhere "in the East". Here we shall present Tours' entire legend translated into English: "The son of a Jewish glass-worker was studying and learning the alphabet with Christian boys. One day while the ritual of Mass was being celebrated in the Church of the Blessed Mary, the Jewish boy approached with the other young boys to partake of the glorious body and blood of the Lord. After receiving the holy (Eucharist), he happily returned to his father's house. His father was working, and between embraces and kisses the boy mentioned what he had so happily received. Then his father, an enemy of Christ and Lord and his laws, said, 'If you have communicated with these boys and forgotten your ancestral worship, then to avenge this insult to the law of Moses I will step forward against you as a merciless murderer'. And he seized the boy and threw him into the mouth of a raging furnace; he was persistent and added wood so the furnace would burn hotter. But that compassion that had once sprinkled the dew of a cloud on the three Hebrew boys who had been thrown into a Chaldaean furnace (cf. Daniel 3:8-30) was not lacking. For it did not allow this boy, even though lying on a pile of coals in the middle of the fire, to be consumed in the least. When his mother heard that the father had evidently decided to incinerate their son, she hurried to save him. But when she saw the fire leaping from the open mouth of the furnace and flames raging here and there, she threw her barrette to the ground. Her hair was dishevelled; she wailed that she was in misery and filled the city with her cries. When the Christians learned what had been done, they all rushed to such an evil sight; after the flames had been beaten back from the mouth of the furnace, they found the boy reclining as if on very soft feathers. When they pulled him out, they were all astonished that he was unhurt. The place was filled with shouts, and so everyone blessed God. Then they shouted that they should throw the instigator of this crime into these flames. Once he was thrown in, the fire burned him so completely that somehow scarcely a tiny piece of his bones was left. When the Christians asked the young boy what sort of shield he had had in the flames, he said, 'The woman who was sitting on the throne in that church where I received the bread from the table and who was cradling a young boy in her lap covered me with her cloak, so that the fire did not devour me'. There is, hence, no doubt that the blessed Mary had appeared to him. Then, having acknowledged the Catholic faith, the young boy believed in the name of the Father and Son and the Holy Spirit. After he and his mother had been baptized in the waters of salvation, they were reborn. In that city many Jews were saved by this example". 5
The story was modified from the 9th century onwards, resulting in numerous versions, 6 and particularly important for the dissemination of the legend in the centuries to come was its inclusion in the collections of Marian stories and miracles. Thus, for example, in his collection Miracula Sanctae Dei Genetricis Virginis Mariae, Nigel of Canterbury (also known as Nigel of Longchamp and Nigel Wireker, c. 1130-c. 1200) set the story in Pisa, emphasizing the Eucharistic moment within the miracle and granting the Jewish mother a prominent role by stressing her reaction to the father's violent act. 7 In this short overview of the story's reception, the collection Les Miracles de Notre-Dame, compiled around 1223-1227 by Gautier de Coinci (1177-1236), is especially significant. He, however, moved the setting to the French town of Bourges, which is why the legend is best known as The miracle of the Jewish boy of Bourges. In Coinci's version of the story, the mother's reaction to the father's violent act was one of despair; she wailed, crying out for help, and tore her hair out. 8 Instead of Christians, numerous Jews appear as witnesses to the miracle, and they subsequently converted to Christianity. De Coinci's version was immensely popular and influential, which is evidenced by dozens of preserved manuscripts, 9 and it was also the first version of the story that received a visual narrative form. 10 Therefore, we will use four selected examples to analyse the manners in which different illuminators shaped the legend of the Jewish boy. The authors will focus on decoding the visual lexis determining the identity of the protagonists, their mood and emotions, with special emphasis on the depiction of the boy's mother.
The first example is kept at the Bibliothèque nationale de France under catalogue number BnF, fr. 22928 (f. 75r) and can be broadly dated to the 14th century (1301-1400). The illuminator portrayed the legend of the Jewish boy in a sequence of four images (fig. 2). The first depicts the Jewish boy receiving Communion. The priest is offering the child the Eucharist, which he accepts with his mouth open and hands clasped in prayer. Behind the priest is the altar featuring the Virgin with the Child in her arms. Next is the scene depicting the father's violent act of throwing the child into the burning furnace, and the reaction of the appalled mother gathering a crowd of people. The following image shows the vengeful act of the mob pushing the Jewish father into the blazing furnace, and the last scene is the one in which the boy recounts the miraculous event of the Virgin Mary intervening in his salvation. Especially interesting for our topic is the second scene, which synthesizes two successive events that can be read from left to right. The illuminator clearly portrays the father's negative personality by means of external features: depicted in profile, he has a stern facial expression, and his pointy headgear and a yellow badge, which also characterize the mother and the child, undoubtedly represent attributes whose aim is to indicate his negative role and distinguish Jews from Christians. The child turns his head to face his father, while his hands are again clasped in prayer. According to our knowledge, the clasped hands gesture was also known to the ancient world. Captives, submitting to their conqueror, would stretch out their hands to be bound. Originally only barbarians were represented in this shameful posture, 11 while in the medieval world the gesture is associated with the ritual of vassals placing their hands in the hands of their liege lords while swearing their loyalty. 12 It was only during the late Middle Ages that the gesture was introduced into Christian iconography, where it is usually encountered in scenes of donors kneeling before the saints. Due to the close connection between a prayer and a request to a divinity, in Central Europe the gesture has assumed the meaning of imploration, as concluded by Ernst Gombrich. 13 Therefore, there are two ways in which we can interpret it in this context: as the boy's plea to his father to refrain from his vicious intent, and as his conversion to Christianity, resulting from the previous scene. The mother has turned her back to the scene and is standing in front of the gathered crowd with both hands in the air and her palms outstretched, thus indicating the urgency of the situation as well as her own misery. 14 Namely, here her grief is not expressed by means of facial gestures and expressions, but only by the movement of her body, i.e.
her hands. A similar rendering of the mother's reaction was depicted by the Fauvel Master in 1327, in the Jewish boy from Bourges scene, which can be read in the reverse order from the one in the previous example, and which can be found in a manuscript kept at the National Library of the Netherlands under catalogue number The Hague, KB, 71 A 24, f. 10v (fig. 3). The illuminator has summarized the legend into a single scene, focusing on the father's murderous act and the mother's reaction. Here, however, the mother is turning away from the scene in which the father is pushing the child into the furnace, and she is depicted in semi-profile, with both of her arms straight up in the air, with palms stretched out and turned upwards. The gesture of arms lifted in the air with the palms stretched out has been used to express pain ever since ancient times. Probably the most famous example from that period is the depiction of Dido's death from the Aeneid in the so-called Vatican Vergil, in which a handmaiden standing behind the pyre lifts her hands in lamentation, 15 and there are countless similar examples in medieval painting, especially in the depictions of the Biblical scenes of the Massacre of the Innocents and the Lamentation of Christ. 16 The illuminations of a manuscript written between 1328 and 1332 have been attributed to Jean Pucelle. 17 As with the manuscript from The Hague, the illuminator encapsulated the legend of the Jewish boy into one scene, focusing on the father's murderous act and the mother's reaction (fig. 4). In the process of transforming words into a painting, the illuminator provided an extremely dramatic and faithful depiction of the mother's mental state. In doing so, Pucelle used body language and introduced a facial expression as well: the mother is pulling at her hair while her face is contorted with pain. The gesture in which her eyebrows are turned downwards, which is further complemented by the movement of her hands pulling at her hair, represents a strong emotional reaction: an explosion of grief at its peak. As Moshe Barasch explains, these are traditional mourning gestures connected with self-inflicted injury as an expression of grief, and they have also been adopted from ancient and early Byzantine sources. 18 Pucelle's rendering of the mother's dramatic gesture of grief is most likely influenced by Italian, especially Sienese, painting, as are several other examples from his opus, considering that he is believed to have stayed in Florence and Siena around 1320. 19 It was precisely during the second decade of the 14th century (sometime around 1324) that Simone Martini painted the polyptych of the Blessed Agostino Novello for the Sant'Agostino church in Siena. Along with the central figure of the beatus, Martini's polyptych features four smaller panels depicting his miracles. Especially relevant for our topic is the lower panel on the left-hand side of the painting, which depicts the Balcony Miracle (fig. 5). It is a visualisation of Novello's posthumous miracle, based on the text of an unknown Florentine author. 20
The anonymous author relates how a Sienese boy fell off a balcony that needed repair, and how a large plank from the balcony fell down on top of him. His mother, anticipating the worst, prayed to Agostino Novello, who faithfully intervened by causing the plank to hover in mid-air, thereby enabling the boy to escape unharmed. However, as emphasised by Cathleen Sara Hoeniger, the task of invoking the beatus was assigned to a young man in the background of the painting, whom we recognize by his hands clasped in prayer, while in response to the shock of her son's fall, the mother in Simone's story pulls at her hair and wails with her mouth fully open. 21 The mourning gesture could have been copied by Simone Martini from the Assisi fresco entitled The Death of the Child from Sessa, attributed to Giotto's workshop and dated to around 1300-1310 (fig. 6). It is a visual representation of another miraculous story, set in the town of Sessa where a house had fallen down, killing a young man. In this fresco, a number of female mourners adopt gestures of self-inflicted injury, including a woman in the background shown pulling at her hair and wailing. 22 Subsequently, at the invocation of the name of the Holy Father Francis, the boy not only revived, but even appeared to be unharmed. 23 Among the manuscripts depicting the legend of the Jewish boy, we would also like to emphasise the grisaille illumination painted by Jean Miélot in the Vie et miracles de Notre Dame collection, which he compiled for his patron Philip the Good, Duke of Burgundy, around 1456 (fig. 7). The legend is depicted by means of one painting containing several scenes. The scene on the left offers an open view of the interior of the church in which the boy is receiving the Eucharist. On the right we can see the father, to whom the painter attributed a money lender's purse, unmistakably marking him as a Jew, throwing the boy into the furnace. Again, we find the mother's reaction interesting: her facial expression is that of anguish - her eyebrows are turned downwards, tears are rolling down her cheeks, but there are no dramatic gestures of self-inflicted injury. She is kneeling before the church in which the boy is receiving (has received) Communion, with her hands folded in prayer. In this context, the gesture can be interpreted in several ways: as a plea for help, a sign of her submission, and as a hint at the outcome of the story, i.e. an intimation of her conversion to Christianity. 24 The Marian story gradually spread beyond the monastic context and became an Exemplum in the collections of Exempla, short moralistic narratives incorporated into preaching with the aim of holding the attention of the audience and making the sermon's lesson memorable, and with the clear purpose of enlightening and converting as many 'new members' as possible. 25 In this context it is necessary to mention late medieval collections such as the Alphabetum narrationum, 26 compiled by the Dominican priest Arnoldus of Liège around 1308-1310. 27 The compilation was extremely popular, which is evidenced by more than fifty preserved copies, 28 and our theme can be found under the title Eukaristia sumpta ab infideli a combustione eum protexit. 29
Unlike the extremely popular written and narrated versions of the story, there are only a few examples in which the theme of the Jewish boy appears in monumental fresco painting or easel painting. We are personally familiar with examples of fresco paintings in the Eton College Chapel in Windsor and the Chapel of the Blessed Virgin Mary in Winchester Cathedral, 30 but the figure of the mother has been omitted from both of these examples in order to place more emphasis on the Virgin's role as a saviour. A remarkable example is the fresco painted by Ugolino di Prete Ilario on the northern wall of the Chapel of the Corporal in the Cathedral of Orvieto 31 between 1357 and 1364, in which the painter divided the story chronologically into three sequences (fig. 8).
After the invention of the printing press, the legend started to appear in numerous book editions, among which we would like to highlight the following title: Miracoli della Gloriosa Vergine Maria Nostra Signora, Tratti da diversi Catholici, & approvati Auttori dal R.P. Don Silvano Razzi, Monacho Camaldolense, which was printed in several consecutive editions by publishers in Florence, Brescia, Venice, Mantua, Rome, Treviso and Viterbo. The story can be found under the title Un Putto hebreo in compagnia di alcuni fanciulli Christiani prende la Santa communione, e per ció, messo dal padre in una fornace ardente, é liberato dalla Vergine.
As we have seen from the selected examples, the majority of painted versions of the story focus on the father's murderous act and the mother's response to it. In them, the painters have clearly indicated the father's negative personality by attributing to him some of the recognizable features which had by then become attributes of Jews. 32 Apart from the example from Split, we are not familiar with any other painting depicting the legend of the Jewish boy which features only the child's mother, while the father is omitted from the scene.
The author of the iconographic programme of Ponzone's cycle from Split incorporated the legend of the Jewish boy into the much broader context of a ten-painting polyptych, which most likely served as the pala feriale for the large silver Gothic altarpiece of Iohannes Gerardini of Pesaro, which was placed on the main altar of the Cathedral of Split, and temporarily next to the altar of St. Domnius during the Baroque modifications in the cathedral. 33 The polyptych most likely consisted of two horizontal sequences of symmetrically arranged paintings: its lower part comprised a series of scenes from the Old Testament, with the central axis featuring a painting of the Last Supper, while its upper part comprised a series of five paintings of Eucharistic exempla, amongst which the Miracle in the fiery furnace (The Miracle of the Jewish boy from Bourges) was most likely the fourth painting in the upper sequence. In this way, the selected subjects from the Old Testament would have prefigured the doctrine of transubstantiation, and consequently the series of Eucharistic exempla - illustrative, didactic stories whose task was to teach and consolidate the mystery of the Eucharist by mediating sermons and visually rendered miracles. This revival of medieval themes and concepts in the post-Tridentine Restoration of the Church, which among other things used the story of the intense pain experienced by the Jewish mother as a method of communication, had the aim of encouraging piety among believers and of propagating the conversion of schismatics and infidels. This example is directly linked to a crisis within the Catholic Church and the emergence of heresy and Protestantism within the metropolitan Church of Split. 34 The dogmatic issue of transubstantiation, especially in relation to Luther and the Reformers, who rejected the belief that Christ was present in the Eucharist in the Catholic sense, was solved by resorting to an 'external enemy'. This was aptly found in the Jews of the collections of exempla and their visual narratives, who in the 17th century became a metonymy for Protestant identity, since both rejected belief in transubstantiation and every manifestation of the actual presence of Christ's body in the host.
The example from Split is neither an exception nor a coincidence, as we shall see in the following examples, in which miracle stories and their visual renderings containing the iconography of pain were used to promote the ideas of Catholicism.
Two miracle scenes of resuscitation of children in the context of the spreading of the Saint Vincent Ferrer cult in Dubrovnik and Kotor
The topic of the second case study belongs to the same circle of ideas, but the actors are different. It involves the activities of Vincent Ferrer during the Western Schism, as well as the canonization and expansion of his cult in the period after the union between the Latin and Greek churches at the Council of Basel-Ferrara-Florence (1431-1449).
It will elaborate how and why official hagiography and newly established iconography represented pain in the two famous miracles of Saint Vincent Ferrer. While preparing for the canonization of Ferrer in 1455, the humanist and Dominican from Palermo, Pietro Ranzano (1426/27-1492/93), wrote the vita of the Dominican apocalyptic preacher who had great success in conversio Judaeorum, Saracenorum, et aliorum. 35 In his miracles, two consecutive and psychologically powerful stories hold the focal point. The first story, in which Ranzano recounts the miracles of Ferrer, imparts the tale of a mother who had killed and cooked her own baby. In short, there was a young woman, a woman of many virtues when her madness was not upon her. Her husband listened to Ferrer's preaching, became close with him and invited him to be their guest. In the meantime, the mother suffered a fit of temporary insanity, butchering and partially cooking her child to prepare a meal for the guests. Realizing what had happened, the father was terrified: Me miserum! and in tears (cum multa lacrymans) turned to Ferrer, who then revived the child. 36 The narrative had another, post mortem, version, which had originally appeared in the preparatory material for the canonization of the new Dominican preacher (the Brittany and the Naples inquests), and then in the vita of Ferrer written by Castiglione: there was a pregnant mother with a desire to eat meat, and she cut her child in two parts. 37 This version was iconographically represented in the Griffoni polyptych painted by Ercole de' Roberti (1472-1473). 38 The setting with the theme of infanticide is presented with no strong gestures expressing pain. The mother is sitting on the floor of the house and quietly expressing sorrow with her head in her hands. The entire scene is moved into the depths and behind open doors; the dead child is barely visible on the table. The father comes out of the house and carries the murdered child to Ferrer's tomb in Vannes, in front of which he prays. In the next scene, the baby, restored to life, is depicted standing on a tomb that has the form of a ciborium and thus shows a clear Eucharistic allusion. 39 The full iconography of Ranzano's story is presented on the Saint Vincent Ferrer altarpiece, painted by the Erri family workshop in the 1460s, commissioned by the Dominican friars in Modena. 40 Ranzano's story of the miraculous healing of a child (he shifted the time of the miracle from after Vincent's death to the saint's own lifetime) is presented in full detail, from the process of killing the child and preparing lunch, to Ferrer, who comes to the house as a guest with other Dominicans, then prays and restores the child to life, to the joy of the father (fig. 9). In the hagiography, Ranzano makes no mention of the mother's remorse or her pain. The same can be seen in the iconography that soon followed: the mother, with long, blond, loose hair, habuit dementiae intervallum, shows no pain or any other kind of emotion while dismembering her child. 41
Instead, the story and iconography have a strong effect on the emotions of the public. Here several key questions arise: Why does Pietro Ranzano set the story of infanticide as the central wonder of the new Dominican saint? How was pain used to achieve the goals of the Dominican mission? Let us start with the author of the story from which the iconography derived: Ranzano was a humanist, very familiar with history, and thus his choice of a variation on the motif of Mary of Bethezuba's cannibalism from Flavius Josephus's History of the Jewish War is not surprising. Mary of Bethezuba was a symbol of the unnatural cruelty of the Jews. In the late 13th century, the story was included in popular vernacular written works, as well as in sermons, exempla, and homilies.42 Ranzano transformed the Jewish infanticide story into the miraculous act of reviving a baby by a powerful Dominican preacher.43 We need to reiterate that Christian authorities increasingly vilified Jews with accusations of host desecration or the ritualistic murder of Christian children.44 Ranzano sets the miracle at the time immediately after Ferrer held a sermon and after his vigorous conversion of Jews, indicating the need for the repentance of the infidel. One detail on Erri's altarpiece indicates the origin of the narrative, or rather its anti-Jewish connotation. Namely, there is a representation of a monkey sitting on the window, which cannot be explained as a mere display of domestic life;45 rather, its presence seems to be the key to understanding what is presented in the scene. A monkey was regarded not only as a symbol of bestial behaviour beyond the norms of society, but also as a symbol of a naturae degenerantis homo, one not humanized by baptism, in the specific anti-Jewish context. The visual message of this scene becomes even clearer since the monkey looks towards the mother who kills her child, emphasizing the medieval belief that neither monkeys nor Jews were human in either deed or heart, as they acted inhumanly against their own flesh and blood.46 This meaning is enhanced by the Oriental carpet on which the monkey sits. It bears the famous pattern of the dragon and the phoenix (the same as the mid-15th-century example from Anatolia in the Berlin Museum).47 By comparing the composition of this scene with the miraculous healing of the sick Ferrer48 from the same polyptych (fig. 10), we can certainly strengthen the assumption that the presence of the monkey was meant to additionally emphasize and explain the story. Namely, the composition is almost identical, with very similar architecture, but here a Dominican is shown reading a book with his habit thrown over the wall (instead of a monkey sitting on an oriental rug). This iconographic motif emphasizes that this is a sacred place, the Dominican convent, in which Christ miraculously healed Ferrer by touching his cheek. Conversely, the monkey as an anti-Jewish symbol sits on the window of a house staring at the horrific scene of the murder of a child. Ranzano built Ferrer's character with a subtle combination of two themes: the conversion of the infidel and the ability of Ferrer to lead the public to a painful confrontation with sin and ad lacrymandum.49 The penitential process of curing sin was always tearful. Tears of contrition were seen as the road to salvation and a sign of sincerity, and they are praised especially in the exempla.50
Vincent Ferrer himself, in his Tractatus vitae spiritualis, wrote about "dolore acutissimo e amarissimo che ti faccia piangere ed eplorare i tuoi peccati".51 Very important for the propaganda Vicenziana immediately after the canonization was the Dominican observant Frater Iohannes de Pistorio, optimus et elegantissimus predicator.52 In 1463, Ranzano sent him a piece of the hagiography, as well as a poem he wrote in honour of Ferrer, to help him spread the cult of Ferrer. Especially intriguing for our topic is the fact that Giovanni da Pistoia was said to be the miraculously resuscitated child from the post mortem version of Ferrer's famous miracle.53 Giovanni strongly embraced the fundamentals of the belief of the apocalyptic preacher Ferrer, according to which the second coming of Christ is to follow only when the Jews and all the other unbelievers and schismatics are converted. He carried out such a mission in Sicily, where he demanded that the Jews attend his conversionary sermons.54 The Dominican Serafino Razzi pointed out how Giovanni spread the cult of his beloved predecessor in Kotor and Dubrovnik as well. These missions had some success, as evidenced by the polyptychs of which only the wooden Ferrer sculptures have been preserved. The Kotor one dates back to 1495, according to the year of the contract in which Božidar Vlatković made a commitment to the Dominican convent in Kotor that it should be made according to the model: the painting on the altar of St. Vincent in Dubrovnik, which was commissioned in 1487 by Stjepan Zornelić-Ugrinović and Marin (di Lovro) Dobričević. Both altar paintings had predellas with scenes from the hystorie di san Vincentio.55 In Dubrovnik, we have a written trace of the existence of an iconography hitherto unmentioned in historiography, a variation on the topic of the second Ferrer miracle, which Pietro Ranzano placed immediately after the story of the insane mother. This story belongs to the usual repertoire of saintly deeds. A grieving mother (Mulier tantarum lacrymarum) is carrying her dead child in her arms and seeking help from Ferrer, who later restored the child to life.56 Unlike the insane mother, the mother in this story (and every other mother with a sick child from the vita of Ferrer) is presented kneeling, sometimes crushed with pain, holding her child in her arms in front of the preacher (the polyptych by an anonymous artist painted for the Dominican church in Castelvetrano in the early 16th century (fig. 11); the Degli Erri polyptych from Modena, circa the 1460s; or Sebastiano Devita for the Dominican church in Split from the 18th century). Namely, Ambrogio de Gozze (Gučetić) (1563-1632), the Dominican from Dubrovnik and bishop of Mercana and Trebigna, wrote about this painting.57 Since these works written by Gučetić were not preserved, we gain knowledge about this episode from the writing of the Italian Dominican Antonio Teoli, in his work on the Ferrer cult from 1735.58
According to Teoli, Bishop Gučetić recorded the following story: in the mid-16th century, Pietro Bicich (Petar Bičić), then a young boy, fell seriously ill. Approaching death, his dolente madre made a vow before the Ferrer statue in the Dominican church in Dubrovnik, asking the saint to help her son survive. However, shortly after her prayer, the boy died. This, however, in no way diminished the faith of the mother, crushed with pain. While the body of the child was being carried for burial in the Dominican church, full of tears and full of faith she threw herself before the statue of Ferrer, begging him to restore her son's life ("piena di fiducia si prostrò avanti la sopraccennata statua di S. Vincenzo pregandolo a restituirle la vita del figliuolo a lui invotito"). While the mother prayed and the friars began the funeral rite, the boy opened his eyes to the amazement of all and the great joy of the mother, thus magnifying the power of St Vincenzo ("magnificicando tutti la potenza della intercessione di S. Vincenzo"). Petar Bičić lived until 1611, and after his death the bishop recorded a remembrance of this miracle. According to Teoli's writing, this miracle was given its iconographic representation in the picture that was in the niche of the altar dedicated to Ferrer, of which Teoli gives a detailed ekphrasis: the mother is expressing pain with her arms stretched wide in front of the statue, and the boy is rising alive from the bier surrounded by Dominicans ("Un quadro in cui viene con tutta particolarità dipinto; vedendosi la statua sull'altare e la madre colle braccia aperte davanti, la bara col fanciullo che si alza vivo e le trocie tenute da persone vestite di sacco come si accompagnano i defonti, la sepoltura aperta e cose simili"). Teoli then explains how this miracle enhanced the devotion to Ferrer, who came to be venerated as a protector of children. Also, after this miracle, the citizens of Dubrovnik began to bestow on the statue of Ferrer (before which the mother prayed) a number of votive gifts: "Le grazie poi che tuttavia ricevono i ragusei nel ricorrere al santo avanti la detta sua statua sono quasi quotidiane; e massimamente grandi sono quelle che Iddio ivi opera per i meriti di questo suo taumaturgo a pro de` fanciulli; tantochè comunamente è chiamato: San Vincenzo de fanciulli.... E presentamente veggonsi attorno alla prodigiosa statua moltitiudine di ricchi voti che giornalmente si portano da devoti. Evvi memoria nello archivio di quel convento che anticamente vi erano tanti voti di argento, che tutta la cappella veniva da essi coperta, ed i padri fecero del medesimo argento una gran croce due turriboli, ed altre argenterie per la Chiesa".59
For an understanding of the religious and historical context in which the cult of Vincent Ferrer spread in Dubrovnik (and Kotor, since the Kotor Dominicans were under the custody of the Dominicans from Dubrovnik), it should be emphasized that the spread of the cult was an integral part of the Dominican Observant mission. The first mission took place in the second half of the 15th century with the preaching of Giovanni da Pistoia, and the second at the end of the 16th century with Serafino Razzi. This Dominican friar from San Marco in Florence was sent with the task of "completely carrying out a mission" in places surrounded by schismatics (Serbian and Greek Orthodox Christians) and Muslims: "lo Schisma de Serviani, et di Greci, et dalla terra la setta di Maoma". In the same context should be seen the ambition of Bishop Ambrogio Gozze to emphasize the might of Ferrer, the symbol of the Dominican fight against infidels. Namely, Gozze briefly taught theology in Naples, where the cult of Ferrer was particularly powerful, and in 1609 he was consecrated in Rome by Pope Paul V as the bishop of Mercana and Trebigna, a territory where numerous schismatics (Orthodox) and Muslims (Turks) lived, with the task of actively implementing the Propaganda Fide ideas in this particularly vulnerable area.60
Conclusion
The analysis of three different, emotionally powerful, inspiring, and morally uplifting stories, which share the theme of presenting maternal pain and/or causing the observer painful emotions, has illustrated one of the methods the Church used to propagate Catholic ideas. By using a visual narrative of pain iconography as a powerful didactic means in the service of sermons and hagiography, the Church endeavoured to attract the attention of the populace with the aim of reinforcing the Catholic faith when it wavered, and of enticing the faithful to observe religious practices, by spreading the cults of new saints which combined thaumaturgic powers with the power to transform non-believers. Research into the local ecclesiastical, historical, and political context (in Split, Dubrovnik, and Kotor) constitutes the framework for shedding light on our case studies, which are also examined within a broader European context. The results of the analysis point to the need for further research, which would expand the study of iconographic and literary motifs in which the imperative of trust in the strength of the Catholic Church alternates emotional and ideological messages that can be understood only within a clearly defined context. | 2019-05-28T13:09:59.683Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "cd498298d16fa4d82a6a8c5683e7e0868861250e",
"oa_license": "CCBYNC",
"oa_url": "http://dais.sanu.ac.rs/bitstream/id/55735/bitstream_55735.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "81e294a8fe4cdfa72b8d7724c197b43ea1d6f952",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"History"
]
} |
239616698 | pes2o/s2orc | v3-fos-license | A Reference-Sampling Based Calibration-Free Fractional-N PLL with a PI-Linked Sampling Clock Generator
Sampling-based PLLs have become a new research trend due to the possibility of removing the frequency divider (FDIV) from the feedback path, where the FDIV increases the contribution of in-band noise by the square of the dividing ratio (N²). Of the two possible sampling methods, sub-sampling and reference-sampling, the latter provides a relatively wide locking range, as the slower input reference signal is sampled with the faster VCO output signal. However, removing the FDIV makes it infeasible for the PLL to implement fractional-N operation by varying the divider ratio through random sequence generators such as a Delta-Sigma-Modulator (DSM). To address the above design challenges, we propose a reference-sampling-based calibration-free fractional-N PLL (RSFPLL) with a phase-interpolator-linked sampling clock generator (PSCG). The proposed RSFPLL achieves fractional-N operation through phase-interpolator (PI)-based multi-phase generation instead of a typical frequency divider or digital-to-time converter (DTC). In addition, to alleviate the power burden arising from VCO-rate sampling, a flexible mask window generation method is used that passes only a few sampling clocks near the point of interest. The prototype PLL system is designed in a 65 nm CMOS process with a chip size of 0.42 mm². It achieves 322 fs rms jitter, a −240.7 dB figure-of-merit (FoM), and −44.06 dBc fractional spurs with 8.17 mW power consumption.
Introduction
Recently, communication-based industries such as home IoT, 5G communications, autonomous vehicles, and mobile high-speed interfaces have been growing rapidly [1][2][3][4]. Phase-locked loop (PLL)-based clock generators are of particular interest in such applications, where the key requirements are fine frequency resolution, excellent noise performance, low power consumption, and small chip area.
The most common frequency synthesizer for these applications is the PLL. A basic block diagram of the classical charge pump PLL (CPLL) is shown in Figure 1a. If N is an integer, the output frequency is an integer multiple of the input signal frequency, and the PLL is called an integer-N PLL. Even though it is versatile, the output frequency can only change by integer multiples of the REF frequency, which is not acceptable for applications that require fine frequency resolution. To address this limited resolution, the fractional-N PLL has been introduced, where the output frequency changes by a fractional portion of f_REF. The block diagram of a typical fractional-N PLL is approximately the same as that of an integer-N PLL. An integer-N type has a fixed dividing ratio N, whereas a fractional-N type has varying divide ratios (N, N + 1) set through a control signal. Figure 1b shows the general waveforms of the REF and FDB signals. Both signals have rail-to-rail swings and are used to determine the phase and frequency differences between the two signals via a phase-frequency detector (PFD).
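To make the fractional-N idea concrete, the sketch below shows how dithering the divider ratio between N and N + 1 makes its long-run average equal N plus a fraction. This is a minimal first-order accumulator illustration, not the MASH 1-1-1 DSM actually used later in this design; the values N = 22 and frac = 0.5 are chosen only to match the 22.5 division ratio reported in the measurement section.

```python
# Minimal sketch: first-order delta-sigma dithering of a divider ratio.
# The long-run average of the emitted ratios converges to N + frac, which
# is how a fractional-N PLL synthesizes f_out = (N + frac) * f_REF.
def dithered_ratios(N, frac, cycles):
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:            # carry out -> divide by N + 1 this cycle
            acc -= 1.0
            ratios.append(N + 1)
        else:                     # no carry -> divide by N
            ratios.append(N)
    return ratios

seq = dithered_ratios(N=22, frac=0.5, cycles=1000)
print(sum(seq) / len(seq))        # -> 22.5, the effective division ratio
```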
Even with the enhanced frequency resolution of the fractional-N PLL structure, one of the disadvantages of classical PLLs is that their in-band noise increases by the square of the dividing ratio (N²) [5][6][7][8]. These disadvantages have pushed the research community to consider a different phase-frequency comparison method based on sampling, i.e., without using FDIV units. Sampling-based PLLs can be categorized into two groups, sub-sampling PLLs (SSPLLs) [9][10][11] and reference-sampling PLLs (RSPLLs) [12][13][14], depending on which signal is used as the sampling clock and which signal is sampled. In SSPLL, the relatively low-frequency input reference signal sub-samples the fast VCO output signal. Conversely, the fast VCO output signal samples the slow input reference signal in RSPLL. Although our proposal builds on the basic RSPLL structure, it is helpful to first introduce the various SSPLL and RSPLL types, and the circuit techniques used in each, in the following section, to aid understanding of the details of our design.
Sub-Sampling PLL and Reference-Sampling PLL
The basic block diagram of SSPLL is shown in Figure 2a. The sub-sampling PLL uses a phase detector (PD) that sub-samples the high-frequency VCO output with a relatively slow REF signal. This PLL structure can improve the in-band noise characteristics by removing the divider. On top of these noise improvements, fractional-N operation based on digital-to-time converters (DTCs) has been proposed to achieve fine frequency resolution [15][16][17][18][19]. The SSPLL uses a DTC to insert different delays into the REF signal to mimic the frequency difference between the REF signal and the FDB signal. Figure 2b illustrates the basic concept of fractional-N operation in SSPLL. The DTC provides a different delay for each rising edge of the REF signal, so the two signals attain a fractional-multiple frequency relationship. However, there are two issues with the DTC-based fractional-N implementation that arise from the intrinsic properties of the DTC unit. First, DTC gain is very sensitive to PVT variations and also suffers from nonlinearity [17]. Hence, additional calibration logic is typically required, which makes the design expensive. The second issue concerns the required resolution of the DTC in SSPLL. Here, the DTC unit must cover a short delay range to maintain the lock within the linear region of the VCO output signal.
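Numerically, the DTC's task is to delay each reference edge along a phase ramp that wraps every VCO period. The sketch below illustrates this schedule; the fractional value is an arbitrary assumption, and the VCO period is taken from the carrier frequency reported later in this paper.

```python
# Sketch: per-edge DTC delay that mimics a fractional frequency offset.
# Edge k of REF is delayed by (k * frac * T_VCO) mod T_VCO so the sampled
# VCO phase stays inside one linear half-period.
T_VCO = 1 / 2.249e9               # ~445 ps, from the 2.249 GHz carrier
frac = 0.1                        # assumed fractional part of the ratio
delays = [(k * frac * T_VCO) % T_VCO for k in range(8)]
print([f"{d * 1e12:.1f} ps" for d in delays])
# With a fine frac, consecutive delays differ by frac * T_VCO (~44 ps
# here); resolving small fractions quickly demands sub-ps DTC steps.
```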
The linear region for locking is shown in Figure 3a. The fast sine wave represents the VCO output signal to be sampled. The sampling pulse must then sample the sine wave within T_VCO/2 in the figure. Since the VCO output frequency is typically multi-GHz, one cycle of the VCO signal is very short, and the DTC needs to vary the delay in fine steps within this region for fractional-N operation. This implies that high-resolution (<1 ps) DTCs are sometimes required, which is quite challenging. Furthermore, a DTC with a finer step naturally occupies a larger silicon area considering the total delay range it needs to cover. To alleviate the constraints of DTCs on SSPLL systems, a phase-interpolator (PI)-assisted SSPLL system was proposed in [18].
Here, the PI adds various delays to the VCO output signal, so that the frequency of the FDB_PI signal changes over time. Furthermore, after the PI units, [20] uses a Linear Slope Generator (LSG) to linearize the sampling region. The LSG tilts the slope of the incoming signal from the PI to provide a wider linear region to sample with the REF_D signal. To alleviate the limitations of the sub-sampling structures, a reference-sampling PLL (RSPLL) was proposed [12]. As the name indicates, the RSPLL uses the VCO output signal as a sampling clock, and the buffered VCO output signal samples the input reference signal to determine the lock condition. A typical block diagram of the RSPLL is shown in Figure 4 with the two input waveforms for the PD unit. This structure offers multiple advantages. First, similar to SSPLL, RSPLLs do not require a frequency divider, which can improve in-band noise characteristics. In addition, the reference sine wave can be used without going through a buffering stage, reducing the noise contribution of the input buffer. The potential power overhead arising from VCO-rate clock sampling can be solved by simple digital logic that passes only a few VCO output pulses for sampling. The sample edge selection circuit (SESCi) unit in Figure 4a selects sampling pulses (VCO output clock) near the zero-crossing point of the input sine wave using a mask pulse. Therefore, the sampling clock waveform in Figure 4b is enabled only for a small portion of one cycle of the input sine wave.
Another benefit of RSPLL comes from its wide locking range. Compared to SSPLL, RSPLL samples an input sine wave whose period is typically much longer (N times) than that of the VCO output signal. The input signal also has a very linear slope near the zero crossing, providing a sufficient locking range. The related waveforms can be seen in Figure 3b.
Finally, it is worth mentioning the sampling error reduction compared to sub-sampling methods. Here, the sampling error (ε_err) can be defined as the voltage difference between the value of the sine wave and the ideal linear line at a certain time T_VCO, as shown in Figure 3c. Then, the phase error φ_err can be calculated as in Equations (1) and (2), where φ_err,SSPD and φ_err,RSPD represent the phase errors of the sub-sampling PD (SSPD) and the reference-sampling PD (RSPD), respectively [15]. T_VCO and T_REF are the periods of the VCO and reference signals, respectively. Comparing Equations (1) and (2), the phase error of the RSPD is much less sensitive to ε_err, since there is no coefficient N in its phase error (φ_err). Although the basic concept and low-noise characteristics of the reference-sampling structure were introduced in [12], fractional-N operation was not included, most likely due to the design complexity of combining digital sampling logic with fractional-N features. Instead, a fractional-N RSPLL was recently proposed in [21]. In [21], the fractional-N function is achieved using a traditional clock counter in the feedback path instead of the SESCi logic. The counter essentially generates sampling signals for fractional-N operation at specific intervals based on the received fractional code. However, the counter generates periodic spurs and quantization errors. Instead of using a Delta-Sigma-Modulator (DSM) to address this issue, the RSPLL in [21] employs a capacitor-based digital-to-analog converter (CDAC) and the necessary calibration logic to counteract periodic spur generation. This additional DAC and its logic occupy a large silicon area, not to mention adding design complexity.
Following the above discussion, here we propose a fractional-N RSPLL (RSFPLL) with the following properties: (1) it adopts PI-based multi-phase generation to perform efficient fractional-N operation, without the DTCs that require significant design effort; removing the DTC simplifies the design and makes it robust against possible environmental changes; (2) an adaptive mask window method is proposed to selectively pass only the VCO output pulses of interest as sampling clocks.
The rest of this article is organized as follows. Section 3 introduces the proposed reference-sampling PLL. Section 4 describes the noise analysis of the proposed system, and Section 5 discusses the measurement results. Finally, Section 6 presents the conclusions.
Architecture
This section first introduces the overall structure and basic behavior of the proposed RSFPLL, followed by a description of the sub-blocks.
Architecture Overview
A block diagram of the proposed reference-sampling fractional-N PLL (RSFPLL) is shown in Figure 5. In the forward path, the reference-sampling PD (RSPD), a third-order loop filter (LPF), and the VCO are connected in series, similar to the conventional RSPLL in [12]. However, along the feedback path of the proposed DTC-free fractional-N RSPLL, there are a phase-interpolator/multi-modulus divider (PI/MMDIV) and a PI-linked sampling clock generator (PSCG). When the differential VCO output (OUTP, OUTN) is presented to the MMDIV unit after the buffer stage, the IQ divider inside the MMDIV unit generates a four-phase signal. Then, a dual-modulus multi-phase divider adds an additional phase to generate a five-phase rotating signal [22]. This 5-phase signal is interpolated into 32 phases through a 3-stage pipelined PI. Among the 32 phase signals, only 3 adjacent signals are selected through the DSM code [20] and sent to the following PSCG unit. The PSCG then creates a sampling window that adaptively changes its activation time so that only the required VCO output pulses pass selectively as sampling pulses. The sampling pulses generated by the PSCG are used to sample the input sine wave inside the reference-sampling phase detector (RSPD), where the sampled output, namely the DC level, is used to adjust the control voltage of the VCO through the LPF unit.
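The fractional rotation along the feedback path can be pictured as an accumulator stepping through the 32 interpolated phases, with the PSCG receiving the selected phase and its two neighbors. The following is a behavioral sketch only; the actual DSM coding of [20] may differ in detail.

```python
# Behavioral sketch: pick 3 adjacent phases out of 32 per reference cycle.
# Advancing the pointer by frac * 32 each cycle rotates the sampling
# instant, emulating a fractional division ratio without a DTC.
def phase_triplets(frac, cycles, n_phases=32):
    acc = 0.0
    for _ in range(cycles):
        base = int(acc) % n_phases
        yield (base, (base + 1) % n_phases, (base + 2) % n_phases)
        acc = (acc + frac * n_phases) % n_phases

for trio in phase_triplets(frac=0.25, cycles=4):
    print(trio)    # (0, 1, 2), (8, 9, 10), (16, 17, 18), (24, 25, 26)
```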
Pipelined Phase-Interpolator with Constant Charge Technique
In the proposed sampling-based PLL structure, the PI can provide output signals with multiple phases derived from the VCO signal, one of which can be arbitrarily selected to perform fractional-N operation without a DTC unit. The PI in our system receives the 5-phase rotation signal from the MMDIV and generates 32 interpolated phases using a 3-stage PI.
In conventional sampling-based fractional-N PLLs, tournament-type PIs are typically used, where the number of PI Cells required for an N-stage structure is 2^N − 1. Since this can consume considerable power and area, a pipelined PI using only N + 1 PI Cells for the same number of PI stages has been proposed [23]. Figure 6a,b show block diagrams of the two PI types, each of which consists of three stages. Here, the PI Cell receives two input signals with adjacent phases and generates three output signals. The first and last output signals of the PI Cell have the same phases as the two input signals, and the middle signal has a phase in between the two inputs. For this operation, the PI Cell consists of three separate unit PIs (inset of Figure 6b), each of which receives two inputs and generates an output signal in the center. When comparing the two PI structures, both types receive two adjacent phase signals (P11, P12) and generate eight phase signals at the output. Compared to the tournament-type PI, which generates all eight phase signals, the pipelined PI generates only the signals of interest by appropriately selecting two adjacent signals from the previous stage through a 3 × 2 multiplexer. This is why the pipelined PI contains a smaller number of PI Cells, and we adopted this type of PI for the proposed design.
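The area saving can be checked by simply counting cells under the two expressions quoted above (2^N − 1 for the tournament type versus N + 1 for the pipelined type):

```python
# PI Cell count for an N-stage interpolator under the quoted expressions.
for stages in range(1, 6):
    tournament = 2 ** stages - 1     # tournament type: 2^N - 1 cells
    pipelined = stages + 1           # pipelined type:  N + 1 cells
    print(f"{stages} stages: tournament={tournament:2d}, pipelined={pipelined}")
# For the 3-stage PI used here: 7 cells versus 4 cells.
```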
As mentioned, the unit PI receives two input signals and generates an output signal whose phase is centered between the two inputs. The conventional unit PI schematic is shown in Figure 7a; it is essentially two inverters with current sources at the top and bottom, with their outputs tied together to mix the signals. The operation of the conventional unit PI with two inputs P_i1 and P_i2 is as follows (assume P_i1 is the earlier phase signal): (1) When P_i1 changes from high to low, the corresponding M1 turns on and M3 and M4 turn off. (2) Current I_P flows through the M1 transistor to C_out, and the output is charged with a slew rate of I_P/C_out. (3) Meanwhile, when P_i2 switches from high to low, the M2 transistor also turns on, and the slew rate of the output node doubles (2I_P/C_out). These processes are illustrated in Figure 8a. Similarly, the reverse operation occurs when P_i1 and P_i2 change from low to high, eventually discharging the output capacitor C_out to ground. The conventional PI unit can generate intermediate phases in most cases if the phase difference between the two inputs is not too large, but turning on only one of the NMOS transistors (M3 or M4) in the middle of operation causes short circuit problems. To avoid this, a circuit technique with a logic AND gate to control the NMOS transistors was proposed in [23], as shown in Figure 7b. However, if the two PI inputs are not close enough, the output signal can be generated with an inappropriate phase. As shown in Figure 8b, if the second PI input signal is too far from the previous signal, the output signal reaches the maximum voltage level before the second input changes. Although this issue can be resolved by reducing the slew rate (I_P/C_out), increasing the capacitance is not a good idea because capacitors occupy a large area, and the current may then need to be increased to maintain the slew rate under normal conditions. To improve the performance of the unit PI, we propose a constant charge scheme on the internal nodes (X_i, Y_i). To explain how it works and what has changed from the conventional PI, it is best to illustrate how a problematic situation occurs when the two PI inputs are not close enough. The problem arises through the following steps: (1) Initially, when both inputs (P_i1 and P_i2) are high, transistors M1 and M2 remain off, so charge accumulates on the parasitic capacitors of nodes X1 and X2. Therefore, the voltages of X1 and X2 rise close to the supply voltage level. (2) When P_i1 changes to low, M1 turns on and the charge on X1 immediately moves to the load capacitor at the output, which incurs a large current flow. Therefore, the voltage of the output node changes suddenly at the beginning of the PI operation, and the situation gets worse if the second PI input does not arrive within the appropriate time frame. This causes a phase error in the PI output. The same problem can also occur with the M3 and M4 transistors. To avoid this issue, it is necessary to prevent charging or discharging of the source node of each transistor. This means that the source voltage of each transistor should be pre-conditioned so that the output voltage does not show a sudden jump when the first PI input is received. To this end, constant charge circuits were added to both sides of the unit PI in Figure 7b, resulting in Figure 9. The constant charge circuit generates a current path that prevents the node voltages from charging or discharging.
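The failure mode of Figure 8b can be reproduced with a back-of-the-envelope ramp model: the output charges at I_P/C_out after the first input arrives and at 2I_P/C_out after the second; if the gap between the two inputs is too long, the first ramp alone determines the output crossing and no interpolation occurs. The I_P, C_out, and VDD values below are illustrative assumptions, not the design's actual device parameters.

```python
# Ramp model of the unit PI output (all values are illustrative).
I_P, C_out, VDD = 50e-6, 20e-15, 1.2     # 50 uA, 20 fF, 1.2 V supply

def crossing_time(gap, v_th=VDD / 2):
    """Time at which the output ramp crosses v_th, given the arrival
    gap between the two PI inputs."""
    slew1, slew2 = I_P / C_out, 2 * I_P / C_out
    v_at_second = min(slew1 * gap, VDD)
    if v_at_second >= v_th:              # crossed before the 2nd edge:
        return v_th / slew1              # the phase no longer interpolates
    return gap + (v_th - v_at_second) / slew2

for gap in (5e-12, 50e-12, 500e-12):
    print(f"gap {gap * 1e12:5.0f} ps -> crossing at "
          f"{crossing_time(gap) * 1e12:6.1f} ps")
# The 500 ps case saturates on the first ramp alone (no interpolation).
```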
The proposed circuit technique prevents large instantaneous current flows, which leads to reduced phase error during normal PI operation, as illustrated in Figure 10.
PI-Linked Sampling Clock Generator (PSCG)
The RSPLL samples the input sine wave using VCO-rate high-speed clock signals, which can result in high power consumption. To avoid this problem, sampling in the RSPD is performed only when the voltage level of the input sine wave passes through the DC reference level of the input signal. For this operation, a masking window has been used to selectively choose the sampling pulses via a logic AND operation [23]. However, fractional-N operation in the RSPLL architecture requires an adaptive masking window whose position depends on the PI output signal selected from the 32 different phase signals. Here, we propose a concise method to implement PI-linked sampling clock generation through simple digital logic.
The schematic diagram of the proposed PSCG unit is shown in Figure 11a. The PSCG unit consists of two building blocks: the PI-linked sampling window logic that creates a sampling window for fractional operation, and the differential tracking logic. For design simplicity, only a few logic gates and resettable D flip-flops (DFFs) were used for both units.
The operation of each unit is described below, with the expected waveforms depicted in Figure 11b. When the three output signals from the PI arrive, the first phase signal (PI<0>) and the last phase signal (PI<2>) are used as the clock and reset signals for the DFF, respectively. The data input 'D' is tied to VDD. Therefore, when PI<0> becomes 'H' and RPB is 'H' at the same time, the output of the DFF becomes 'H'. The DFF output remains 'H' until the combined signal '/PI<2> AND RNB' becomes 'H'. In this way, the masking signal, the 'flexible window', encapsulates the intermediate input signal PI<1>, and the sampling clock is generated by ANDing the two signals received from the differential tracking logic. This is how the proposed RSPLL leverages the PI and adaptive mask window generation to perform energy-efficient fractional-N operation.
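The window logic reduces to a set/reset flop around the middle phase: PI<0> opens the window, the falling edge of PI<2> closes it, and the sampling clock is the AND of the window with PI<1>. The sketch below is a simplified behavioral model that ignores the RPB/RNB gating from the differential tracking logic.

```python
# Simplified behavioral model of the PI-linked sampling window.
# The window opens on the rising edge of PI<0>, closes on the falling
# edge of PI<2>, and the sampling clock is window AND PI<1>.
def pscg(pi0, pi1, pi2):
    window, prev0, prev2, samp = 0, 0, 0, []
    for b0, b1, b2 in zip(pi0, pi1, pi2):
        if b0 and not prev0:      # rising edge of PI<0>: open the window
            window = 1
        if prev2 and not b2:      # falling edge of PI<2>: close it
            window = 0
        samp.append(window & b1)
        prev0, prev2 = b0, b2
    return samp

pi0 = [0, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pi1 = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]   # middle phase, slightly delayed
pi2 = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
print(pscg(pi0, pi1, pi2))             # pulses only inside the window
```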
To show the effectiveness of the proposed PSCG method, we compared two cases of fractional-N RSPLL: the first case uses a fixed mask window, and the second a moving window. The simulation waveforms of the two examples are shown in Figure 12. Figure 12a illustrates the simulation results of the conventional sampling clock generator using the fixed window method proposed in [12], and Figure 12b illustrates the results of the proposed sampling clock generator. As expected, the sampling clock generated in the fixed window case has a truncated waveform (Figure 12a), which can occur if the three phase signals out of the PI fall at the end of the fixed mask window. As a result, the system samples the wrong points of the input sine wave (far from the input reference DC level), resulting in locking errors.
On the other hand, the proposed sampling pulse generation of the RSFPLL shows complete consecutive pulses based on the moving mask pulse (Figure 12b). Therefore, the system displays frequency locking after ~6 µs. Note that the proposed RSFPLL includes a differential RSPD to reduce charge injection from the sampling switch. Three sampling pulses are required to drive the differential-type RSPD, where the first and last pulses are used to generate the sampling window and the intermediate pulse is used as the sampling clock.
Reference-Sampling Phase Detector and VCO
The RSPD of the proposed RSFPLL adopts a differential structure to solve the charge injection problem of the sampling switch. There are two half-sampling-rate circuits inside the RSPD to reduce the reference spurs [12]. Figure 13a shows the detailed diagram of the half-sampling-rate circuit inside the RSPD unit, along with the main components in the forward signal path of the proposed RSFPLL. Here, the RSPD unit receives four control signals (Samp.N, Samp.P, Hold.N, and Hold.P) from the PSCG unit shown in Figure 11. Since the operations of the upper and lower RSPDs are the same, the operation of the lower RSPD will be briefly described based on the control signal waveforms of Figure 11b. Initially, Hold.P = 'H' and Hold.N = 'L', so the signal captured on the lower capacitor is presented to the next-stage loop filter. Meanwhile, the input REFN signal is sampled onto the top capacitor based on the sampling input signal (Samp.P). After this sampling period, the REFN signal is captured through the Samp.N signal onto the bottom capacitor, and the value stored on the upper capacitor is transferred to the loop filter based on the inverted Hold.P and Hold.N inputs. In this way, the RSPD of our system can provide a seamless control signal to the VCO while operating at half-rate speed. Apart from the basic structure, a transmission-gate switch was used in the design to reduce fractional spurs. In sampling-based PLLs, linearity must be guaranteed near the DC offset voltage of the input signal. Case studies of both single-transistor and transmission-gate switches show that the latter provides sufficient linearity to perform the fractional-N sampling operation. The VCO is implemented as a differential cross-coupled LC structure to receive a differential control voltage. Figure 13b shows the schematic diagram of the differential cross-coupled LC VCO in the proposed system. Note that the inductor was designed to have an inductance of 2.6 nH, with a simulated Q-factor of 15.4 at 2.4 GHz. In the reference-sampling structure, the K_VCO value is designed to be 25 MHz/V to reliably lock to the desired frequency without a frequency-locked loop (FLL). Since the output frequency range of the VCO is limited by the low K_VCO value, signals at other frequencies that may cause harmonic locking are not generated. A 7-bit binary capacitor bank was added to ensure that the proposed RSFPLL achieves a sufficient frequency synthesis range while maintaining this characteristic.
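As a sanity check on the stated tank values, the effective capacitance implied by the measured tuning range can be back-computed from f = 1/(2π√(LC)); parasitics are lumped into C here, so the numbers are only rough.

```python
import math

# Back-compute the effective tank capacitance from f = 1/(2*pi*sqrt(L*C)).
L = 2.6e-9                          # stated tank inductance
for f in (2.17e9, 2.30e9):          # measured tuning-range endpoints
    C = 1 / ((2 * math.pi * f) ** 2 * L)
    print(f"f = {f / 1e9:.2f} GHz -> C_eff = {C * 1e12:.2f} pF")
# ~2.07 pF down to ~1.84 pF: the span the 7-bit cap bank must cover.
```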
Noise Analysis
This section describes the noise analysis of the proposed RSFPLL using the linear phase-domain model in Figure 14. The model includes the influence of the key noise sources of the PLL components, such as the RSPD, loop filter (LF), VCO, Delta-Sigma-Modulator (DSM), and PI. It is worth noting that the noise components of the DSM, PI, and reference input have less impact on the total output phase noise than those of the RSPD and VCO. For the RSPD phase noise, the method from [10,12] can be used, where the thermal noise current applied to the RSPD sampling capacitor (C_samp) appears as kT/C noise. Noise sampled by the RSPD during the tracking and holding periods is stored on C_samp. The RSPD updates the control voltage across C_samp every reference cycle, so switching noise is also captured and stored on C_samp. Therefore, it is very important to analyze the phase noise of the RSPD accurately. The RSPD phase noise is given by Equation (3) from [12], where S_vn,RSPD(f) is the kT/C noise of the RSPD. Based on Equation (3), the RSPD in-band phase noise is calculated to be −134 dBc/Hz using a 6 pF sampling capacitor in the proposed design.
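Since the closed-form expression of Equation (3) is not reproduced above, we can at least sanity-check its dominant term, the sampled kT/C noise of the stated 6 pF capacitor. Converting this voltage noise into the quoted −134 dBc/Hz additionally requires the reference slope and sampling rate, which we do not attempt here.

```python
import math

# kT/C noise voltage of the 6 pF RSPD sampling capacitor at T = 300 K.
k_B, T, C_samp = 1.380649e-23, 300.0, 6e-12
v_n_rms = math.sqrt(k_B * T / C_samp)
print(f"v_n = {v_n_rms * 1e6:.1f} uV rms")   # ~26 uV rms sampled noise
```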
Regarding the noise components of the feedback path, the proposed RSFPLL contains MMDIV/PI pairs and the PSCG in the feedback path instead of a counter-based divider. The MMDIV and PSCG units consist of logic units (e.g., combinational logic and flip-flops), and the PI essentially operates as a chain of inverters. Therefore, the main signal passes mostly through logic units, whose noise contribution can be ignored [24,25]. In addition, the flip-flop (the DFF in our design) retimes the received data based on the reference clock, which removes the jitter accumulated in the input data across the various combinational logic gates. In particular, according to a study in [24], the noise contributions of logic gates and DFFs are less than −140 dBc, which is sufficiently small compared to the typical RSPD noise in Equation (3). Therefore, jitter accumulation from the conventional divider can be eliminated, and the divider-related noise can be regarded as arising only from a single DFF. As for other noise sources, the PI resolution in our design is equal to (1/32) × 2T_VCO. Based on this, quantitative analysis shows that the proposed design has 24 dB lower quantization noise compared to the conventional fractional-N divider [20]. Overall, the sum of in-band noise sources except for the VCO (which will be explained in the succeeding paragraph) has been found to be −134 dBc/Hz, which is sufficient performance for most targeted applications.
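The quoted 24 dB improvement is consistent with the PI shrinking the feedback quantization step from one VCO period (a conventional dithered divider) to (1/32) × 2T_VCO = T_VCO/16, since quantization noise power scales with the square of the step:

```python
import math

# Quantization-noise improvement from the finer feedback time step:
# noise power scales with step^2, so the gain is 20*log10(step ratio).
step_divider = 1.0            # conventional dithered divider: 1 * T_VCO
step_pi = 2.0 / 32.0          # PI resolution: (1/32) * 2*T_VCO = T_VCO/16
print(f"{20 * math.log10(step_divider / step_pi):.1f} dB")   # ~24.1 dB
```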
The RSFPLL proposed in this paper is a type-I PLL that does not have a charge pump (CP) in the loop. That is, the RSFPLL in this article suppresses noise at −20 dB/decade within the loop bandwidth, while a conventional SSPLL suppresses noise at −40 dB/decade. Therefore, noise generated by the VCO has the greatest impact on the overall noise performance of the PLL, and the LC-VCO becomes crucial. In the proposed structure, the inductor has a Q-factor of 15.4 and an inductance of 2.59 nH. The noise performance of the VCO is shown to be −126.2 dBc/Hz at 1 MHz offset. Finally, we apply a third-order loop filter to the proposed system to shape the out-of-band noise of the MASH 1-1-1 DSM [16].
Measurement
The proposed RSFPLL prototype was fabricated in a 65 nm CMOS process with a core area of 0.42 mm². Figure 15 shows a die photo of the proposed RSFPLL (left) and a power and area analysis table (right). Note that the core idea of the fractional-N operation lies in the PSCG unit, which accounts for only 0.01% of the total area. The total power consumption of the proposed RSFPLL is 8.17 mW. The power breakdown analysis shows that most of the power consumption comes from the LC-VCO (42%) and the PI/RSPD analog blocks (9.8%), which is understandable. Noticeably, the digital block (DSM, MMDIV, and PSCG) consumes only 0.8 mW (0.7%) of the power. This confirms that the proposed PSCG-based fractional-N operation for the RSPLL is area- and energy-efficient. Before commenting on the performance of the proposed RSFPLL, we briefly describe the measurement environment. The 1.2 V supply voltage is provided by an Agilent E3646A power supply. Low-phase-noise 100 MHz reference input sine waves are derived from an off-chip voltage-controlled crystal oscillator (VCXO). Based on the above configuration, the output frequency varies from 2.17 GHz to 2.3 GHz depending on the effective capacitance of the capacitor bank in the VCO. Figure 16 shows the measured spurs and phase noise when the PLL performs fractional-N operation. The measurement results show that the worst fractional spur is about −44.06 dBc, and the integrated jitter from 10 kHz to 50 MHz is 322 fs, under the following measurement conditions: a VCXO frequency of 100 MHz, a division ratio of 22.5, and a carrier frequency of 2.249 GHz. The in-band phase noise is −109.9 dBc/Hz at 100 kHz, and the out-of-band phase noise is −144.9 dBc/Hz at 10 MHz. The figure-of-merit (FoM) is calculated as −240.7 dB. Table 1 summarizes the detailed performance numbers compared to other fractional-N PLLs.
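The reported FoM can be reproduced from the measured jitter and power using the standard jitter-power figure of merit, FoM = 20·log10(σ_t/1 s) + 10·log10(P/1 mW):

```python
import math

# Jitter-power FoM check from the measured numbers.
sigma_t = 322e-15             # rms jitter, integrated 10 kHz - 50 MHz
P_mW = 8.17                   # total power consumption in mW
fom = 20 * math.log10(sigma_t) + 10 * math.log10(P_mW)
print(f"FoM = {fom:.1f} dB")  # ~ -240.7 dB, matching the reported value
```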
Conclusions
A 2.2 GHz reference-sampling fractional-N PLL based on the PSCG technique has been proposed. The proposed fractional-N PLL leverages reference-sampling techniques to eliminate reference buffer noise. In the RSPLL, a buffered VCO signal samples the input reference signal. Since VCO-rate sampling consumes enormous power, it is necessary to selectively generate sampling pulses based on mask window generation and to sample only the region of interest. To this end, we proposed a flexible window technique based on the PSCG unit that enables area- and energy-efficient fractional-N operation of the RSPLL without a DTC unit in the loop, which would require large design effort. However, implementing a high-resolution fractional-N PLL requires a very precise phase-interpolator. Therefore, the constant charge scheme (CCS), which makes the PI robust over PVT changes, was added to the conventional PI design. Even with all the new features achieved, the area overhead of the additional digital units is quite small, yielding an area- and energy-efficient RSFPLL compared to the PLL systems in Table 1. The proposed PI-linked RSFPLL was fabricated in a 65 nm CMOS process. The proposed system has an RMS jitter of 322 fs (integrated from 10 kHz to 50 MHz), a power consumption of 8.17 mW, and a worst-case fractional spur of −44.06 dBc. | 2021-10-17T15:09:24.261Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "90e0437cc63bfb456eb26c36228fb60d21c9cd30",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/20/6824/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92bbed7995af9f7deb9ebf509ac0caf09c39ee23",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
214642433 | pes2o/s2orc | v3-fos-license | Nodular Hidradenoma
BACKGROUND. Nodular hidradenoma is a rare adnexal tumor most likely arising from the eccrine gland. OBJECTIVE. We describe three cases of nodular hidradenoma presenting as an expanding nodule on the forehead (case 1), left lower extremity (case 2), and left neck (case 3). We discuss the clinical and histologic features of this tumor and present a review of the literature. CONCLUSIONS. This report highlights the salient histologic findings that distinguish nodular hidradenomas from other adnexal tumors and emphasizes the benefit of complete local excision to prevent recurrence of these tumors.
Although regarded as a benign tumor, NH may recur after inadequate surgical excision and can be clinically mistaken for other benign or malignant cutaneous tumors. In order to increase awareness of this entity, we describe three typical cases of NH and review the histopathologic and immunohistochemical features of this tumor.
Case 1
An 87-year-old woman presented with an asymptomatic erythematous nodule on the central forehead that had been present for approximately 3 years and was slowly enlarging. Spontaneous drainage of a yellowish odorless fluid was occasionally noted. There was no history of trauma preceding the onset of the lesion. Physical examination of the central forehead revealed a 7 × 5-mm freely movable, erythematous nodule with ill-defined borders and a gelatinous texture. Small amounts of serous fluid were expressed from the lesion. The surrounding normal-appearing skin was indurated over a diameter of 0.5 cm. There was no regional lymphadenopathy. The lesion is shown in Figure 1.
A lesional skin biopsy specimen showed a well-circumscribed nodule located in the dermis with extension into the subcutaneous tissue (Figure 2). A crusted ulceration was seen at the center, where the tumor cells replaced the overlying epidermis. The tumor was composed of multiple lobules of epithelial cells containing ducts and cystic spaces (Figure 3). The epithelial cells were of two types: polygonal cells with abundant, variably pale or eosinophilic cytoplasm and round nuclei, and fusiform cells with scant basophilic cytoplasm and round nuclei (Figure 4); higher magnification (×20) showed a dense infiltrate that included plasma cells. No cytologic atypia or mitotic activity was observed. Surgical excision of the tumor was performed with a resection margin of 5 mm. Because the tumor involved the underlying fascia on gross inspection, the excision included the galea aponeurotica. The wound was closed by primary closure. One year after the procedure there was no evidence of recurrence.
Case 2
A 52-year-old woman presented with a lesion on the left lower extremity that had been noticed for more than a year. She stated that it had slowly increased in size and drained a clear watery fluid spontaneously or upon inadvertent trauma. Her past medical history was significant for breast cancer and bilateral fibrocystic breast disease, for which she had undergone bilateral mastectomy. On physical examination there was a 2.5-cm nodule on the left lateral calf (Figure 5).
Examination of a biopsy specimen revealed a dermal proliferation of basaloid cells with an edematous and focally mucinous stroma. Within the tumor there were small ductal structures as well as areas of keratinization and necrosis. A focal cystic space lined by a double layer of cuboidal cells was also noted. There were no clear cells, mitotic activity, or cellular pleiomorphism.
The lesion was surgically excised with 4-mm lateral margins and the wound was repaired by primary closure. The patient has been followed for 3 months without any recurrence of the lesion.
Case 3
A 57-year-old man was referred for evaluation of an asymptomatic nodule on the left neck that had been present for more than 5 years and had been gradually enlarging. Examination of a biopsy specimen showed a multilobulated dermal proliferation of basaloid and clear cells within a mucin-rich stroma. The tumor was connected to the epidermis and contained small cystic spaces and ductal structures. There were focal areas of hyalinized collagen within the tumor. No cellular atypia or mitoses were seen.
The lesion was excised deep to the subcutaneous fat with 4-mm lateral margins. Repair was easily accomplished with primary layered closure. The patient has been followed for 3 months and has had no signs of recurrence on the surgical site.
Discussion
Nodular hidradenomas are rare appendageal tumors occurring mainly in adults (average age of 37.2 years) with a male-to-female ratio of 1:1.7.6 They usually present as solitary, slowly enlarging, firm, freely movable tumors and may have a pedunculated or cystic appearance. Their average size is 0.5-2 cm, but larger tumors have been reported, with a diameter of up to 9.5 cm.7 The overlying skin may be smooth, thickened, atrophic, or ulcerated, and may be skin-colored, red, or brown.3 Some tumors may spontaneously drain serous or hemorrhagic material, which may serve as a useful diagnostic sign. Occasionally, multiple lesions can occur. Although any cutaneous site can be affected, the most common sites of involvement are the head and neck (in 30% of patients) and the anterior trunk. The histopathology of nodular hidradenoma reveals a well-circumscribed dermal nodule, surrounded by a collagenous pseudocapsule and often extending deep into the subcutaneous tissue. Usually there is no epidermal attachment and the overlying epidermis appears normal; however, in some cases the tumor merges with the surface, surrounded by a hyperplastic epidermis. The tumor is composed of multiple lobulated masses of epithelial cells and tubular lumina of variable size and number (Table 1). Larger cystic spaces filled with a homogeneous eosinophilic material are often present within the tumor and are usually bordered by degenerated tumor cells. The epithelial cells form the solid portion of the tumor and may be of two types. One type consists of fusiform cells with elongated, vesicular nuclei and basophilic cytoplasm, usually located at the periphery of the tumor. The other type consists of large polygonal cells with round, often eccentrically located nuclei and pale eosinophilic cytoplasm. In certain cases these cells exhibit a striking cytoplasmic clearing, hence the designation clear cell hidradenoma for the histologic variant in which these cells predominate. Their pale or "clear" appearance is explained by the large amount of cytoplasmic glycogen, which is a Periodic acid-Schiff (PAS)-positive, diastase-labile material that washes out during fixation and embedding procedures. In addition, a PAS-positive, diastase-resistant material can be found at the periphery of the cells. Transitional cells of both types may also be present within the tumor. The tubular lumina often exhibit branching and are lined by four types of cells: columnar epithelial cells that may show active secretion resembling decapitation secretion, cuboidal cells, squamous cells with a well-formed eosinophilic cuticle, and intermediate cells. The luminal structures contain PAS-positive, diastase-resistant material that also stains with colloidal iron. Mitotic activity may be present (average of 0.3-2.7 mitoses per 10 HPF).7 Occasionally, extensive keratinization and horn pearls may be present (epidermoid hidradenoma). NH shares similar histologic features with eccrine poroma (arising from the intra-epidermal ductal epithelium) and eccrine spiradenoma (ductal and secretory epithelium).
A recently described variant of NH, namely poroid hidradenoma, shows the architectural features of hidradenoma (a tumor with solid and cystic components) and the cytologic findings of a poroid neoplasm (poroid and cuticular cells with ductal differentiation).14 The differential diagnosis of NH also includes trichilemmoma (usually unilobulated, associated with papillary epidermal hyperplasia, hypergranulosis, and peripheral palisading of tumor cells, with no cystic spaces), glomus tumor, inverted follicular keratosis, and metastatic renal cell carcinoma, which is usually not connected with the epidermis and has prominent cytologic atypia and a highly vascularized stroma. The benign epidermoid type of NH shows squamous metaplasia and thus may be confused with squamous cell carcinoma.
The histologic distinction between benign and malignant hidradenomas, or hidradenocarcinomas, may be difficult. Malignant hidradenomas frequently exhibit typical findings of malignancy such as overt nuclear atypia, a high index of mitotic activity, areas of necrosis, infiltrative patterns, and perineural or vascular invasion.15 However, in some cases these findings may be subtle. Reported cases of clinically benign hidradenomas with high mitotic activity and cellular pleiomorphism suggest that histology is not always an accurate predictor of the clinical behavior of hidradenomas.
The immunohistochemical profile of hidradenomas has been demonstrated in recent studies (Table 2). On electron microscopy, the epithelial cells are loosely connected by desmosomes and exhibit numerous pseudovilli that project from their borders into the intercellular spaces. The fusiform basophilic cells contain numerous tonofilaments and little glycogen, whereas the paler or clear cells contain abundant glycogen and resemble the cells of an eccrine poroma. As previously mentioned, the luminal structures are composed of four types of cells: columnar, cuboidal, squamous, and intermediate cells.
The origin of hidradenomas has been a longstanding subject of dispute. Because of the presence of polyhedral and fusiform cells around tubular lumina, NH was originally regarded as a tumor differentiating toward myoepithelial cells. However, the ultrastructural absence of alkaline phosphatase and myofilaments does not support this histogenesis. Although decapitation secretion has been observed in the luminal structures of the tumor, no NH has been shown to have enzymes that would indicate apocrine differentiation. In contrast, the presence of eccrine-type enzymes, i.e., amylophosphorylase, branching enzyme, succinic dehydrogenase, diphosphopyridine nucleotide diaphorase, and leucine aminopeptidase, suggests a tumor of eccrine differentiation. This has been supported by the finding of eccrine-specific ultrastructural features such as an intercellular canalicular system and intracellular organelles that closely resemble those of eccrine secretory cells. Thus, nodular hidradenoma is reasonably considered an eccrine tumor differentiating from the intraepidermal ductal portion (based on the presence of intracellular tonofilaments) or the secretory segment of the eccrine gland (high glycogen content, intercellular canalicular system). Similarly, it is regarded as an intermediate neoplasm between eccrine poroma, which has an intraepidermal ductal differentiation, and eccrine spiradenoma, with its dermal-ductal and secretory differentiation. In a recent study, short-term cultures of a clear cell hidradenoma were obtained and cytogenetically analyzed.21 The tumor displayed a multiclonal pattern with a single abnormal clonal population arising from the tumor and six additional clones found in the adjacent skin. These findings reflect an increase in genomic complexity (cytogenetic divergence) and a reduction of karyotypic diversity (convergence) of the tumor and may indicate a multicellular origin. Interestingly, these karyotypic changes were not seen in similar cytogenetic studies of other eccrine skin tumors (i.e., eccrine spiradenoma).22 Most cases of NH have a benign course. Recurrences may occur in inadequately excised tumors and are usually located in the deeper dermis or subcutaneous tissue. They do not exhibit more atypia or aggressiveness compared with the primary tumors. According to a large series of patients with clear cell hidradenomas, the rate of recurrence is estimated to be approximately 10% of surgically excised tumors. Malignant clear cell hidradenomas usually arise de novo but may develop from benign clear cell hidradenomas in extremely rare cases. Given this remote possibility and the aggressive nature and occasionally bland histology of their malignant counterpart, total surgical excision of nodular hidradenomas is recommended. Mohs micrographic surgery has been successfully performed in isolated cases of hidradenoma and may be particularly helpful in large or recurrent tumors where wider subclinical spread is possible.24 | 2020-02-13T09:23:45.089Z | 2020-02-07T00:00:00.000 | {
"year": 2020,
"sha1": "0e655b005d9247dc998ed6ceba77a47b4a14787e",
"oa_license": "CCBY",
"oa_url": "https://www.qeios.com/read/Y6ALN0/pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "52d881f729bb1bae7f40fe41021e64ef1252bd6a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
14234691 | pes2o/s2orc | v3-fos-license | Sitosterolemia: a review and update of pathophysiology, clinical spectrum, diagnosis, and management
Sitosterolemia is an autosomal recessive disorder characterized by increased plant sterol levels, xanthomas, and accelerated atherosclerosis. Although it was originally reported in patients with normolipemic xanthomas, severe hypercholesterolemia has been reported in patients with sitosterolemia, especially in children. Sitosterolemia is caused by increased intestinal absorption and decreased biliary excretion of sterols resulting from biallelic mutations in either ABCG5 or ABCG8, which encode the sterol efflux transporters ABCG5 and ABCG8. Patients with sitosterolemia show extreme phenotypic heterogeneity, ranging from almost asymptomatic individuals to those with severe hypercholesterolemia leading to accelerated atherosclerosis and premature cardiac death. Hematologic manifestations include hemolytic anemia with stomatocytosis, macrothrombocytopenia, splenomegaly, and abnormal bleeding. The mainstay of therapy includes dietary restriction of both cholesterol and plant sterols and the sterol absorption inhibitor ezetimibe. Foods rich in plant sterols include vegetable oils, wheat germ, nuts, seeds, avocado, shortening, margarine, and chocolate. Hypercholesterolemia in patients with sitosterolemia is dramatically responsive to a low-cholesterol diet and bile acid sequestrants. A plant sterol assay should be performed in patients with normocholesterolemic xanthomas, hypercholesterolemia with an unexpectedly good response to dietary modification or to cholesterol absorption inhibitors, hypercholesterolemia with a poor response to statins, or unexplained hemolytic anemia and macrothrombocytopenia. Because the prognosis can be improved by proper management, it is important to identify these patients and diagnose them correctly. This review article aims to summarize recent publications on sitosterolemia and to suggest clinical indications for plant sterol assay.
Introduction
Sitosterolemia, also known as phytosterolemia, is an autosomal recessive disorder characterized by increased plant sterol levels, xanthomas, and accelerated atherosclerosis 1,2). It is caused by increased intestinal absorption and decreased biliary excretion of plant sterols resulting from homozygous or compound heterozygous mutations in either ABCG5 or ABCG8, which encode the sterol efflux transporters ABCG5 (sterolin-1) and ABCG8 (sterolin-2) that pump sterols out into the intestinal lumen or into bile 3,4). Although it is a rare disease, it has been important in elucidating the physiologic pathways of sterol influx and efflux 5,6).
Mediterranean stomatocytosis/macrothrombocytopenia has been identified as the hematological presentation of sitosterolemia 7). Stomatocytic hemolysis, large platelets, splenomegaly, and abnormal bleeding can be associated, and hematologic manifestations can be the only clinical sign of sitosterolemia 8). The true prevalence of sitosterolemia is unknown due to underdiagnosis, and sitosterolemia may be more frequent than previously thought. One Asian individual with sitosterolemia was identified incidentally out of 2,542 persons in a study in which plasma plant sterols were analyzed 9).
Because delayed diagnosis can lead to poor clinical outcomes due to advanced atherosclerotic cardiovascular disease, and because prognosis can be improved by proper management including plant sterol restriction and a cholesterol absorption inhibitor, it is important to identify these patients and diagnose them correctly 2,8,10). This review article aims to summarize recent publications on the pathophysiology, clinical spectrum, diagnosis, and management of sitosterolemia, and to suggest clinical indications for plant sterol assay.
The plant sterols
Plant sterols are abundant in vegetable oils, wheat germs, nuts, seeds, avocados, chocolate, and margarine 2,6). They are structurally very similar to cholesterol but differ by the presence of an ethyl or methyl group (sitosterol and campesterol, respectively) or a double bond (stigmasterol) 1). Sitosterol is usually the most abundant plant sterol in the diet and the predominant form found in patients with sitosterolemia 8,11).
The average Western diet contains similar amounts of cholesterol and plant sterols. Although approximately 50% of dietary cholesterol is absorbed, less than 5% of plant sterols are absorbed in normal individuals 6,11). A high plant sterol diet was extremely toxic in animal models of sitosterolemia, and it has been suggested that the mammalian body defends itself against plant sterols because they are toxic when they accumulate, although similar toxicity has not yet been documented in humans 12,13).
It is clear that plant sterols are toxic to those with sitosterolemia, but plant sterol intake seems to be safe for nonsitosterolemic individuals. Instead, plant sterols can competitively inhibit cholesterol absorption, and the cholesterol-lowering effect of plant sterols has been documented 11). Although they have not been shown to improve clinical outcomes, many cholesterol-lowering functional foods are enriched with plant sterols 14,15). On the other hand, studies have raised the possibility of an association between plant sterol levels and atherosclerosis 16,17). The cholesterol-lowering effect may compensate for the potential risk of increased plant sterol intake 9,11), and the debate over whether plant sterols are beneficial or harmful is still ongoing 15,18).
Sterol absorption in normal subjects
Dietary cholesterol and noncholesterol sterols, mainly plant sterols and stanols (saturated sterols), are absorbed from the intestinal lumen via the sterol influx transporter Niemann-Pick C1-Like 1 (NPC1L1) 6). NPC1L1, the gatekeeper of sterol absorption, has lower affinity for plant sterols than for cholesterol 19,20). After absorption into the enterocytes, about 50%-60% of cholesterol is esterified by sterol O-acyltransferase 2 (SOAT2) and transported to the liver packed in chylomicrons 21). Unesterified cholesterol and plant sterols are pumped back into the intestinal lumen by ABCG5/ABCG8, the sterol efflux transporters 6). SOAT2 also has low affinity for plant sterols, allowing preferential plant sterol efflux by ABCG5/ABCG8 22). Plant sterols not pumped back into the intestinal lumen become part of the chylomicrons, are transported to the liver, and are eventually pumped out into the bile by the hepatic ABCG5/ABCG8 transporters 6,11).
Disrupted sterol homeostasis in sitosterolemia
Biallelic defects in either ABCG5 or ABCG8 result in increased intestinal absorption and decreased biliary excretion of plant sterols, leading to extremely high plasma levels of plant sterols 3,4). Patients with sitosterolemia absorb 15% to 60% of ingested sitosterol, which leads to a 50- to 200-fold increase in their plasma sitosterol levels 1,3). Plant sterols comprise 15% to 20% of total plasma sterols in patients with sitosterolemia and are carried in low-density lipoprotein (LDL) and very-low-density lipoprotein particles 3,23).
In a patient with liver failure and sitosterolemia who underwent liver transplantation, the elevated plant sterol levels decreased to less than 1/10 of the pretransplantation values, suggesting that the liver functions as the predominant organ for maintaining sterol balance 24). ABCG5/ABCG8 expression in either liver or intestine protected animals from sterol accumulation in a recent study 25).
Although sterol absorption is moderately increased in heterozygotes, they are asymptomatic, with normal cholesterol levels and normal to slightly increased plant sterol levels 1,26).
Clinical spectrum of sitosterolemia
Patients with sitosterolemia show extreme phenotypic heterogeneity. Whereas some patients with homozygous mutations are almost totally asymptomatic, others show severe hypercholesterolemia leading to accelerated atherosclerosis and premature cardiac death 1,27-30). A 10-year-old girl from Iran, who had received an almost vegetable-free diet in Iran and began consuming many more vegetables and olive oil after her family moved to Europe, developed xanthomas and hypercholesterolemia in a short period of time and was finally diagnosed with sitosterolemia 31). Although the amount of dietary plant sterol intake should be at least partially related to the severity of clinical disease, the mechanism of phenotypic heterogeneity, even between family members who share the same genes and environment, is not yet fully understood. A recent report on a Chinese family with sitosterolemia suggested potential effects of NPC1L1 polymorphisms in protecting against clinical disease 29). Major clinical features of sitosterolemia, especially in young patients, are summarized in Table 1.
Hypercholesterolemia
Although sitosterolemia was originally reported in patients with normolipemic xanthomas 27), cholesterol absorption is also increased in patients with sitosterolemia, and serum cholesterol levels are usually elevated 1,10). Very high levels of cholesterol (up to 1,000 mg/dL) have been reported in patients with sitosterolemia, especially in children 32,33). The immature intestine may absorb higher amounts of cholesterol compared with that of adults 34).
Breastfed infants with sitosterolemia show unique clinical features 32,33,35). The plant sterol intake of a breastfed infant should be minimal because the plasma sitosterol levels of the heterozygote mother should be only slightly increased. However, their cholesterol intake can be high due to the high cholesterol content of human milk (90-150 mg/L vs. 0-4 mg/L in human milk and infant formula, respectively) 36). Breastfed infants with sitosterolemia can present with extremely high cholesterol levels and xanthomatosis, but with a normal sitosterol:cholesterol ratio due to only mildly elevated plant sterol levels 33). The plant sterol levels increase and the cholesterol levels somewhat decrease as the infant starts taking fruits and vegetables 33,35).
Xanthomas
Tendinous or tuberous xanthomas on extensor areas, such as the Achilles tendon, the extensor tendons of the hand, the elbows, and the knees, are the major clinical manifestations of sitosterolemia 1,23,27). Minor trauma plays an important role in the development of xanthomas, and this is why they appear on extensor surfaces in most patients 37).
Xanthomatosis is rarely observed in young children, and when present, homozygous familial hypercholesterolemia (FH) or autosomal recessive hypercholesterolemia is most often suspected 37,38). Xanthomas may begin to appear at a very young age in sitosterolemia, sometimes during the first year of life 33,35).
Intertriginous xanthomas are a very rare type of planar xanthoma and have been reported to be pathognomonic for homozygous FH 39). However, intertriginous xanthomas (first noticed at the age of 3 months, when the patient was being exclusively breastfed) were observed in a 15-month-old Korean girl with sitosterolemia, suggesting that intertriginous xanthomas may develop in young children with extremely high cholesterol levels of any etiology 35). Friction between skin surfaces in intertriginous areas may contribute to the development of xanthomas in a chubby infant, in whom extensor areas are relatively spared because movement is not yet active.
Most sitosterolemic patients with severe atherosclerotic cardiovascular disease also showed xanthomas 1,10,23,28). Xanthomas evolve as clusters of foam cells in the skin, and the mechanisms involved in the development of xanthomas seem to be similar to those in the early stages of atherosclerotic plaques 40). According to a meta-analysis of patients with a genetic diagnosis of FH, the presence of tendon xanthomas was associated with a 3.2 times higher risk of cardiovascular disease 41). Xanthelasma of the eyelids was until recently considered only a cosmetic lesion; however, recent prospective studies showed that it is connected with an increased cardiovascular risk and a reduced average lifespan 42). In contrast to the initial case with normolipemic xanthomas 27), xanthomas regress and sometimes completely disappear in some patients with sitosterolemia, usually in association with dramatically decreased plasma cholesterol levels, although plant sterol levels remain relatively high 32,35,43-46).
Atherosclerotic cardiovascular disease
Some patients with sitosterolemia develop premature atherosclerosis leading to sudden cardiac death at as early as 5 47), 13 48), or 18 22) years of age, whereas others, even in the same family as symptomatic patients, do not show any classic sign of sitosterolemia 24,28,35).
Both elevated plasma cholesterol and plant sterol levels can contribute to premature vascular disease in patients with sitosterolemia. Accumulation of plant sterols in plasma lipoproteins influences the stability of both cholesterol and plant sterols in lipoproteins, favoring the accumulation of these sterols within tissues and initiating inflammatory reactions, which may cause premature atherosclerosis 6).
Coronary plaque disruption and superimposed thrombosis are the major causes of acute myocardial infarction and sudden cardiac death 49). The composition and vulnerability of plaque, rather than its volume or the severity of stenosis, are more important for the development of thrombus-mediated acute coronary syndromes 49). Plant sterols are relatively poorly esterified by the sterol-esterifying enzyme acyl-CoA:cholesterol acyltransferase. Macrophages incubated with sitosterol-containing lipoproteins accumulated free sterols and underwent necrotic cell death, which may contribute to the formation of rupture-prone plaque 50).
Premature coronary heart disease can develop in sitosterolemic patients with normal cholesterol levels. A 16-year-old sitosterolemic girl with a normal cholesterol level was reported to have premature coronary heart disease requiring coronary bypass grafts 51), and a normocholesterolemic patient who underwent three-vessel coronary bypass surgery at the age of 29 was diagnosed with sitosterolemia afterward 10).
On the other hand, Hansel et al. 30) could not find significant signs of premature atherosclerosis in 5 patients with sitosterolemia aged 11 to 21 years, in spite of severe hypercholesterolemia as well as extremely high plant sterol levels. They suggested that the premature atherosclerosis in some patients with sitosterolemia may be due, at least in part, to mechanisms independent of elevated circulating plant sterol levels 30).
Hematologic manifestations
Rees et al. 7) revealed that stomatocytic hemolysis and macrothrombocytopenia (previously known as Mediterranean stomatocytosis or Mediterranean macrothrombocytopenia, which had been a poorly understood hematological condition) are the hematological presentation of sitosterolemia.
Stomatocytosis, hemolytic anemia, thrombocytopenia with very large platelets, splenomegaly, and abnormal bleeding can be associated with sitosterolemia 8). Because ABCG5 and ABCG8 are expressed only in the intestine and liver, acquired accumulation of circulating plant sterols and their incorporation into red blood cells (RBCs) and platelets seem to result in abnormal morphology and function 7).
Blood cells can be a main target of the toxic effect of plasma plant sterols, and sitosterolemia can manifest mainly with hematologic abnormalities 52). Three patients from a Chinese family, all of whom had suffered from severe hemolytic anemia and macrothrombocytopenia since 3 to 4 years of age and underwent splenectomy in their teens, were diagnosed with sitosterolemia in their 20s. All of these patients had increased plasma sitosterol but normal cholesterol levels 52). Thirteen sitosterolemic patients with hematologic manifestations, including 2 patients without any classical features of sitosterolemia, had been misdiagnosed with immune thrombocytopenia (ITP), Evans syndrome, or secondary ITP, with delays of 15 to 49 years between symptom onset and correct diagnosis 53). Plasma plant sterols should be analyzed in patients with unexplained hemolytic anemia with macrothrombocytopenia to avoid unnecessary splenectomy 54).
Recently, Kanaji et al. 55) identified that the bleeding abnormalities and macrothrombocytopenia associated with sitosterolemia are due to direct plant sterol incorporation into the platelet membrane, resulting in platelet hyperactivation, reduced αIIbβ3 surface expression, loss of the GPIbα-filamin A linkage, microparticle formation, and ultimately poor hemostatic function.
Diagnosis of sitosterolemia
Routine laboratory methods do not distinguish plant sterols from cholesterol, and a more accurate method such as gas chromatography-mass spectrometry (GC-MS) is required. Measurement of serum plant sterols by GC-MS or liquid chromatography-mass spectrometry is regarded as a reliable screening test for sitosterolemia, in which unequivocally increased plant sterol levels and sitosterol:cholesterol ratios are almost invariably observed 1).
Genetic confirmation can be obtained by direct sequencing of the exons and intron-exon boundaries of the ABCG5 and ABCG8 genes, each comprising 13 exons and located in a head-to-head organization on chromosome 2p21, and documenting homozygous or compound heterozygous mutations in either ABCG5 or ABCG8 3,4). Asian patients usually have mutations in ABCG5, while Caucasian patients usually have ABCG8 mutations 3,32). However, mutations in ABCG8 were reported in 3 of 8 families with hematologic manifestations of sitosterolemia in a recent Chinese study, suggesting that ABCG8 mutations are not exclusive to Caucasians 53). DNA sequencing of ABCG5/ABCG8 should be performed to rule out sitosterolemia in breastfed infants, because they can exhibit only mild elevation of plasma sitosterol levels and a normal sitosterol:cholesterol ratio 33).
In contrast to patients with homozygous FH, who are relatively refractory to dietary modification and cholesterol-lowering agents, plasma cholesterol levels in sitosterolemic patients are extremely sensitive to dietary cholesterol restriction and bile acid sequestrants 32,43,44).
The entire pathway of cholesterol biosynthesis, including hepatic hydroxymethylglutaryl coenzyme A (HMG-CoA) reductase, is exceptionally downregulated in patients with sitosterolemia 10,56). It has also been reported that stigmasterol and campesterol inhibit activation of sterol regulatory element-binding protein-2 (SREBP-2), a transcription factor involved in cholesterol biosynthesis, in cultured adrenocortical cells 57), and that stigmasterol, but not sitosterol, inhibits processing of SREBP-2, leading to reduced cholesterol synthesis in mice 58).
In nonsitosterolemic individuals, cholesterol synthesis increases after sterol depletion, limiting the effect of sterol absorption inhibitors or bile acid sequestrants 57). However, there is no such compensatory increase in cholesterol synthesis in those with sitosterolemia, resulting in dramatic reductions in plasma cholesterol levels 59). Sitosterolemia should be suspected when plasma cholesterol falls more than 40% on a low-cholesterol diet.
Sitosterolemia seems to be significantly underdiagnosed, and many of these patients likely continue to consume large amounts of plant sterols, not knowing that plant sterols are 'toxic' to them but believing that those foods are good for their health. Sitosterolemia might also be significantly underdiagnosed in children, in whom screening of lipid profiles is not universally performed. Recent guidelines recommend screening all children at 9-11 years and again at 17-21 years to find those with hypercholesterolemia 60). Some of those screened may in fact have sitosterolemia, and these patients may be distinguished by either a remarkable response to dietary modification or a poor response to statins 35).
Management of sitosterolemia
Management of sitosterolemia aims to reduce plasma plant sterol (as low as possible, although perfect control [sitosterol level <1 mg/dL] cannot be achieved) and cholesterol concentrations and to prevent or reduce xanthomas and atherosclerotic cardiovascular disease 2).
The mainstay of therapy is dietary restriction of both cholesterol and plant sterols. Foods rich in plant sterols include vegetable oils, wheat germs, nuts, seeds, and avocado, most of which are known as heart-healthy foods 2,61). Margarine, shortening, and chocolate should also be avoided. Polished rice should be taken instead of whole grains. Shellfish and seaweeds contain significant amounts of algae-derived plant sterols that are also hyperabsorbed by these patients, and they should also be avoided 62). However, a plant sterol-free diet is almost impossible to accomplish because plant sterols are found in almost every plant-based food, and a low plant sterol diet has resulted in only about a 30% reduction of plasma plant sterol levels 11,44).
Pharmacotherapy includes the sterol absorption inhibitor ezetimibe and bile acid sequestrants such as cholestyramine. Patients with sitosterolemia usually do not respond to statins because HMG-CoA reductase activity is already maximally inhibited 60).
Bile acid sequestrants inhibit the reabsorption of bile acids in the ileum and disrupt the enterohepatic circulation of bile acids. Bile acid sequestrants have been reported to reduce plasma plant sterol levels by up to 45%, although they may result in a more dramatic decrease in plasma cholesterol levels (50%-80%) and regression of xanthomas 43,44). Sitosterolemia should be considered in patients with hypercholesterolemia and/or xanthomas who show dramatic reduction of cholesterol levels or regression of xanthomas with bile acid sequestrant therapy. However, poor compliance and gastrointestinal side effects limit the use of cholestyramine.
Ezetimibe, an inhibitor of intestinal sterol absorption through its binding to NPC1L1, is currently considered the treatment of choice for sitosterolemia 63). It has been widely used to decrease serum LDL-cholesterol levels in patients with hypercholesterolemia. Ezetimibe also reduces the intestinal absorption of plant sterols, thereby also lowering plasma plant sterol levels. Ezetimibe alone or in combination with cholestyramine successfully decreased plasma cholesterol and plant sterol levels (by about 50%, although still much higher than normal values) 63), resulting in regression of xanthomas and improvement of carotid bruits and cardiac murmurs in patients with sitosterolemia 45). Long-term treatment with ezetimibe 10 mg/day was safe, tolerable, and effective in reducing plasma plant sterol concentrations in patients with sitosterolemia 61,64). Ezetimibe reduced plasma and RBC plant sterol levels while increasing platelet count and decreasing mean platelet volume, and may thereby reduce the risk of bleeding in sitosterolemia 65). Although pharmacotherapy is usually not given to children under age 10, an individual with extremely high levels of cholesterol may begin therapy earlier 66). Ezetimibe therapy also seems to be safe and effective in children with sitosterolemia, although one infant did not respond to ezetimibe therapy at 7 months of age, possibly due to an immature glucuronidation system, and finally showed improvement when ezetimibe was restarted at 2 years of age 32). Bile acid sequestrants such as cholestyramine can be added for those with an insufficient response to ezetimibe 2,63).
Arthritis and arthralgia can also be associated with sitosterolemia, and stricter management of sitosterolemia can be helpful 2).
Conclusions
Plant sterol assay should be performed in patients with normocholesterolemic xanthomas, hypercholesterolemia with an unexpectedly good response to dietary modifications or to cholesterol absorption inhibitors, hypercholesterolemia with a poor response to statins, or unexplained hemolytic anemia and macrothrombocytopenia (Table 2).
The dramatic cholesterol reduction and regression of xanthomas with proper treatment, including plant sterol restriction and a cholesterol absorption inhibitor, suggest that sitosterolemia is a controllable condition, and it is important to identify these patients and diagnose them correctly because prognosis can be improved by early diagnosis and proper management. | 2016-05-12T22:15:10.714Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "7c63fd403b69341a7ccbc913f2571e2c6c0e062f",
"oa_license": "CCBYNC",
"oa_url": "http://e-apem.org/upload/pdf/apem-21-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c63fd403b69341a7ccbc913f2571e2c6c0e062f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263796024 | pes2o/s2orc | v3-fos-license | Can Economic Theory Be Informative for the Judiciary? Affirmative Action in India via Vertical and Horizontal Reservations
Sanctioned by its constitution, India is home to the world's most comprehensive affirmative action program, where historically discriminated groups are protected with vertical reservations implemented as "set asides," and other disadvantaged groups are protected with horizontal reservations implemented as "minimum guarantees." A mechanism mandated by the Supreme Court in 1995 suffers from important anomalies, triggering countless litigations in India. Foretelling a recent reform correcting the flawed mechanism, we propose the 2SMG mechanism that resolves all anomalies, and characterize it with desiderata reflecting laws of India. Subsequently rediscovered with a high court judgment and enforced in Gujarat, 2SMG is also endorsed by Saurav Yadav v. State of UP (2020), in a Supreme Court ruling that rescinded the flawed mechanism. While not explicitly enforced, 2SMG is indirectly enforced for an important subclass of applications in India, because no other mechanism satisfies the new mandates of the Supreme Court.
Introduction
Sanctioned by its constitution, India is home to one of the world's largest affirmative action programs. Allocation of government positions and seats at publicly funded educational institutions are governed by the mandates outlined in the landmark Supreme Court judgment Indra Sawhney and others v. Union of India (1992), 1 widely known as the Mandal Commission Case. Under these mandates, a mechanism that otherwise allocates positions based on an objective merit list of candidates is amended to implement two types of affirmative action policies known as vertical reservations (VR) and horizontal reservations (HR). Of the two policies, the VR policy is envisioned as a higher-level protection policy, and, as such, it is mandated to be implemented on an "over-and-above" basis. This means that if a member of a VR-protected class is "entitled" to an open position based on merit, then she must be awarded an open position and not use up a VR-protected position. This higher-level protection policy has been largely intended for historically oppressed classes, most notably Scheduled Castes (SC), Scheduled Tribes (ST), and Other Backward Classes (OBC). The HR policy, on the other hand, is envisioned as a lower-level protection policy, and, as such, it is mandated to be implemented on a "minimum guarantee" basis. This means that any position awarded to a member of an HR-protected group counts toward HR protections.
The Supreme Court judgment Anil Kumar Gupta (1995) and Its Consequences.
In the absence of the HR policy, implementation of the VR policy is a straightforward task with a simple two-step procedure. First, open positions are awarded to individuals with the highest merit rankings (including those from VR-protected classes), and next, for each VR-protected class, positions set aside for this class are awarded to members with the highest merit rankings who have not yet received one of the open positions. We refer to this procedure as the over-and-above choice rule. Most applications in the field, however, also involve the HR policy, 2 and it is less clear how the two policies can be implemented concurrently in this more elaborate version of the problem. While the principles that guide the implementation of reservation policies are clearly laid out in Indra Sawhney (1992) when the two protective policies are implemented independently, no guidance is provided for their concurrent implementation by this important judgment. This gap was later filled by Anil Kumar Gupta v. State of U.P. (1995), another judgment of the Supreme Court, where an explicit procedure for the concurrent implementation of VR and HR policies is devised and enforced in India. 3 For the past quarter century, this judgment has served as the main reference for virtually all subsequent litigations on the concurrent implementation of VR and HR policies, of which there are thousands. This is our starting point; our original motivations in writing this paper were (i) formulating an important flaw in the procedure mandated under this judgment, (ii) documenting its adverse consequences in India, and (iii) advocating for an alternative procedure as a remedy.
1 The case is available at https://indiankanoon.org/doc/1363234/ (last accessed on 01/19/2021).
The procedure enforced under Anil Kumar Gupta (1995) first derives a tentative outcome using the over-and-above choice rule, then it makes any necessary replacements for the tentative recipients of open positions to accommodate HR protections within open positions, and finally it makes any necessary replacements for the tentative recipients of the VR-protected positions to accommodate HR protections within VR-protected positions. We refer to this procedure as the SCI-AKG choice rule. One critical mandate in this judgment, however, has introduced two related and highly consequential anomalies into the procedure, often generating unintuitive outcomes at odds with the philosophy of affirmative action, and thereby sparking thousands of litigations in India for the next 25 years. To present the scale of the resulting disarray, some of the key litigations triggered by the flawed mandate are documented in detail in Section C of the Online Appendix. 4 The root cause of the failure of the SCI-AKG choice rule boils down to its exclusion of the members of VR-protected classes from any replacements necessary to accommodate the HR protections for open positions. Thus, members of the higher-privilege general category, i.e. individuals who are not members of the VR-protected classes, are the only ones entitled to replace the tentative holders of the open positions to accommodate their HR protections. This restriction regularly created situations in India where higher-merit individuals from VR-protected classes lose their positions to lower-merit individuals from the higher-privilege general category, an anomaly we refer to as a failure of no justified envy. The same flaw also created a conflict for individuals who qualify for both types of protections, since for these individuals claiming their VR protections would mean giving up their HR protections for open positions, an anomaly we refer to as a failure of incentive compatibility. Both types of failures were originally formulated in Aygün and Bó (2016) in the context of Brazilian college admissions, although our paper is the first one to document their disruptive implications through numerous litigations and interrupted recruitment processes.
Since the root cause of the crisis is the exclusive access given to general-category individuals for HR protections within open positions, a simple and intuitive solution lies in the removal of this restriction in the SCI-AKG choice rule, thus making everyone eligible. Focusing on applications with non-overlapping horizontal reservations, where each individual qualifies for at most one HR protection, we refer to this alternative choice rule as the two-step minimum guarantee (2SMG) choice rule. This version of the problem is important both because it is widespread in the field, with persons with disabilities being the only group granted HR protections at the federal level, and also because the judgments on HR protections abstract away from the technical aspects of overlapping horizontal reservations.
A Resolution with the Supreme Court judgment Saurav Yadav (2020).
Prior to the March 2019 circulation of the first draft of our paper, the above-presented failure of the SCI-AKG choice rule had never been directly addressed by the highest court of India, despite the large-scale disarray it created in the country for 25 years. As we have emphasized earlier, this was our primary motivation in writing this paper, along with formulating the 2SMG choice rule as a possible replacement for the SCI-AKG choice rule. However, a key December 2020 judgment by a three-judge bench of the Supreme Court not only changed this situation, but also reshaped some of the questions of interest for our paper while it was under revision for this journal. The material presented in the rest of the Introduction reflects the revisions to our paper following this important judgment.
Based on arguments parallel to our analysis and using several of the high court judgments presented in Section C of the Online Appendix, the justices reached some of the same conclusions in Saurav Yadav & Ors v. State of Uttar Pradesh & Ors (2020) 5 as we had reached earlier in our paper. For our purposes, the key aspects of this judgment can be summarized as follows: (1) The axiom of no justified envy is mandated for all choice rules used in India.
(2) The SCI-AKG choice rule is rescinded due to its failure to satisfy no justified envy.
(3) As a possible replacement for the SCI-AKG choice rule, the 2SMG choice rule is endorsed, although it is not explicitly mandated. 6 (4) Clarity is brought to which positions awarded to members of VR-protected classes are to be used up from open positions, rather than the VR-protected positions.
Apart from correcting a flawed mandate from Anil Kumar Gupta (1995) with highly disruptive consequences, 7 Saurav Yadav (2020) brings clarity on the meaning of VR protections in the presence of HR protections, at a level that was never done before. The defining characteristic of the VR protections is originally formulated in Indra Sawhney (1992) as follows: It may well happen that some members belonging to, say Scheduled Castes get selected in the open competition field on the basis of their own merit; they will not be counted against the quota reserved for Scheduled Castes; they will be treated as open competition candidates.
In the absence of the HR protections, the interpretation of this formulation is straightforward. Altogether, the axioms of no justified envy, compliance with VR protections, maximal accommodation of HR protections (which means that all HR protections must be accommodated up to the number of eligible applicants), and non-wastefulness (which means that no position should remain idle while there are eligible applicants), each mandated under Saurav Yadav (2020), uniquely characterize the 2SMG choice rule for problems with non-overlapping horizontal reservations (Theorem 1). Therefore, even though the 2SMG choice rule is merely endorsed but not enforced under Saurav Yadav (2020), the mandates in this judgment indirectly enforce it for field applications with non-overlapping horizontal reservations. Some of our main contributions in this revised paper include the characterization of the 2SMG choice rule with axioms that directly formulate the mandates in Saurav Yadav (2020), and the resulting observation of what this means for India, i.e. that the 2SMG choice rule is indirectly enforced with this judgment.
5 https://www.livelaw.in/pdf_upload/pdf_upload-380856.pdf, last accessed 06/06/2021.
6 Our formulation and advocacy of the 2SMG choice rule also precedes this judgment, which we believe is the first judgment in India to formulate this choice rule.
7 The justices of the Supreme Court indicate in Saurav Yadav (2020) that the flaw in the SCI-AKG choice rule is based on a misinterpretation of Anil Kumar Gupta (1995).
Extended Analysis and Policy Advice for Overlapping Horizontal Reservations.
In contrast to field applications with non-overlapping horizontal reservations, where Saurav Yadav (2020) has a very sharp policy implication, as did its predecessors, the judgment leaves some flexibility for applications with overlapping horizontal reservations. In this version of the problem, an individual can benefit from multiple HR protections. We make our most significant conceptual and theoretical contributions for this general version of the problem.
Consider an individual who is a member of multiple groups, each of which is eligible for HR protections. For example, a woman with a disability can benefit from HR protections both for women and also for persons with disabilities. The law does not specify whether this individual accommodates the minimum guarantees for all HR protections she is qualified for, in this example both for women and for persons with disabilities, or for only one of them. We refer to the first convention as one-to-all HR matching and the second convention as one-to-one HR matching. While the law is silent on this aspect of the problem, we advocate for the one-to-one HR matching convention for two reasons. The first reason is technical: Adopting the alternative one-to-all HR matching convention introduces complementarities between individuals, which in turn renders the problem computationally hard in general and allows for multiplicities. In the above example, the admission of a man with no disability may depend on the admission of a woman with a disability. The second reason is practical: In many real-life applications in India, the number of positions is announced for each vertical category-horizontal trait pair, which automatically embeds the one-to-one HR matching convention into the problem.
Under the one-to-one HR matching convention, an additional matching problem is essentially built into the original problem, where a secondary task matches individuals to different types of HR protections to account for these protections. Fortunately, this secondary task can be formulated as a maximum bipartite matching problem, a well-studied problem in the combinatorial optimization literature. Moreover, this approach not only allows us to formulate natural and immediate extensions of all four axioms, but also allows for a natural extension of the 2SMG choice rule, the two-step meritorious horizontal (2SMH) choice rule. In our main theoretical result (Theorem 3), we extend our characterization of the 2SMG choice rule for the case of non-overlapping horizontal reservations to the 2SMH choice rule for the general case of overlapping horizontal reservations.
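To make the combinatorial structure concrete, the following Python sketch (our illustration; the function name, data structures, and example inputs are hypothetical, not drawn from any judgment or statute) computes the maximum number of HR protections that a set of individuals can accommodate under the one-to-one HR matching convention, using the standard augmenting-path algorithm for maximum bipartite matching:

```python
from typing import Dict, List, Optional, Set

def max_hr_accommodation(traits: Dict[str, Set[str]], quotas: Dict[str, int]) -> int:
    """Maximum number of HR-protected slots a set of individuals can cover
    when each individual counts toward at most one trait (one-to-one HR
    matching). traits[i] is the set of traits of individual i; quotas[t] is
    the number of HR-protected slots for trait t. Solved as a maximum
    bipartite matching via Kuhn's augmenting-path algorithm."""
    # Expand each trait t into quotas[t] distinct slots (t, 0), (t, 1), ...
    slots = [(t, k) for t, q in quotas.items() for k in range(q)]
    slot_index = {s: n for n, s in enumerate(slots)}
    holder: List[Optional[str]] = [None] * len(slots)  # slot -> individual

    def augment(i: str, visited: Set[int]) -> bool:
        # Try to place individual i on a slot of some trait she holds,
        # displacing the current holder to another slot if possible.
        for t in traits[i]:
            for k in range(quotas.get(t, 0)):
                s = slot_index[(t, k)]
                if s in visited:
                    continue
                visited.add(s)
                if holder[s] is None or augment(holder[s], visited):
                    holder[s] = i
                    return True
        return False

    return sum(1 for i in traits if augment(i, set()))

# A woman with a disability can cover either protection, but not both:
print(max_hr_accommodation(
    traits={"w1": {"women", "disability"}, "m1": {"disability"}},
    quotas={"women": 1, "disability": 1},
))  # prints 2: w1 covers "women", m1 covers "disability"
```

Expanding each trait into as many slot vertices as its quota is the standard reduction of a quota-constrained matching to one-to-one bipartite matching.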
Organization of the Rest of the Paper.
After introducing the model in Section 2, we present analysis and policy implications for problems with non-overlapping horizontal reservations in Section 3. The failure of the mechanism mandated by Anil Kumar Gupta (1995), its resolution by Saurav Yadav (2020), and the formal relation between these Supreme Court judgments and our analysis are also presented in this section. Section 4 presents an analysis of the model in its full generality with overlapping horizontal reservations, along with the related theoretical literature. We conclude with an epilogue in Section 5 and present all proofs in the Appendix. Finally, we relegate the institutional background on VR and HR policies, extensive evidence from Indian court rulings on the disruption caused by the SCI-AKG choice rule, and the equivalence of our formulation of the SCI-AKG choice rule with its original formulation in Anil Kumar Gupta (1995) to the Online Appendix.
Model and Vertical/Horizontal Reservations
There exists a finite set of individuals I competing for q identical positions. Each individual i ∈ I is in need of a single position and has a distinct merit score σ(i) ∈ R_+. 8 While individuals with higher merit scores have higher claims for a position in the absence of affirmative action policies, disadvantaged populations are protected through two types of affirmative action policies, (i) vertical reservation (VR) policies providing "higher level" VR protections, and (ii) horizontal reservation (HR) policies providing "lower level" HR protections.
Vertical Reservations.
There exists a set of reserve-eligible categories R and a general category g ∉ R. Each individual belongs to a single category in R ∪ {g}. Define the (reserve-eligible) category membership function ρ : I → R ∪ {∅} such that, for any individual i ∈ I, ρ(i) = c indicates that i is a member of the reserve-eligible category c ∈ R, and ρ(i) = ∅ indicates that i is a member of the general category g.
Given a set of individuals I ⊆ I and a reserve-eligible category c ∈ R, define I_c = {i ∈ I : ρ(i) = c} as the set of individuals in I who are members of the reserve-eligible category c ∈ R. Given a set of individuals I ⊆ I, define I_g = {i ∈ I : ρ(i) = ∅} as the set of individuals in I who are members of the general category g.
8 While students can have the same merit score in practice, tie-breaking rules are used to strictly rank them. For example, the Union Public Service Commission uses age and exam scores to break ties. See https://www.upsc.gov.in/sites/default/files/TiePrinciplesEngl-26022020-R.pdf (last accessed on 6/7/2020).
There are q_c positions exclusively set aside for the members of category c ∈ R. We refer to these positions as category-c positions. In contrast, members of the general category do not receive any special provisions under the VR policies. Therefore, the remaining q_o = q − Σ_{c∈R} q_c positions are open to all individuals; we refer to them as open-category positions, denote the open category by o, and let V = R ∪ {o} denote the set of all (vertical) categories. It is important to emphasize that, in contrast to category-c positions that are exclusively reserved for the members of category c ∈ R, open-category positions are available to all, and hence they are not exclusively reserved for the members of the general category g. Given a category v ∈ V, let I_v ⊆ I denote the set of individuals who are eligible for category-v positions.
VR protections have one important property that makes them the "higher level" affirmative action policy. Positions that are earned by the members of reserve-eligible categories without invoking the VR protections, and thus on the basis of their merit scores only, do not count against the VR-protected positions. In this sense, VR protections are implemented on an "over-and-above" basis.
Definition 3. A choice rule is a collection of functions C = (C_v)_{v∈V} such that, for any set of individuals I ⊆ I, (1) for any category v ∈ V, C_v(I) ⊆ I ∩ I_v and |C_v(I)| ≤ q_v, and (2) for any two distinct categories v, v′ ∈ V, C_v(I) ∩ C_{v′}(I) = ∅. In addition to specifying the recipients, our formulation of a choice rule also specifies the categories of their positions.
Definition 4. For any choice rule C = (C_v)_{v∈V}, the resulting aggregate choice rule C : 2^I → 2^I is given as C(I) = ∪_{v∈V} C_v(I) for all I ⊆ I. For any set of individuals, the aggregate choice rule yields the set of chosen individuals across all categories.
In the absence of horizontal reservations, which will be introduced in Section 2.3, the following three principles mandated in Indra Sawhney (1992) uniquely define a choice rule, thus making the implementation of VR policies straightforward. First, an allocation must respect inter se merit: Given two individuals from the same category, if the lower merit-score individual is awarded a position, then the higher merit-score individual must also be awarded a position. Next, VR protections must be allocated on an "over-and-above" basis; i.e., positions that can be received without invoking the VR protections do not count against VR-protected positions. Finally, subject to eligibility requirements, all positions have to be filled without contradicting the two principles above. It is easy to see that these three principles uniquely imply the following choice rule: First, individuals with the highest merit scores are assigned the open-category positions. Next, positions reserved for the reserve-eligible categories are assigned to the remaining members of these categories, again based on their merit scores. We refer to this choice rule as the over-and-above choice rule.
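For concreteness, here is a minimal Python sketch of the over-and-above choice rule just described; the function signature and data structures are illustrative assumptions rather than the paper's notation:

```python
def over_and_above(individuals, sigma, rho, q_open, q_reserved):
    """Over-and-above choice rule (no HR protections).
    individuals: list of ids; sigma: id -> merit score (higher is better);
    rho: id -> reserve-eligible category, or None for the general category;
    q_open: number of open-category positions; q_reserved: category -> quota."""
    by_merit = sorted(individuals, key=lambda i: -sigma[i])
    # Step 1: open positions go to the highest merit scores, all categories.
    chosen = {"open": by_merit[:q_open]}
    remaining = by_merit[q_open:]
    # Step 2: each category's set-aside positions go to its remaining
    # highest merit-score members.
    for c, q_c in q_reserved.items():
        chosen[c] = [i for i in remaining if rho[i] == c][:q_c]
    return chosen

# Two open positions and one position set aside for category "c":
scores = {"a": 90, "b": 80, "x": 70, "y": 60}
cats = {"a": "c", "b": None, "x": "c", "y": "c"}
print(over_and_above(list(scores), scores, cats, 2, {"c": 1}))
# {'open': ['a', 'b'], 'c': ['x']} -- a earns an open position on merit,
# so she does not use up the category-c set-aside.
```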
Horizontal Reservations within Vertical Categories.
In addition to the reserve-eligible categories in R that are associated with the higher-level VR protections, there is a finite set of traits T associated with the lower-level HR protections. Each individual has a (possibly empty) subset of traits, given by the trait function τ : I → 2^T. Each trait represents a societal disadvantage, and individuals who have this trait are provided with easier access to positions through a second type of affirmative action policy.
HR protections are provided within each vertical category. 9 For any reserve-eligible category c ∈ R and trait t ∈ T, subject to the availability of qualified individuals, a minimum of q_t^c category-c positions are to be assigned to individuals from category c with trait t. We refer to these positions as category-c HR-protected positions for trait t. Similarly, for any trait t ∈ T and subject to the availability of individuals with trait t, a minimum of q_t^o open-category positions are to be assigned to individuals with trait t. We refer to these positions as open-category HR-protected positions for trait t.
For each vertical category v ∈ V, we assume that the total number of category-v HR-protected positions is no more than the number of positions in category v. That is, for each v ∈ V, Σ_{t∈T} q_t^v ≤ q_v. We refer to HR policies where an individual can have at most one trait as non-overlapping HR protections, and HR policies where an individual can have multiple traits as overlapping HR protections. In many field applications in India, HR protections are non-overlapping. 10 Unlike this version of the problem, which is relatively less complex, analysis of the problem with overlapping HR protections introduces a number of subtleties.
In contrast to VR protections, which are provided on an "over-and-above" basis, HR protections are provided within each vertical category on a "minimum guarantee" basis. This means that positions obtained without invoking any HR protection still accommodate the HR protections. 11 Given a category v ∈ V and assuming that HR policies are non-overlapping, category-v HR protections can be implemented with the following (category-v) minimum guarantee choice rule C_mg^v (Echenique and Yenmez, 2015).
Minimum Guarantee Choice Rule C_mg^v
Given a set of individuals I ⊆ I_v,
Step 1: for each trait t ∈ T, choose all individuals in I with trait t if the number of trait-t individuals in I is less than or equal to q_t^v, and the q_t^v highest merit-score individuals in I with trait t otherwise.
Step 2: for positions unfilled in Step 1, choose unassigned individuals in I with the highest merit scores.
9 This is not a federal mandate in India but rather a formal recommendation by the Supreme Court judgment Anil Kumar Gupta (1995). The vast majority of the institutions in India follow this recommendation in implementing HR policies in this form, also called compartmentalized horizontal reservations.
10 That is in part because persons with disabilities are the only group that is explicitly granted HR protections at the federal level.
11 The official language used for the distinction between HR protections and VR protections is given in Section B.2 of the Online Appendix.
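A minimal Python sketch of the two steps above, under this section's assumption of non-overlapping traits; the data structures are illustrative, with `tau[i]` the single trait of individual i, or None:

```python
def minimum_guarantee(applicants, sigma, tau, q_total, hr_quota):
    """Category-v minimum guarantee choice rule C_mg^v, assuming each
    applicant has at most one trait (tau[i] is the trait of i, or None).
    hr_quota: trait -> minimum guarantee within the category."""
    by_merit = sorted(applicants, key=lambda i: -sigma[i])
    chosen = []
    # Step 1: fill each trait's minimum guarantee with its highest
    # merit-score holders.
    for t, q_t in hr_quota.items():
        chosen += [i for i in by_merit if tau[i] == t][:q_t]
    # Step 2: fill any remaining positions purely by merit score.
    for i in by_merit:
        if len(chosen) >= q_total:
            break
        if i not in chosen:
            chosen.append(i)
    return chosen
```

This helper is reused in the sketches of the SCI-AKG and 2SMG choice rules later in the section.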
The reason for restricting attention to problems with non-overlapping HR protections in defining this choice rule is technical. It is easy to see that the processing sequence of traits in Step 1 of the procedure becomes immaterial for this case. In contrast, the processing sequence of traits can affect the outcome under overlapping HR protections. Moreover, in this more general case, and even with the additional specification of a trait processing sequence, it is not clear whether the resulting choice rule is equally plausible for implementing the HR protections. Indeed, in Section 4.2.3 we advocate for an alternative approach in extending the minimum guarantee choice rule for problems with overlapping HR protections.
Analysis and Policy Implications with Non-Overlapping HR Protections
In this section, we present an analysis of concurrent implementation of VR and nonoverlapping HR protections. Therefore, throughout this section, each individual is assumed to have at most one trait. While Indian judgments and legislation on VR and HR policies more broadly apply to applications with overlapping HR protections as well, as we show in this section they have sharper implications for field applications with nonoverlapping HR protections. Moreover, choice rules that have been either mandated or endorsed by the Supreme Court since Indra Sawhney (1992) all abstract away from any details pertaining to overlapping HR protections. Therefore, our analysis of this more restrictive version of the model in this section has more direct policy implications in India.
SCI-AKG Choice Rule and Its Flaws.
We start our analysis by introducing the SCI-AKG choice rule that was mandated in India for 25 years until December 2020. The following definition simplifies the description of the SCI-AKG choice rule.
Definition 5.
A member of a reserve-eligible category i ∈ ∪_{c∈R} I_c is a meritorious reserved candidate if she has one of the q_o highest merit scores among all individuals in I.
Let I_m denote the set of meritorious reserved candidates. We are ready to formulate the SCI-AKG choice rule, originally introduced in the Supreme Court judgment Anil Kumar Gupta (1995) for the case of a single trait.
SCI-AKG Choice Rule C_SCI
Given a set of individuals I ⊆ I,
Step 1: open-category positions are assigned with the minimum guarantee choice rule C_mg^o applied to I ∩ (I_m ∪ I_g), i.e., to general-category individuals and meritorious reserved candidates only.
Step 2: for each reserve-eligible category c ∈ R, category-c positions are assigned with the minimum guarantee choice rule C_mg^c applied to the members of category c in I who remain unassigned.
It is important to emphasize that the formulation of the SCI-AKG choice rule given above is not the original formulation presented in Anil Kumar Gupta (1995). The original formulation is based on first tentatively allocating the positions based on the over-and-above choice rule presented in Section 2.1, and subsequently carrying out any necessary adjustments to accommodate the HR protections. We instead present a simpler formulation of the SCI-AKG choice rule, using its relation to the minimum guarantee choice rule introduced in Section 2.3. 12 It is also important to note that, while the justices formally introduced the SCI-AKG choice rule only for the case of a single trait, their formulation immediately extends to multiple traits assuming HR protections are non-overlapping. Later in Section 4.2, we show that extending the SCI-AKG choice rule to the more general version of the problem with overlapping HR protections introduces a number of subtleties, allowing for multiple generalizations of this rule.
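Using the hypothetical `minimum_guarantee` helper sketched in Section 2.3, the simplified formulation above can be expressed in a few lines of Python; this is our schematic reconstruction, not the Court's own description:

```python
def sci_akg(individuals, sigma, rho, tau, q_open, open_hr, q_res, res_hr):
    """SCI-AKG choice rule: only general-category individuals and
    meritorious reserved candidates (I_m and I_g) compete for open
    positions, including the open-category HR-protected slots."""
    by_merit = sorted(individuals, key=lambda i: -sigma[i])
    # Meritorious reserved candidates: reserve-eligible individuals among
    # the q_open highest merit scores overall.
    meritorious = {i for i in by_merit[:q_open] if rho[i] is not None}
    pool = [i for i in individuals if rho[i] is None or i in meritorious]
    chosen = {"open": minimum_guarantee(pool, sigma, tau, q_open, open_hr)}
    for c, q_c in q_res.items():
        rest = [i for i in individuals if rho[i] == c and i not in chosen["open"]]
        chosen[c] = minimum_guarantee(rest, sigma, tau, q_c, res_hr.get(c, {}))
    return chosen
```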
We next show that the SCI-AKG choice rule has two important flaws even for the simple case with a single trait.
12 The original description of the SCI-AKG choice rule in the Supreme Court judgments Anil Kumar Gupta (1995) and Rajesh Kumar Daria (2007), and the result that shows the outcome equivalence of this formulation, can be seen in Section B.3 of the Online Appendix.
Therefore, the set of individuals who are each awarded a position under the SCI-AKG choice rule is C_SCI(I) = {m_1^g, w_1^g, m_1^c}. There are two troubling aspects of this outcome. The first issue is that, even though the category-c woman w_1^c has a higher merit score than the general-category woman w_1^g, the latter receives a position while the former does not. That is, contrary to the philosophy of affirmative action, a lower merit-score individual from the (unprotected) general category receives a position at the expense of a higher merit-score individual from a protected category. The second issue is that, since she is the highest merit-score woman among all applicants, woman w_1^c can receive the open-category HR-protected position for women simply by not declaring her eligibility for the VR-protected position for category c.
The shortcomings of the SCI-AKG choice rule presented in Example 1 are not merely abstract possibilities, but rather are highly visible flaws that have been responsible for thousands of litigations that disrupt recruitment processes throughout India, as documented in Section C.1 of the Online Appendix. The root cause of both anomalies is the restriction of the open-category HR protections to general-category individuals only. This restriction creates an immediate (and rather obvious) conflict for individuals who qualify for both VR and HR protections: With the exception of meritorious reserved candidates, any such individual loses her qualifications for open-category HR protections by claiming her VR protections. Consequently, this conflict reflects itself in the following two deficiencies that go against the philosophy of affirmative action: a higher merit-score individual from a VR-protected class may lose a position to a lower merit-score individual from the general category, and an individual may receive a position only by withholding her VR privileges. These deficiencies motivate our axioms of no justified envy and incentive compatibility.
The following HR-maximality function plays a key role not only in our formulation of the axiom of no justified envy, but also in our formulation of two additional axioms introduced later in this section. Moreover, the extension of our analysis to the more general model with overlapping HR protections, presented later in Section 4, also critically depends on the extension of this function.
Definition 6. Given a vertical category v ∈ V, the (category-v) HR-maximality function n_v : 2^{I_v} → N is defined as, for any I ⊆ I_v (recall that each individual has at most one trait in this section),
n_v(I) = Σ_{t∈T} min{ |{i ∈ I : t ∈ τ(i)}|, q_t^v }.
Observe that, for any set of individuals I who are eligible for category-v positions, the category-v HR-maximality function n_v gives the maximum number of category-v HR-protected positions that can be awarded. 13
Definition 7. A choice rule C = (C_v)_{v∈V} satisfies no justified envy if, for every I ⊆ I, v ∈ V, i ∈ C_v(I), and j ∈ (I ∩ I_v) \ C(I),
σ(j) > σ(i) implies n_v((C_v(I) \ {i}) ∪ {j}) < n_v(C_v(I)).
This axiom requires that, given two individuals who are both eligible for a position in a category, the lower merit-score individual can receive a position at the expense of the higher merit-score individual only if not doing so strictly decreases the number of HR protections that are accommodated in that category. Therefore, under this axiom, increasing the utilization of HR protections in a category can be the only reason to award a position at this category to a lower merit-score individual at the expense of an unassigned higher merit-score eligible individual.
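Under this section's non-overlapping assumption, both definitions are easy to operationalize; the sketch below (our illustration, mirroring the reconstructed formulas above) computes n_v as a sum of per-trait minimums and tests a candidate violation of no justified envy:

```python
def hr_max(group, tau, hr_quota):
    """n_v: number of category-v HR protections accommodated by `group`,
    assuming each individual has at most one trait (tau[i] or None)."""
    return sum(min(sum(1 for i in group if tau[i] == t), q_t)
               for t, q_t in hr_quota.items())

def justified_envy(i, j, chosen, sigma, tau, hr_quota):
    """True iff unassigned eligible j justifiably envies assigned i:
    j has the higher merit score, and swapping j in for i would not
    lower the number of accommodated HR protections."""
    swapped = [k for k in chosen if k != i] + [j]
    return (sigma[j] > sigma[i]
            and hr_max(swapped, tau, hr_quota) >= hr_max(chosen, tau, hr_quota))
```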
We next formulate the axiom of incentive compatibility, first introduced by Aygün and Bó (2016). In India, individuals are not required to declare their reserve-eligible privileges.
Definition 9. A choice rule C is incentive compatible if, for every I ⊆ I, any individual i ∈ I who is selected from I under the aggregate choice rule C by withholding some of her reserve-eligible privileges is also selected from I under C by declaring all her reserve-eligible privileges.
Under a choice rule that satisfies this axiom, privileges that are meant to provide positive discrimination would never produce the opposite effect and thus hurt an individual upon declaring eligibility. Failure of incentive compatibility is implausible both from a normative perspective, since it is against the philosophy of affirmative action, and also from a strategic perspective, since it may force individuals to withhold their privileges. As we document clear evidence in Section C.1.2 of the Online Appendix, it also creates one additional difficulty in India.
Eligibility for VR protections typically depends on an individual's caste membership. While this information is supposed to be private information, it can often be inferred by the central planner due to various indications such as the individual's last name. A central planner can also obtain this information through documents such as a diploma. Hence, eligibility for VR protections may not be truly private information, and the lack of incentive compatibility of a choice rule may enable a malicious central planner to exploit this information to deny an applicant her open-category HR protections. As documented in Section C.1.2 of the Online Appendix, this type of misconduct not only has been widespread in parts of India, but it even appears to be centrally organized by the local governing bodies in some of its jurisdictions.
An Easy Fix: 2SMG Choice Rule.
Apart from its simplicity, an additional advantage of formulating the SCI-AKG choice rule using its relation to the minimum guarantee choice rule is that, unlike its original formulation that obscures a possible remedy, our equivalent formulation suggests an easy fix. Both anomalies of the SCI-AKG choice rule are caused by the exclusive access given to the general-category individuals for open-category HR protections. This restriction reflects itself in our formulation of the SCI-AKG choice rule during the derivation of the open-category assignments through the formula C_SCI^o(I) = C_mg^o(I ∩ (I_m ∪ I_g)). Observe that, instead of running the choice rule C_mg^o for the set of individuals I_m ∪ I_g, running it for the set of all individuals I provides us with an immediate and intuitive fix. We refer to this alternative mechanism as the two-step minimum guarantee (2SMG) choice rule.
Two-Step Minimum Guarantee (2SMG) Choice Rule
Given a set of individuals I ⊆ I,
Step 1: open-category positions are assigned with the minimum guarantee choice rule C_mg^o applied to all individuals in I.
Step 2: for each reserve-eligible category c ∈ R, category-c positions are assigned with the minimum guarantee choice rule C_mg^c applied to the members of category c in I who remain unassigned.
Since the SCI-AKG choice rule is formally introduced in Anil Kumar Gupta (1995) for the case of a single trait, and in particular when HR protections are non-overlapping, it is best to consider the 2SMG choice rule for the model with non-overlapping HR protections only. 14 As one would naturally expect, replacing the SCI-AKG choice rule with the 2SMG choice rule results in a weakly less favorable outcome for members of the general category. The comparison for members of reserve-eligible categories is less straightforward, because in addition to the VR-protected positions, these individuals also compete for the open positions. However, assuming sufficient demand at each reserve-eligible category, replacing the SCI-AKG choice rule with the 2SMG choice rule results in a weakly more favorable outcome in aggregate for members of the reserve-eligible categories.
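Relative to the SCI-AKG sketch above, the fix changes a single line: the open-category step now runs over all individuals. A minimal sketch, again reusing the hypothetical `minimum_guarantee` helper:

```python
def two_step_mg(individuals, sigma, rho, tau, q_open, open_hr, q_res, res_hr):
    """2SMG choice rule: identical to sci_akg above, except that *all*
    individuals compete for open-category positions and their
    HR-protected slots."""
    chosen = {"open": minimum_guarantee(individuals, sigma, tau, q_open, open_hr)}
    for c, q_c in q_res.items():
        rest = [i for i in individuals if rho[i] == c and i not in chosen["open"]]
        chosen[c] = minimum_guarantee(rest, sigma, tau, q_c, res_hr.get(c, {}))
    return chosen
```

Running `sci_akg` and `two_step_mg` on the same inputs makes the difference in open-category assignments immediate.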
Proposition 1. For every I ⊆ I, |C_2SMG(I) ∩ I_g| ≤ |C_SCI(I) ∩ I_g|, and, assuming |I_c| ≥ q_o + q_c for each reserve-eligible category c ∈ R, |C_2SMG(I) ∩ (∪_{c∈R} I_c)| ≥ |C_SCI(I) ∩ (∪_{c∈R} I_c)|.
The Demise of the SCI-AKG Choice Rule and the Rise of the 2SMG Choice Rule.
In a rather unexpected development and while this paper was under revision for this journal, in Saurav Yadav (2020) a three-judge bench of the Supreme Court declared that the SCI-AKG choice rule is a product of misinterpretation of the Court's earlier judgments. Referring to the failure of the SCI-AKG choice rule to satisfy no justified envy as an "incongruity," the justices annulled this mechanism, since it can result in "irrational" results. Importantly, the same judgment also endorsed the 2SMG choice rule as a possible replacement for the abandoned SCI-AKG choice rule. While the justices have not mandated the 2SMG choice rule in Saurav Yadav (2020), they mandated that any choice rule adopted in India satisfy the axiom of no justified envy and further brought clarity for one additional subtle aspect of the HR protections presented in Section 3.4. Importantly, the 2SMG choice rule is the only mechanism that satisfies these new mandates together with those from Indra Sawhney (1992) in applications with non-overlapping HR protections. We next present this significant implication of Saurav Yadav (2020), which is not observed in this important judgment.
The Implicit Mandate of the 2SMG Choice Rule Under Saurav Yadav (2020).
We next formulate three additional axioms, the first of which was originally mandated by Indra Sawhney (1992) and maintained by Saurav Yadav (2020), whereas the latter two were only recently mandated by Saurav Yadav (2020) (as in the case of the no justified envy axiom formulated in Section 3.1) at the strength formulated below.
Definition 10. A choice rule C = (C^ν)_{ν∈V} is non-wasteful if, for every I ⊆ I, v ∈ V, and j ∈ (I ∩ I_v) \ C(I), |C^v(I)| = q^v.

That is, if an individual j is declined a position from each one of the categories (thus remaining unmatched) while there is an idle position at some category v ∈ V, then it must be the case that individual j is not eligible for a position at category v. This mild efficiency axiom has been mandated in India since Indra Sawhney (1992).
Definition 11. A choice rule C = (C^ν)_{ν∈V} maximally accommodates HR protections if, for every I ⊆ I, v ∈ V, and j ∈ (I ∩ I_v) \ C(I),

n^v(C^v(I) ∪ {j}) = n^v(C^v(I)).

In words, an individual who remains unassigned should not be able to increase the utilization of HR protections at any category where she has eligibility, if she were instead assigned a position in this category. The only reason this axiom was not mandated in India prior to Saurav Yadav (2020) is that, under the previous interpretation of Anil Kumar Gupta (1995), members of reserve-eligible categories were considered ineligible for open-category HR protections. This restriction, which has been the root cause of the controversies involving the SCI-AKG choice rule, was revoked by Saurav Yadav (2020), and consequently the axiom of maximal accommodation of HR protections is mandated in its stronger form as formulated above.
Definition 12. A choice rule C = (C^ν)_{ν∈V} complies with VR protections if, for every I ⊆ I, c ∈ R, and i ∈ C^c(I), three conditions hold. Here the first two conditions formulate the idea of a vertical reservation à la Indra Sawhney (1992), and they are directly implied by the concept of "over-and-above": for an individual i to receive a position set aside for a reserve-eligible category (thereby not receiving an open position), it must be the case that each open position is either assigned to a higher merit-score individual j, or to an individual j whose selection instead of i increases the utilization of open-category HR protections. The third condition additionally requires that a member of a reserve-eligible category who can improve the utilization of open-category HR protections shall not use up a VR-protected position. Importantly, this third condition is an implication of another mandate in Saurav Yadav (2020), and therefore this judgment enforces the axiom of compliance with VR protections in its stronger form as formulated above. 15 We are ready to present our first main result, one that has important and previously unknown policy implications for India.
Theorem 1. Suppose each individual has at most one trait. A choice rule (1) maximally accommodates HR protections, (2) satisfies no justified envy, (3) is non-wasteful, and (4) complies with VR protections if, and only if, it is the 2SMG choice rule C^{2s}_{mg}.
Prior to its endorsement by the three-judge bench of the Supreme Court in Saurav Yadav (2020), the 2SMG choice rule was introduced by the justices of the High Court of Gujarat in Tamannaben Ashokbhai Desai (2020), an August 2020 judgment that also mandated the 2SMG choice rule in the state of Gujarat. 16 However, while this choice rule is merely endorsed and not explicitly mandated by Saurav Yadav (2020) throughout India, our first main result in Theorem 1 implies that this important judgment has implicitly mandated this mechanism in field applications with non-overlapping HR protections.
General Analysis and Policy Recommendations with Overlapping HR Protections
To the best of our knowledge, the judgments on the implementation of HR policies in India largely abstract away from any technical complications due to overlapping HR protections. Since this more general version of the problem is fairly common in the field, in this section we extend our analysis to the model with overlapping HR protections. This version of the problem, however, introduces a subtle but critical technical consideration that allows for at least two approaches to generalize our model. Hence, before presenting an analysis of concurrent implementation of VR and overlapping HR protections, we first elaborate on this consideration and justify the modeling choice we make for our generalization.
4.1. One-to-One vs. One-to-All HR Matching. Whether horizontal reservations are overlapping or not, an individual loses her open-category HR protections upon declaring her VR protections under the Supreme Court judgment Anil Kumar Gupta (1995). Therefore, the main flaws of the SCI-AKG choice rule, originally defined for a single trait, carry over to any possible generalization with overlapping HR protections. For this more general and complex case, however, one technical and subtle aspect of the implementation of HR protections has been left unlegislated and remains at the discretion of the central planner. The law is silent on whether the admission of an individual with multiple traits accommodates the minimum guarantee requirements for all her traits or only for one of her traits. For example, suppose there is one HR-protected position for women and one HR-protected position for persons with disabilities. If a woman with a disability is admitted, the law does not specify whether she is to accommodate the minimum guarantee requirements both for women and also for persons with disabilities, or only for one of these two protected groups.

15 See Section B.4 of the Online Appendix for this important mandate in Saurav Yadav (2020). 16 Our introduction and advocacy of the 2SMG choice rule predates both of these important judgments.
In our extension, we focus on the second convention of implementing the HR protections, and thus assume that an individual counts toward the minimum guarantee requirement for only one of her traits upon admission. We refer to this convention of implementing HR protections as one-to-one HR matching, and the alternative convention (where an individual counts toward the minimum guarantee requirements for all her traits upon admission) as one-to-all HR matching. There are two reasons for this important modeling choice.
The first reason is technical. The alternative convention of one-to-all HR matching introduces complementarities between individuals, making their admissions potentially contingent on each other. For example, if there is one HR-protected position for women and one HR-protected position for persons with disabilities, the admission of a man without a disability may depend on the admission of a woman with a disability who can accommodate the HR protections for both protected groups. This complementarity, in turn, not only renders the derivation of feasible groups of individuals computationally hard, but it also makes any possible solution technically less elegant. 17 In contrast, our adopted convention of one-to-one HR matching enables a fairly clean and computationally simple solution, as we present later in this section.
The second reason is practical. While either convention appears to be allowed by the Indian judgments and legislation, we have been unable to find any application with overlapping HR protections where the allocation rules clearly specify (or imply) the adoption of the one-to-all HR matching convention. In contrast, in many field applications, the central planner announces the number of positions for each category-trait pair, 18 which implicitly implies that they adopt the one-to-one HR matching convention. 19
4.2. Single-Category Analysis with Overlapping HR Protections.
Since HR policies are implemented within vertical categories, we start our analysis with the simple case of a single category. This version of the problem also relates to practical applications other than our main application in India, such as the allocation of K-12 public school seats in Chile where there are overlapping HR protections (Correa et al., 2019).
Throughout Section 4.2, we fix a category v ∈ V.
4.2.1. The Case Against a Fixed Processing Sequence of Traits. The 2SMG choice rule, introduced in Section 3.2, is not well-defined in problems with overlapping HR protections, because, for any vertical category v ∈ V, the outcome of the category-v minimum guarantee choice rule may depend on the processing sequence of traits. Therefore, it may be tempting to resolve this multiplicity by simply specifying a processing sequence of traits for each vertical category as an additional list of parameters of the choice rule. However, we caution against this (admittedly compelling) generalization, for it may introduce additional flaws into the system.
Example 2.
There is one category (say the open category), three individuals i_1, i_2, i_3, and two positions. There are two traits t_1, t_2, with one HR-protected position each. Individual i_1 has both traits, individual i_2 has no trait, and individual i_3 has trait t_1 only. Individuals are merit ranked as σ(i_1) > σ(i_2) > σ(i_3). We next generate the outcome of the (open-category) minimum guarantee choice rule for both processing sequences of the two traits, first by processing the trait-t_1 minimum guarantee prior to the trait-t_2 minimum guarantee, and subsequently by processing them in the reverse order.
Trait t_1 first, trait t_2 next: The highest merit-score individual with trait t_1 is i_1; she receives a position, accommodating the minimum guarantee for trait t_1. No remaining individual has trait t_2; therefore only individual i_1 receives a position in Step 1. The highest merit-score remaining individual, i_2, receives the second position in Step 2. The set of selected individuals is {i_1, i_2}, and only the trait-t_1 minimum guarantee is accommodated under the first processing sequence of traits.
Trait t_2 first, trait t_1 next: The highest merit-score individual with trait t_2 is i_1; she receives a position, accommodating the minimum guarantee for trait t_2. Among the remaining individuals, the highest merit-score individual with trait t_1 is i_3; she receives a position, accommodating the minimum guarantee for trait t_1. No position remains, and therefore the set of selected individuals is {i_1, i_3}. Minimum guarantees for both traits are accommodated under the second processing sequence of traits. Example 2 shows that: (1) the outcome of the minimum guarantee choice rule, in general, depends on the processing sequence of traits, and (2) for some processing sequences of traits, it may accommodate fewer than the maximum possible number of HR protections.
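The order dependence in Example 2 is easy to reproduce in code. The sketch below runs a fixed-sequence minimum guarantee rule for both trait orders; the helper name and the tuple layout are our own illustration.

```python
# Reproducing Example 2: a fixed processing sequence of traits makes the
# minimum guarantee rule order-dependent.

def fixed_sequence_mg(people, q, trait_quota, order):
    chosen = []
    for t in order:                                  # Step 1, trait by trait
        holders = [p for p in people
                   if t in p[2] and p not in chosen][:trait_quota[t]]
        chosen += holders
    for p in people:                                 # Step 2: fill by merit
        if len(chosen) >= q:
            break
        if p not in chosen:
            chosen.append(p)
    return [p[0] for p in chosen[:q]]

# (name, merit, traits), already sorted in decreasing merit order.
people = [("i1", 3, {"t1", "t2"}), ("i2", 2, set()), ("i3", 1, {"t1"})]
quota = {"t1": 1, "t2": 1}
print(fixed_sequence_mg(people, 2, quota, ["t1", "t2"]))   # ['i1', 'i2']
print(fixed_sequence_mg(people, 2, quota, ["t2", "t1"]))   # ['i1', 'i3']
```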
Essentially, Example 2 shows that a fixed processing sequence of traits may result in a denial of HR protections that could otherwise be avoided. Our next example reveals another problematic implication of implementing the minimum guarantee choice rule under a fixed processing sequence of traits.
Example 3. There is one category (say the open category), four individuals i_1, i_2, i_3, i_4, and three positions. There are two traits t_1, t_2, with one HR-protected position each. Individual i_1 has both traits, individual i_2 has no trait, individual i_3 has only trait t_1, and individual i_4 has only trait t_2. Individuals are merit ranked as σ(i_1) > σ(i_2) > σ(i_3) > σ(i_4). We next generate the outcome of the (open-category) minimum guarantee choice rule for both processing sequences of the two traits, first by processing the trait-t_1 minimum guarantee prior to the trait-t_2 minimum guarantee, and subsequently by processing them in the reverse order.
Trait t_1 first, trait t_2 next: The highest merit-score individual with trait t_1 is i_1; she receives a position, accommodating the minimum guarantee for trait t_1. Among the remaining individuals, the one with the highest merit score with trait t_2 is i_4; she receives a position, accommodating the minimum guarantee for trait t_2 and finalizing Step 1. The last position is assigned in Step 2 to the highest merit-score remaining individual, i_2, and therefore the set of selected individuals is {i_1, i_2, i_4} under the first processing sequence of traits.
Trait t_2 first, trait t_1 next: The highest merit-score individual with trait t_2 is i_1; she receives a position, accommodating the minimum guarantee for trait t_2. Among the remaining individuals, the one with the highest merit score with trait t_1 is i_3; she receives a position, accommodating the minimum guarantee for trait t_1 and finalizing Step 1. The last position is assigned in Step 2 to the highest merit-score remaining individual, i_2, and therefore the set of selected individuals is {i_1, i_2, i_3} under the second processing sequence of traits.
Example 3 reveals that, depending on the processing sequence of traits, the outcome of the minimum guarantee choice rule may admit lower merit-score individuals at the expense of higher merit-score ones without affecting adherence to the horizontal reservation policies. In Example 3, when the merit-based outcome {i_1, i_2, i_3} already accommodates the HR protections, there is clearly no reason to select a less meritorious group.
These two examples not only guide us on adjusting our axioms to account for overlapping HR protections, but they also motivate the meritorious horizontal choice rule, introduced in Section 4.2.3, as a natural extension of the 2SMG choice rule.
4.2.2. HR Graph and the Generalized HR-Maximality Function.
In contrast to the version of our model with non-overlapping HR protections, where maximizing the accommodation of HR protections is a straightforward task, doing the same for the general version of the model with overlapping HR protections requires embedding a maximum trait-matching procedure within each category. Therefore, we rely on the following construction to generalize our HR-maximality function, which we will use (1) to extend our axioms initially presented in Section 3 for the model with non-overlapping HR protections, and (2) to generalize the 2SMG choice rule for the model with overlapping HR protections in a way that escapes the shortcomings presented in Examples 2 and 3.
Given a category v ∈ V and a set of individuals I ⊆ I_v, construct the following two-sided graph, which we refer to as the (category-v) HR graph: the vertices on one side are the individuals in I; the vertices on the other side are the category-v HR-protected positions in H_v; and there is an edge between an individual i ∈ I and an HR-protected position for trait t if and only if t ∈ τ(i).
Definition 14. Given a category v ∈ V and a set of individuals I ⊆ I_v, a trait-matching of individuals in I with HR-protected positions in H_v has maximum cardinality in a (category-v) HR graph if there exists no other trait-matching that assigns a strictly higher number of HR-protected positions to individuals.
Let n^v(I) denote the maximum number of category-v HR-protected positions that can be assigned to individuals in I. 20 Observe that the function n^v generalizes the category-v HR-maximality function presented in Definition 6 for the model with non-overlapping HR protections to the more general version of the model with overlapping HR protections (under the convention of one-to-one HR matching).

Remark 1. All our axioms in Section 3 are extended to our more general model with overlapping HR protections by simply replacing the simpler version of the HR-maximality function given in Definition 6 with the generalized version.
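Computing n^v amounts to finding a maximum matching in the bipartite HR graph. The sketch below uses the standard augmenting-path (Kuhn) algorithm for maximum bipartite matching; the function name n_v and the data layout are our own illustration.

```python
# n^v as a maximum bipartite matching: individuals on one side, the
# HR-protected positions (one node per protected seat) on the other,
# with an edge whenever the individual has the position's trait.

def n_v(trait_sets, quotas):
    """trait_sets: list of per-individual trait sets; quotas: trait -> seats."""
    positions = [t for t, qt in quotas.items() for _ in range(qt)]
    match = {}                            # position index -> individual index

    def augment(i, seen):
        for p, t in enumerate(positions):
            if t in trait_sets[i] and p not in seen:
                seen.add(p)
                # Take a free position, or re-route its current holder.
                if p not in match or augment(match[p], seen):
                    match[p] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(trait_sets)))

# Example 3's individuals: both minimum guarantees can be met, so n_v = 2.
print(n_v([{"t1", "t2"}, set(), {"t1"}, {"t2"}], {"t1": 1, "t2": 1}))  # 2
```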
The following terminology is useful for our generalization of the 2SMG choice rule.
4.2.3. Meritorious Horizontal Choice Rule.
We are ready to introduce a single-category choice rule that escapes the shortcomings presented in Examples 2 and 3. The main innovation in this choice rule is the optimization it carries out to determine who is to account for each minimum guarantee when some of the individuals can account for one guarantee or another due to the multiple traits they have. Intuitively, this choice rule exploits the flexibility in trait-matching in order to accommodate the HR protections with higher merit-score individuals.
Given a category v ∈ V and a set of individuals I ⊆ I_v, the outcome of this choice rule is obtained using the following procedure.

Step 1: Considering the individuals in I one at a time in decreasing order of merit score, select an individual if and only if her selection increases the maximum number of category-v HR-protected positions that can be assigned through a trait-matching, i.e., if her addition increases the value of n^v for the set of individuals selected so far, subject to at most q^v individuals being selected.

Step 2: Fill any remaining positions with the highest merit-score individuals in I who are not yet selected.

When the number of individuals is at most q^v, this procedure selects all individuals. Otherwise, if there are more than q^v individuals, it chooses a set of exactly q^v individuals.
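A direct (if inefficient) transcription of the two steps, reusing the n_v sketch above: Step 1 keeps an individual, in decreasing merit order, exactly when she raises the achievable trait-matching; Step 2 tops up by merit. All names are our own, and recomputing the matching from scratch at every step is for clarity, not efficiency.

```python
# Sketch of the meritorious horizontal choice rule for one category.
# people: list of (merit, trait_set); q: category capacity; quotas as in n_v.

def meritorious_horizontal(people, q, quotas):
    ranked = sorted(people, key=lambda p: -p[0])
    step1 = []
    for p in ranked:                      # Step 1: greedy on the matching size
        if len(step1) < q and \
           n_v([s for _, s in step1 + [p]], quotas) > \
           n_v([s for _, s in step1], quotas):
            step1.append(p)
    chosen = list(step1)
    for p in ranked:                      # Step 2: fill leftovers by merit
        if len(chosen) >= q:
            break
        if p not in chosen:
            chosen.append(p)
    return chosen

# Example 3: returns the merit-based set {i1, i2, i3}, unlike the bad
# fixed-sequence outcome {i1, i2, i4}.
print(meritorious_horizontal(
    [(4, {"t1", "t2"}), (3, set()), (2, {"t1"}), (1, {"t2"})], 3,
    {"t1": 1, "t2": 1}))
```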
4.2.4. Single-Category Results with Overlapping HR Protections. We next present two single-category results under overlapping HR protections, which suggest that the case for the meritorious horizontal choice rule is especially strong in this framework.
Justifying the naming of this choice rule, our next result shows that the meritorious horizontal choice rule C^v_M always selects higher merit-score individuals compared to other choice rules that maximally accommodate HR protections.

Proposition 2. For every I ⊆ I_v, the outcome C^v_M(I) of the meritorious horizontal choice rule Gale dominates the outcome of any category-v choice rule that maximally accommodates HR protections.

Theorem 2. A category-v choice rule maximally accommodates HR protections, satisfies no justified envy, and is non-wasteful if, and only if, it is the meritorious horizontal choice rule C^v_M.

We next present our main characterization result, extending our analogous characterization of the 2SMG choice rule under non-overlapping HR protections to its generalization, the 2SMH choice rule, under overlapping HR protections.

Theorem 3. A choice rule (1) maximally accommodates HR protections, (2) satisfies no justified envy, (3) is non-wasteful, and (4) complies with VR protections if, and only if, it is the 2SMH choice rule C^{2s}_M.

In addition to being the only choice rule that satisfies each of the four axioms in Theorem 3, our proposed 2SMH choice rule C^{2s}_M also satisfies the axiom of incentive compatibility defined in Section 3.1.
Proposition 3. The 2SMH choice rule C^{2s}_M satisfies incentive compatibility.

4.4. Related Literature. Our theoretical analysis of reservation policies differs from its predecessors in two ways: (1) the concurrent implementation of VR and HR protections, and (2) the potentially overlapping structure of HR protections.
While there is a rich literature on affirmative action policies in India and elsewhere, our paper is the first one to formally analyze vertical and horizontal reservation policies when they are implemented concurrently.
There are a number of recent papers on reservation policies, most in the context of school choice. Abdulkadiroglu and Sönmez (2003) study affirmative action policies that limit the number of admitted students of a given type through hard quotas. While subsequent work in this literature, including Kojima (2012), is closely related to ours, its analysis is independent of ours: not only are horizontal reservations assumed away altogether in these papers, but their analyses also largely abstract away from the legal requirements in India. 22 In contrast, the presence of horizontal reservations is of key importance for our analysis, which is built on Indian legislation. The Brazilian affirmative action application studied by Aygün and Bó (2016) relates to ours in that it also involves multi-dimensional reservation policies, but unlike our model, their application is a special case of Kominers and Sönmez (2016). There is, however, one important element in our paper that directly builds on Aygün and Bó (2016): the two desiderata that play an important role in our proposed reform in India, no justified envy and incentive compatibility, were originally introduced by Aygün and Bó (2016). Evidence from aggregate data suggesting that the presence of justified envy is widespread in Brazil is also presented in that paper. As in Aygün and Bó (2016), we also present extensive evidence of justified envy in the field, but in addition we document the large-scale disruption this anomaly creates in the field. Other, less related papers on reservation policies include Westkamp (2013). As shown in Examples 2 and 3, approaches based on a fixed processing sequence of traits result in the limitations presented in Section 4.2. Building on the literature in matroid theory, we overcome these difficulties with the meritorious horizontal choice rule. More specifically, Proposition 2 and Theorem 2 are conceptually related to abstract results in matroid theory. Proposition 2 can be seen as a generalization of a result in Gale (1968), which shows that the outcome of the greedy algorithm "dominates" any independent set of a matroid. In our appendix, we refer to this domination relation as "Gale domination." The first step of our meritorious horizontal choice rule corresponds to the greedy algorithm defined on an adequately defined matroid, and Proposition 2 shows that this choice rule Gale dominates any choice rule that maximally accommodates HR protections.
Epilogue: Life Imitates Science with the December 2020 Supreme Court Judgment Saurav Yadav v State of Uttar Pradesh (2020)

As our paper was under revision for this journal, a December 2020 Supreme Court judgment in Saurav Yadav v State of Uttar Pradesh (2020) became headline news in India. 23 Using arguments parallel to our analysis presented in Section 3 and the evidence we documented from high court cases presented in Section C.1.1 of the Online Appendix, the justices ruled that (1) all allocation rules for public recruitment are federally mandated to satisfy no justified envy, and thereby (2) the SCI-AKG choice rule, mandated for 25 years, is rescinded.
Using several of the same judgments we present in Section C.1.1 of the Online Appendix, the justices also highlighted the inconsistencies between several high court judgments in relation to the desiderata we formulated as the axiom of no justified envy. The justices also declared that while the "first view," which enforces no justified envy through the high court judgments of Rajasthan, Bombay, Gujarat, and Uttarakhand, is "correct and rational," the "second view," which allows for justified envy through the high court judgments of Allahabad and Madhya Pradesh, is not. 24

While the axiom of no justified envy became federally enforced with Saurav Yadav v State of Uttar Pradesh (2020), unlike in Anil Kumar Gupta (1995), no explicit procedure is mandated with this new Supreme Court ruling. Two points, however, are important to emphasize in this regard. The first one is that, prior to Saurav Yadav (2020), the 2SMG choice rule became mandated in the state of Gujarat with the August 2020 high court judgment Tamannaben Ashokbhai Desai v. Shital Amrutlal Nishar (2020). 25 While the justices of the Supreme Court have not enforced any specific rule in their December 2020 judgment, they endorsed the 2SMG choice rule given in Tamannaben Ashokbhai Desai v. Shital Amrutlal Nishar (2020):

36. Finally, we must say that the steps indicated by the High Court of Gujarat in para 56 of its judgment in Tamannaben Ashokbhai Desai contemplate the correct and appropriate procedure for considering and giving effect to both vertical and horizontal reservations. The illustration given by us deals with only one possible dimension. There could be multiple such possibilities. Even going by the present illustration, the first female candidate allocated in the vertical column for Scheduled Tribes may have secured higher position than the candidate at Serial No.64. In that event said candidate must be shifted from the category of Scheduled Tribes to Open / General category causing a resultant vacancy in the vertical column of Scheduled Tribes. Such vacancy must then enure to the benefit of the candidate in the Waiting List for Scheduled Tribes - Female. The steps indicated by Gujarat High Court will take care of every such possibility. It is true that the exercise of laying down a procedure must necessarily be left to the concerned authorities but we may observe that one set out in said judgment will certainly satisfy all claims and will not lead to any incongruity as highlighted by us in the preceding paragraphs.

24 It is important to emphasize that, prior to this ruling, the second view (now deemed incorrect and irrational) was in line with the SCI-AKG choice rule, whereas the first view (now deemed correct and rational) deviated from the previously mandated choice rule. 25 The mandated choice rule in Gujarat is described for a single group of beneficiaries (women) for horizontal reservations under this High Court ruling. See Section B.5 in the Online Appendix for the description of the procedure in Tamannaben Ashokbhai Desai (2020).
Since both the Supreme Court's and the Gujarat High Court's judgments abstract away from any issues relating to overlapping horizontal reservations, these rulings are parallel to our recommendations in Section 3. There is, however, a potentially misleading aspect in the last sentence of the above quote from Saurav Yadav (2020), which brings us to our second point.
Apart from enforcing the axiom of no justified envy and rescinding the SCI-AKG choice rule, Saurav Yadav (2020) also brought clarity to a subtle aspect of the implementation of vertical reservations in the presence of horizontal reservations. When the concept of vertical reservations was originally introduced in Indra Sawhney (1992), it was left unspecified what it means to be selected in the open competition field on the basis of one's own merit in the presence of HR protections; Saurav Yadav (2020) resolves this ambiguity by treating any individual who receives an open position, including through the adjustments that accommodate HR protections, as selected on her own merit. This clarification is of key importance, because with the resolution of this ambiguity, the 2SMG choice rule remains, by Theorem 1, the only choice rule that satisfies all mandates of the Supreme Court for applications in the field with non-overlapping horizontal reservations. Therefore, while the justices have not explicitly mandated the 2SMG choice rule with Saurav Yadav (2020), and merely endorsed it while emphasizing that "the exercise of laying down a procedure must necessarily be left to the concerned authorities," they have indirectly enforced it when individuals have at most one trait.
Finally, while the judgments of the Supreme Court offer some flexibility for the more general case of overlapping horizontal reservations, we have advocated in Section 4 for a specific choice rule, the two-step meritorious horizontal choice rule, for this more general case, and characterized it in Theorem 3 with axioms which can be considered natural extensions of the simpler versions mandated by the Supreme Court.
Appendix A. Proofs
In this Appendix, we present the proofs of our results. Some of our results for the more general version of the model in Section 4, most notably Proposition 2 and Theorem 2, are conceptually related to abstract results in matroid theory. Although these results have more direct proofs that rely on the literature on maximum matchings in bipartite graphs, we present proofs that highlight the conceptual connection between our results and the literature on matroid theory.
Before we present the proofs of our results in Section A.3, we present preliminaries in matroid theory in Sections A.1 and A.2.
A.1. Preliminary Definitions and Results in Matroid Theory. In this section we provide some basic definitions and results in matroid theory. We follow Oxley (2006).
A matroid is a pair (E, M) where E is a finite set and M is a collection of subsets of E that satisfies the following three properties:

M1. ∅ ∈ M.
M2. If M ∈ M and M′ ⊆ M, then M′ ∈ M.
M3. If M, M′ ∈ M and |M′| < |M|, then there exists e ∈ M \ M′ such that M′ ∪ {e} ∈ M.

Set E is called the ground set of the matroid. Each set in M is called an independent set. An independent set M is maximal if there is no proper superset of M that is independent. A maximal independent set of a matroid is called a base. All bases of a matroid have the same cardinality by M3. The set of bases B satisfies the following two properties:

B2. If B, B′ ∈ B and e ∈ B \ B′, then there exists f ∈ B′ \ B such that (B \ {e}) ∪ {f} ∈ B.
B2′. If B, B′ ∈ B and e ∈ B \ B′, then there exists f ∈ B′ \ B such that (B′ \ {f}) ∪ {e} ∈ B.

The rank of X ⊆ E is defined as the cardinality of a maximal independent set in the restriction of (E, M) to X. Since all maximal independent sets have the same cardinality, the rank of a set is well-defined. The rank of X ⊆ E is denoted by r(X). The rank function satisfies the following properties:

R1. For every X ⊆ E, 0 ≤ r(X) ≤ |X|.
R2. If X ⊆ Y ⊆ E, then r(X) ≤ r(Y).
R3. For every X, Y ⊆ E, r(X ∪ Y) + r(X ∩ Y) ≤ r(X) + r(Y).
A.2. Greedy Choice Rule and Its Properties.
For a given weight function w : E → R^+ that takes distinct values, the greedy algorithm chooses, at each step, the element with the highest weight subject to the constraint that the chosen set of elements is independent.

Step 1: Let i = 0 and X_0 = ∅.
Step 2: If there exists e ∈ E \ X_i such that X_i ∪ {e} ∈ M, then choose such an element e_{i+1} of maximum weight, let X_{i+1} = X_i ∪ {e_{i+1}}, and go to Step 3; otherwise let B = X_i and go to Step 4.
Step 3: Add 1 to i and go to Step 2.
Step 4: Return B as the outcome of the algorithm.
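In code, the loop in Steps 1-4 collapses to a single pass over the elements in decreasing weight, given an independence oracle for M. A minimal sketch, with is_independent and weight standing in for M and w:

```python
# Greedy choice rule over an abstract matroid via an independence oracle.

def greedy(elements, is_independent, weight):
    chosen = set()
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen | {e}):   # keep e only if the set stays
            chosen.add(e)                  # independent (Step 2)
    return chosen

# Example: uniform matroid of rank 2 over {a, b, c}, distinct weights.
w = {"a": 3.0, "b": 2.0, "c": 1.0}
print(greedy(w, lambda s: len(s) <= 2, w.get))   # {'a', 'b'}
```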
The textbook definition of the greedy algorithm allows w to be any weight function that can take the same value for different elements of E. In this case, the greedy algorithm can select different sets depending on how elements are chosen when they have the same weight. To avoid this issue, we assume that distinct elements of E have different weights.
The greedy algorithm is defined on the matroid (E, M). However, it can be applied to any restriction of this matroid. Therefore, the greedy algorithm can be viewed as a single-category choice rule on 2^E (Fleiner, 2001). For the rest of the paper, we view it as a single-category choice rule and refer to it as the greedy choice rule.
The greedy algorithm chooses an independent set that has the maximum weight, where the weight of a set is the sum of weights of individual elements. Before we introduce a stronger property of the greedy algorithm, we need the following definition.
Definition 15. Given X, Y ⊆ E, set X Gale dominates set Y if |X| ≥ |Y| and, when the elements of each set are listed in decreasing order of weight, the i-th element of X has a weakly higher weight than the i-th element of Y for every i ≤ |Y|. We use the notation X ⪰_G Y to denote that set X Gale dominates set Y.
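Under the reconstruction above, Gale domination is a simple positionwise comparison of sorted weight lists; a small checker with hypothetical helper names:

```python
# Checks whether X Gale dominates Y: list both in decreasing weight and
# compare position by position (X must also be at least as large).

def gale_dominates(X, Y, weight):
    xs = sorted(map(weight, X), reverse=True)
    ys = sorted(map(weight, Y), reverse=True)
    return len(xs) >= len(ys) and all(x >= y for x, y in zip(xs, ys))
```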
The following property of the greedy choice rule is the driving force for a similar property of the meritorious horizontal choice rule that is presented in Proposition 2.
Lemma 1. (Gale, 1968) For every E ′ ⊆ E, the outcome of the greedy choice rule for E ′ Gale dominates any independent subset of E ′ .
The following property of choice rules plays an important role in market design.
Definition 16. (Kelso and Crawford, 1982) A choice rule C : 2^E → 2^E satisfies the substitutes condition if, for every E′ ⊆ E″ ⊆ E, C(E″) ∩ E′ ⊆ C(E′).
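On a small ground set, the substitutes condition can be verified by brute force directly from the definition; the following exponential checker is our own illustration, for testing only.

```python
from itertools import combinations

# Substitutes: an element chosen from a pool must still be chosen from
# any sub-pool that contains it, i.e. C(E'') & E' <= C(E') for E' <= E''.

def satisfies_substitutes(E, C):
    subsets = [set(s) for r in range(len(E) + 1)
               for s in combinations(sorted(E), r)]
    return all(C(big) & small <= C(small)
               for big in subsets for small in subsets if small <= big)
```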
We use the following result in some of our proofs.
Lemma 2. (Fleiner, 2001) The greedy choice rule satisfies the substitutes condition.
Fix a matroid (E, M) with rank function r. We next formulate some properties of choice rules. The first two properties extend the notion of independence for sets and the maximality for independent sets to choice rules.
Definition 17. A choice rule C : 2^E → 2^E is independent if, for every E′ ⊆ E, C(E′) ∈ M. It is rank maximal if, for every E′ ⊆ E, r(C(E′)) = r(E′).
The next property is a reformulation of our axiom of no justified envy in the abstract context of matroids.

Definition 18. A choice rule C : 2^E → 2^E satisfies no justified envy if, for every E′ ⊆ E, e ∈ C(E′), and f ∈ E′ \ C(E′) with w(f) > w(e), we have (C(E′) \ {e}) ∪ {f} ∉ M.
The following result follows from the well-known properties of the greedy algorithm. We will rely on it extensively in the proof of Theorem 2 to highlight the conceptual similarities between our characterization of the meritorious horizontal choice rule and some of the key properties of the greedy algorithm.
Lemma 3. A choice rule C : 2 E → 2 E is independent, rank maximal, and satisfies no justified envy if, and only if, it is the greedy choice rule.
Proof. Let C be the greedy choice rule. Then, by construction, it is independent. Rank maximality follows easily: if C(E′) is not rank maximal for some E′ ⊆ E, then there exists e ∈ E′ such that C(E′) ∪ {e} is independent. Thus, the greedy choice rule cannot produce C(E′), a contradiction.
Suppose, for contradiction, that C fails to satisfy no justified envy. Then there exist E′ ⊆ E, e ∈ C(E′), and f ∈ E′ \ C(E′) with w(f) > w(e) such that (C(E′) \ {e}) ∪ {f} ∈ M. The elements that the greedy choice rule has chosen before considering f all have weights above w(f), so they form a subset of C(E′) \ {e}; by M2, adding f to this subset preserves independence, and hence the greedy choice rule would have chosen f, a contradiction.

We next show that any choice rule satisfying the properties has to be the greedy choice rule. Let D be a choice rule that satisfies the three axioms. Suppose, for contradiction, that D(E′) ≠ C(E′) for some E′ ⊆ E. Since both D and C are independent and rank maximal, D(E′) and C(E′) are bases in the matroid restriction of (E, M) to E′ and so |D(E′)| = |C(E′)|. Let e_1 be the maximum-weight element of the symmetric difference of D(E′) and C(E′). If e_1 ∈ C(E′) \ D(E′), then, by B2′, there exists e_2 ∈ D(E′) \ C(E′) such that (D(E′) \ {e_2}) ∪ {e_1} is a base. This is a contradiction because (D(E′) \ {e_2}) ∪ {e_1} and D(E′) are both bases and, therefore, they have the same rank since they have the same cardinality, so D admits justified envy through the pair (e_2, e_1). If instead e_1 ∈ D(E′) \ C(E′), then every element with a weight above w(e_1) that the greedy choice rule has chosen also belongs to D(E′); adding e_1 keeps this set inside the independent set D(E′), so the greedy choice rule would have chosen e_1, again a contradiction.
A.3. Proofs of Main Results.
Using the HR graph for a category v ∈ V, we can study the transversal matroid with ground set I_v (Edmonds and Fulkerson, 1965). In this matroid, a set of individuals is independent if they can be matched with distinct positions, and therefore the rank of a set of individuals is equal to the maximum number of distinct positions they can be matched with, which is precisely the function n^v that we have defined for category v ∈ V. Furthermore, the weight of an individual can be defined as her merit score. With this setup, Step 1 of the meritorious horizontal choice rule is the same as the greedy choice rule for the transversal matroid. We use this important observation in the proofs of Proposition 2 and Theorem 2 presented below.
Proof of Proposition 1. Let I ⊆ I be a set of individuals and I_m ⊆ I be the set of reserve-eligible individuals considered at Step 1 of C^{SCI} when I is the set of applicants.
Let i ∈ C^{2s}_{mg}(I) ∩ I_g. Then i ∈ C^o_{mg}(I) ∩ I_g because C^{2s}_{mg}(I) ∩ I_g = C^o_{mg}(I) ∩ I_g. Since C^o_{mg} satisfies the substitutes condition (Echenique and Yenmez, 2015), i ∈ C^o_{mg}(I_m ∪ I_g) because i ∈ I_g and i ∈ C^o_{mg}(I). Therefore, i ∈ C^o_{mg}(I_m ∪ I_g) ∩ I_g, which implies i ∈ C^{SCI}(I) ∩ I_g because C^{SCI}(I) ∩ I_g = C^o_{mg}(I_m ∪ I_g) ∩ I_g. Therefore, we conclude that C^{2s}_{mg}(I) ∩ I_g ⊆ C^{SCI}(I) ∩ I_g.

The assumption that |I_c| ≥ q_o + q_c for each reserve-eligible category c ∈ R implies that all category-c positions are filled under both C^{2s}_{mg} and C^{SCI}. In addition, the first part of the proposition implies that there are weakly more individuals from reserve-eligible categories assigned to open-category positions under C^{2s}_{mg} than under C^{SCI}. Therefore, |C^{2s}_{mg}(I) \ I_g| ≥ |C^{SCI}(I) \ I_g|.

If |K′| = 0, then the proof is complete as in the base case, using Lemma 1. For the rest of the proof, suppose that |K′| > 0.
These conditions are equivalent to K′ ⪰_G K″ and K″ ⪰_G K′, respectively. Therefore, we must have K′ = K″. Now we use Lemma 4 to finish the proof. Consider j ∈ K and j′ ∈ K′ such that j ∈ C^v(I). Next consider the case when σ(j) > σ(j′). We need the following result.
Lemma 5. Individual j ′ is not a member of J.
Proof. Suppose, for contradiction, that j ′ ∈ J.
We apply the inductive hypothesis to the market with the set of individuals I \ {j, j′} and the capacity min{q^v, |C^v_M(I)|} − 1 as in the previous case (when j = j′). In this reduced market, at the first step of C^v_M, the greedy choice rule selects the same set of individuals as in the original market, and the responsive choice rule selects the same set of individuals as well.

Lemma 9. If a category-v choice rule maximally accommodates HR protections, satisfies no justified envy, and is non-wasteful, then it has to be C^v_M.

Proof. Let C^v be a category-v choice rule that maximally accommodates HR protections, satisfies no justified envy, and is non-wasteful. Construct the choice rule D^v, where for any I ⊆ I_v, D^v(I) = C^v_G(C^v(I)), i.e., the outcome of the greedy choice rule applied to the set C^v(I). We show that D^v is the greedy choice rule by showing that D^v is independent, rank maximal, and satisfies no justified envy.
First, D^v is independent because C^v_G is independent (Lemma 3). Second, since C^v maximally accommodates HR protections, n^v(C^v(I)) = n^v(I). In addition, since the greedy choice rule is rank maximal, n^v(D^v(I)) = n^v(C^v(I)). Therefore, n^v(D^v(I)) = n^v(I), which means that D^v is rank maximal.
Finally, suppose for contradiction that D^v fails to satisfy no justified envy. Then there exist I ⊆ I_v, i ∈ D^v(I), and j ∈ I \ D^v(I) with σ(j) > σ(i) such that n^v((D^v(I) \ {i}) ∪ {j}) = n^v(D^v(I)). Since C^v_G satisfies no justified envy (Lemma 3), j has to be in I \ C^v(I). Furthermore, n^v(D^v(I)) = n^v(I) by rank maximality of D^v(I), so we get n^v((D^v(I) \ {i}) ∪ {j}) = n^v(I). As a result, n^v((C^v(I) \ {i}) ∪ {j}) = n^v(I) as well. This gives a contradiction to the assumption that C^v satisfies no justified envy, because i ∈ C^v(I), j ∈ I \ C^v(I), and σ(j) > σ(i).

Since D^v(I) is independent, rank maximal, and satisfies no justified envy, we conclude by Lemma 3 that D^v is the greedy choice rule C^v_G. Furthermore, by no justified envy, there cannot be an individual in I \ C^v(I) who has a higher merit score than any individual in C^v(I) \ D^v(I). This finishes the proof of Theorem 2.
Proof of Theorem 3. Let C = (C v ) v∈V be a choice rule that complies with VR protections, maximally accommodates HR protections, satisfies no justified envy, and is non-wasteful. We show this result using the following lemmas.
Lemma 10. C^o maximally accommodates category-o HR protections, satisfies no justified envy, and is non-wasteful.

Proof. We prove that C^o maximally accommodates category-o HR protections, satisfies no justified envy, and is non-wasteful.
First, we show that C^o maximally accommodates category-o HR protections. Suppose, for contradiction, that n^o(C^o(I)) < n^o(I) for some I ⊆ I. Then there exists i ∈ I \ C^o(I) such that n^o(C^o(I) ∪ {i}) = n^o(C^o(I)) + 1. If i ∈ I \ C(I), then we get a contradiction with the assumption that C maximally accommodates HR protections. Otherwise, if i ∈ C^c(I) where c ∈ R, then we get a contradiction with the assumption that C complies with VR protections. Therefore, C^o maximally accommodates category-o HR protections.
Next, we show that C^o satisfies no justified envy. Let i ∈ C^o(I) and j ∈ I \ C^o(I) be such that σ(j) > σ(i). If j ∈ I \ C(I), then there is no justified envy involving i and j because C satisfies no justified envy. However, if j ∈ C^c(I) for some category c ∈ R, then, since C complies with VR protections and σ(j) > σ(i), the assignment of i to an open position instead of j must increase the utilization of open-category HR protections; hence, again, there is no justified envy involving i and j.

Lemma 11. C^c(I) maximally accommodates category-c HR protections for Ī_c.
Proof. Suppose, for contradiction, that n^c(C^c(I)) < n^c(Ī_c). This is equivalent to n^c(C^c(I)) < n^c(Ī_c) = n^c(C^c(I) ∪ {i ∈ I \ C(I) | ρ(i) = c}), which implies that there exists i ∈ I \ C(I) who is eligible for category c such that n^c(C^c(I) ∪ {i}) = n^c(C^c(I)) + 1.
This equation contradicts the assumption that C maximally accommodates HR protections. Therefore, C^c(I) maximally accommodates category-c HR protections for Ī_c.
Lemma 12. C^c(I) satisfies no justified envy for Ī_c.
Proof. Let i ∈ C^c(I) and j ∈ Ī_c \ C^c(I) be such that σ(j) > σ(i). Note that i ∈ Ī_c and j ∈ I \ C(I). Since C satisfies no justified envy, we have n^c((C^c(I) \ {i}) ∪ {j}) < n^c(Ī_c). Hence, C^c satisfies no justified envy for Ī_c.
Lemma 13. C^c(I) is non-wasteful for Ī_c.

Proof. We consider two cases.
Therefore, C^c(I) maximally accommodates category-c HR protections for Ī_c, C^c(I) satisfies no justified envy for Ī_c, and C^c(I) is non-wasteful for Ī_c. By Theorem 2, C^c(I) = C^c_M(Ī_c) and, thus, C is the 2SMH choice rule C^{2s}_M.
Online Appendix

Appendix B. Institutional Background on Vertical and Horizontal Reservations
In this section of the Online Appendix, we present the institutional background on vertical reservations and horizontal reservations. These two affirmative action policies were introduced in the Supreme Court judgment Indra Sawhney (1992), where:
• the former was formulated as a policy tool to accommodate the higher-level protective provisions sanctioned by Article 16(4) of the Constitution of India, and
• the latter was formulated as a policy tool to accommodate the lower-level protective provisions sanctioned by Article 16(1) of the Constitution of India.
The description of these two affirmative action policies and how they are intended to interact with each other is given in the judgment with the following quote: To be more precise, suppose 3% of the vacancies are reserved in favour of physically handicapped persons; this would be a reservation relatable to clause (1) of Article 16. The persons selected against this quota will be placed in the appropriate category; if he belongs to SC category he will be placed in that quota by making necessary adjustments; similarly, if he belongs to open competition (OC) category, he will be placed in that category by making necessary adjustments.
It is further emphasized in the judgment that vertical reservations in favor of backward classes SC, ST, and OBC (which the judges refer to as reservations proper) are "set aside" for these classes.
In this connection it is well to remember that the reservations under Article 16(4) do not operate like a communal reservation. It may well happen that some members belonging to, say Scheduled Castes get selected in the open competition field on the basis of their own merit; they will not be counted against the quota reserved for Scheduled Castes; they will be treated as open competition candidates.
B.2. Rajesh Kumar Daria (2007): The Distinction Between Vertical Reservation and Horizontal Reservation. The distinction between vertical reservations and horizontal reservations, i.e. the "over-and-above" aspect of the former and the "minimum guarantee" aspect of the latter, is further elaborated in the Supreme Court judgment Rajesh Kumar Daria (2007):

… to vertical (social) reservations will not apply to horizontal (special) reservations. Where a special reservation for women is provided within the social reservation for Scheduled Castes, the proper procedure is first to fill up the quota for scheduled castes in order of merit and then find out the number of candidates among them who belong to the special reservation group of 'Scheduled Castes-Women'. If the number of women in such list is equal to or more than the number of special reservation quota, then there is no need for further selection towards the special reservation quota. Only if there is any shortfall, the requisite number of scheduled caste women shall have to be taken by deleting the corresponding number of candidates from the bottom of the list relating to Scheduled Castes. To this extent, horizontal (special) reservation differs from vertical (social) reservation. Thus women selected on merit within the vertical reservation quota will be counted against the horizontal reservation for women.
B.3. Anil Kumar Gupta (1995): Implementation of Horizontal Reservations Compartmentalized Within Vertical Categories. The procedure to implement compartmentalized horizontal reservation is described in Anil Kumar Gupta (1995) as follows:
The proper and correct course is to first fill up the O.C. quota (50%) on the basis of merit: then fill up each of the social reservation quotas, i.e., S.C., S.T. and B.C; the third step would be to find out how many candidates belonging to special reservations have been selected on the above basis. If the quota fixed for horizontal reservations is already satisfied -in case it is an over-all horizontal reservation -no further question arises. But if it is not so satisfied, the requisite number of special reservation candidates shall have to be taken and adjusted/accommodated against their respective social reservation categories by deleting the corresponding number of candidates therefrom.
(If, however, it is a case of compartmentalised horizontal reservation, then the process of verification and adjustment/accommodation as stated above should be applied separately to each of the vertical reservations.)

The adjustment phase of the procedure for the implementation of horizontal reservation is further elaborated in the Supreme Court judgment Rajesh Kumar Daria (2007). The following quote from Saurav Yadav (2020), however, is also critical, because it implies that our axiom of compliance with VR protections in Definition 12 is also enforced in its stronger form with Condition 3:

36. Finally, we must say that the steps indicated by the High Court of Gujarat in para 56 of its judgment in Tamannaben Ashokbhai Desai contemplate the correct and appropriate procedure for considering and giving effect to both vertical and horizontal reservations. The illustration given by us deals with only one possible dimension. There could be multiple such possibilities. Even going by the present illustration, the first female candidate allocated in the vertical column for Scheduled Tribes may have secured higher position than the candidate at Serial No.64. In that event said candidate must be shifted from the category of Scheduled Tribes to Open / General category causing a resultant vacancy in the vertical column of Scheduled Tribes. Such vacancy must then enure to the benefit of the candidate in the Waiting List for Scheduled Tribes - Female.
More specifically, the quote formulates the mandate that a member of a reserve-eligible category (Scheduled Tribes in the example) has to be considered for open-category HR-protected positions (the HR protections for women in the example) before using up a VR-protected position. Apart from its enforcement of our axiom of compliance with VR protections, this quote also brings clarity to the following defining characteristic of VR protections, originally formulated in Indra Sawhney (1992), in the presence of HR protections: It may well happen that some members belonging to, say Scheduled Castes get selected in the open competition field on the basis of their own merit; they will not be counted against the quota reserved for Scheduled Castes; they will be treated as open competition candidates.
Prior to Saurav Yadav (2020), a formal interpretation was never provided for the following question: What does it mean to be selected in the open competition field on the basis of one's own merit in the presence of HR protections? For the case of non-overlapping horizontal reservations, this question is answered as follows in Saurav Yadav (2020): Any individual who is entitled to an open position based on her merit score, including those who are entitled to one due to the adjustments to accommodate the HR protections, is considered as an individual who is selected in the open competition field on the basis of her own merit.
Since the fourth axiom, non-wastefulness, has been enforced since Indra Sawhney (1992), our Theorem 1 implies that the 2SMG choice rule is the only mechanism that satisfies all mandates of Saurav Yadav (2020) for field applications with non-overlapping HR protections.

B.5. Tamannaben Ashokbhai Desai (2020): The Procedure Mandated in Gujarat. The choice rule mandated in Gujarat is described in the judgment through the following steps:

Step 1: Draw up a list of at least 100 candidates (usually a list of more than 100 candidates is prepared so that there is no shortfall of appointees when some candidates don't join after an offer) qualified to be selected in the order of merit. This list will contain the candidates belonging to all the aforesaid categories.
Step 2: From the aforesaid Step 1 List, draw up a list of the first 51 candidates to fill up the OC quota (51) on the basis of merit. This list of 51 candidates may include the candidates belonging to SC, ST and SEBC.
Step 3: Do a check for horizontal reservation in the OC quota. In the Step 2 List of the OC category, if there are 17 women (category does not matter), the women's quota of 33% is fulfilled. Nothing more is to be done. If there is a shortfall of women (say, only 10 women are available in the Step 2 List of the OC category), 7 more women have to be added. The way to do this is to first delete the last 7 male candidates of the Step 2 List. Thereafter, go down the Step 1 List after item no. 51, and pick the first 7 women (category does not matter). As soon as 7 such women from the Step 1 List are found, they are to be brought up and added to the Step 2 List to make up for the shortfall of 7 women. Now, the 33% quota for OC women is fulfilled. The list of the OC category is to be locked. The Step 2 List becomes final.
Step 4: Move over to SCs. From the Step 1 List, after item no. 51, draw up a list of 12 SC candidates (male or female). These 12 would also include all male SC candidates who got deleted from the Step 2 List to make up for the shortfall of women.

26 The two-step minimum guarantee choice rule is referred to as C^{hor}_{2s} in Sönmez and Yenmez (2019).
Step 5: Do a check for horizontal reservation in the Step 4 List of SCs.
If there are 4 SC women, the quota of 33% is complete. Nothing more is to be done. If there is a shortfall of SC women (say, only 2 women are available), 2 more women have to be added. The way to do this is to, first, delete the last 2 male SC candidates of the Step 4 List and then to go down the Step 1 List after item no. 51, and pick the first 2 SC women. As soon as 2 such SC women in Step 1 List are found, they are to be brought up and added to the Step 4 List of SCs to make up for the shortfall of SC women. Now, the 33% quota for SC women is fulfilled. List of SCs is to be locked.
The Step 4 List becomes final. If 2 SC women cannot be found by the last number in the Step 1 List, these 2 vacancies are to be filled by SC men. If SC men are also wanting, the social reservation quota of SC is to be carried forward to the next recruitment unless there is a rule that permits conversion of the SC quota to OC.
Step 6: Repeat steps 4 and 5 for preparing list of STs.
Step 7: Repeat steps 4 and 5 for preparing list of SEBCs.
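The shortfall adjustment repeated in Steps 3 and 5 has a common core: delete the lowest-ranked men from the category's list and pull up the next women from the master merit list. A minimal sketch of that core, with hypothetical field names and the simplification that candidates are compared by list order:

```python
# Shortfall adjustment for one category list (as in Steps 3 and 5 above).

def adjust_for_women(selected, master_list, quota, is_woman):
    """selected: merit-ordered list for one category; master_list: the
    merit-ordered Step 1 List; quota: required number of women."""
    shortfall = quota - sum(map(is_woman, selected))
    if shortfall <= 0:
        return selected                      # quota already fulfilled
    men = [c for c in selected if not is_woman(c)]
    drop = set(men[-shortfall:])             # delete the last men
    kept = [c for c in selected if c not in drop]
    extras = [c for c in master_list         # pull up the next women
              if is_woman(c) and c not in kept][:shortfall]
    return kept + extras
```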
Appendix C. Documentation of Evidence from Indian Court Rulings on Disruption Caused by the Flaws of the SCI-AKG Choice Rule
In this section we present extensive evidence on the disarray caused by the shortcomings of the SCI-AKG choice rule in India. Much of our analysis, the High Court judgments presented in Section C.1.1, and our policy recommendations parallel the arguments and the decision of the December 2020 Supreme Court judgment Saurav Yadav v State of Uttar Pradesh (2020). Our entire analysis and policy recommendations predate this important judgment and were already presented in an earlier draft of this paper, Sönmez and Yenmez (2019).
C.1. Litigations on the SCI-AKG Choice Rule.
As we have argued in Section 3.1, the SCI-AKG choice rule fails our axiom of no justified envy. Moreover, it also fails incentive compatibility, because backward class candidates lose their open-category HR protections upon claiming their VR protections by declaring their backward class status.
The failure of the SCI-AKG choice rule to satisfy no justified envy is fairly straightforward to observe. All it takes is for a rejected backward class candidate to realize that her merit score is higher than that of an accepted general-category candidate, even though she qualifies for all the HR protections for which the less-deserving (but still accepted) candidate qualifies. Since the primary role of the reservation policy is positive discrimination in favor of candidates with more vulnerable backgrounds, this situation is very counterintuitive, and it often results in legal action. Focusing on complications caused by either anomaly, we next present several court cases to document how they handicap the concurrent implementation of vertical and horizontal reservation policies in India.
C.1.1. High Court Cases Related to Justified Envy. The possibility of justified envy under the SCI-AKG choice rule has resulted in numerous court cases throughout India for more than two decades, and since the presence of justified envy in the system is highly implausible, these legal challenges often result in controversial rulings. In addition, there are also cases where authorities who implement a better-behaved version of the choice rule, one that does not suffer from this shortcoming, are nonetheless challenged in court on the basis that their adopted choice rules differ from those mandated by the Supreme Court. These court cases are not restricted to lower courts, and include several cases in state high courts. Even at the level of state high courts, the judgments on this issue are highly inconsistent, largely due to the disarray created by the possibility of justified envy under the SCI-AKG choice rule. We next present several representative cases from high courts. In the first of these cases, the judge sides with the petitioners, and rules that the State of Uttar Pradesh must correct its erroneous application of the provisions of horizontal reservations. The judge further emphasizes that the State has played foul, stating: There is merit in the submission of the learned counsel for the petitioners that the conduct of the members of the Board appears not only mischievous but motivated to achieve a calculated agenda by deliberately keeping meritorious candidates out of the select list. The Board and the officials involved in the recruitment process were fully aware of the principle of horizontal reservations enshrined in Act, 1993 and Government Orders which were being followed by them in previous selections of SICP and PC (PAC), but in the present selection they chose to adopt a principle against their own Government Orders and the statutory provisions which were binding upon them... I am constrained to hold that both the State and the Board have played fraud on the principles enshrined in the Constitution with regard to public appointment.
What is especially surprising is that, despite the heavy tone of this judgment, the State went on to appeal in another Allahabad High Court case, State Of U.P. And 2 Ors. vs Ashish Kumar Pandey And 58 Ors, 29 July, 2016, 30 in an effort to continue using its preferred method for implementing horizontal reservations. Perhaps unsurprisingly, this appeal was denied by the High Court. This particular case clearly illustrates that there is strong resistance in at least some of the states to implementing the provisions of horizontal reservations as mandated under the SCI-AKG choice rule. While this resistance most likely reflects the political nature of this debate, the arguments of the counsel for the state to maintain their preferred mechanism are mostly based on the presence of justified envy under the SCI-AKG choice rule. The following quote from the appeal illustrates that this was the main argument used in their defense: The arguments that have been advanced on behalf of State and private appellant with all vehemence that women candidates irrespective of their social class i.e. SC/ST/OBC are entitled to make place for themselves in an open category on their inter-se merit clearly gives an impression to us that State of U.P and its agents/servants and even the private appellants are totally unaware of the distinction that has been time and again reiterated in between vertical reservation and horizontal reservation and the way and manner in which the provision has to be pressed and brought into play.

Under the federal law, there is no merit to this argument, because the SCI-AKG choice rule allows for justified envy. However, the judges side with the petitioner on the basis that a candidate cannot be denied a position from the open category based on her backward class membership, essentially ruling out the possibility of justified envy under a Supreme Court-mandated choice rule, which is designed to allow for positive discrimination for vulnerable groups. 32 Their justification is given in the court records as follows: We find the argument advanced as above to be fallacious.

Unlike the cases discussed above, in which the judges erroneously sided with petitioners whose lawsuits were based on instances of justified envy, in this case (from 2017) a general-category petitioner seeks legal action against the state on the basis that several HR-protected open-category positions for women were allocated to women from OBC who are not eligible for these positions (unless they receive them without invoking the benefits of horizontal reservation). While all these OBC women have higher merit scores than the petitioner, and the state has apparently used a better-behaved procedure, the petitioner's case has merit because the SCI-AKG choice rule allows for justified envy in those situations. In an earlier lawsuit, the petitioner's case had already been declined by a single judge of the same court based on an erroneous interpretation of Indra Sawhney (1992). The petitioner subsequently appealed this erroneous decision and brought the case to a larger bench of the same court. However, the three judges sided with the earlier judgment, thus erroneously dismissing the appeal.
Their decision is justified as follows: The outstanding and important feature to be noticed is that it is not the case of the appellant-petitioner that she has obtained more marks than those 8 OBC (Woman) candidates, who have been appointed against the posts meant for General Category (Woman), inasmuch as, while the appellant is at Serial No.184 in the merit list, the last OBC (Woman) appointed is at Serial No.125 in the merit list. The controversy raised by the appellant is required to be examined in the context and backdrop of these significant factual aspects.
As seen from this argument, many judges have difficulty perceiving that the Supreme Court-mandated procedure could possibly allow for justified envy. Moreover, not all decisions in these lawsuits are made in accordance with the SCI-AKG choice rule, which allows candidates to forego their VR (or HR) protections. This is the case both for the first and the last lawsuit listed above. For example, in the last lawsuit, two petitioners each applied for a position without declaring their backward class membership, with the intention of benefiting from open-category HR protections. Following their application, these petitioners were requested to provide their school leaving certificates, which provided information on their backward class status. Upon receiving this information, the authorities declined the petitioners eligibility for open-category HR protections, even though they had never claimed their VR protections. Hence, they filed the fourth lawsuit given above. Remarkably, their petition was declined on the basis of their backward class membership. Here we have a case where the authorities not only went to great lengths to obtain the backward class membership of the candidates and wrongfully declined their eligibility for open-category HR protections, but they also managed to get the lawsuit dismissed. The mishandling of this case is consistent with the concerns indicated in the February 2006 issue of the Inter-Regional Inequality Facility policy brief: 38 Another issue relates to the access of SCs and STs to the institutions of justice in seeking protection against discrimination. Studies indicate that SCs and STs are generally faced with insurmountable obstacles in their efforts to seek justice in the event of discrimination. The official statistics and primary survey data bring out this character of justice institutions. The data on Civil Rights cases, for example, shows that only 1.6% of the total cases registered in 1991 were convicted, and that this had fallen to 0.9% in 2000.
C.1.3. Loss of Access to HR Protections Without Any Access to VR Protections. The main justification offered in various Supreme Court cases for denying backward class members their open-category HR protections is avoiding a situation where an excessive number of positions are reserved for members of these classes. In several cases, however, members of these classes are denied access to open-category HR protections even when the number of VR-protected positions is zero for their reserve-eligible vertical category. This is the case in the following two lawsuits:

A tentative merit-based assignment is indeed the first step of the SCI-AKG choice rule. Once a tentative assignment is determined, the necessary adjustments are subsequently made to implement HR protections, first for the open-category positions, then for positions at each reserve-eligible category. The adjustment process is repeated for each trait. Formally, for a given category v ∈ V of positions, let a set of individuals J ⊆ I_v who are tentatively assigned to category-v positions and a set of individuals K ⊆ I_v \ J who are eligible for horizontal adjustments at category v be such that (1) |J| = q^v and (2) σ(i) > σ(i′) for any i ∈ J and i′ ∈ K.
Then, for a given processing sequence t_1, t_2, ..., t_{|T|} of traits, the horizontal adjustment process is carried out with the following procedure.
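The formal statement of the procedure has not survived in this extraction. As a rough illustration only, the Python sketch below implements one plausible reading of such a trait-by-trait adjustment; the function name, data layout, and tie-breaking choices are our own assumptions, not the paper's.

```python
# A minimal sketch of a trait-by-trait horizontal adjustment, assuming
# merit scores sigma, per-trait quotas q_t at category v, and at most
# one trait per individual. Tie-breaking details are illustrative.

def horizontal_adjustment(J, K, sigma, trait, q_t, trait_order):
    """Adjust tentative assignment J (|J| = q_v) using candidates K so
    that, for each trait t in trait_order, J contains
    min(q_t[t], number of available trait-t individuals) such people."""
    J, K = set(J), set(K)
    for t in trait_order:
        have = [i for i in J if trait.get(i) == t]
        pool = sorted((i for i in K if trait.get(i) == t),
                      key=lambda i: -sigma[i])          # best merit first
        deficit = min(q_t.get(t, 0), len(have) + len(pool)) - len(have)
        for _ in range(max(deficit, 0)):
            # Drop the lowest-merit member of J who does not have trait t
            # (an assumed tie-breaking rule; the original may protect
            # individuals whose own trait quota would become unmet).
            removable = [i for i in J if trait.get(i) != t]
            if not removable or not pool:
                break
            out = min(removable, key=lambda i: sigma[i])
            incoming = pool.pop(0)
            J.remove(out); K.add(out)
            J.add(incoming); K.remove(incoming)
    return J
```

As a usage sketch, `horizontal_adjustment({1, 2}, {3}, {1: 90, 2: 80, 3: 70}, {3: "t"}, {"t": 1}, ["t"])` swaps the lowest-merit untraited individual for the trait-t candidate.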
Proof of (2): Let i ∈ I′ and j ∈ I \ I′ such that σ(j) > σ(i). Since j ∉ I′, either j does not have a trait or there are at least q^v_t individuals in I′ where t is j's only trait. If j does not have a trait, then i must have a trait t′ such that the number of individuals in I′ who have trait t′ is min{q^v_{t′}, |{i′ ∈ I : t′ ∈ τ(i′)}|}. Then n_v((I′ \ {i}) ∪ {j}) = n_v(I′) − 1, which means that there is no instance of justified envy involving j and i. If j has trait t, then it must be that i does not have trait t, there are at least q^v_t individuals with trait t in I′, and i must have a trait t′ ≠ t such that the number of individuals in I′ who have trait t′ is min{q^v_{t′}, |{i′ ∈ I : t′ ∈ τ(i′)}|}. Then, as before, n_v((I′ \ {i}) ∪ {j}) = n_v(I′) − 1, which means that there is no instance of justified envy involving j and i.
Proof of (3): For every trait t, there is a corresponding step of AKG-HAS so that the number of individuals in I′ who have trait t is min{q^v_t, |{i′ ∈ I : t ∈ τ(i′)}|}. Since each individual has at most one trait, this implies that n_v(I′) = n_v(I).
We are ready to present the original formulation of the SCI-AKG choice rule as it is described in Anil Kumar Gupta (1995) and Rajesh Kumar Daria (2007). Proposition 4 presented above immediately establishes the equivalence of the original formulation with the formulation presented in Section 3.1.
SCI-AKG Choice Rule C^{SCI}
For every I ⊆ I,

Step 3 (Reserve-eligible category tentative assignment): For any reserve-eligible category c ∈ R,
• If |I_c \ C^{SCI,o}(I)| ≤ q_c, then assign all individuals in I_c \ C^{SCI,o}(I) to category-c positions, finalizing the assignments of individuals in I_c. In this case C^{SCI,c}(I) = I_c \ C^{SCI,o}(I).
• Otherwise, if |I_c \ C^{SCI,o}(I)| > q_c, then tentatively assign the highest merit-score q_c individuals in I_c \ C^{SCI,o}(I) to category-c positions. Let J_c denote the set of individuals who are tentatively assigned to category-c positions in this case.
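Steps 1 and 2 (the open-category assignment producing C^{SCI,o}) are not reproduced in this extraction. The sketch below (our illustration, with invented names) implements only the tentative reserve-category step described above, taking the open-category selections as given.

```python
# Sketch of Step 3 above: tentative assignment for a reserve-eligible
# category c. C_o stands in for the open-category selections from the
# omitted Steps 1-2; all identifiers are our own.

def step3_tentative(I_c, C_o, q_c, sigma):
    """Return (finalized, tentative) sets of individuals for category c."""
    remaining = [i for i in I_c if i not in C_o]
    if len(remaining) <= q_c:
        return set(remaining), set()          # all assignments finalized
    remaining.sort(key=lambda i: -sigma[i])   # highest merit-score first
    return set(), set(remaining[:q_c])        # J_c awaits HR adjustment
```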
Step 4. The outcome of the SCI-AKG choice rule is C^{SCI}(I) = (C^{SCI,v}(I))_{v∈V}. | 2021-02-08T02:16:07.664Z | 2021-02-05T00:00:00.000 | {
"year": 2021,
"sha1": "5741d670a327b064370a6e5041482edad2ff80a8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5741d670a327b064370a6e5041482edad2ff80a8",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
250689282 | pes2o/s2orc | v3-fos-license | Angular momentum effects in electron scattering from atoms
This paper concerns angular momentum-dependent phenomena in excited gas-phase atoms using incident photons or electrons in scattering experiments. A brief overview indicates the main capabilities of experimental techniques and the information which can be deduced about atomic structure and dynamics from conservation of momenta, with measurement of polarization and detection of the number of emerging electrons, photons and ions. Maximum information may be obtained when the incident particles and the targets are state-selected both before and after scattering. The fundamental scattering amplitudes and their relative phases, and consequently derived quantities such as the parameters describing the electron charge cloud of the atomic target, have enabled significant advances in the understanding of collision mechanisms. The angular momentum-dependent scattering probabilities change when, for example, the spin-orbit interaction for the target electrons becomes large compared with the Coulomb electron-electron interactions, and also when electron exchange and the relative orientation of the electron spins change. Several examples are discussed to indicate significant principles and recent advances. Major contributions to this field from the technology associated with electron spin production and detection, as well as time-coincidence detection, are discussed. New results from the authors' laboratory are presented.
Introduction
Recent progress in experimental studies of atomic collision processes concerns angular momentum changes associated with (i) polarized colliding particles, enabling spin effects such as electron exchange and the spin-orbit interaction, and with (ii) coincidence techniques exploiting symmetry constraints to study scattering dynamics. Theoretical progress using non-perturbative R-matrix close-coupling methods, distorted wave Born approximation (DWBA) methods and propagating complex exterior scaling methods is covered elsewhere [1] and generally indicates that advanced numerical methods and wave-function choices have enabled a good description of most collision processes involving simple one- and two-electron atoms. The meeting places of calculation and experiment are usually derived from a collision model for which the scattering amplitudes and their relative phases can describe the observables. Some background information is given now to indicate the nomenclature and the reasons behind various observational strategies for conducting scattering experiments.
It was shown long ago, for example by Fano [2], Fano and Macek [3], and Macek and Hertel [4], that amplitudes and phases could be related readily to state multipole moments, irreducible components of the collision density matrix and expectation values of the components of angular momenta. These equivalent descriptions allowed the observed characteristics to be described in a physical way such that the geometry and dynamical factors were separable and the observables were readily associated with the symmetry of the experiment. This approach is suited to experiments in which changes are made in the collision system, the colliding particles before collision or the detection system and, for example, is associated readily with the polarization of the particles before and after scattering. For example, a classical visualization of an excited atom usually is obtained more readily from an electronic charge distribution, which can be derived from the electrostatic potential of a charge distribution. Then, after expansion in terms of spherical harmonics and multipole moments, an equipotential surface can be interpreted as a simplified picture of an atom as a surface-charged rigid body from which a classical view of angular momentum can be developed. These early papers point out the situations in which it is advantageous, for reasons of simplicity, to choose an appropriate quantization direction for particular experimental geometries, particle polarizations or detector properties. The quasi-classical view can be expanded further to relate the collision times, such as T_FS >> T_col, to a typical interaction range, R, and relative scattering velocity, V, for which RΔE_FS/(hV) << 1, where ΔE_FS is the atomic energy change in a spin-orbit interaction. Also, for example, if T_FS ≈ h/ΔE_FS >> T_col, the spin-orbit coupling may be weak and the electron spin uncoupled from the orbital angular momentum, so that only the orbital angular momentum scattering amplitudes have to be known. The consequential considerations of both the experimental and theoretical approaches may then be simplified. Andersen and Bartschat [5], for example, give a detailed discussion of these aspects of electron scattering.
Various examples of the way in which these fundamental ideas have been implemented could be drawn from the areas of relevance here. For example, in photon impact studies such collision processes include photo-ionization of polarized atoms, resonance effects in multiphoton ionization, spin polarization and rotation in photo-ionization and polarization in multi-photon ionization of atoms in intense laser fields. Also in electron impact scattering the processes include laser-assisted and laser-produced scattering, spin-polarized Auger electrons and polarization effects in inner and outer shell excitation. The experimental techniques and apparatus, which are related particularly to advances in angular momentum-dependent features, include sources and detectors of polarized electrons, particularly position-sensitive detectors, circular and linear polarizers, photon-photon correlation methods, stepwise excitation with electron-photon coincidence techniques, the collision of cooled and polarized atoms and spin-dependent (e,2e) techniques. All these aspects deserve discussion but here we concentrate on aspects of angular momentum transfer in electron impact excitation processes. A second application with an atomic approach to surfaces, particularly with the polarized (e,2e)-in-reflection method, is discussed in a separate paper.
Fundamental quantum mechanics indicates that it is angular momentum in an atom that is quantised and that the angular momentum transfer between states determines the nature of the emitted radiation. Atomic spin may become visible in emitted radiation for transitions between fine-structure levels and when the spin-orbit interaction, within the atom between excitation and decay, transfers spin orientation into orbital angular momentum; for example, when circularly polarised light is produced without resolving the fine structure. An atom may become oriented, for example, if its spin system becomes polarised after electron exchange. From the plane polarization data the expectation values of the second-order products of angular momenta are determined, while the circular polarization determines the first-order angular momentum along a given quantization axis. From these, and other, fundamental ideas the measurements which enable the visualisation of the size, shape and rotation of the electron charge cloud of an excited state, as depicted in figure 1, were developed over many decades ([3,5] and references therein). The linear momenta of the incident, k_in, and scattered, k_sc, electrons create a well-defined plane of symmetry, the scattering plane, and in the collision the reflection symmetry with respect to this plane is conserved. The charge cloud of an atomic ¹P₁ state is represented in figure 1 with its relative length (ℓ), height (h), width (w) and alignment angle (γ). An additional parameter P describes the shape of the charge cloud in the scattering plane and is defined as the difference between the length and width, or maximum and minimum density |Ψ|²_max and |Ψ|²_min, respectively. There are numerous ways in which observations can be made to obtain various qualitative descriptions of electron scattering processes. A representation of an electron scattering experiment to observe properties such as the shape, alignment and orientation of an excited atomic-state charge cloud is indicated in figure 2, which shows the reference axes and the electron and photon detectors and scattering angles.
Angular momenta from unresolved-spin experiments
In the experiments described in this section, unpolarized electron beams are used. The essential observational requirement is to detect the electron scattered at an angle θ and the radiated photon from the same quantum ensemble, and this can be achieved by detecting them in time-coincidence and selecting an appropriate scattering symmetry. First, we discuss atoms which are excited into a ¹P₁ state, i.e. s-p excitations from a fully spherically isotropic s² ground state. For a fully coherent process, the alignment indicates a non-isotropic distribution of magnetic sublevels |JM⟩ with expectation values ⟨M²⟩ ≠ ⟨J²⟩/3. For symmetry reasons, angular momentum L_⊥ can be transferred between the interacting particles only perpendicular to the collision plane, as indicated in figure 1. The atom is oriented if it has a finite expectation value of angular momentum, and in figure 1 that means there is an unequal population of the m = +1 and m = −1 magnetic sublevels with the quantization axis parallel to the z-axis.
For the particular case of excitation of the ¹P₁ state of helium, all of the dimensionless electron impact coherence parameters (EICPs) P, γ and L_⊥ can be determined [5] from the polarization properties of the emitted photons when detected in coincidence with the energy-loss electrons. Here I(R) and I(L) are the numbers of right- and left-handed circularly polarized photons, respectively. Measurements of the linear polarization Stokes parameters, P₁ and P₂, determine the polarization of the charge cloud P and the alignment angle γ, while measurement of the circular polarization Stokes parameter, P₃, reveals information on the angular momentum L_⊥, where P = (P₁² + P₂²)^{1/2}; γ = 0.5 arg(P₁ + iP₂); L_⊥ = −P₃.
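As a worked numerical reading of these relations (our addition; the sample Stokes values below are invented for illustration):

```python
import cmath

def eicp_from_stokes(P1, P2, P3):
    """Electron impact coherence parameters from Stokes parameters, using
    P = (P1^2 + P2^2)^(1/2), gamma = 0.5*arg(P1 + i*P2), L_perp = -P3."""
    P = (P1**2 + P2**2) ** 0.5
    gamma = 0.5 * cmath.phase(complex(P1, P2))   # alignment angle, radians
    L_perp = -P3                                  # transferred angular momentum
    coherence = (P1**2 + P2**2 + P3**2) ** 0.5    # equals 1 for full coherence
    return P, gamma, L_perp, coherence

# Illustrative (invented) Stokes values; note the coherence check returns 1.
print(eicp_from_stokes(0.3, -0.4, 0.866))
```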
The electron-photon coincidence detection method can be used in two different ways to obtain equivalent information on P₁ and P₂, leading to the determination of P and γ from the linear polarization measurements. In the angular correlation method, electrons scattered at an angle θ and the decay photons emitted at a variable angle, usually in the scattering plane, are detected in coincidence; in the polarization correlation method, the scattered electrons are detected in coincidence with the linearly polarized photon detected in the direction of the z-axis, as described above. The equivalence of the two methods is evidenced by noting that the axes of the intensity distribution coincide with the axes of the polarization ellipse. The main difference between the two methods concerns the determination of L_⊥, for which a circular polarization has to be measured independently. The angular correlation method gives an absolute value only when complete coherence, i.e. (P₁² + P₂² + P₃²)^{1/2} = 1, is either measured or assumed. These considerations have been applied to various atoms and selected scattering dynamics in the following sections.
For the specific case of 50 eV electrons exciting the 1s2p ¹P₁ state of helium, and observing the radiated photons of 58.43 nm wavelength in coincidence with the energy-loss electrons scattered through 70°, the charge cloud may appear similar to that in figure 1. Experimental results for this particular excitation process have been obtained mainly using the angular correlation method due to its simplicity compared with the polarization measurements in the vacuum-UV region. Some representative experimental data for the angular behavior of the differential cross section σ(θ), the linear polarization P, the alignment angle γ of the charge cloud and the momentum transfer L_⊥, at an electron impact energy of 50 eV together with predictions of the CCC and RMPS models, are shown in figure 3. For reasons of clarity, in the next two figures only a subsection of published data is shown. The choice illustrates the agreement that can be obtained between different experiments and the agreement of experimental data with calculations.
There are two important properties of the electron coherence parameters displayed in figure 3. The first concerns the quantum completeness of the data set. In an ideal situation, which is the case of excitation of the helium 1s2p ¹P₁ state, a perfect scattering experiment determines all the scattering amplitudes and their relative phases. For an S → P excitation and the geometry of figure 1, the transition into the M_L = 0 state is forbidden due to conservation of symmetry in the scattering plane, so only the scattering amplitudes f₋₁ and f₊₁ need be considered. This process is determined by three parameters: the absolute differential cross section σ = |f₋₁|² + |f₊₁|² and two more dimensionless parameters determining the relative magnitudes and phases of the amplitudes. In this sense the parameter set (σ; L_⊥; γ) is sufficient, and their determination constitutes a perfect scattering experiment [13]. These data serve as a benchmark for other studies.

[Figure 3 legend: experiment: McAdams et al [9]; Eminiyan et al [10]. Theory: bold line, CCC, Fursa and Bray [11]; thin line, RMPS, Bartschat et al [12].]
The second property concerns the relative nature of the electron impact coherence parameters, which permits a sensitive comparison between the theoretical and experimental data as no absolute calibration is needed. Inspection of figure 3 indicates good agreement between experimental and theoretical predictions for all four parameters. The behavior of the momentum transfer L_⊥ from the electrons to the atom and its influence on the atomic state polarization P and the alignment angle γ can be traced from the angular behavior of the parameters. First, at small scattering angles L_⊥ is positive and at large angles negative, and this indicates a change in orientation equivalent to the change of direction of rotation of the electron charge cloud. Also the behavior of P and γ has some common features which can be traced to the behavior of the expectation value of orbital momentum, L_⊥. For those scattering angles where a nearly-circular state (L_⊥ ≈ +1 or L_⊥ ≈ −1) is created, excitation of the M_L = +1 or M_L = −1 magnetic sublevel is predominant, and both P and γ exhibit a rapid variation, which for the alignment angle γ involves the change of sign between the forward and backward scattering.
The insight gained into the importance of the angular momenta in such a scattering process is made more apparent by comparing the parameters σ, P, and L_⊥ for helium with the same parameters for the two more complex atoms, Mg (Z = 12) and Ca (Z = 20), as shown in figure 4. These atoms, with a low atomic number, a quasi-two-electron configuration similar to helium and small fine-structure splitting, i.e. weak spin-orbit interaction between atomic electrons, are expected to conform to LS coupling and be representative of light metal atoms. The ground state of these two atoms is still a spherically isotropic s² ¹S state, and excitation into the first optically allowed nsnp ¹P₁ state is expected to be fully coherent [11].
The data for Mg [16] were obtained using the polarization correlation method with detection of the 4.35 eV energy-loss electron in coincidence with the 285.2 nm radiated photon. The experiment by Murray and Cvejanović [14] for Ca used a third method which can be considered as a time-inverse polarization correlation method. Polarized (linear or circular) laser light (wavelength 422.7 nm) is used to pump the atom into a well-defined M_L state and observations are made of the super-elastic scattering of electrons at a variable angle θ from the already aligned and oriented atoms. The advantage of this method is its good time efficiency compared to the relatively slow coincidence experiments; here only the intensity of the scattered electrons, with an energy gain of 2.93 eV, is measured, so that better statistical accuracy with measurements on a finer mesh of scattering angles can be achieved. The disadvantage is the limited applicability to transitions which can be pumped efficiently by lasers. Energies at which data were recorded correspond roughly to ten times the excitation thresholds for both Ca and Mg, and the parameters are compared in figure 4 with corresponding values for the helium 1s2p ¹P₁ state at 200 eV, again at roughly ten times the excitation energy. The helium data are calculations using convergent close-coupling theory [11]. A brief inspection of figure 4 indicates some common characteristics as well as significant differences between He and Ca and Mg, but also between the two metal atoms. All the parameters for Ca and Mg show angle-dependent structures, in contrast with He, for which the parameters change rather smoothly. Of special interest here is the relationship between the maxima and minima in the differential cross sections and the structures in the angular behavior of the different electron impact coherence parameters. The physical origin of these effects was modeled in the calculations of Madison et al [19], which explained the observed behavior in helium by quantum mechanical interference phenomena. The main factors determining the positive L_⊥ at small scattering angles are nuclear attraction and distortion of the p = 0 and 1 incident electron partial waves, while distortion of the p = 0 and 2 waves is the most important for negative values at large scattering angles.
A particular difference is observed in the momentum transfer at forward and backward scattering angles in Ca and Mg. In Ca, a circular state is created in a large range of backward scattering angles between approximately 100° and 150°, while only smaller values, up to approximately L_⊥ = 0.5, are measured and predicted at scattering angles less than 90°. In contrast, two circular states with L_⊥ ≈ ±1 are created in forward scattering in Mg, i.e. at angles θ < 90°, while only a small momentum transfer happens at scattering angles θ > 100°. As the structure of these two atoms in terms of ground state electron configuration is very similar, the observed differences can be used to improve our understanding of momentum transfer in complex atoms. The behavior of the momentum transfer L_⊥ has been discussed for sodium [20], calcium [21], and magnesium [22]. A review by Cvejanović et al [23] of the data for the alkaline earth atoms indicated the generality of the observations.
As the final topic in this section, an example of Stark-mixed states is discussed briefly to indicate an angular momentum consideration. The transitions discussed so far are the normal dipole transitions in which the atomic system changes parity. However, levels of opposite parity may be Stark-mixed with an external weak electric field to carry information on the levels of both parities and on their degree of coherence [24]. For an atomic hydrogen target, quantum beats arise from the S-P coherence, for example, and from the Stark mixing in the subsequent time development of the atomic fine and hyperfine superposition states. These features may be deduced from figure 5, which shows two examples of the remarkable time development of the circular Stokes parameter IP₃ for a positive (left) and negative (right) electric field of 250 V/cm, an incident electron energy of 350 eV and an electron scattering angle of 3°. The phase difference between the positive- and negative-field beat structures is about 180°, while the calculated behavior was modeled with a neglect of the coupling to the m_j = 3/2 states, which appeared to be reasonable for the weak electric field. It is noted that the data were limited by the large value of the 2 ²P cross section relative to the 2 ²S cross section for the scattering energy and angle. Measurements at larger angles, where the ratio of 2 ²P to 2 ²S approaches unity, would enhance the beat structure and make interpretations of the changing values of the interference more clear. However, other observations of the linear polarization showed much smaller amplitudes of the beat structure than observed for the circular polarization, as shown in figure 5; i.e. the populations of the magnetic sublevels reflected the transfer of angular momentum along the quantization axis, and the collision mechanism favored the unequal populations of the m = +1 and m = −1 sublevels for the particular scattering conditions.
Observation of electron exchange and spin-orbit interaction in experiments with spin polarized particles
The initial and significant studies of spin angular momentum effects in electron scattering have been developed and reviewed by Kessler [25] and Hanne [26] and references therein. Basically, the spin-orbit (SL) interaction between the continuum electron and a target atom results in different scattering potentials seen by electrons with different spin projections. This interaction leads to a difference between the fractions of scattered electrons with spin-up and spin-down, relative to a given quantization axis, which is defined as the spin polarization P. In this way unpolarised electrons become polarized through scattering, and an existing polarization P may be analysed through the left-right asymmetry in a differential cross section, which is energy and angle dependent. There are many combinations of angular momentum effects which can be pursued, depending on the relative strengths of electron exchange and the spin-orbit interaction in the target atoms and on whether the target atom is initially polarized. In general, the spin asymmetry in scattering from spin-1/2 targets may arise from pure exchange, from the spin-orbit interaction or from a combination of both exchange and spin-orbit effects. In a pioneering experiment, Baum et al [27] determined spin asymmetries for polarized electrons incident on polarized lithium atoms, for which the spin-orbit interaction is negligible (for a very light atom) and electron exchange is the dominant spin effect. Their work was a landmark in showing how to perform a crossed polarised-particle-beams scattering experiment and gave a general discussion of the major components of such an experiment. Their approach enabled the direct determination of spin-dependent scattering amplitudes and their relative phases. Spin asymmetries of the inelastically scattered electrons after exciting the ²S to ²P transitions were measured at angles of 65°, 90° and 107.5° in the double differential cross section, and then the integral of these measurements was obtained by detecting the photon intensities at 90° for spins up and down. In this way the ratio of singlet to triplet scattering was shown to be large near the threshold of 1.85 eV and to approach unity about 6 eV above threshold; thus the singlet scattering was large nearer threshold, and by about 6 eV the singlet and triplet scattering probabilities are about equal. An asymmetry can occur even if the scattering angle is not defined, i.e. when averaged over all angles, and in that way integral as well as differential measurements can be performed, as shown for example for elastic [28], excitation [27] and ionization [29] processes.

[Figure 6 caption: Representation of the change in the polarization vector in a scattering process.]

Further developments of this approach can determine in principle a quantum mechanically complete scattering experiment by measuring the changes of the components of the polarization vector. These changes are defined simply, as indicated in figure 6, by the T and U contraction and rotation vectors, respectively, by the left/right asymmetry S_A and the polarisation-after-scattering S_P. While it is physically intuitive to discuss lengths and angles of vectors, it is very difficult to measure all the S, T, U parameters, so measuring one or a few of these parameters provides valuable information for understanding collision processes.
For example, for the special case where both the photon detection direction and the initial electron polarization are perpendicular to the scattering plane, the spin-up/down asymmetry S_P, given by the difference over the sum of the intensities of the emitted photons corresponding to spin-up and spin-down electrons respectively, may be measured relatively easily. As a step in this direction, we have measured the spin up/down asymmetry for 26 eV and 60 eV incident 30% polarized electrons exciting the 5p[3/2]₂ state of krypton atoms, as shown in figure 7 [30], using the polarized electron-photon (λ = 826.3 nm) coincidence technique.
The spin up-down asymmetry may be non-zero due to either the "fine structure" effects [31], or the spin-orbit interaction for the continuum electrons, or the combined effects of both mechanisms. The mechanism for spin up-down asymmetry due to fine-structure effects arises for the 5p⁶ → 5p⁵6p excitation of krypton (826.3 nm) because an oriented ionic 5p⁵ ²P₁/₂,₃/₂ core may be produced, since the cross sections for exciting the M_l = ±1 magnetic sublevels of the ionic ²P core are different (for a quantization axis perpendicular to the scattering plane). This orbital orientation causes a spin orientation of the ionic core when the final J state is resolved, here by detection of the radiated 826.3 nm photon in coincidence with the energy-loss scattered electron. The spin of the excited electron is thus defined because of the conservation of the spin angular momentum. Then a spin-up/down asymmetry may be observed since the cross sections for spin-up/down incident electrons are different. While the asymmetry was observable, there was a significant difference between the observations and the calculations from a relativistic distorted wave approximation, which indicated a stronger angular dependence and larger asymmetries. The connection between the observables and the scattering amplitudes has been given by Mette et al [32], and also by Guo et al [33] for the specific case of the spin-resolved triple differential ionization cross sections of xenon. The most recent publication [34] enables earlier work to be traced and will not be discussed further here.
Our final example is for the special case of excitation of the 6 ³P_J states of mercury, where the Münster group [35,36,37] showed in a series of separate experiments how all of the S, T and U parameters can be measured when the J sublevels of the 6 ³P₀,₁,₂ level are separated. The lack of subsequent complete experiments indicates the difficulty of the measurements, even though there is still a significant demand for such data. Their data for the 6 ³P₁ excitation (observing the 254 nm photons in the decay to the 6 ¹S₀ state) for an incident electron energy of 8 eV, and for the 6 ³P₂ state excitation at 40 eV, are shown in figure 8. The angular differential excitation cross sections (noting the change of scale in the figure) display the usual angular behaviour for singlet and triplet transitions at this relatively low energy. The extensive data at 40 eV indicate a generally good agreement between measurement and the relativistic distorted-wave calculations of Srivastava et al [38] and of Bartschat and Madison [39], and so indicate the general applicability of their basic scattering model. The model gives a good understanding of the angular momentum effects observed in such measurements, since the difference arises from the different J values and the relative values of exchange and the spin-orbit interaction, apart from the different incident energies. For a light atom for which LS-coupling may apply, the orbital orientation L_⊥ is identical to the total angular momentum component J_⊥; in contrast, for a heavy atom such as mercury the orientation of J_⊥ may depend on the spin projection onto the quantization axis. In figure 8 it is seen in the 40 eV data that the S_A parameter changes sign numerous times, as do the theoretical predictions. The model indicated the suitability of the intermediate coupling scheme with a superposition of pure LS-coupled ¹P₁ and ³P₁ states to explain the shape of the spin asymmetry S_A, where the singlet admixture did contribute significantly to the scattering process. It was deduced for these scattering dynamics that, for excitation of the ³P₁ state, spin-flips due to the spin-orbit interaction are small, so they must originate mainly from electron exchange; this result is in contrast to excitation of the 6 ¹P₁ state, for which exchange is very small and the spin-orbit interaction causes a difference in spin-up/down cross sections.
Spin-related angular momentum effects in electron and photon impact
Measurement of the integrated Stokes parameters using spin polarized electrons is a powerful experimental approach characterized by high efficiency and high sensitivity. Integral here means that scattered electrons are not detected, so the intensity of the detected photons is integrated over all electron scattering angles. The experimental geometry of figure 2 indicates that the linear momentum and the transversely polarized spin vector of the incident electron beam define the x and z axes, respectively. Then the three Stokes parameters can be defined as in equation (1), with the same photon polarization conditions, but now direct photon intensities, rather than the yield of true coincidences, are measured. The incident electron spin and linear momentum vectors define a plane of symmetry which enables angular momentum-dependent interpretations of the Stokes parameters. This definition of the parameter P₁ indicates it has the same value irrespective of the spin polarization. Similarly, in experiments with unpolarized electrons a cylindrical scattering geometry applies, and both P₂ and P₃ are always zero. However, in experiments with spin polarized incident electrons planar symmetry applies, and the parameters P₂ and P₃ can have non-zero values as a consequence of the spin-orbit interaction (P₂), and of exchange or a combination of exchange and spin-orbit interaction (P₃). These deductions [41] offer an efficient way to study these spin effects and their dependence on atomic structure and electron correlation effects.
The current interest in open-shell atoms, particularly s-subshell photo-ionization, concerns electron correlations in the form of interchannel coupling, particularly in the metallic atoms Ca, Mg (discussed above) and Zn (discussed below), and in the spin polarization of photoelectrons, in the angular distributions and in the cross sections for the neon valence subshells [42,43]. Our work on electron impact excitation processes in neon has shown that the polarization of the decay photons depends on the different LS-mixing properties of the intermediate coupled states. The triplet components indicate that exchange and the spin-orbit interaction are important, while for states with a large singlet component the spin-orbit interaction may be dominant. These comments apply in general to the transitions indicated in figure 9. Here we draw attention to the λ = 703.3 nm transition from the 2p⁵3p[1/2]₁ to the 2p⁵3s[3/2]₂ state. Figure 10 shows a strong resonance at 18.7 ± 0.2 eV in all the polarised excitation functions, and that is in contrast with the magnitudes of the linear polarisations of those resonances, which are seen in figures (a), (c) and (e) to be small. The near-zero values of P₁ and P₂ clearly indicate the influence of the dominant (98.2%) ³S component of the state's LS-composed wave function and of the ΔJ value of −1 for the transition. The large circular polarisation P₃ indicates that the electron exchange process is the dominant excitation mechanism. The small non-zero negative values of the P₂ parameter may be attributed to the effects of the negative ion resonances, since the spin-orbit coupling effect within the atom is expected to be small for this state.
Our observations have provided a detailed description of the angular momentum changes in the excitation and decay processes of this 3p manifold of states [40]. Transitions with ΔJ = +1 have larger linear polarizations than those with ΔJ = 0 and −1. For the transitions coming from upper states with a different core state, those with a ²P₁/₂ core have larger linear polarization than those with a ²P₃/₂ core. The resonances have a more significant influence on the alignment of the excited states with a j = 1/2 core than for states with a j = 3/2 core. The above studies are now moving in the direction of resonance phenomena and interchannel coupling, where j-dependent spin effects can cause significant differences in partial wave amplitudes and their interference. An indication of spin effects is obtained from the photoionization of s-subshells in closed-shell atoms, for which any deviation of the angular distribution parameter β from 2.0 for the s-subshell photoelectrons is due necessarily to dynamical differences of the p₁/₂ and p₃/₂ transition amplitudes [44]. Strong resonances are indicated in figure 11 in the relative cross section and angular asymmetry parameter β as a function of the incident photon energy for the Xe 5s-photoelectrons in the region of the 4d → mp (m ≥ 6) excitations. The resonances are associated with deviations of the angular distribution anisotropy parameter, β, from the nominal value of two, which are enhanced by spin effects. This example is clear since, without spin effects, there is only a single LSJ term ¹P₁ describing the final state of ion plus electron and β must equal two.
With the success of this study, questions arise about the changes in the shift of the energy and the width of an isolated resonance when observed in different vector correlation parameters. Grum-Grzhimailo et al [45] discuss the earlier data and observed in Xe that the autoionizing 4d⁻¹₅/₂6p (J = 1) resonance in the orientation and alignment parameters for the ionic (³P₂)6p₃/₂ and (¹D₂)6p₁/₂ states in the total ion yield produced effectively different values of both the energy and the width, as shown in figure 12. However, they fitted constant values for both the energy and width by allowing for the different partial D₅/₂ and D₃/₂ differential cross sections, which could even explain the different signs as well as magnitudes of the shifts. These effects have been discerned in photon impact studies using the large intensities from a synchrotron and the associated good wavelength resolution. They do not appear to have been detected in electron impact observations, where the energy resolution, although good, does not rival that of a photon beam.

[Figure 11 caption: The relative cross section and angular asymmetry parameter β as a function of the incident photon energy for Xe. The two upper sections are the differential cross sections at 0° and 90°, respectively; the next is the angular asymmetry parameter β; and the lowest section is the partial differential cross section derived from the β factor. The two full circles are β values obtained from independent photoelectron measurements. The labels indicate 4d₅/₂ and 4d₃/₂ (underlined) excitations. (From Whitfield et al [44].)]
Similar insight into the role of spin-dependent interactions with negative ion resonances has been observed in our recent studies of electron impact excitation of zinc using spin polarized electrons. An energy level diagram showing the neutral, negative ion and autoionizing states of zinc and the observed 636.2 nm emission line is shown in figure 13. The incident electron excites the atom and is bound temporarily into a discrete state of the excited negative ion. Decay by auto-detachment of the electron leaves the atom in different excited states of the neutral atom, which then decay by photon emission. Here we study the spin-orbit and exchange effects which are observable in the high precision measurement of the integral Stokes parameters P₂ and P₃ of the 636.2 nm photons emitted in the decay of the 4s4d ¹D₂ state of zinc shown in figure 13. All three Stokes parameters were fitted [46] to show that the negative ion states had energies E_r(1) = 10.97 eV and E_r(2) = 11.33 eV and widths of Γ(1) = 0.25 eV and Γ(2) = 0.25 eV, respectively. Two aspects of these data are discussed. First, the dominant partial wave in which each resonance occurs can often be identified from the angular behavior of a differential cross section for electron impact excitation. An example showing the yield of electrons scattered at an angle θ = 30° after exciting the 4s4p ³P₀,₁,₂ states of zinc is shown in figure 14(d) [47]. Also, the polarization of the emitted radiation can be linked to the angular momentum of the negative ion state, but the procedure has been applied so far only in the near-threshold region of the excited neutral states in mercury [48], where considerable simplifications can be applied. In the present case of negative ion states in the autoionizing region, where the electron is most likely attached to a 3d⁹4s²4p excited core, the negative ion is a good example of a highly correlated system where two electrons move outside a 3d⁹4s² ion core and a large number of states of different symmetries (different L and S states if LS coupling applies) can be formed. The Stokes parameters P₂ and P₃ then reveal information on electron exchange and the spin-orbit interaction. Also, the angular behavior of the differential cross section can assist in the assignment of the symmetry of the state.
For a relatively light atom such as zinc, the spin-orbit interaction between the continuum electron and the atom is negligible, and the effects responsible for the fine structure of the atom are probed. These spin effects might be influenced and amplified by configuration mixing and electron correlations in the negative ions, especially when electrons with high angular momentum are involved in a re-arrangement process. For the particular case of the decay of the negative ion into the 4s4d ¹D₂ state, the results in figure 14 indicate the different roles of the spin-dependent interactions in the two observed negative ions. The lower energy resonance, with a [5/2] ion core, is observed in both P₂ and P₃, and this indicates an observable effect of both the spin-orbit and exchange interactions. The higher energy resonance, with a [3/2] ion core, is observed only in P₃ and with positive values, indicating the dominant role of exchange and the absence of spin-orbit interaction. Also, the energies and widths of both resonances, obtained from a fitting procedure of a Fano profile to the three Stokes parameters, are mutually consistent. These are the first high precision experimental results, and they indicate that further experimental and theoretical studies are justified to establish the precise details of the scattering processes and the characterization of the negative ion states.

[Figure 14 caption: The Stokes parameters P_i (i = 1, 2, 3) for excitation of the λ = 636.2 nm photons in zinc as a function of the incident polarized electron energy, and the θ = 30° differential electron excitation function for the 4s4p ³P₀,₁,₂ neutral states of zinc using an unpolarized electron beam. Both sets of data, the polarization of the 4s4d ¹D₂ decay photons and the differential cross section for the 4s4p ³P₀,₁,₂ channel, show the resonance effects in the auto-ionizing region of zinc.]
Conclusion
The intended audience for this paper was the attendees at the 7th AISAMP in Chennai. This paper was intended to provide some background to the angular momentum concepts underlying current electron and photon scattering experiments which are aimed at observing electron correlation effects mainly via scattering dynamics. Neither the references nor the topics are intended to be comprehensive, although the references provide a traceable history of the development of the topics discussed. Our second paper at this symposium indicates an extension of our approach to the interaction of spin-polarised electrons with surfaces, particularly an oxygen-covered tungsten crystal. | 2022-06-28T02:58:10.909Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "1cca9cd74a225c85f7944b733916ddfd8df90cb2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/80/1/012023",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1cca9cd74a225c85f7944b733916ddfd8df90cb2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
226237515 | pes2o/s2orc | v3-fos-license | Some identities involving derangement polynomials and numbers
The problem of counting derangements was initiated by Pierre Rémond de Montmort in 1708. A derangement is a permutation that has no fixed points, and the derangement number D_n is the number of fixed-point-free permutations on an n-element set. Furthermore, the derangement polynomials are natural extensions of the derangement numbers. In this paper, we study the derangement polynomials and numbers, their connections with cosine-derangement polynomials and sine-derangement polynomials, and their applications to moments of some variants of gamma random variables.
INTRODUCTION AND PRELIMINARIES
The problem of counting derangements was initiated by Pierre Rémond de Montmort in 1708 (see [1,2]). A derangement is a permutation of the elements of a set such that no element appears in its original position. In other words, a derangement is a permutation that has no fixed points. The derangement number D_n is the number of fixed-point-free permutations on an n (n ≥ 1) element set.
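As a concrete illustration (our addition, using the classical recurrence D_n = (n − 1)(D_{n−1} + D_{n−2})), a few lines of Python reproduce the derangement numbers and check them against brute-force counting:

```python
from itertools import permutations

def derangement_numbers(N):
    """D_0..D_N via the classical recurrence D_n = (n-1)(D_{n-1} + D_{n-2})."""
    D = [1, 0]                      # D_0 = 1, D_1 = 0
    for n in range(2, N + 1):
        D.append((n - 1) * (D[n - 1] + D[n - 2]))
    return D[:N + 1]

def brute_force(n):
    """Count permutations of {0,...,n-1} with no fixed points."""
    return sum(all(p[i] != i for i in range(n))
               for p in permutations(range(n)))

print(derangement_numbers(6))                # [1, 0, 1, 2, 9, 44, 265]
print([brute_force(n) for n in range(7)])    # matches the recurrence
```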
The aim of this paper is to study derangement polynomials and numbers, their connections with cosine-derangement polynomials and sine-derangement polynomials, and their applications to moments of some variants of gamma random variables. Here the derangement polynomials D_n(x) are natural extensions of the derangement numbers.
The outline of our main results is as follows. We show a recurrence relation for derangement polynomials. Then we derive identities involving derangement polynomials, Bell polynomials and Stirling numbers of both kinds. In addition, we also have an identity relating Bell polynomials, derangement polynomials and Euler numbers. Next, we introduce the two-variable polynomials, namely cosine-derangement polynomials D_n^{(c)}(x, y) and sine-derangement polynomials D_n^{(s)}(x, y), in a natural manner by means of derangement polynomials. We obtain, among other things, their explicit expressions and recurrence relations. Lastly, in the final section we show that, if X is the gamma random variable with parameters 1, 1, then D_n(p), D_n^{(c)}(p, q), D_n^{(s)}(p, q) are given by the 'moments' of some variants of X.
In the rest of this section, we recall the derangement numbers, especially their explicit expressions, generating function and recurrence relations. Also, we give the derangement polynomials and give their explicit expressions. Then we recall the gamma random variable with parameters α, λ along with their moments and the Bell polynomials. Finally, we give the definitions of the Stirling numbers of the first and second kinds.
From (5), we have

D_n(x) = ∑_{l=0}^{n} \binom{n}{l} D_l x^{n−l}, (n ≥ 0).
For X ∼ Γ(α, λ), the n-th moment of X is given by

E[X^n] = Γ(α + n)/(λ^n Γ(α)), (n ≥ 0).

It is well known that the Bell polynomials are defined by

e^{x(e^t − 1)} = ∑_{n=0}^{∞} Bel_n(x) t^n/n!, (see [9]).
When x = 1, Bel_n = Bel_n(1), (n ≥ 0), are called the Bell numbers. The Stirling numbers of the first kind are defined by

(13) (x)_n = ∑_{l=0}^{n} S_1(n, l) x^l, (n ≥ 0).

As an inversion formula of (13), the Stirling numbers of the second kind are defined by

(14) x^n = ∑_{l=0}^{n} S_2(n, l) (x)_l, (n ≥ 0), (see [5,7,18]).
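As an illustration (our addition), both kinds of Stirling numbers can be generated from their standard triangular recurrences, consistent with (13) and (14); the signed convention for S_1 is our assumption:

```python
# Sketch: Stirling numbers via standard recurrences. (13) defines S_1 by
# (x)_n = sum_l S_1(n,l) x^l (signed convention assumed here); (14)
# defines S_2 by x^n = sum_l S_2(n,l) (x)_l.

def stirling_first(N):
    """s[n][l] with (x)_n = sum_l s[n][l] * x**l (signed)."""
    s = [[0] * (N + 1) for _ in range(N + 1)]
    s[0][0] = 1
    for n in range(N):
        for l in range(N + 1):
            s[n + 1][l] = (s[n][l - 1] if l > 0 else 0) - n * s[n][l]
    return s

def stirling_second(N):
    """S[n][l] with x**n = sum_l S[n][l] * (x)_l."""
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for n in range(N):
        for l in range(N + 1):
            S[n + 1][l] = (S[n][l - 1] if l > 0 else 0) + l * S[n][l]
    return S

print(stirling_first(4)[4])   # [0, -6, 11, -6, 1]: (x)_4 = x^4 - 6x^3 + 11x^2 - 6x
print(stirling_second(4)[4])  # [0, 1, 7, 6, 1]
```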
DERANGEMENT POLYNOMIALS AND NUMBERS
From (5), we have On the other hand, Therefore, by (15) and (16), we obtain the following lemma.
Replacing t by 1 − e^t in (5), we get From (17), we have It is easy to show that Replacing t by log(1 − t) in (19), we get From (5) and (20), we have Therefore, by (18) and (21), we obtain the following theorem.
Theorem 2. For n ≥ 0, we have Replacing t by −e^t in (5), we get On the other hand, we have where E_n are the ordinary Euler numbers. Therefore, by (22) and (23), we obtain the following theorem. Now, we observe that where r is a positive integer. On the other hand, Therefore, by (24) and (25), we obtain the following Proposition.
From (5), we note that By (9), (27) and (28), we get From (29) and (30), we can derive the following equations: We define cosine-derangement polynomials and sine-derangement polynomials respectively by Therefore, we obtain the following theorem.
and D_n^{(s)}(x, y). From (33), we note that Therefore, by comparing the coefficients on both sides of (37), we obtain the following theorem.
By (33), we get (38). Thus, we have On the other hand, we also have Therefore, by (39) and (40), we obtain the following theorem. By (33), we get On the other hand, Therefore, by (41) and (42), we obtain the following theorem.
It is not difficult to show that where r is a positive integer. By comparing the coefficients on both sides of (39), we get

D_n^{(c)}(x + r, y) = ∑_{l=0}^{n} \binom{n}{l} D_l^{(c)}(x, y) r^{n−l}.
Now, we observe that From (45), we note that Therefore, we obtain the following theorem.
D_n^{(c)}(x, y) and D_n^{(s)}(x, y), as polynomials in x for each fixed y, and D_n(x) are Appell sequences.
From (34), we note that
Therefore, by (46), we obtain the following theorem.
Corollary 14. For n ≥ 1, we have By (46), we see that On the other hand, we also have Therefore, by (47) and (48), we obtain the following theorem.
It is easy to show that ∂/∂x D_n^{(c)}(x, y) = n D_{n−1}^{(c)}(x, y). We observe that Comparing the coefficients on both sides of (49), we have the following theorem.
Theorem 16. For n ≥ 1, we have For r ∈ ℕ, we have

D_n^{(s)}(x + r, y) = ∑_{l=0}^{n} \binom{n}{l} D_l^{(s)}(x, y) r^{n−l}, (n ≥ 0).
FURTHER REMARKS
Let X be a gamma random variable with parameters 1, 1, which is denoted by X ∼ Γ(1, 1). Then we observe that where f(x) is the density function of X, and p ∈ ℝ.
On the other hand, by Taylor expansion, we get Therefore, by (51) and (52), we obtain the following theorem.
It is easy to show that where X ∼ Γ(1, 1).
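The identities themselves are lost in this extraction; however, the stated connection, together with the standard generating function e^{(x−1)t}/(1−t) for D_n(x) (our assumption), implies D_n(p) = E[(X + p − 1)^n] for X ∼ Γ(1, 1). The sketch below (our reconstruction, not the paper's own code) checks this numerically:

```python
import random
from math import comb

def D_poly(n, x):
    """D_n(x) from D_n(x) = sum_l C(n,l) D_l x^(n-l), with D_l computed by
    the derangement recurrence; assumes the standard EGF e^{(x-1)t}/(1-t)."""
    D = [1, 0]
    for k in range(2, max(n, 1) + 1):
        D.append((k - 1) * (D[k - 1] + D[k - 2]))
    return sum(comb(n, l) * D[l] * x ** (n - l) for l in range(n + 1))

random.seed(0)
p, n = 2.5, 4
samples = (random.expovariate(1.0) for _ in range(200_000))   # X ~ Gamma(1,1)
mc = sum((X + p - 1) ** n for X in samples) / 200_000
print(D_poly(n, p), mc)   # both should be close to 105.5625
```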
CONCLUSION
The introduction of derangement numbers D_n goes back to as early as 1708, when Pierre Rémond de Montmort considered some counting problems on derangements. In this paper, we dealt with derangement polynomials D_n(x), which are natural extensions of the derangement numbers. We showed a recurrence relation for derangement polynomials. We derived identities involving derangement polynomials, Bell polynomials and Stirling numbers of both kinds. In addition, we also obtained an identity relating Bell polynomials, derangement polynomials and Euler numbers. Next, we introduced the cosine-derangement polynomials D_n^{(c)}(x, y) and sine-derangement polynomials D_n^{(s)}(x, y) by means of derangement polynomials. Then we derived, among other things, their explicit expressions and recurrence relations. Lastly, as an application we showed that, if X is the gamma random variable with parameters 1, 1, then D_n(p), D_n^{(c)}(p, q) and D_n^{(s)}(p, q) are given by the 'moments' of some variants of X. We have witnessed that the study of some special numbers and polynomials has been done intensively by using several different means, which include generating functions, combinatorial methods, umbral calculus, p-adic analysis, probability theory, special functions and differential equations. Moreover, the same has been done for various degenerate versions of quite a few special numbers and polynomials in recent years, with interest not only in their combinatorial and arithmetical properties but also in their applications to symmetric identities, differential equations and probability theories. It would have been nicer if we were able to find abundant applications in other disciplines.
It is one of our future projects to continue to investigate many ordinary and degenerate special numbers and polynomials by various means and find their applications in physics, science and engineering as well as in mathematics.
Author Contributions: T.K. and D.S.K. conceived of the framework and structured the whole paper; D.S.K. and T.K. wrote the paper; L.C.J. checked the errors of the paper; H.L. typed the paper; D.S.K. and T.K. completed the revision of the article. All authors have read and agreed to the published version of the manuscript. | 2020-11-04T02:00:58.607Z | 2020-11-03T00:00:00.000 | {
"year": 2020,
"sha1": "0d8835d700cbbc60115065fc000cf933f7d39db1",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jfs/2020/6624006.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "0d8835d700cbbc60115065fc000cf933f7d39db1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
19566085 | pes2o/s2orc | v3-fos-license | Dynamics of a heavy particle in a Luttinger liquid
We study the dynamics of a heavy particle of mass $M$ moving in a one-dimensional repulsively interacting Fermi gas. The Fermi gas is described using the Luttinger model and bosonization. By transforming to a frame co-moving with the heavy particle, we map the model onto a generalized ``quantum impurity problem''. A renormalization group calculation reveals a crossover from strong to weak coupling upon scaling down in temperature. Above the crossover temperature scale $T^*=(m/M) E_F$, the particle's mobility, $\mu$, is found to be (roughly) temperature independent and proportional to the dimensionless conductance, $g$, characterizing the 1d Luttinger liquid. Here $m$ ($\ll M$) is the fermion mass, and $E_F$ is the Fermi energy. Below $T^*$, in the weak coupling regime, the mobility grows and diverges as $\mu(T) \sim T^{-4}$ in the $T \to 0$ limit.
The quantum dynamics of a heavy particle moving through a fluid has been of longstanding interest. Most of the effort has focussed on three-dimensional quantum fluids, either Fermi liquids such as ³He or superfluids such as ⁴He [1]. Recently, there has been a resurgence of interest in non-conventional quantum liquids. A paradigm is the Luttinger model [2], which describes a one-dimensional interacting Fermi gas.
In this paper we study in detail the dynamics of a single heavy particle moving through a 1d Luttinger liquid. Of interest is the temperature dependence of the heavy particle's mobility. Our motivation is two-fold. Firstly, since the excitations in a 1d Luttinger liquid are profoundly different from those in a Fermi liquid, one might anticipate that the dynamics of an immersed heavy particle would likewise be qualitatively modified. Secondly, powerful non-perturbative methods in 1d, such as bosonization, might be fruitfully employed to analyze the dynamics of a strongly coupled heavy particle.
Our main results are as follows. After introducing the model in Section II, we transform to a frame of reference co-moving with the heavy particle in Section III. In this frame, the heavy particle sits at the origin. In the limit that M → ∞ the model then becomes equivalent to a Luttinger liquid scattering off a static localized impurity. This problem has been analyzed in great detail recently, and is now well understood [3]. In the zero temperature limit, the impurity effectively "breaks" the Luttinger liquid into two semi-infinite de-coupled pieces. Fermions incident on the impurity are completely reflected. To analyze the case with finite mass M, a natural starting point is thus a limit in which the amplitude t for incident fermions to tunnel through the heavy particle is set to zero. Provided t = 0, the mobility can be computed for arbitrary M, and one finds a temperature independent value, μ = πg/(2hk_F²). At low temperatures, though, this limit is unstable to non-zero tunnelling, t. A renormalization group calculation reveals a crossover to a regime where the fermions are transmitted readily through the heavy particle. This leads to a decoupling between the dynamics of the heavy particle and the Fermi sea. At zero temperature, the only effect of the Fermi sea is to renormalize the mass of the heavy particle; the mobility is infinite.
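As a quick numerical illustration (our addition; all parameter values below are invented), the strong-coupling mobility μ = πg/(2hk_F²) and the crossover scale T* = (m/M)E_F can be evaluated directly:

```python
import math

# Illustrative numbers only: a particle 100x heavier than the fermions.
hbar = 1.0545718e-34      # J*s
kB   = 1.380649e-23       # J/K
m    = 9.109e-31          # fermion mass (electron mass, for illustration), kg
M    = 100 * m            # heavy-particle mass
kF   = 1.0e8              # Fermi wavevector, 1/m
g    = 0.7                # Luttinger parameter (repulsive: g < 1)

EF = (hbar * kF) ** 2 / (2 * m)            # Fermi energy, J
T_star = (m / M) * EF / kB                 # crossover temperature, K
mu = math.pi * g / (2 * (2 * math.pi * hbar) * kF**2)  # pi*g/(2 h kF^2), s/kg

print(f"E_F = {EF/kB:.2f} K,  T* = {T_star:.4f} K,  mu = {mu:.3e} s/kg")
```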
In Section IV we use a weak coupling perturbative approach to calculate the temperature dependence of the mobility in the T → 0 limit. The dominant scattering process involves four fermions, absorbing and then re-emitting a pair, one right and one left moving. This process changes the momentum and energy of the heavy particle, and is shown to lead to a low temperature mobility that diverges as μ(T) ∼ T^{−4}.
II. THE MODEL
The Hamiltonian which describes the motion of a heavy particle coupled to a 1d interacting Fermi gas can be written as H = H_0 + H_LL + H_int. Here H_0 describes the free particle of mass M: with momentum P and position X. H_LL is the Hamiltonian for N interacting fermions, which in first quantized notation is, where x_i and p_i denote the coordinate and momentum of the i-th particle. The interaction between the heavy particle and the fermions is assumed to take the form, For simplicity we assume that U(x) is repulsive and short-ranged. It is useful also to have a 2nd quantized formulation. We denote as ψ(x) the fermionic field operator describing the interacting Fermi gas. In the absence of interactions, the ground state consists of a filled Fermi sea, with Fermi momentum k_F. As usual, we decompose the field into a sum of right and left movers: where ψ_{R/L} are supposed to be slowly varying. It will also be useful to bosonize the interacting electron gas, by expressing where φ and θ are canonically conjugate fields satisfying The appropriate Luttinger liquid Hamiltonian takes the form: Here g is the dimensionless conductance, which is less than one for a repulsively interacting Fermi gas, equals one for free fermions, and is greater than one with attractive interactions. The Fermi velocity v is also renormalized by interactions, and will differ from the free fermion value, k_F/m. The right and left moving electron densities, N_R = ψ†_R ψ_R and N_L = ψ†_L ψ_L, have simple bosonic representations, and
III. DESCRIPTION IN FRAME CO-MOVING WITH PARTICLE
To transform the equations of motion into a frame co-moving with the heavy particle, one can use a unitary transformation. This transformation has been previously used [4] in a similar context, but in the special case where M = m. Under this transformation, the coordinates and momenta transform accordingly, and the Hamiltonian takes a new form. Notice that when M → ∞ the full Hamiltonian reduces to H_imp, which describes a Fermi gas interacting with a static potential, U(x), centered at the origin. This quantum impurity problem has recently been analyzed in great detail [3]. However, when M is finite the heavy particle can move, and exchange energy with the Fermi sea. Notice that the heavy particle is coupled to the fermions via a minimal coupling, where the "gauge" field is the total momentum in the Fermi sea [5][6][7][8].
The transformed Hamiltonian can be expressed directly in 2nd quantization using the fermion field operators (2.4). These fields can then be bosonized. It is convenient to use a path integral representation, since the Lagrangian is linear in the "gauge" field. The Euclidean action for the free Luttinger liquid which corresponds to (2.6) can be expressed accordingly. The total momentum of the fermions can also be easily bosonized, which enables the total action for the heavy particle plus Luttinger liquid to be written down. To analyze the dynamics in the transformed frame, it is convenient to first consider a strong coupling limit (U → ∞). In this limit, the fermions cannot pass through the heavy particle, and the Luttinger liquid is divided into two de-coupled regions on either side of the particle. Perturbations away from this limit can be included by allowing for tunnelling of fermions from one side to the other, with a small amplitude t. This process can be expressed in terms of the bosonic fields as in [3]. As we shall see, in the limit t = 0 the heavy particle's dynamics can be obtained exactly. A perturbative analysis for small t is then possible.
To this end, we follow ref. [3] and integrate out the bosonic field φ(x), except at x = 0, that is, at the position of the heavy particle. In terms of the phase difference across the heavy particle, the action becomes S = S_0 + S_T. In (3.10) the summation is over Matsubara frequencies ω_n = 2πn/β, with β the inverse temperature.
In the limit of zero tunnelling (t = 0), the action is quadratic. One can then integrate over the field Φ(τ), to obtain a simple action for the dynamics of the heavy particle. This action is of the Caldeira-Leggett form, and describes a particle undergoing Brownian motion in a viscous environment with friction coefficient η = 2ℏk_F^2/(πg) [9,10]. The particle's mobility can be obtained from the Kubo formula [11]. For the quadratic action (3.12) this gives a dc mobility which is independent of temperature and proportional to the Luttinger liquid conductance g. In this limit (t = 0), the particle is heavily damped by the fermions, even at zero temperature. The damping is heavy because the fermions cannot pass through the heavy particle, so motion is only possible by "pushing" the fermions out of the way. Consider now perturbing about this limit, for small tunnelling t. We first integrate over X(τ), to obtain an action which depends only on the bosonic field Φ. Notice that the phase mode has a mass term, due to the motion of the heavy particle. In the static limit (M → ∞) this mass term vanishes, and the action reduces to that for a Luttinger liquid with impurity. Consider now a renormalization group (RG) transformation which consists of integrating over modes Φ(ω), for frequencies between Λ/b and Λ, and then rescaling ω → ω′ = ω/b. Here Λ ∼ E_F is a high frequency cutoff, and b = e^{dl} is a rescaling factor. This transformation leaves the coefficient g invariant, whereas M decreases as M(l) = M/b. The RG flows for t depend on whether the mass for the phase mode is larger or smaller than the cutoff Λ: for M(l) ≫ m the flow is given by (3.18), and for M(l) ≪ m by (3.19). At finite temperatures, these RG flows will be cut off at a scale b ∼ Λ/T. Since the cutoff energy scale is essentially the Fermi energy, Λ ∼ k_F^2/m, the crossover between the two flows occurs when M(l) ∼ m. If the (bare) particle mass is very large, M ≫ m, the scaling of t will be determined by (3.18) over a large range of temperatures, between E_F and a crossover scale T* ∼ (m/M)E_F. In this temperature range, for a repulsively interacting Luttinger liquid (g < 1), the tunnelling rate will scale towards zero. The mobility of the heavy particle should then be roughly independent of temperature, given by (3.15). However, at temperatures below T*, (3.19) indicates that the tunnelling rate t starts increasing. As T → 0 the tunnelling rate becomes large, and the perturbative expansion breaks down.
Evidently, in the low temperature limit the fermions can tunnel easily through the heavy particle. One anticipates that as T → 0 the heavy particle becomes transparent, and its dynamics decouples from the fermions.
At very low temperatures when t grows large, fluctuations in the phase Φ are greatly suppressed by the S_T term in (3.16). In this limit it is a good approximation to expand the cosine in (3.11) for small argument. This explicitly breaks the 2π phase invariance of the action. This symmetry breaking presumably occurs spontaneously at T = 0, but would be restored at non-zero T. This approximation is thus only expected to be strictly valid at T = 0. Since each 2π phase-slip process represents an event in which a fermion backscatters off the heavy particle, these events are completely suppressed at T = 0. After expanding the cosine term the full action is quadratic. The mobility can then be calculated using (3.14). To this end we introduce a source term, S_s = i∫dτ Ẋ(τ)J(τ), which enables us to express P(ω_n) as a correlation function over the phase field. This can be evaluated using (3.21). When t ≪ ω_n ≪ (m/M)E_F, the result reduces to our previous expression (3.15). However, in the low frequency limit, ω_n ≪ t, (m/M)E_F, it gives a diverging ac mobility. This describes ballistic motion of the heavy particle with an effective mass, M_eff. This result is valid only at T = 0. At non-zero but small temperatures, T ≪ (m/M)E_F, one expects a finite mobility. As will be confirmed in Section IV, the dc mobility indeed diverges as T → 0. The above results suggest a rich temperature dependence of the mobility for g < 1. Between the Fermi temperature and a crossover temperature, T* ∼ (m/M)E_F, the mobility is roughly temperature independent and given by (3.15). Below T*, the mobility starts increasing with cooling, and diverges in the zero temperature limit. Physically, below T* the heavy particle becomes "transparent" to the fermions. The dynamics of the heavy particle decouples from the Fermi sea. In the next Section, we employ a weak coupling perturbative approach to calculate the functional form of µ(T) as T → 0.
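To make the crossover picture concrete, the short Python sketch below evaluates the two limiting behaviours quoted above. All parameter values, and the way the T^{-4} branch is matched to the plateau at T*, are illustrative assumptions for visualization only, not results of the paper:

import numpy as np

# Minimal numeric sketch of the mobility crossover (units with hbar = kB = 1).
m = 1.0      # fermion mass (illustrative)
M = 100.0    # heavy-particle mass, M >> m (illustrative)
k_F = 1.0    # Fermi momentum
g = 0.5      # Luttinger parameter (repulsive interactions: g < 1)

E_F = k_F**2 / (2.0 * m)      # Fermi energy
T_star = (m / M) * E_F        # crossover scale T* ~ (m/M) E_F

mu_plateau = np.pi * g / (2.0 * k_F**2)   # temperature-independent mobility, cf. (3.15)

def mobility(T):
    # Plateau above T*; ~T^-4 divergence below, matched to the plateau at T*.
    if T > T_star:
        return mu_plateau
    return mu_plateau * (T_star / T) ** 4

for T in (E_F, 10.0 * T_star, 2.0 * T_star, T_star, 0.1 * T_star):
    print(f"T = {T:9.3e}  ->  mu ~ {mobility(T):.3e}")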
IV. WEAK COUPLING PERTURBATION THEORY
Since the heavy particle tends to decouple from the Fermi sea as T → 0, a weak coupling approach should be appropriate at low temperatures. In this Section we use perturbation theory in the coupling between particle and Fermi sea, to extract the temperature dependence of the mobility as T → 0.
It is convenient to employ a 2nd quantized description for the heavy particle, denoting by c†(x) and c(x) the creation and destruction operators. Since we are only interested in a single particle, ∫dx c†(x)c(x) = 1. The free Hamiltonian (2.1) can be rewritten in this language, and the interaction couples c†c to the fermion density N(x) = ψ†(x)ψ(x). Here we have replaced the short-ranged interaction by a delta function: U(x) → U_0 δ(x). It is important to distinguish between small momentum transfer processes, and processes which scatter the fermions by 2k_F. Using the decomposition (2.4), one can express N(x) as a sum of two pieces: one involves small momentum transfer, and the other contains the large momentum contributions. The two corresponding terms generated from the interaction Hamiltonian will be denoted H_int,0 and H_int,2kF, respectively. Consider first computing the scattering rate for the heavy particle using Fermi's Golden rule, where the perturbing Hamiltonian is H_int,0. Since the fermion density at small momentum transfer is simply N_0 = (1/√π)∂_x θ, the interaction Hamiltonian H_int,0 takes the form of an "electron-phonon" interaction. It is thus useful to introduce "phonon" creation and destruction operators, which create and destroy the harmonic Luttinger liquid excitations. To this end, we expand the boson field and introduce boson operators b_k, where Π_k denote Fourier modes of the conjugate momentum, Π(x) = ∂_x φ(x). The operators b_k satisfy canonical Bose commutation relations. The Luttinger liquid Hamiltonian can then be expressed in terms of these operators, with dispersion ω_k = v|k|. Finally, the small momentum interaction takes the form of an electron-phonon coupling. The rate to scatter the heavy particle from an initial state with momentum k to a final state k′ = k + q, with absorption or emission of a single phonon, can now be readily obtained using Fermi's Golden rule. After summing over all possible phonon modes, assuming they are in equilibrium at temperature T, the rate is found as in (4.9). Here n_q = (exp(βω_q) − 1)^{-1} is the Bose distribution function. These processes are severely restricted by energy and momentum conservation. For example, for zero initial momentum, k = 0, the delta functions vanish unless ε_q = ω_q, i.e. q = 2Mv_F. But at this momentum, the heavy particle has energy ε_{q=2Mv_F} = (4M/m)E_F. These processes will thus freeze out exponentially fast for temperatures below this energy scale. If these were the only processes present, the mobility would diverge exponentially in the T → 0 limit. But other processes will dominate at low temperatures, as we now discuss.
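The kinematic constraint quoted here follows in one line. Taking the heavy particle initially at rest and approximating v ≈ v_F = k_F/m, this is a short worked check of the numbers in the text:

\[
\epsilon_q = \frac{q^2}{2M} = \omega_q = v q
\;\Longrightarrow\;
q = 2Mv \approx 2M v_F,
\qquad
\epsilon_{q = 2M v_F} = \frac{(2M v_F)^2}{2M} = 2M v_F^2 = \frac{2M k_F^2}{m^2} = \frac{4M}{m}\, E_F .
\]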
Consider next the 2k_F scattering contribution. Unfortunately, to leading order this interaction does not contribute to the low temperature scattering rate. To see this, consider the scattering process which transfers 2k_F momentum but zero energy to the heavy particle. Energy and momentum conservation require ε_k = ε_{k′} and k − k′ = 2k_F, where k and k′ are the initial and final particle momenta. Together, these imply k = −k′ = k_F, which corresponds to a large particle energy, ε_{k_F} = (m/M)E_F. At temperatures below this energy scale, this process will freeze out. However, higher order processes which are generated by H_int,2kF will contribute to the low temperature scattering. Specifically, consider the interaction term which is generated by H_int,2kF at second order. The coupling constant is λ̃ = U_0^2/ε_{2k_F}, where the denominator, ε_{2k_F}, is the energy of the heavy particle in the "intermediate state". This interaction term can be readily bosonized using (2.7)-(2.8), giving (4.12) with λ = λ̃/4π. The scattering rate from the process H_eff can be computed using Fermi's Golden rule, giving (4.13) with a_g = λ^2(1 + g^4)/(8Lg^2 v^3) and Δε = ε_{k+q} − ε_k. As required, Γ satisfies a detailed balance condition, f_0(k)Γ_{k→p} = f_0(p)Γ_{p→k}, where f_0(k) = (const) e^{−βε_k} is the equilibrium momentum distribution function for the heavy particle at temperature T. Notice that this rate has appreciable weight at small energy and momentum transfer, vanishing as a power rather than exponentially. This leads to a power law dependence of the mobility µ(T) on temperature, as we now demonstrate.
The mobility can be obtained by solving a Boltzmann equation (4.14) for the momentum distribution function, f(p, t), in the presence of an applied electric field E. As usual, the "collision integral" is expressed in terms of the scattering rates (4.13), into and out of the state p. We seek a solution of the form f(k) = f_0(k)G(k), and determine G(k). The collision term can then be re-expressed accordingly. Due to the Bose factors in (4.13), the scattering rate Γ is a sharply peaked function of the momentum transfer q, with width q ∼ T/v. At low temperatures it is then legitimate to expand both f_0(p+q) and G(p+q) for small q. Moreover, in the low temperature limit, Δε in (4.13) can be set to zero, and the scattering rate simplifies. This simplification requires a low temperature condition, obtained using the facts that v_F q/T ∼ 1 and k ∼ √(MT). In this low temperature regime, the collision integral can be written in a simple form, where we have defined A = Σ_q q^2 Γ_q = (const) T^5 (4.20), and used the fact that Σ_q qΓ_q = 0. With this form for the collision integral, the steady state Boltzmann equation reduces to a differential equation for G(p). We now specialize to the linear response limit, for small electric fields. To linear order in E, the terms on the right side can be dropped, and the equation readily integrated to give G(p) = (const) e^{Ep/A}. The momentum distribution function, f = f_0 G, then follows, and the linear response mobility follows readily. Since A ∼ T^5, we deduce a mobility which diverges as µ ∼ T^{-4}. This result agrees with a strong coupling analysis [5] based on the Brownian motion of solitons and with calculations of the diffusion coefficient in real space [12].
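The last step can be spelled out explicitly by completing the square in the exponent, a short check using only the expressions above:

\[
f(p) = f_0(p)\,G(p) \propto \exp\!\left(-\frac{p^2}{2MT} + \frac{Ep}{A}\right)
\propto \exp\!\left[-\frac{1}{2MT}\left(p - \frac{MTE}{A}\right)^{2}\right],
\]
i.e. a thermal distribution drifting with \(\langle p \rangle = MTE/A\). The drift velocity is \(v_d = \langle p \rangle / M = TE/A\), so
\[
\mu = \frac{v_d}{E} = \frac{T}{A} \propto \frac{T}{T^{5}} = T^{-4}.
\]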
V. CONCLUSION
In this paper we have analyzed the dynamics of a heavy particle moving in a 1d repulsively interacting Luttinger liquid. The behavior of the particle's mobility depends on whether the temperature is larger or smaller than a crossover scale, T* ∼ (m/M)E_F. Above T* the mobility is roughly independent of temperature and proportional to the conductance g of the Luttinger liquid. Below T* the mobility grows upon cooling, and diverges in the zero temperature limit as µ(T) ∼ T^{-4}. At zero temperature, the heavy particle moves ballistically, with a renormalized mass.
We thank A.O. Caldeira, A.O. Gogolin, A.W.W. Ludwig and N.V. Prokof'ev for many illuminating discussions. We are also grateful to the National Science Foundation for support under grants PHY94-07194, DMR-9400142 and DMR-9528578. | 2018-04-03T02:23:39.527Z | 1995-10-17T00:00:00.000 | {
"year": 1995,
"sha1": "2be50355f815fef82f4fa7d0e3af7284f3394b1b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9510094",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2be50355f815fef82f4fa7d0e3af7284f3394b1b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
215410329 | pes2o/s2orc | v3-fos-license | Evaluation of a low-dose protocol for cone beam computed tomography of the temporomandibular joint
Objective: Evaluation of cone beam CT (CBCT) examination with a low-dose scanning protocol for assessment of the temporomandibular joint (TMJ). Methods: 34 adult patients referred for CBCT imaging of the TMJ underwent two examinations with two scanning protocols, a manufacturer-recommended protocol (default) and a low-dose protocol where the tube current was reduced to 20% of the default protocol. Three image stacks were reconstructed: default protocol, low-dose protocol, and processed (using a noise reduction algorithm) low-dose protocol. Four radiologists evaluated the images. The Sign test was used to evaluate visibility of TMJ anatomic structures and image quality. Receiver operating characteristic analyses were performed to assess the diagnostic accuracy. κ values were used to evaluate intraobserver agreement. Results: With the low-dose and processed protocols, visibility of the TMJ anatomical structures and overall image quality were comparable to the default protocol. No significant differences in radiographic findings were found for the two low-dose protocols compared to the default protocol. The areas under the curves (Az), averaged for the low-dose and processed protocols according to all observers, were 0.931 and 0.941, respectively. Intraobserver agreement was good to very good. Conclusion: For the CBCT unit used in this study, the low-dose CBCT protocol for TMJ examination was diagnostically comparable to the manufacturer-recommended protocol, but delivered a five times lower radiation dose. There is an urgent need to evaluate protocols for CBCT examinations of the TMJ in order to optimize them for a radiation dose as low as diagnostically acceptable (the as low as diagnostically acceptable principle recommended by the NCRP).
Introduction
In recent decades, cone beam computed tomography (CBCT) has become an essential examination tool in oral and maxillofacial radiology. CBCT produces multiplanar images of high spatial resolution, which for many diagnostic tasks give information that is unattainable with two-dimensional projection radiography. In the diagnosis and treatment planning of patients with temporomandibular disorders (TMD), CBCT plays an effective role in assisting clinicians in assessing bony changes with high diagnostic accuracy. 1 In comparison to medical CT, no significant difference in diagnostic accuracy was found between the two techniques; moreover, it is a cost- and dose-effective alternative. 2,3 Hence, it is considered an appropriate imaging technique for evaluating bony changes of the temporomandibular joint (TMJ). 4 However, the growing use of CBCT, partly because of its availability and ease of use, demands a critical evaluation of the relatively high patient radiation dose. Many different scanning protocols are available, and sometimes the urge from clinicians to have more or less noise-free images can increase the radiation dose to patients without adding more information to the specific aim of the investigation. Thus, with the many CBCT scanners on the market today, clinicians together with medical physicists should ensure that the scanner they are using is set for the lowest possible radiation dose, yet capable of providing acceptable image quality for the particular diagnostic task. 5 In current practice, most clinicians use the manufacturer-recommended exposure settings for the scanner in all clinical applications. This practice usually produces acceptable image quality, but might raise questions concerning the associated radiation dose. Attempts to optimize the patient radiation dose without sacrificing diagnostic outcome include approaches such as optimizing tube current (mA), tube voltage (kV), and voxel size, selecting a field of view (FOV) suitable to the diagnostic task, and using 180° or 360° of rotation. [6][7][8][9][10][11] One practical approach of dose optimization is to adjust the scanner's tube current to a level at which the dentist may obtain enough diagnostic information for a given task at the expense of an acceptable loss of image quality. 11,12 The structures in the TMJs have the advantage of appearing in relatively high contrast on CBCT scans, and are thus less susceptible to a reduction in exposure than low-contrast structures. Establishing whether a degenerative disease is present or not may be easier, in comparison to assessing a thin periodontal ligament for endodontic questions. Radiographic images of the TMJ do not have to be very sharp with a high signal to noise ratio. Moreover, advanced denoising algorithms may further improve the image quality of scans made using a low-dose protocol.
There is clearly a need for a broader understanding of reduced-dose protocols related to subjective image quality in specific dental CBCT applications. Such protocols have already been assessed for some dental situations but not in the context of TMJ imaging in vivo. The aim of this study was thus to evaluate a low-dose protocol with and without image processing in CBCT examination of the TMJ.
Our hypothesis was that the low-dose protocol differed significantly compared to the default, manufacturer-recommended protocol in regard to the radiographic diagnostics of the TMJ.
Methods and materials
The regional ethics review board in Lund, Sweden, approved this study (Dnr 2017/434). The study enrolled 34 adult patients (5 males and 29 females with a mean age of 57 years, range 20 to 74 years) who had been referred for CBCT examination of the TMJ from the Department of Orofacial Pain and Jaw Function at Malmö University, Malmö, Sweden. For a power of 0.8 with a significance level of 0.05, around 60 TMJs were required. The indication to perform a CBCT investigation of the TMJ was related to the clinicians' questions after thorough history-taking and clinical investigation. The main reason for the CBCT investigation was that the performed treatment had failed. All participants underwent two consecutive CBCT examinations with different exposure settings. All study participants signed informed-consent forms. This form included information about the additional radiation dose, 20% extra, which is equivalent to less than one week of background exposure in Sweden.
Cone beam CT examination
Two CBCT examinations (Veraviewepocs 3D F40, J. Morita Corp., Kyoto, Japan) were performed on both TMJs of each patient, which resulted in a total number of 68 CBCT volumes. The applied exposure protocols were: the default, manufacturer-recommended protocol (exposure settings: 90 kV, 5 mA, and 9.4 s) and a low-dose protocol (90 kV, 1 mA and 9.4 s). All other parameters were kept constant throughout the examinations, i.e. FOV of 40 × 40 mm, 180° rotation, and head position with the Frankfurt plane parallel and the midsagittal plane perpendicular in relation to the floor. Patients were instructed to sit still during exposure. Multiplanar data were reconstructed with a pixel size of 0.125 mm, 1 mm slice thickness, and 1 mm slice interval.
Image evaluation
68 reconstructed sagittal volumes were saved in DICOM format. These volumes were grouped into three stacks for display and processing using ImageJ (National Institutes of Health, Bethesda, MD): a default protocol stack, a low-dose protocol stack, and a processed low-dose stack with noise reduction (Figure 1). The processed stacks were created using an advanced noise reduction algorithm. 13 We anonymized, randomly coded, and stored 204 data sets using a Microsoft Excel worksheet (Office Professional Plus 2016, Microsoft Corporation, Redmond, WA). Four (three senior and one junior) oral and maxillofacial radiologists evaluated all images. Before the study began, observer calibration was done on CBCT sagittal volumes of the first 25 stacks, with a consensus on the instructions for assessing the images. The observers were blinded to any clinical or radiographic information and independently assessed the visibility of five anatomical structures: the outlines of the condyle, the articular eminence, and the articular fossa, and the trabecular patterns of the condyle and the temporal bone. All observers used the same type of viewing monitor (Barco MDCC-6430, 8500 Kortrijk, Belgium) under the same viewing conditions with no time limitations. The observers were allowed to adjust brightness and contrast when necessary. A 3-point scale was applied to assess the visibility of each anatomical structure: 1 = definitely visible, 2 = questionably visible, and 3 = not visible. Furthermore, the observers were asked to give an overall impression of subjective image quality, ranked as: 1 = diagnostically acceptable, 2 = diagnostically questionable, and 3 = not diagnostically acceptable. To assess the radiographic findings, the observers were asked to record their level of confidence about the presence of degenerative joint disease (DJD), stated as 1 = definitely not, 2 = probably not, 3 = questionable, 4 = probably and 5 = definitely, according to the Diagnostic Criteria for TMD (DC/TMD). 14 Intraobserver agreement was determined by asking each observer to re-evaluate 40 TMJs at an interval of at least 14 days and under the same conditions.
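The specific noise reduction algorithm of reference 13 is not reproduced here. Purely as an illustration of the processing step, the Python sketch below applies a generic non-local means denoiser (scikit-image) to a synthetic noisy slice; the phantom, the noise level, and the filter settings are all hypothetical choices, not the study's actual pipeline:

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Hypothetical stand-in for a low-dose CBCT slice.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128), dtype=float)
clean[40:90, 40:90] = 1.0                           # a bright "bony" structure
noisy = clean + rng.normal(0.0, 0.25, clean.shape)  # quantum-noise-like corruption

sigma = float(np.mean(estimate_sigma(noisy)))       # estimate the noise level
denoised = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                            fast_mode=True, patch_size=5, patch_distance=6)

print(f"estimated sigma: {sigma:.3f}")
print(f"residual std before/after: {np.std(noisy - clean):.3f} / {np.std(denoised - clean):.3f}")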
Statistical analysis
The five anatomical structures were pooled together in all settings for each observer and compared pairwise. For instance, the amount of "definitely visible" in the default protocol in relation to the number of "definitely visible" in the low-dose protocol. The overall image quality was evaluated in the same way. The results were analyzed using the Sign test 15 with the level of significance at 0.05.
The default protocol of the CBCT unit was considered to be the reference standard to establish whether DJD was present or not, by two observers (XS, KHH). A consensus was reached in cases of disagreement. Receiver operating characteristic (ROC) curves were used to analyze the radiographic findings concerning the presence of DJD for all the observers. 16 The areas under the curves (Az) were calculated. Intraobserver agreement was estimated using kappa (κ) statistics. 17 The values were interpreted according to the guidelines of Landis and Koch 18 as adapted by Altman. 15
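As an illustration of these two statistics, the Python sketch below computes an empirical ROC area and a Cohen's kappa from small, entirely hypothetical rating vectors. The study itself used proper ROC methodology per reference 16; the numbers here are made up for demonstration, and the empirical AUC is only an approximation of a fitted ROC Az:

import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Hypothetical data: reference standard (1 = DJD present) and one observer's
# 5-point confidence ratings (1 = definitely not ... 5 = definitely).
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
rating = np.array([1, 2, 1, 3, 4, 5, 4, 3, 5, 2])

Az = roc_auc_score(truth, rating)   # empirical area under the ROC curve
print(f"Az = {Az:.3f}")

# Intraobserver agreement: the same observer re-rating (dichotomized) cases later.
first = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
second = np.array([0, 0, 1, 1, 1, 1, 1, 0, 1, 0])
print(f"kappa = {cohen_kappa_score(first, second):.2f}")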
Results
Visibility of the five anatomical structures and overall image quality, according to all observers, showed no difference (p = 0.00) for the two low-dose protocols compared to the default protocol. Only one observer found an improvement from processing the images, for both structure visibility and image quality, between the low-dose protocol and the processed protocol (p = 0.79 (visibility), p = 1.00 (image quality)).
The evaluation of the default protocol for the 68 TMJ cases showed that half of them were sound while the rest had DJD with varying osseous changes. When establishing the reference standard, the two observers disagreed in one of the cases, which had minor erosions on the superior surface of the condyle. Table 1 presents the different Az values for each observer. The differences in the radiographic findings for the two low-dose protocols compared to the default one, as well as between them, were not significant. The average Az values for the observers were 0.931 for the low-dose protocol and 0.941 for the processed protocol. Intraobserver agreement on the radiographic findings was good to very good for all observers (κ values = 0.75, 0.80, 0.80, 0.85).
Discussion
Several studies in the literature have demonstrated that CBCT imaging with a low-dose protocol may provide diagnostically acceptable image quality for various dental indications. [6][7][8][9][10][11] The present study assessed a low-dose CBCT protocol for TMJ imaging, where tube current was lowered to just 20% of the manufacturer-recommended level in a clinical setting. In this study, the ethics review board approved that the potential benefit to future clinical practice for CBCT examination of the TMJ outweighs the increased radiation burden (equivalent to less than 1 week of background exposure in Sweden) for the patients included in the study. Therefore, we used an ethically approved study design that has been tested with different radiographic examinations. [19][20][21][22] It is well known that manufacturer-recommended exposure settings for CBCT scanners designed for dentistry vary widely, which results in varying radiation doses. Tube current (mA) and exposure time (s) control the number of x-ray photons emitted; thus, higher mAs increases the measured signal from the sensor and decreases image noise. The mA setting is linearly proportional to radiation dose. 23 Hence, when tube voltage and exposure time are constant, lowering the mA setting by 50% reduces the delivered dose by half, and consequently the quantum noise in the resultant image will be higher. Pauwels et al 12 reported minimal loss of image quality when reducing tube current compared to when reducing tube voltage. Another study, by Sur et al, 11 confirmed the potential to reduce dose through a reduction in tube current.
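This first-order scaling can be stated in a few lines of Python. It assumes the linear dose-mAs relation cited above and the standard Poisson-statistics estimate that relative quantum noise grows as 1/sqrt(mAs); both are textbook approximations rather than measurements from this study:

# Dose scales linearly with tube current at fixed kV and scan time;
# relative quantum noise scales as 1/sqrt(mAs) under Poisson statistics.
kV, t_s = 90, 9.4                 # settings shared by both protocols in this study
default_mA, lowdose_mA = 5.0, 1.0

dose_ratio = lowdose_mA / default_mA
noise_ratio = (default_mA / lowdose_mA) ** 0.5

print(f"dose: {dose_ratio:.0%} of default (a {1/dose_ratio:.0f}x reduction)")
print(f"relative quantum noise: x{noise_ratio:.2f} higher")
# -> dose: 20% of default (a 5x reduction); noise roughly x2.24 higher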
In line with the recommendations 24,25 for the use of any radiographic investigation, optimization should be done whenever possible in order to expose the patient to the lowest radiation dose according to the As Low As Diagnostically Acceptable (ALADA) principle, which is proposed as a variation of the acronym ALARA (as low as reasonably achievable) to emphasize the importance of optimization according to a given diagnostic task in medical imaging. 25 High image quality, in the sense of being noise free and having high resolution, is not necessary in all clinical situations. Structural changes in the few anatomical structures that comprise the region of the TMJ are quite visible at exposure levels well below those giving the highest image quality. Usually, we try to find osseous changes when any form of DJD is suspected; thus, lower radiation dose protocols are possible. A high resolution might be needed for assessing fine pathological changes, however. Thus, the diagnostic task determines the level of image quality needed and, in turn, the degree of exposure parameter optimization. 5 In our faculty, the demand for CBCT investigation of the TMJ has decreased over time. Only patients for whom no explanation of their pain was found and treatment for TMD had no effect, and those for whom other questions could possibly be answered, were subject to a CBCT investigation. Thus, the CBCT examinations of the TMJ were considered justified.
Our results demonstrated the ability of a low-dose protocol to image the TMJ sufficiently well without a significant difference in diagnostic accuracy. Thus, this protocol is preferable, since the radiation dose was 20% of the dose delivered using the default protocol. Kadesjö et al 26 reported a 50% potential for dose reduction compared with the manufacturer's recommendation for CBCT examination of the TMJ. Comparable results were found for reduced-dose protocols with other diagnostic tasks. 6,10,11 Reduction in dose is associated with degradation of image quality, principally due to the increased noise level. Noise influences both contrast resolution and spatial resolution and, consequently, the representation of an object. However, in terms of visual perception, the radiologist may still be able to see the object details and maintain diagnostic performance. When the diagnostic performance is influenced by a high level of noise, a noise reduction algorithm may be used to improve the image quality. Such programs filter out noise to varying degrees while preserving texture, contour, and fine details. 13 The denoising method that we used was reported to be the technique of choice when very fine details are not required. 27 We found that processing the images did not significantly improve the diagnostic accuracy (Table 1).
We used the default settings of the CBCT as a reference standard, although no "truth" can be obtained in this kind of clinical study. We calculated the Az values in order to determine the validity of the two reduced-dose protocols for detecting the presence of DJD. Our findings revealed that both the low-dose and processed protocols showed comparable diagnostic accuracy in relation to the reference standard (average Az = 0.931 and 0.941, respectively), implying that the observers' performance in detecting possible degenerative changes was comparable. A retrospective practice-based study found a similar result for pre-surgical implant assessment. 9 It should be noted that these results were obtained in a real clinical setting, including factors such as motion artifacts that affect image quality. However, a limitation of this study could be that the results cannot be generalized to all CBCT machines. The number of observers chosen could be another limitation. However, the interobserver agreement was good, and therefore further observers would, in our opinion, probably not contribute any significant difference. A previous study by Hintze et al 28 reported that the statistical power is dependent on the total number of evaluations, including the number of observers and surfaces evaluated for carious lesions, not on each separately. However, that kind of evaluation is known to show lower interobserver agreement due to the fact that carious lesions are low-contrast lesions. In comparison, TMJ structures appear with relatively high contrast.
As previous studies on low-dose protocols have also found adequate diagnostic image quality, our results could be considered together with these; it can be concluded that dose reduction achieved using a tube current below the recommendations of the manufacturer should always be investigated, in order to improve compliance with the ALADA principle. 25 More importantly, a low-dose protocol should be the self-evident choice for younger patients, as the developing tissues of young patients are more radiosensitive and thus at a higher risk from x-ray exposure than adult tissues. 29
Conclusion
The hypothesis that the low-dose protocol would affect the radiographic diagnostics of the TMJ could be rejected, which means that for the CBCT unit used in this study, the low-dose CBCT protocol for TMJ examination was diagnostically comparable to the manufacturer-recommended protocol but delivered a five times lower radiation dose. There is an urgent need to evaluate protocols for CBCT examinations of the TMJ in order to optimize them for a radiation dose as low as diagnostically acceptable (the ALADA principle recommended by the NCRP).
Table 1. Area under the receiver operating characteristic curves (Az) for assessing the presence of degenerative joint disease, according to four observers, as evaluated on cone beam computed tomography images produced by three protocols: the low-dose (1 mA tube current) protocol, the processed (low-dose with noise reduction) protocol, and the reference standard ("default" manufacturer-recommended, 5 mA tube current) protocol.
Funding
This study was financially supported by King Abdulaziz University.
Human and animal rights statement: All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and later versions. Informed consent was obtained from all patients for being included in the study. | 2020-04-08T19:07:39.856Z | 2020-04-06T00:00:00.000 | {
"year": 2020,
"sha1": "0c394137951162826f8485fc9d4b7b32352bdd4e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1259/dmfr.20190495",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "26e7ee1c6794125ace989133c11ef36c24df6a8d",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259059687 | pes2o/s2orc | v3-fos-license | Roadmap on nonlinear optics–focus on Chinese research
In nonlinear optical systems, the optical superposition principle breaks down. The system's response (including electric polarization, current density, etc) is not proportional to the stimulus it receives. Over the past half century, nonlinear optics has grown from an individual frequency doubling experiment into a broad academic field. Nonlinear optics has not only brought new physics and phenomena, but has also become an enabling technology for numerous areas that are vital to our lives, such as communications, health, and advanced manufacturing. This Roadmap surveys some of the recent emerging fields of nonlinear optics, with special attention to studies in China. Each section provides an overview of the current and future challenges within a part of the field, highlighting the most exciting opportunities for future research and developments.
Mengxin Ren and Jingjun Xu *
The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics Institute, Nankai University, Tianjin 300071, People's Republic of China
Nearly all NLO phenomena that had been observed abroad were successfully reproduced in China in a short period of time. By the mid-1980s, Chinese researchers had achieved remarkable progress in areas such as optical bistability, four-wave mixing, and stimulated Raman scattering (SRS). By the turn of the 21st century, nonlinear optics had become an important academic topic in China. A series of breakthroughs have been made in the photorefractive effect and its applications in optical storage and computing, as well as in quasi-phase matching (QPM) frequency conversion and tunable lasers. More progress in nonlinear optics in China can be found in several review articles and books [5][6][7][8][9].
In this Roadmap, we highlight the status, current and future challenges, and emerging technologies in several research areas of nonlinear optics in China. This roadmap is divided into several topics, grouped thematically.
The first part of the roadmap covers the current progress of lasers, which are an essential tool to produce nonlinearities. In article 1, Pengfei Lan and Peixiang Lu describe the development of attosecond lasers, which are an ideal tool to study ultrafast nonlinear dynamics in matter. In article 2, Zhi-Yuan Li and Li-Hong Hong demonstrate an all-spectrum white laser, which has an extremely broadband spectrum like solar radiation but with coherent light. In article 3, Yulei Wang and Zhiwei Lv present an overview of the stimulated Brillouin scattering (SBS) effect and its application in manufacturing lasers with designer performance. The realization of quantum light sources based on nonlinear optics is discussed in article 4 by Zhiyuan Zhou and Baosen Shi. The second part of the roadmap addresses the field of nonlinear materials. In article 5, Yong Zhang, Shining Zhu and Min Xiao discuss state-of-the-art three-dimensional artificial microstructures, which are a solid step towards nonlinear optics in three-dimensional space. In article 6, Satoshi Aya and Yanqing Lu introduce emerging polar liquid crystals (LCs) with tunable polarization structures, which are the first class of second-order nonlinear materials in liquid form. In article 7, Huixin Fan, Min Luo and Ning Ye describe the design of nonlinear materials for SHG in the ultraviolet and deep ultraviolet (DUV). Zeyuan Sun, Weitao Liu and Shiwei Wu present in article 8 atomically thin two-dimensional materials, which provide a unique opportunity to study various NLO phenomena. This part of the roadmap ends with an introduction by Qingyun Li and Hui Hu to the current status and perspectives of lithium niobate thin films (LNTFs), which are a promising platform for next-generation integrated photonics.
The third part of the roadmap deals with new nonlinear effects and behaviors. In article 10, Yuanlin Zheng and Xianfeng Chen show classical and quantum nonlinear frequency conversion by LNTF, which is essential to build all-optical functional chips. In article 11, Xiaoyong Hu surveys the research on ultrafast and giant third-order nonlinearity, which is essential to achieve high-speed, low-energy-consumption all-optical processing systems. Chuanshan Tian describes in article 12 surface-specific NLO spectroscopy and its implementation in probing the microscopic structure and dynamics at surfaces and interfaces. In article 13, Zixian Hu and Guixin Li review the spin-orbit interaction in NLO processes, which has been proven to be an efficient method to generate and control the angular momentum of harmonic waves. In the last article of this part, Yi Hu and Jingjun Xu introduce some counterintuitive phenomena arising from nonlinear light interactions, which manifest as synchronized acceleration of optical beams breaking the action-reaction symmetry.
The final part of the roadmap discusses the applications of nonlinear optics. Kun Huang and Heping Zeng discuss in article 15 how parametric upconversion imaging acts as a promising strategy for mid-infrared (MIR) imaging, where infrared photons are detected by a high-performance visible detector. In article 16, Zhenze Li and Hongbo Sun present the status of ultrafast laser nonlinear manufacturing, which represents a promising method to construct next-generation integrated photonic systems. In article 17, Lei Dong introduces the application of the photoacoustic (PA) effect in sensitive gas spectroscopic sensing. The whole roadmap concludes with an article by Runfeng Li, Wenkai Yang and Kebin Shi on an important application of NLO microscopy for bio-imaging, which is an unparalleled tool for observing biological dynamics in vivo with high spatiotemporal resolution and biomedical specificities.
Nonlinear optics is a flourishing field, which has grown far beyond what was expected at its birth. It not only provides new understanding of the essence of light-matter interactions, but also new techniques to harness light. We should not regard nonlinear optics merely as an academic subject, but also recognize its power in driving economic development. Its advances have fostered innovations across a broad spectrum of applications in a diverse array of economic sectors. For example, the electro-optical effect, which enables information routing at hyper-speed in fiber networks, makes the Internet economy thrive. Furthermore, NLO spectroscopy has become a standard method for material inspection, playing a vital role in many aspects of the microelectronics industry. There is no doubt that nonlinear optics will offer greater societal impact over the coming decades, but as highlighted in the following sections there are certainly many new challenges to overcome. Nonlinear optics in China has made remarkable achievements during the past decades. To maintain this encouraging trend and meet the challenges ahead, greater investment in national policy, financial support and intellectual resources is highly desirable in the future.
High order harmonics and attosecond pulse
Pengfei Lan and Peixiang Lu School of Physics and Wuhan National Laboratory for Optical Electronics, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China
Status
At the end of the 1980s, high order harmonics were observed in the interaction of strong laser fields with atomic gases [10,11]. Different from normal perturbative nonlinear effects, the spectra of strong-field high order harmonic generation (HHG) present a broad plateau of nearly constant conversion efficiency, followed by an abrupt cut-off. Such a non-perturbative nonlinear optics effect has revolutionized contemporary optics. On the one hand, it produces coherent extreme ultraviolet (XUV) or soft x-ray light sources. As opposed to synchrotrons and free-electron lasers, which need large-scale facilities, HHG-based x-rays can be much more easily accessed in a small-scale laboratory. On the other hand, the broadband spectrum enables the production of attosecond ultrashort laser pulses [12,13]. Attosecond pulses lie at the current frontier of ultrashort lasers and ultrafast science. They enable attosecond time-resolved metrology that was not possible before, so that electron motion can be detected in real time. The fast development of attosecond time-resolved measurement has significantly advanced the fundamental understanding of the microscopic dynamics of laser-matter interaction in the past decades.
Generation of HHG and attosecond pulses with shorter duration, higher photon energy, higher single-pulse energy, and higher flux (i.e. higher peak power and average power) has been an important goal. Several methods, e.g. amplitude gating, polarization gating, and two-color or multi-color gating, have been demonstrated to control HHG so as to generate an isolated attosecond pulse. In the first decade of this century, HHG and attosecond pulses were usually produced by using a Ti:Sapphire femtosecond driving laser with a wavelength of 800 nm. In the most recent decade, the MIR optical parametric amplifier (OPA) has become the mainstream driving source for HHG. The photon energy has then been extended to the 'water window' region (i.e. 284 eV-583 eV) and the shortest pulse duration has been reduced to ∼50 as. However, as shown in figure 1(b), the pulse energy of the isolated attosecond pulse is usually at the nanojoule level and the repetition rate is 1 kHz. In two recent experiments, the pulse energy was boosted to 240 nJ and 1.3 µJ by loosely focusing a terawatt (TW) driving laser into Ar and Xe gases, but the repetition rate was reduced to 10 Hz. The low pulse energy and photon flux is a crucial issue for the applications of attosecond pulses.
Current and future challenges
Applications of HHG and attosecond pulses have enabled many breakthrough achievements, either using attosecond-pulse-pump/near-infrared-femtosecond-pulse-probe schemes (or vice versa) or high harmonic spectroscopy. These were first applied and developed in the field of ultrafast dynamics in atoms and molecules. In very recent years, considerable efforts have been devoted to extending attosecond spectroscopy to more complex systems, including solids and liquids. To this end, more powerful attosecond laser pulses and attosecond techniques are required. Currently, it is still very challenging to increase the photon flux, pulse energy and photon energy of attosecond pulses. The bottleneck is that the generation of an isolated attosecond pulse generally requires an ultrashort driving pulse of few-cycle duration and 100 TW cm^-2 focusing intensity. Therefore, it is typically based on a Ti:Sapphire chirped pulse amplification (CPA) laser system, the pulse energy is at the millijoule level, and the repetition rate is at the 1-10 kHz level. The generated attosecond pulse is typically at the nanojoule or tens of nanojoules level and the photon energy is limited to ∼100 eV. Using a MIR OPA driving laser pulse is a good way to increase the photon energy and produce shorter attosecond pulses. However, the efficiency of HHG decreases dramatically, as the yield declines approximately as λ^-5 to λ^-6 with increasing driving laser wavelength.
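Two textbook scaling laws make this trade-off explicit: the cutoff law ħω_c = I_p + 3.17 U_p with the ponderomotive energy U_p ∝ Iλ², and the quoted λ^-5 to λ^-6 single-atom yield scaling. The Python estimate below uses standard values (an argon target and an assumed intensity of 1e14 W cm^-2, i.e. the 100 TW cm^-2 scale mentioned above) purely for illustration:

# Cutoff photon energy: E_c = Ip + 3.17*Up, with Up[eV] ~ 9.33e-14 * I[W/cm^2] * (lambda[um])^2
Ip_Ar = 15.76          # argon ionization potential (eV)
I = 1e14               # assumed driving intensity (W/cm^2)

for lam_um in (0.8, 3.6):
    Up = 9.33e-14 * I * lam_um**2
    print(f"lambda = {lam_um} um: cutoff ~ {Ip_Ar + 3.17 * Up:6.1f} eV")

# Single-atom yield penalty of the longer wavelength (mid-point of lambda^-5..-6):
ratio = (3.6 / 0.8) ** (-5.5)
print(f"relative yield at 3.6 um vs 0.8 um: ~{ratio:.1e}")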
On the other hand, circularly polarized HHG and attosecond pulses are very attractive for the study of ultrafast dynamics in the circular dichroism of magnetic materials and chiral molecules. However, due to the 'recollision' mechanism [16], it is more challenging to produce bright circularly polarized high harmonics and attosecond pulses than linearly polarized ones.
Moreover, the observations of high harmonic generation in semiconductors [17], dielectrics, liquids [18] and even glasses pave the way to extend high harmonic spectroscopy to probing the band structure, topological states, and electron dynamics in more complex systems. However, it is very challenging to understand the physics underlying the interaction of solids and liquids with strong laser fields. For instance, the mechanism of solid high harmonic generation is still under active discussion [19]. To resolve these ultrafast processes, advanced time-resolved spectroscopy methods that can capture the electron and nuclear motion are desired.
Figure 1. Data from [14] and Takahashi et al [15]. Reproduced from Takahashi et al [15]. CC BY 4.0.
Advances in science and technology to meet challenges
The recent development of Yb-based CPA laser systems (see the recent review [20]), e.g. fiber lasers, slab lasers and ultrafast thin-disk lasers, has opened a very promising way to produce attosecond pulses with high flux and high pulse energy. In comparison to Ti:Sapphire CPA, Yb-based CPA can deliver multi-millijoule, sub-picosecond laser pulses at 100 kHz or even several MHz, so that the average power can be improved by one to two orders of magnitude. In combination with nonlinear pulse compression techniques, e.g. the gas-filled hollow-core fiber, multipass cell, and multiple thin plates, the pulse duration of the Yb-CPA laser can be reduced to several tens of femtoseconds or a few femtoseconds. Several groups have demonstrated the enhancement of photon flux using Yb-CPA driving lasers. Moreover, the coherent beam combination of ultrafast fiber lasers has advanced substantially in very recent years. It has become an efficient power scaling technique and enables the production of 10 mJ pulses at 100 kHz and above 1 J at 1 kHz. It thus becomes possible to improve the single-pulse energy and average power of attosecond pulses.
An alternative way to exploit the high power Yb-CPA is to use it to pump optical parametric CPA, i.e. OPCPA. Several groups have generated sub-10 fs laser pulses at 100 kHz or MHz and produced HHG and attosecond pulses with OPCPA systems. Moreover, by using different nonlinear materials, OPCPA enables the production of MIR laser pulses, so as to extend the photon energy of HHG to more than 600 eV. It is expected to dramatically boost the peak power to tens of TW and the average power to more than 3 kW in a concept design, which will become a powerful engine for pushing attosecond science [21]. Another way to boost the photon flux is using cavity-enhanced HHG, which can increase the repetition rate to tens of MHz, but with limited single-pulse energy.
High brilliance attosecond pulses will enable numerous opportunities. By applying the table-top HHG source, one can extend x-ray techniques, e.g. x-ray absorption spectroscopy and x-ray photoelectron spectroscopy, to the attosecond domain. The increase of the HHG photon energy will enable time-resolved x-ray absorption near-edge structure and extended x-ray absorption fine structure spectroscopy, providing electronic as well as structural and chemical information with atomic resolution. The combination of the HHG source with angle-resolved photoemission spectroscopy or photoemission electron microscopy will provide a powerful tool to investigate the electronic band and structural changes of materials. Great progress can be foreseen in understanding ultrafast dynamics in chemistry and condensed matter physics.
Concluding remarks
Tracking and understanding the ultrafast dynamics in atoms, molecules and condensed matter has been a continuous hot topic of ultrafast science. The last two decades have witnessed the fast advance of attosecond science. New perspectives on the hitherto immeasurably fast electronic processes of laser-matter interaction have been obtained. Since HHG has been observed in systems ranging from atoms and molecules to solids and liquids, the field of attosecond science has become much broader and has attracted more interdisciplinary interest. As more powerful attosecond laser sources, e.g. the Extreme Light Infrastructure in Europe, the Synergetic Extreme Condition User Facility in China and attosecond pulses from free electron lasers in the USA, become available, a bright future for attosecond science can be expected.
Zhi-Yuan Li and Li-Hong Hong
School of Physics and Optoelectronics, South China University of Technology, Guangzhou 510640, People's Republic of China
Status
Since the invention of the ruby laser in 1960 by Maiman, the realm of lasers has continuously advanced to ever-increasing heights and ever-expanding frontiers in terms of laser materials (gas, liquid, solid, semiconductor, and fiber), pulse duration (continuous wave, nanosecond, picosecond, femtosecond, and attosecond), spectral width, power, and energy, thanks to the cooperative efforts of the laser technology and nonlinear optics communities. An ordinary laser machine, with a fixed design of cavity, gain medium, and pump source, only outputs continuous-wave laser light with one or several specific discrete wavelengths, or ultrashort pulse laser light covering only a limited spectral bandwidth. This shortcoming can be lifted by inputting these pump lasers into a nonlinear crystal or amorphous material with considerable second- and third-order nonlinearity, where NLO interactions will convert part of the energy of the pump lasers into new lasers of different discrete wavelengths or spectral bands, leading to significantly expanded windows and bandwidths of lasers. One sees the cooperative action of laser technology and nonlinear optics here.
As the second- and third-order nonlinear coefficients are very small compared with the linear susceptibility, the input pump laser beam is required to have sufficiently high intensity (i.e. power density) in order that the NLO conversion efficiency reaches a high magnitude. Besides, to enlarge the spectral window and bandwidth as much as possible, the pump laser should first have as large a bandwidth as possible. A femtosecond pulse laser with pulse duration on the order of 50-100 fs has a bandwidth (measured by the full width at half maximum, FWHM) on the order of 20-50 nm, and is suitable for this purpose. If, in addition, the femtosecond laser has a single pulse energy reaching the ∼1 mJ level, then the peak power can be as large as 10 GW, and when focused into a tiny laser beam with a spot radius of 1 mm, the power density (i.e. the optical intensity) can reach the 1 TW cm^-2 scale, a magnitude sufficiently large to induce both very strong second- and third-order NLO effects, so that an input narrow band femtosecond pulse laser can evolve into an output supercontinuum laser with much larger bandwidth.
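A one-line order-of-magnitude check of these numbers in Python (a flat-top focal spot is assumed; a Gaussian focus would roughly double the peak intensity):

import numpy as np

E_pulse = 1e-3      # pulse energy: 1 mJ (from the text)
tau = 100e-15       # pulse duration: 100 fs (from the text)
r = 0.1             # focal spot radius: 1 mm = 0.1 cm (from the text)

P_peak = E_pulse / tau                 # ~1e10 W = 10 GW
I = P_peak / (np.pi * r**2)            # W/cm^2, flat-top assumption
print(f"peak power ~ {P_peak:.1e} W, intensity ~ {I:.1e} W/cm^2")
# -> ~3e11 W/cm^2, i.e. the 0.1-1 TW/cm^2 scale discussed above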
A great dream of this type of ultra-broadband laser is to construct a supercontinuum white laser possessing an extremely broad spectral bandwidth that covers ultraviolet-visible-infrared (UV-vis-IR), much like the ordinary solar radiation spectrum (which is a completely incoherent light source) does, as illustrated in figure 2.
Current and future challenges
The solar radiation not only has a large bandwidth, but also has a superflat spectral profile, where the FWHM (or 3 dB bandwidth) reaches about 640 nm (320-960 nm). A white laser with this brilliant degree of ultrabroad bandwidth and superflat spectral profile, which can be named an all-spectrum white laser, would represent a milestone in laser science and technology, and would undoubtedly induce a revolution in both basic sciences and practical applications. Yet, it is by no means an easy task to bring this dream into reality; on the contrary, it presents great challenges and difficulties. Noting that an ordinary Ti:Sapphire femtosecond pulse laser with 50 fs pulse duration only has a 45 nm 3 dB bandwidth around the central wavelength of 785 nm, a natural question arises: how to expand the bandwidth of the pump femtosecond laser into a UV-vis-IR all-spectrum white laser with a superflat spectral profile similar to that of solar radiation?
In the past there have appeared many schemes toward supercontinuum white-light lasers, but most of them utilize various third-order nonlinear effects (third-NL) (self-phase modulation (SPM), four-wave mixing (FWM), SRS, etc) of high-peak-power picosecond and femtosecond pulse lasers interacting with amorphous solid materials (like silica, fluoride, or chalcogenide glass) in the form of microstructured photonic crystal fibers or homogeneous plates, or with rare gases (He, Ar, etc) filled within hollow-core silica fibers [22][23][24][25]. However, these purely third-NL schemes always encounter certain limitations in the balanced performance of spectral bandwidth, spectral flatness, and pulse energy due to the tiny modal area or the dispersion properties of the transported waves. Another, more powerful means to expand the spectral range of the laser is via various second-order nonlinear effects (second-NL) through the promising route of the QPM scheme. However, these existing QPM routes also have difficulty in high-quality broadband laser generation, with limited spectral bandwidth, insufficiently flat spectral profiles, and reduced conversion efficiency. Frankly speaking, it is a great challenge to resolve these limitations existing in both the second-NL and third-NL regimes and make the best of both worlds.
Figure 3. Schematic illustration of a roadmap toward construction of an all-spectrum white laser via synergic second-NL and third-NL. The second-NL is implemented by pumping a CPPLN crystal with a high peak power mid-IR fs pulse laser and igniting multiple HHG. The third-NL is used to broaden both the pump fs laser pulse and the output high-harmonic laser pulse, so that the spectral bands of all harmonics overlap to form a UV-vis-IR supercontinuum white laser.
The above analyses also indicate that if one can dig deep and make the merits of both second-NL and third-NL fully cooperate and operate together, then the synergic action of these two effects might shine a light of explosive power to overcome all the challenges and difficulties standing before the all-spectrum white laser.
Advances in science and technology to meet challenges
Following the roadmap shown in figure 3, the Li team has dug into the physics of ultra-broadband nonlinear optics and made gradual and steady progress toward the all-spectrum white laser. In 2014 the team realized that ordinary PPLN with powerful QPM ability is suitable for single-wavelength pump laser (continuous wave, ns and ps) SHG, but not good for either simultaneous SHG and third harmonic generation (THG) for a single-wavelength pump laser or broadband fs pulse laser SHG [26,27]. Yet, introducing a simple modulation to the poled structures, such as chirping, leads to a new chirped PPLN (called CPPLN) that can largely lift such a limitation of PPLN, so that 100 nm bandwidths of SHG and THG (via cascaded SHG and sum-frequency generation (SFG)) can be readily implemented to create red, green, and blue colors [28]. In 2015 a mid-IR fs laser of 20 µJ pulse energy and central wavelength 3600 nm was used to pump a CPPLN with multiple QPM bands covering an extremely broad bandwidth, and simultaneous 2nd-8th harmonics were observed with high efficiency [29]. This was the first experimental demonstration of high harmonics generation in a single solid material. The 4th-8th harmonics together comprise a UV-vis-NIR supercontinuum white laser with a conversion efficiency of 18%. In 2021, the team showed that in addition to ordinary second-NL effects like SHG and SFG, CPPLN also exhibits a significant third-NL effect like SPM, so that a pump NIR fs pulse laser has its bandwidth expanded from 100 nm to 300 nm when passing through the CPPLN, making a UV-vis-NIR white laser readily available via SHG and THG [30]. This first demonstration of the synergic action of second-NL and third-NL in nonlinear optics has shined a new light on the route toward the all-spectrum white laser. In 2022 the team realized an intense two-octave UV-vis-IR supercontinuum laser via high-efficiency one-octave SHG with a cascaded silica plate and CPPLN module pumped by a 0.5 mJ Ti:Sapphire pulse laser, with the former greatly expanding the spectral width around 800 nm due to the third-NL effect and the latter further pushing the spectral band deep into the UV and visible [31]. Very recently, a Ti:Sapphire fs laser with 3 mJ pulse energy was used to pump the silica plate-CPPLN crystal module, creating a brilliant white laser with 1 mJ per pulse and 700 nm 3 dB bandwidth (385-1080 nm). All these studies confirm the power of the synergic action of second-NL and third-NL in expanding the spectral width of a pump fs laser to an unprecedented level.
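The QPM idea behind PPLN and CPPLN can be illustrated numerically. The Python sketch below computes the first-order poling period for type-0 SHG of 1064 nm in LiNbO3 and then sweeps it linearly to mimic a chirped grating; the refractive-index values and the 15% chirp are approximate, illustrative assumptions rather than the design parameters of [26-31]:

import numpy as np

# First-order QPM period for SHG: Lambda = lambda_pump / (2 * (n_2w - n_w)).
lam = 1.064e-6            # pump wavelength (m)
n_w, n_2w = 2.156, 2.234  # approximate n_e of congruent LiNbO3 at 1064 and 532 nm

Lambda = lam / (2.0 * (n_2w - n_w))
print(f"uniform PPLN period ~ {Lambda * 1e6:.1f} um")  # ~6.8 um, near the commonly quoted ~6.9 um

# A CPPLN grating sweeps the local period along the crystal, so that different
# positions phase-match different wavelengths, broadening the QPM bandwidth.
z = np.linspace(0.0, 1.0, 5)           # normalized position in the crystal
periods = Lambda * (1.0 + 0.15 * z)    # 15% linear chirp (an illustrative choice)
print("local periods (um):", np.round(periods * 1e6, 2))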
Concluding remarks
Construction of an all-spectrum white laser remains a great challenge in terms of physics and optics (nonlinear optics, ultrafast optics), laser science, and materials science. Without the collaborative effort of expertise from these research fields, the dream of building a UV-vis-IR all-spectrum white laser with the simultaneous merits of high power, large pulse energy, extremely large bandwidth, superflat spectral profile, and high coherence cannot become a reality. From the fundamental physics point of view, this dream is closely tied to ultra-broadband nonlinear optics, whose theoretical basis is the following nonlinear Maxwell wave equation describing the various NL interactions of femtosecond lasers with nonlinear crystals and materials:

$$\nabla \times \nabla \times \mathbf{E}(\mathbf{r},t) + \frac{1}{c^{2}}\frac{\partial^{2}\mathbf{E}(\mathbf{r},t)}{\partial t^{2}} = -\mu_{0}\,\frac{\partial^{2}}{\partial t^{2}}\left[\mathbf{P}^{(1)}(\mathbf{r},t) + \mathbf{P}^{(2)}(\mathbf{r},t) + \mathbf{P}^{(3)}(\mathbf{r},t)\right].$$
Here E(r, t), P^(1)(r, t), P^(2)(r, t), and P^(3)(r, t) are the electric field and the linear, second-order, and third-order nonlinear polarizations, respectively. This equation, seemingly simple at first glance, is extremely challenging to solve, as it involves the complicated spatio-temporal evolution of the electric field under the influence of field-induced linear and nonlinear polarizations. It is the physical basis of ultra-broadband nonlinear optics, which investigates the NL interactions of high-peak-power fs lasers with nonlinear crystals and materials, and its solution should teach us much about the route toward the all-spectrum white laser.
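To convey the numerical character of the problem, the sketch below propagates a short pulse with a split-step Fourier method applied to a drastically reduced 1D envelope model (group-velocity dispersion plus Kerr-type SPM only); it is a toy illustration with assumed parameters, not a solver for the full vectorial equation above, which must additionally track the second-order polarization and the full spatio-temporal field.

```python
import numpy as np

# Toy split-step Fourier sketch: 1D scalar envelope with group-velocity
# dispersion (beta2) and self-phase modulation (gamma). All parameter
# values are assumed, fiber-like numbers chosen only for illustration.
T = np.linspace(-10e-12, 10e-12, 2**12)      # time grid (s)
dt = T[1] - T[0]
w = 2 * np.pi * np.fft.fftfreq(T.size, dt)   # angular-frequency grid (rad/s)

beta2 = -20e-27                  # GVD, s^2/m (assumed)
gamma = 1e-3                     # Kerr coefficient, 1/(W m) (assumed)
P0, T0 = 1e3, 1e-12              # 1 kW peak power, 1 ps pulse
A = np.sqrt(P0) * np.exp(-T**2 / (2 * T0**2))   # input Gaussian envelope

L, steps = 1.0, 1000             # 1 m propagation in 1000 steps
dz = L / steps
for _ in range(steps):
    A = np.fft.ifft(np.exp(0.5j * beta2 * w**2 * dz) * np.fft.fft(A))  # dispersion step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)                        # SPM step

print(f"peak power after {L} m: {np.abs(A).max()**2:.3e} W")
```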
Status
As a third-order NLO process, SBS is generated by the interaction between a light wave and an acoustic wave in many kinds of media, including solids, liquids, gases, and plasmas [32]. For unguided structures, the excited Stokes wave propagates opposite to the pump, as shown in figure 4. The parameters of SBS are mainly determined by the intrinsic properties of the gain medium and the input wavelength. In general, the Brillouin frequency shift for most media lies in the range of 0.1-100 GHz, and the gain linewidth is only 0.01-1 GHz, both much smaller than those of SRS. SBS can therefore achieve a theoretical quantum loss of less than 0.1%, meaning the quantum defect during SBS conversion is negligible, while producing laser output with a very narrow linewidth. In addition, owing to characteristics such as threshold behavior, amplification, and phase conjugation, SBS has been widely applied in beam time-domain shaping, spectral narrowing, power scaling, sensing, and other fields.
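The frequency-shift figures quoted above follow directly from the backward-SBS phase matching condition, ν_B = 2 n v_a / λ_p. The sketch below evaluates it for textbook fused-silica values (n ≈ 1.45, v_a ≈ 5960 m s⁻¹) at a 1550 nm pump, landing near 11 GHz, squarely inside the 0.1-100 GHz range.

```python
def brillouin_shift_ghz(n, v_acoustic, pump_wavelength):
    """Backward-SBS Brillouin frequency shift nu_B = 2 n v_a / lambda_p (GHz)."""
    return 2 * n * v_acoustic / pump_wavelength / 1e9

# Fused silica pumped at 1550 nm (textbook values, for illustration only).
print(f"{brillouin_shift_ghz(1.45, 5960, 1550e-9):.1f} GHz")   # ~11.2 GHz
```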
In the past decades, high-energy and high-power laser technology has developed rapidly thanks to the introduction of SBS techniques, with researchers in China performing strongly at this stage. As shown in figure 5(a), an SBS-based phase conjugation mirror (PCM) is used to compensate for the wavefront distortion caused by optical defects and thermal distortion, so as to obtain high-quality beam output [33]. During the counter-propagation of Stokes and pump, the self-amplification effect of SBS significantly compresses the Stokes pulse width relative to the pump, as shown in figure 5(b), for example from several nanoseconds to hundreds of picoseconds, thereby increasing the peak power of the laser by 1-2 orders of magnitude [34]. Brillouin amplification breaks through the bottleneck of the traditional master oscillator power amplifier, which struggles to provide high-efficiency short-pulse amplification: it realizes efficient energy transfer from a long pulse directly to a short one. At present, a 2.4 J, 200 ps laser pulse has been demonstrated via stimulated Brillouin amplification [35]. Serial beam combination technology, also based on Brillouin amplification, makes it possible to break the energy limit of a single laser beam while maintaining the repetition rate, as shown in figure 5(c). At present, 2.5 J nanosecond combined pulses at a repetition rate of 10 Hz have been obtained via non-collinear Brillouin amplification, showing great potential for laser fusion drivers [36]. In addition, Brillouin lasers have been extended from guided-wave structures to free space, with output power up to 20 W, 1-2 orders of magnitude higher than guided-wave Brillouin lasers, providing a new opportunity for high-power narrow-linewidth lasers with high signal-to-noise ratio [37].
Current and future challenges
Although SBS has enabled remarkable progress in high-power laser technology, it still faces many challenges and open problems. To realize long-term stable SBS operation with high conversion efficiency, a gain medium with a high Brillouin gain coefficient, high damage threshold, high thermal conductivity, and high physical and chemical stability is usually required [32]. However, it is difficult to find a medium that combines all of these advantages, so the choice must be made according to the specific application scenario. At present, liquid media, represented by heavy fluorocarbons, are the most widely used SBS gain media; their damage threshold is inversely proportional to the impurity content of the medium. However, the production, filling, and preservation of high-purity media remain an engineering problem. Because of the narrow Brillouin linewidth of SBS gain media (only a few tens of MHz for crystalline media), a narrow-linewidth (single-longitudinal-mode) laser must be used as the pump source to excite the Stokes wave effectively [38]. This also means that in non-collinear Brillouin amplification, the interaction angle between the pump and Stokes must be limited to a small range to avoid reducing the energy conversion efficiency [36,39]. In addition, how to generate controllable Stokes outputs in the spatial, temporal, and frequency domains, and how to develop compact SBS generators, amplifiers, and oscillators, are also focal points in transferring high-power SBS technology to industry [40,41].
Advances in science and technology to meet challenges
To expand the application of SBS technology in the high-energy, high-power field, there is still room for improvement in theory, technology, and engineering. In theory, it is critical to establish a multidimensional dynamic model that includes the pump parameters, SBS medium parameters, focusing parameters, and heat distribution in the medium, so as to provide an overall description of SBS generation, amplification, and beam propagation. In technology, on the one hand, the influence of the SBS gain medium's characteristics on SBS performance needs further study, and the media should be investigated and optimized (improving purity, increasing damage threshold, reducing absorption loss, etc). On the other hand, active modulation in both the time and frequency domains should be further explored to optimize the conversion efficiency and realize adjustable SBS output parameters. In engineering, attention should be paid to increasing the power load of SBS media and breaking through the output power and repetition rate limits; potential paths include optimizing the SBS architecture and adopting thermal management technology. In addition, the combination of SBS with other NLO effects deserves further study, an important step toward expanding its application fields.
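As a feel for the modeling task, the sketch below evaluates the simplest possible reduction of such a dynamic model: the undepleted-pump exponential gain G = g_B I_p L, together with the classic G ≈ 21 criterion for Stokes build-up from noise. The gain coefficient, beam area, and interaction length are assumed, fiber-like values chosen only for illustration; a realistic multidimensional model would of course resolve pump depletion, focusing, and heating.

```python
def sbs_gain_exponent(g_b, pump_power, area, length):
    """Undepleted-pump SBS gain exponent G = g_B * (P/A) * L (dimensionless)."""
    return g_b * (pump_power / area) * length

# Assumed illustrative values: g_B = 5e-11 m/W, A = 80 um^2, L = 10 m.
g_b, area, length = 5e-11, 8e-11, 10.0
for p_w in (1.0, 5.0, 10.0):
    G = sbs_gain_exponent(g_b, p_w, area, length)
    tag = "above ~21 threshold" if G > 21 else "below threshold"
    print(f"P = {p_w:4.1f} W -> G = {G:5.1f} ({tag})")
```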
Concluding remarks
As an important physical technique for generating laser beams with specific spatial/temporal/frequency-domain and power characteristics, SBS has played an irreplaceable role in high-energy, high-repetition-rate PCMs, high-energy beam-combination lasers, high-power Brillouin lasers, and optical frequency comb (OFC) generation. It is expected that in the next decade, with progress in materials science and breakthroughs in key technologies, more exciting applications of SBS in high-power lasers will be achieved. Meanwhile, SBS-based lasers are being developed toward 'plug and play' portable devices.
Status
Quantum light sources are indispensable for applications in quantum information science and technology (QIST); typical applications include quantum computation, communication, metrology, imaging, and tests of the basic principles of quantum mechanics [42,43]. The ability to prepare and control the properties of quantum light sources on demand therefore determines the level of development of QIST. Generally, two rather different methods have been invented for preparing quantum light sources: one is based on excitation and re-emission of photons in semiconductor quantum dots, single defects in color centers, or single atoms; the other, commonly used, method is based on spontaneous emission in nonlinear processes [42,43]. Here we describe and discuss quantum light sources generated via nonlinear processes. Typically, two nonlinear processes are used for preparing quantum light sources: spontaneous parametric down conversion (SPDC) and spontaneous four wave mixing (SFWM), based on second- and third-order nonlinearities, respectively (see figures 6(a) and (b)). In both nonlinear processes, the conservation laws of energy, linear momentum, and angular momentum must hold. By combining these conservation laws and making the possible eigenstates of the system indistinguishable through quantum interference, entangled states in various degrees of freedom can be prepared (figure 6(c)); the most commonly prepared and studied entangled states involve polarization, energy-time, orbital angular momentum, position-linear momentum, angular momentum and photon number, and path [42]. For a given application scenario in QIST, a specific quantum light source can be chosen.
In SPDC, a pump photon from a laser beam of higher frequency has some probability of splitting into two daughter photons of lower frequencies, usually called the signal and idler photons [44]. In SFWM, the annihilation of two pump photons creates the signal and idler photons [45]. To generate frequency-degenerate photon pairs, it is better to use SPDC rather than SFWM, while for nondegenerate photon pairs both SPDC and SFWM can be used. The main advantage of SFWM is photon pair generation on waveguide platforms, which can greatly reduce the volume of the setup. For quantum light sources based on SPDC and SFWM under weak pumping, the vacuum state is obtained most of the time, while the photon pair state of interest is created with low probability; a moderate pump intensity must be used to suppress higher photon-number states, which is a basic constraint on photon pairs generated in nonlinear processes. As for the materials used for photon pair preparation, those used in SPDC can be divided into two kinds according to the phase matching type: birefringent phase matching materials, such as LBO and BBO, and QPM crystals, such as periodically-poled potassium titanyl phosphate (PPKTP) and PPLN. For SFWM, the commonly used materials include gas ensembles and guided-wave materials such as various kinds of fibers and silicon-on-insulator (SOI) waveguides [46]. Entangled photon sources can therefore be prepared in various materials with different optical configurations, and basic measurement toolkits have been developed to characterize the quality of the sources.
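Of the conservation laws mentioned above, energy conservation alone already fixes the idler wavelength once the pump and signal are chosen. The trivial helper below is a minimal sketch applying 1/λ_p = 1/λ_s + 1/λ_i for SPDC; for example, a 405 nm pump with an 810 nm signal yields an 810 nm degenerate idler.

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    """Energy conservation in SPDC: 1/lambda_p = 1/lambda_s + 1/lambda_i."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

print(idler_wavelength_nm(405.0, 810.0))   # 810.0 nm (degenerate pair)
```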
Current and future challenges
For quantum light sources, several parameters are used to evaluate the quality of the prepared source; typical parameters of concern are the spectral brightness, total photon flux, coincidence-to-accidental-coincidence count ratio, single-photon second-order correlation function, heralding efficiency, and single-photon purity. To demonstrate entanglement between two photons, various faithful methods have been devised for different entangled states in two-dimensional and high-dimensional Hilbert spaces, including two-photon interference fringes, the Bell-CHSH inequality, and quantum state tomography (QST). Two-photon interference fringes are the easiest to measure: the quality of an entangled source can be evaluated by calculating the interference visibility. To fully determine the content of a quantum state, QST of the measured state must be performed. The Bell inequality is another indicator that the prepared quantum state has non-classical properties and exhibits Bell nonlocality. Only a few parameters are of great concern for any specific application; for example, single-photon purity is of great importance for interference between independent photons, the heralding efficiency matters for measurement or communication with heralded single photons, and for entanglement-based applications a high entanglement quality and total photon flux are preferred.
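As a small worked example of the Bell-CHSH test mentioned above, the sketch below combines the four measured correlators into S; an ideal singlet with the canonical analyzer settings gives S = 2√2 ≈ 2.83, beyond the classical bound |S| ⩽ 2. The setting labels are placeholders for whatever analyzer angles an experiment actually uses.

```python
import math

def chsh_s(E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E[("a", "b")] - E[("a", "bp")] + E[("ap", "b")] + E[("ap", "bp")]

# Ideal singlet, canonical settings: every correlator is +/- 1/sqrt(2).
v = 1 / math.sqrt(2)
E = {("a", "b"): v, ("a", "bp"): -v, ("ap", "b"): v, ("ap", "bp"): v}
print(f"S = {chsh_s(E):.3f}")   # 2.828 > 2: Bell-CHSH violation
```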
The trend in preparing quantum light sources based on nonlinear optics is to engineer the photon parameters on demand in a compact platform with a simple configuration [47]. Following this trend, QPM guided-wave platforms based on thin-film LN and complementary metal-oxide-semiconductor (CMOS)-compatible integrated platforms such as SOI and silicon nitride are being developed for QIST. QPM crystals have the advantages of high spectral brightness and narrow bandwidth, and are frequently used for photon pair preparation in quantum optics [44]. Waveguide PPLN or PPKTP can further enhance the conversion efficiency of SPDC, greatly reducing the required pump power compared with bulk QPM crystals. CMOS-compatible integrated platforms also offer high nonlinear conversion efficiency, and the most promising aspect of guided platforms is the ability to construct small-footprint devices with versatile functions by integrating other optical components on the same chip. Common challenges for quantum light sources based on nonlinear optics are: overcoming the probabilistic nature of photon pair preparation in spontaneous emission processes; preparing high-quality multi-photon or high-dimensional quantum states in a scalable manner for more demanding applications; greatly reducing the insertion losses of guided-wave platforms so that quantum light can be extracted from the chip more efficiently; finding new kinds of phase matching configurations for entanglement preparation; and devising more application scenarios that exploit the quantum nature of the light sources.
Advances in science and technology to meet challenges
To meet the challenges described above, several advances in science and technology are required. For pulsed heralded single photon sources, the heralding efficiency and total photon count are important parameters, but the probability of preparing a single photon per pulse is very low, which limits high-flux photon pair preparation. This defect can be overcome by using time, frequency, and OAM multiplexing to enhance the photon generation probability per pulse and the total count rate [48]. When the optical elements of the multiplexing device have low losses, the heralding efficiency and rate can increase substantially after only a modest number of multiplexing stages (see figure 7(b)). For generating multi-photon and high-dimensional states, advances in the fabrication of low-loss integrated optical elements are required, and single photon detectors should have near-unity detection efficiency. Moreover, more efficient methods should be developed to measure high-dimensional and multi-photon states; currently, the measurement time increases exponentially with photon number and dimension. To extract photons from a chip more efficiently, optimal output-coupling structure designs and high-resolution fabrication processes are indispensable. Recently, backward QPM has been developed: in this special phase matching condition, photon pairs prepared via SPDC propagate in opposite directions, opening new opportunities for preparing high-quality photon pairs (see figure 7(a)). The bandwidth of photon pairs prepared in a backward-QPM crystal is much narrower than in traditional QPM crystals, and photon pairs with the same polarization can be separated, which is not possible in a traditional QPM crystal [49]. Although backward QPM is promising, the challenge is to fabricate ultra-short poling periods with high quality; to date, most demonstrations are based on third-order backward-QPM waveguides, and advances in fabricating high-quality devices with the shorter poling periods needed for first-order QPM are urgently required. Currently, high-quality quantum light sources can be prepared in different materials with different configurations; an important task in QIST is to apply these sources in application scenarios where they show advantages over traditional methods. A few promising applications include entanglement-based quantum light spectroscopy, which exhibits properties distinct from laser spectroscopy [50]; new measurement methods for physical quantities that cannot be accessed with laser sources; and crossovers with other disciplines, such as biology, chemistry, and materials science, for measurement tasks that showcase the distinct features of both fields [51].
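The benefit of multiplexing cited above can be quantified with an idealized model: for n independent heralded sources with per-pulse success probability p and lossless switching, the probability that at least one fires is 1 − (1 − p)^n. The sketch below (an idealization; real switches add loss that caps the gain) shows how quickly this grows even for p = 1%.

```python
def heralding_probability(p_single, n_modes):
    """P(at least one of n multiplexed heralded sources fires) = 1-(1-p)^n,
    assuming independent sources and lossless switching (an idealization)."""
    return 1.0 - (1.0 - p_single) ** n_modes

for n in (1, 4, 16, 64):
    print(f"n = {n:2d}: P = {heralding_probability(0.01, n):.3f}")
# n =  1: 0.010, n =  4: 0.039, n = 16: 0.149, n = 64: 0.474
```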
Concluding remarks
In conclusion, we have briefly reviewed the main advances in the preparation and manipulation of quantum light sources via nonlinear processes; this chapter provides a glance at the current status of the field and the challenges that remain to be solved. We have also discussed possible advances in science and technology to meet these challenges. Another important task in this field is to explore and expand new applications of quantum light sources by exploiting their unique properties, which cannot be achieved with laser light sources. With the development of micro-fabrication technology, highly compact, high-quality chip-scale quantum light sources, including special-purpose chips aimed at specific applications, will be realized. In the near future, we expect significant progress in integrated nonlinear-optics-based quantum light sources and a variety of scenarios that make use of high-quality quantum light sources.
Status
The conversion efficiency of a NLO process strongly depends on the phase mismatch between the interacting waves. Consider an SHG process as an example: material dispersion results in different phase velocities for the fundamental and second-harmonic (SH) waves, leading to energy oscillation between them. One useful solution is to exploit crystal birefringence to achieve phase matching (BPM), in which the fundamental and SH waves have the same refractive index but different polarizations. Generally, however, the BPM configuration cannot use the maximal nonlinear coefficient, and the BPM condition can be satisfied only within part of the transparency band. In 1962, Armstrong et al [26] proposed the concept of QPM to enhance the conversion efficiency of NLO processes: the sign of χ(2) is periodically inverted, which is equivalent to a reciprocal vector compensating the phase mismatch. The reciprocal vector is determined by the period of the χ(2) structure. This technique provides a powerful way to overcome the limitation of natural material dispersion with artificial microstructures.
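For first-order QPM SHG, the required poling period follows directly from Δk = 2π/Λ, giving Λ = λ/[2(n_2ω − n_ω)]. The sketch below evaluates this with illustrative refractive indices in the ballpark of e-polarized congruent LiNbO3 at 1064 nm (the exact values depend on composition and temperature), recovering the familiar ∼7 µm period of PPLN for 1064 → 532 nm conversion.

```python
def qpm_period_um(wavelength_um, n_fund, n_sh):
    """First-order QPM period for SHG: Lambda = lambda / (2 (n_2w - n_w))."""
    return wavelength_um / (2.0 * (n_sh - n_fund))

# Illustrative indices near those of congruent LiNbO3 (e-wave); assumed values.
print(f"{qpm_period_um(1.064, 2.156, 2.234):.2f} um")   # ~6.8 um
```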
In 1980, Feng et al [52] successfully developed a growth striation technique to prepare PPLN crystals (also called optical superlattices) for the experimental demonstration of QPM. Since then, PPLN crystals have become one of the most important materials for nonlinear and quantum optics. Many domain poling techniques, such as electric field poling, chemical indiffusion, scanning force microscope poling, electron-beam poling, and light-induced domain inversion, have been developed to fabricate χ(2) structures in LN, lithium tantalate, KTP, and other NLO crystals.
In 1998, Berger [53] introduced the idea of the nonlinear photonic crystal (NPC), conceptually extending χ(2) structures from one dimension (1D) to two (2D) and three dimensions (3D). The functions of high-dimensional χ(2) structures extend well beyond traditional laser frequency conversion, to nonlinear beam shaping, nonlinear holography, and high-dimensional quantum entanglement. Two years later, 2D χ(2) structures were experimentally realized using the popular electrical poling technique. However, these traditional poling techniques cannot fabricate 3D χ(2) structures.
Current and future challenges
The 3D χ(2) erasing approach was demonstrated by Wei et al in 2018. Its mechanism is quite different from traditional domain poling techniques: instead of inverting the sign of χ(2), a focused femtosecond laser beam is used to selectively erase χ(2). The fabrication strategy is to drive strong light-matter interaction that reduces the crystallinity, and with it χ(2). Correspondingly, the laser parameters feature a low repetition rate (∼1 kHz) and high pulse energy (∼100 nJ). χ(2) erasing is relatively easy to obtain in experiment; for example, the QPM SH process can be observed when the writing pulse energy ranges from 50 nJ to 300 nJ in LN crystal [54]. Wei et al successfully fabricated 3D χ(2)-amplitude-modulated structures in LN crystals. Notably, this technique is suitable for almost all nonlinear crystals, both ferroelectric and non-ferroelectric; for instance, it has recently been used to fabricate χ(2) structures in quartz for QPM SH generation in the deep-ultraviolet band [56]. The main challenge of the χ(2) erasing approach is how to reduce χ(2) to zero in an effective and controllable way. Also, the χ(1) change is generally non-negligible under such strong laser illumination, which may cause extra scattering loss.
Laser-induced 3D domain poling [55] was realized by Xu et al in an x-cut barium calcium titanate (BCT) crystal. In this approach, nonlinear absorption of light produces a high-temperature field, which in turn creates an electric field that drives domain formation. The experimental parameters differ from the χ(2) erasing approach: typically, the poling laser requires a high repetition rate (∼80 MHz) and low pulse energy (∼3 nJ). Such laser poling is applicable to ferroelectric crystals. Compared with the χ(2) erasing approach, the modulation depth of χ(2) in laser poling (from χ(2) to −χ(2)) is doubled, so the conversion efficiency is correspondingly quadrupled; moreover, the χ(1) change is essentially negligible. Currently, 3D domain poling has been demonstrated in ferroelectric crystals with low coercive fields, including BCT and calcium barium niobate. However, extending it to ferroelectric crystals with high coercive fields, such as LN, remains a great challenge.
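The factor-of-four statement can be checked from the Fourier picture of QPM: the SH field couples to the first spatial harmonic of the χ(2) grating, and a 50%-duty sign-flip grating (+χ/−χ) carries twice the first-harmonic amplitude of an amplitude-erased grating (+χ/0). The numerical sketch below confirms the ratio of 2 in amplitude, hence 4 in efficiency.

```python
import numpy as np

# First Fourier coefficient of a 50%-duty chi(2) grating over one period:
# sign-flip poling (+1/-1) vs amplitude erasing (+1/0).
x = np.linspace(0.0, 1.0, 100_000, endpoint=False)
poled = np.where(x < 0.5, 1.0, -1.0)    # domain inversion: chi -> -chi
erased = np.where(x < 0.5, 1.0, 0.0)    # erasing: chi -> 0

def c1(f):
    """Magnitude of the first Fourier coefficient of f on the unit period."""
    return np.abs(np.mean(f * np.exp(-2j * np.pi * x)))

ratio = c1(poled) / c1(erased)
print(f"amplitude ratio {ratio:.3f}, efficiency ratio {ratio**2:.3f}")  # ~2, ~4
```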
Advances in science and technology to meet challenges
Many novel and important applications in nonlinear optics, such as nonlinear beam shaping and nonlinear holography, have been realized in 3D χ(2) structures. For example, through a fully 3D QPM configuration, vortex beams and Hermite-Gaussian beams at SH wavelengths have been efficiently generated in 3D laser-induced χ(2) structures [57,58]. Multi-channel nonlinear holography has been achieved under a QPM-division multiplexing scheme, which significantly enhances both the capacity and the efficiency [59]. It has also been theoretically predicted that high-dimensional entangled states can be realized through SPDC processes in 3D χ(2) structures [60].
To meet the increasing requirements of these advanced optical applications, the major challenge is fabricating large-scale 3D χ(2) structures with high precision, good repeatability, and high stability. Currently, the typical dimensions of 3D χ(2) structures are limited to ∼200 µm. The sample size must be expanded significantly for many practical applications; for example, a sample length of at least 1 cm (along the light propagation direction) is required to raise the conversion efficiency to 10% or above, and high-resolution, high-capacity nonlinear holography requires a large transverse area (∼1 mm²).
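The 1 cm figure is consistent with the quadratic length scaling of undepleted-pump QPM conversion, η ∝ L². The sketch below makes the arithmetic explicit, taking an assumed (hypothetical) 0.004% efficiency for a 200 µm structure as the baseline.

```python
def scaled_efficiency(eta_ref, length_ref_um, length_um):
    """Undepleted-pump QPM conversion efficiency scales as L^2."""
    return eta_ref * (length_um / length_ref_um) ** 2

# Assumed baseline: 4e-5 (0.004%) at 200 um. Scaling to 1 cm gives 10%.
print(scaled_efficiency(4e-5, 200.0, 10_000.0))   # 0.1
```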
The present laser writing of 3D χ(2) structures relies on a point-by-point fabrication strategy. Clearly, when the sample length is increased to 1 cm, the fabrication time grows beyond what the system stability can support. It is critical to develop parallel processing techniques, such as multi-focal laser writing, for fast fabrication of large-scale 3D χ(2) structures. Another technical issue is structural non-uniformity along the depth direction. NLO crystals generally have large refractive indices (for example, 2.2 for LN). When a laser-writing beam is focused from air into the crystal, the refractive index mismatch induces strong spherical aberration and distorts the focal intensity distribution; for example, the focal spot is elongated with increasing fabrication depth, leading to a considerable decrease in peak intensity. One may adjust the pulse energy at different depths to partially compensate the spherical aberration, and pre-shaping the laser writing beam is another important way to correct the intensity profile at the focus.
Concluding remarks
In comparison to 1D and 2D χ(2) structures, a 3D structure has an additional χ(2) modulation dimension. This advantage makes it naturally capable of performing complicated tasks that require high capacity, high coding rate, and high processing efficiency. Currently, the fabrication resolution of laser writing systems is typically ∼1 µm, mainly limited by the diffraction limit of light. Future techniques may push the resolution of 3D χ(2) engineering down to 100 nm, enabling the generation of ultra-narrow-bandwidth quantum entangled light (matching the linewidth of quantum memories). Besides, the potential applications of 3D χ(2) structures can be further extended from nonlinear and quantum optics to terahertz wave manipulation, high-frequency acoustics, and domain wall nano-electronics (figure 9). After 60 years of scientific research, χ(2) structures have been conceptually and experimentally extended from 1D and 2D to 3D, and the community interested in them has expanded from nonlinear optics to quantum optics, integrated photonics, and microwave/terahertz-wave photonics. 3D χ(2) structures provide a novel and powerful platform for fundamental research and practical devices.
Emerging nonlinear optical phenomena enabled by ferroelectric fluids
Satoshi Aya 1,2 and Yan-qing Lu 3
Status
As seen in the other sections of this Roadmap, diverse NLO effects arise in many different systems. Among them SHG, a special case of three-wave mixing, is the most classic and important nonlinear phenomenon, marking the birth of the field of nonlinear optics in 1961 [61]. SHG is the lowest-order NLO effect, governed by the second-order nonlinear susceptibility χ(2); it is an inherent up-conversion process, producing a frequency-doubled photon at 2f from two input photons at the fundamental frequency f. The response is at least ten orders of magnitude stronger than the higher-order NLO effects characterized by the susceptibilities χ(n) (n ⩾ 3) (figure 10) [62]. Such highly efficient wavelength convertibility has not only offered a technological solution for generating short-wavelength light that cannot be obtained from normal light sources, but has also made it possible to probe symmetry-broken nano- to macro-structures [63].
Generally, the SHG process becomes active when inversion symmetry is broken. This condition places a considerable limitation on the material space available for wavelength-conversion applications. Throughout the history of SHG development, low-symmetry polar crystalline materials have been the main candidates. To obtain strong nonlinear output from these materials, the fabrication of periodically poled structures is essential to satisfy the classic QPM condition [63-65] (figure 11). However, the limited processability and manipulability of domain engineering restrict the diversity of achievable polarization orders, hindering the exploration of how NLO phenomena correlate with polarization structures and the optimization of device performance.
These difficulties may be overcome by using soft matter systems, whose self-organization capability makes them attractive candidates for generating NLO structures spontaneously. However, soft matter generally possesses intrinsically high symmetry, at odds with the emergence of polar order and strong SHG. Very recently, we have witnessed a game-changer: liquid-like ferroelectric states of matter, the variants of ferroelectric nematic LCs (N_F and HN* LCs) (figure 11) [66-68]. These fluids exhibit spontaneous polarization, with the local electric polarity coupled to the orientation of the molecules. Unlike crystalline materials, their three-dimensional orientational structures can easily be 'assembled' and modulated by traditional LC alignment techniques and ultra-low electric fields (E-field; ∼1 mV µm⁻¹) [66]. This brings us to a new era in the quest to produce and manipulate various nonlinear light states on demand through diverse, designable polarization structures.
Current and future challenges
Conceptually, the emerging ferroelectric LCs add substantial richness and complexity to the accessible structure and property spaces. Thanks to the sensitive electric-field response of ferroelectric LC structures, this material group offers an unprecedented playground for developing switchable and programmable NLO elements, and their inherent fluidity further favors flexible nonlinear devices. In practice, however, neither the polarization structures nor the resulting nonlinear properties have been fully explored. In particular, we have only just become aware that polar LC states host a large pool of nontrivial topological structures distinct from those in traditional LC phases, structures that have never been addressed even in other polar systems such as spintronic and magnetic systems. This lack of knowledge inevitably prevents detailed study of the NLO effects. In response, three fundamental challenges arise as follows.
(1) Understanding the polarization topology of ferroelectric LCs. This challenge consists of two tasks, experimental and theoretical. Scanning probe microscopy techniques, such as Kelvin probe force and piezoresponse force microscopies, are the most popular methods for characterizing polarization topology in crystalline polymers and inorganics. However, scanning probe microscopy does not apply to ferroelectric LC systems, since their fluidity necessitates mechanically contactless measurement approaches. From the theoretical viewpoint, it is urgent to establish a mean-field free-energy formalism to describe the possible topological species and their structural stability.

(2) Understanding the relationship between polarization structure and light. Given that the fabricated polarization topology can be determined as in (1), the next task is to reveal the coupling between polarization topology and NLO output. While experimentally investigating NLO effects by designing various polarization topologies in ferroelectric fluids is intriguing, developing a generalized theory that captures SHG processes in arbitrary polarization structures remains challenging.
(3) Extending from SHG to general second-order NLO effects. The SHG process is the starting point for extending applications of ferroelectric fluids to other second-order NLO effects, e.g. SFG, DFG, and optical parametric oscillation (figure 10). The essential step is to systematically understand how the phase-matching (PM) condition depends on the polarization structure and on the polarization, wavelength, and wave-vector directions of the interacting light fields, since these dictate the amplification rate and polarization state of the generated NLO fields.
Advances in science and technology to meet challenges
To tackle the challenges listed above, interdisciplinary approaches must address both the fundamental and engineering issues. The essential bottleneck to clarifying polarization topologies in fluids is the lack of suitable observation methods, given the inapplicability of traditional scanning probe microscopy. A feasible alternative would be microscopy based on the SHG process itself, i.e. SHG microscopy. Since the SH signal arises as a product of symmetry breaking, such microscopy probes the spatial distribution of symmetry-broken structures like polarization topology. Yet much remains to be understood concerning the analysis of 3D topological structures: unlike in traditional confocal microscopy, the SH signal in a 3D volume depends not only on the orientation of polarity but also on the optical phase between the fundamental and SHG light, so optical simulation and data fitting methodologies must be established and further improved [69]. Moreover, the accessible structural sizes are limited to roughly the micron scale by the diffraction limit. Developing nonlinear microscopies with resolution down to tens of nanometers, analogous to super-resolution microscopy in the linear optical regime, is a major challenge for this emerging field.
Turning to the identification of novel NLO phenomena, a long-term challenge is the establishment of a database correlating polarization structures with NLO properties. This can be linked with machine-learning methods to effectively predict potential NLO effects. Experimentally, polarization structures can be realized using traditional LC fabrication techniques such as anchoring engineering, spatial confinement, and structural templating [70]. Yet how to precisely control the polarization vector in 3D space remains elusive, and polarization engineering grounded in LC physics needs to be developed; the main difficulty is that orientational engineering is optimized for the so-called nonpolar director field, where head-to-tail equivalence holds. Another obstacle is the limitation of traditional NLO theories: since they mostly assume a linearly polarized fundamental wave and a non-birefringent medium, polarity-orientation-dependent effects, such as the nonlinearity tensor and optical retardation, are ignored. This leads to inaccurate calculations of NLO properties in birefringent NLO media, necessitating generalized theories for these effects. These combined efforts will push toward more exotic NLO phenomena.
Concluding remarks
Beyond traditional solid NLO elements, the recent discovery of highly fluid ferroelectric LCs, the first class of liquid nonlinear materials, has generated considerable scientific interest and technological value in electronics and photonics. In particular, the designability, controllability, and manipulability of their polarization structures hold great potential for exploring many unknown NLO effects. In the coming years, the community will face the urgent task of understanding how 3D polarization fields modify NLO fields and of establishing the relevant database, guided by both theory and experiment. Facile polarization engineering in ferroelectric LCs provides unique opportunities for amplifying nonlinear light through specifically designed polarization structures and topologies, and for actively tuning nonlinear light with external fields. These approaches will, on the one hand, keep offering new and exciting branches of fundamental science in nonlinear optics; on the other hand, they carry important implications for future NLO devices featuring compactness, flexibility, and fast, low-field responsivity.
Status
Over the past decades, the invention of the laser has greatly promoted the development of modern science and technology. NLO materials, a crucial part of laser science, have attracted increasing attention from researchers because of their wide applications in information storage, precise micro-manufacturing, and related fields. An ultraviolet (UV) or DUV NLO crystal with excellent performance needs to fulfil three key properties: strong SHG response, sufficient birefringence, and a UV cut-off edge as short as possible. It is therefore important to study the relationship between these three key properties and the structures of the materials.
Academician Chen Chuangtian proposed the anionic group theory [71,72] in the 1980s to study the structure-property relationships of NLO crystals. According to the anionic group theory, BO3, B3O6 and B3O7 groups are considered the core functional building blocks (FBBs) for constructing UV NLO borates, owing to their π-conjugated structures and wide band gaps. Since then, borates have been developed extensively as NLO materials [73-79]. Numerous NLO materials with these π-conjugated structures have been designed and exhibit excellent properties (figure 12). Exploring new NLO materials by expanding the π-conjugated structures is therefore not only one of the most effective strategies but also a state-of-the-art research hotspot.
Current and future challenges
Over the past decade, UV/DUV NLO materials have developed prosperously, and a number of famous NLO crystals have been developed to serve applications in many civil and military fields. However, several issues remain under investigation. The three key properties that govern the applications of UV/DUV NLO crystals are a large SHG response, sufficient birefringence, and a wide band gap. According to structure-property relationships, these three key properties constrain one another, and it is difficult to optimize all of them in a single crystal. Therefore, how to balance these three key properties is a significant issue and a huge challenge in designing UV/DUV NLO crystals with excellent comprehensive performance.
Based on the anionic group theory, π-conjugated structures are conducive to strong SHG responses owing to their large hyperpolarizability. Furthermore, strong anisotropy of the FBBs in a crystal helps increase the birefringence; if the birefringence is too small, the NLO crystal cannot achieve PM. Much effort has therefore been devoted to expanding the family of π-conjugated FBBs with large hyperpolarizability and strong anisotropy. A wide band gap, in turn, helps expand the transmission range of the crystal. Theoretical calculations show that the band gap of an NLO crystal is determined by the separation between the top of the valence band and the bottom of the conduction band; the width of the band gap is therefore related to the elements that make up the NLO-active units. A large electronegativity difference is beneficial for broadening the band gaps of NLO materials. In addition, according to structure-property relationships, dangling bonds in the structure strongly influence the band gap. The relationships among the three key properties of NLO materials have received much attention, and research on them continues. In-depth study of the structure-property relationships of NLO crystals, and the use of π-conjugated units to tune the three key properties, remain major challenges.
Advances in science and technology to meet challenges
The wide applications of NLO materials pose great challenges to structural design. Structural modification by molecular engineering is an important approach to designing and synthesizing novel structures. In the coming decades, using structural design to tune the three key properties of UV/DUV NLO crystals will remain a major challenge. Moreover, the use of counter ions to regulate the arrangement of new FBBs could help to obtain UV/DUV NLO crystals with excellent comprehensive properties (figure 13). Based on deeper study of the structure-property relationship, π-conjugated structures with larger hyperpolarizability and stronger anisotropy will be beneficial for increasing the SHG response and birefringence. These π-conjugated structures are no longer limited to inorganic units, but have been extended to organic units; expanding inorganic π-conjugated NLO-active units and designing new organic π-conjugated NLO-active units have become the new approach and direction for the development of this field. Organic units often have stronger pπ-pπ connections, which increase the hyperpolarizability and anisotropy of the units, so it can be inferred that stronger pπ-pπ connections may lead to stronger SHG responses in organic FBBs. However, when organic units absorb UV radiation, their outer electrons are excited from the ground state; excessively extended pπ-pπ connections narrow the band gap and red-shift the UV cutoff edge of the compound. Therefore, introducing terminal groups, such as F⁻ or OH⁻, effectively reduces the dangling bonds in the structure, thereby widening the band gap, which benefits the extension of the UV transmission range toward shorter wavelengths.
Concluding remarks
The field of UV/DUV NLO materials has aroused great interest in the scientific community over the past few decades. As a key component of laser frequency doubling technology, UV/DUV NLO materials lay the foundation for laser applications in many civil and military sciences. With the continuous growth of UV/DUV laser applications, expanding the set of UV/DUV NLO materials with excellent performance is a necessary, state-of-the-art direction. Balancing the three key properties of UV/DUV NLO crystals, based on in-depth study of structure-property relationships, is a crucial challenge. Molecular engineering with organic FBBs in ionic organic compounds is an effective approach to developing the next generation of UV/DUV NLO crystals. With these considerations, we anticipate that research in the field of NLO materials will provide significant advances for both fundamental science and practical applications.
Status
Ever since the discovery of graphene in 2004, a growing number of van der Waals (vdW) two-dimensional materials have been observed and studied. As key members of the low-dimensional materials family, atomically thin two-dimensional materials exhibit many intriguing and sometimes exotic properties and phenomena, driving important developments in fundamental science and technological applications over the past 18 years. The emergence of novel materials and properties also provides a new playground for investigating new NLO phenomena and potential functional applications.
Because the light-matter interaction in vdW 2D materials is confined to an atomically thin layer, many NLO responses, such as SHG, THG, sum-frequency or difference-frequency mixing, and even HHG, are extraordinarily enhanced, as witnessed in graphene [80-82], monolayer transition metal dichalcogenides [83-85], and monochalcogenides [86]. Moreover, some often-negligible nonlinear processes become prominent. For example, SHG in centrosymmetric graphene, arising from an electro-quadrupole contribution, was revealed [81]. As another example, giant nonreciprocal SHG from interlayer antiferromagnetism was discovered in CrI3 bilayers [87], with a second-order nonlinear susceptibility about three orders of magnitude larger than that of bulk antiferromagnets.
With the expansion of the 2D materials family, 2D materials exhibit various crystal structures and symmetries. More intriguingly, the crystallographic symmetry can be varied depending on how one monolayer sheet is stacked on top of another [82,84-86]. Because of the weak van der Waals interaction between layers, such symmetry variations can be engineered artificially, without the lattice-matching restriction of conventional materials. Nonlinear optics, SHG in particular, is known for its extreme sensitivity to symmetry; 2D materials therefore host rich layer- and stacking-dependent NLO responses.
Perhaps the most fascinating aspect of 2D materials is their susceptibility to external controls such as carrier doping [80,81], electric and magnetic fields [87], strain [88], and pressure [89] (figure 14). The NLO responses of 2D materials can thus be readily controlled. By tuning the chemical potential to switch interband transitions on or off in gated monolayer graphene, the second- and third-order NLO coefficients were varied by several orders of magnitude [80,81]. By controlling the magnetic states of two-dimensional magnets with an external magnetic field, the time-noninvariant SHG was switched on and off [87]. The tuning of NLO responses in 2D materials not only demonstrates the power of nonlinear optics for studying novel quantum materials, but also illustrates the potential of 2D materials for NLO devices, such as mode-locked fiber lasers [90,91].
Current and future challenges
The emergence of novel NLO responses in 2D materials brings excitement but also poses quite a few challenges.
Firstly, 2D materials exhibit intriguing physical properties and novel quantum phases, so various NLO processes can appear; moreover, because they are thinned down to the atomic scale, all of their atoms are exposed at a surface or interface, making it challenging to identify the origin of NLO responses. Second-order nonlinear processes such as SHG are sensitive to inversion symmetry breaking under the electric-dipole approximation. At a surface or interface, inversion symmetry is intrinsically broken, and SHG is electric-dipole allowed; if the 2D material is noncentrosymmetric in its bulk form, the surface and bulk mechanisms of SHG are both electric-dipole allowed. Even if the crystallographic structure of a 2D material is centrosymmetric, SHG can be produced under the stimulus of an electric field, magnetic field, strain, or pressure; in particular, nonreciprocal SHG can appear in magnetic materials with broken time-reversal symmetry. Pinpointing the NLO processes at play therefore requires special attention.
Secondly, the flatland of 2D materials is enriched by moiré superlattices artificially created in the form of hetero- or homo-structures. Depending on the twist angle and lattice mismatch, the period of these moiré superlattices is often in the range of 1-100 nm: much larger than the lattice constant, but smaller than the diffraction-limited laser spot at the visible or near-infrared wavelengths used for NLO measurements. Within a moiré period, the interface may undergo atomic reconstruction, particularly at small twist angles. This reconstruction, along with the distortion of the moiré superlattice inevitably induced by strain and disorder, further complicates the understanding of NLO phenomena measured over an ensemble of moiré cells. The study of nonlinear optics in 2D moiré superlattices therefore urgently calls for ultrahigh spatial resolution.

Last but not least, the limited sample volume and the electronic states of 2D materials permit nonperturbative NLO processes, such as HHG, driven by readily available femtosecond pulsed lasers, although the damage threshold of 2D materials is relatively low. Such processes are subject not only to symmetry restrictions but also to many-body coherent dynamical interactions. Unlike the three-step model applied to gas-phase high harmonic generation, new physical pictures and theoretical frameworks are needed to understand nonperturbative ultrafast coherent processes in ultrathin materials.
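The length-scale mismatch in the moiré discussion above can be made concrete with the small-angle moiré period L ≈ a/√(δ² + θ²), where a is the lattice constant, θ the twist angle (in radians), and δ the relative lattice mismatch. The sketch below applies it to twisted bilayer graphene: at the 1.1° magic angle the period is ∼13 nm, far below a diffraction-limited visible spot of several hundred nanometers.

```python
import math

def moire_period_nm(a_nm, twist_deg, mismatch=0.0):
    """Small-angle moire period L = a / sqrt(delta^2 + theta^2)."""
    theta = math.radians(twist_deg)
    return a_nm / math.sqrt(mismatch**2 + theta**2)

# Twisted bilayer graphene, a = 0.246 nm, magic angle ~1.1 degrees.
print(f"{moire_period_nm(0.246, 1.1):.1f} nm")   # ~12.8 nm
```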
Advances in science and technology to meet challenges
To understand the emergent NLO phenomena in 2D materials, one can conversely utilize the tunability of their structures and properties. Therefore, the corresponding NLO measurements shall be compatible with various tuning techniques, such as cryogenic temperature, electric transport, superconducting magnet, hydrostatic pressure, and strain engineering. If so, the rich nonlinear optics of 2D materials can be deciphered in a neat way. The symmetry-sensitive NLO experiments would become a powerful tool to help resolve a series of important problems in condensed matter physics and materials science, such as pairing mechanism in unconventional superconductivity, exotic quantum phases in intrinsic multiferroics, topological materials and so on.
The advances of nonlinear optics in 2D materials have benefited greatly from diffraction-limited optical microscopy. To achieve higher spatial resolution for imaging the interior of moiré superlattices or various domains, tip-enhanced NLO microscopy is a better choice. This near-field technique collects inelastic NLO signals such as SHG and THG; because of the local electric field enhancement at the tip apex, the NLO signals can be enhanced by orders of magnitude, so the spatial resolution can reach the nanometer or even sub-nanometer scale. So far, such tip-enhanced NLO microscopy has been demonstrated at room temperature; combining it with low temperature, magnetic fields, and ultrahigh vacuum would greatly extend its applications for revealing NLO processes in nanomaterials.
The development and applications of nonlinear optics rely heavily on pulsed laser sources. For studying atomically thin 2D materials, the average laser power should be around 1 mW with diffraction-limited focusing, otherwise the sample will be damaged. The ideal sources are wavelength-tunable femtosecond pulses with a repetition rate of 1-10 MHz. To push the applications of NLO effects in 2D materials, fiber-based portable lasers are also desirable.
Concluding remarks
The fascinating properties of van der Waals 2D materials provide a unique opportunity to explore various NLO phenomena in a neat manner. Likewise, NLO techniques have become an indispensable and powerful method for revealing symmetry and physics in 2D materials. By incorporating the various tuning knobs, device fabrication, and scanning probe techniques used in condensed matter physics and materials science, the understanding and development of nonlinear optics in 2D materials and related systems will advance to an unprecedented level. Real applications based on these novel NLO effects will emerge in the foreseeable future.
Qingyun Li and Hui Hu
School of Physics, Shandong University, Jinan 250100, People's Republic of China

Status

LN crystal has been widely used in photonics owing to its wide transparency window, large electro-optic (EO) effect, and high second- and third-order nonlinear-optical coefficients. Classical optical waveguides (proton exchange and titanium diffusion) on bulk LN generally have a small refractive index contrast, resulting in large waveguide bending radii, which hampers device miniaturization and the further development of LN in integrated optics. LNTF (LN on insulator [LNOI], or thin-film LN) has a large refractive index contrast, which enables tight bending radii of tens of microns and sub-micron waveguide cross-sections, allowing high-density photonic integration and strong optical confinement that enhances light-matter interactions. In 1998, single-crystal LNTF was obtained by combining ion implantation and lateral etching [92]. With improvements in preparation technology, high-quality, wafer-scale LNOI has become commercially available, facilitating the development of integrated LN photonics. One LNOI fabrication method uses a series of processes, such as ion implantation, direct bonding, and thermal annealing, to physically peel the LNTF from the bulk LN and transfer it to a substrate [93]. The lapping and polishing method can also produce high-quality LNOI; it has less influence on crystal quality but places strict requirements on thickness uniformity control. LNOI not only retains the original physical properties of bulk LN, but also has a single-crystal lattice structure, which benefits low optical transmission loss.
Compared with other integrated optical material platforms (such as SOI, InP, and SiNx), LNOI offers low optical absorption, fast optical modulation, and efficient all-optical nonlinearities. The optical waveguide is one of the fundamental components of integrated photonics; LNOI waveguides have achieved a propagation loss of 0.027 dB cm⁻¹ [94], a milestone toward large-scale photonic integration and single-photon-level processing. Numerous photonic functions have been realized, such as high-bandwidth EO modulation, high-efficiency nonlinear conversion, and EO-controllable OFC generation [95]. A series of LNOI materials have also been developed, such as rare-earth-doped LNOI for amplifiers and lasers, MgO-doped LNOI for high-optical-power applications, and double LNOI layers for nonlinear-optical processes [96]. Currently, 4 and 6 inch LNOI wafers are routinely available and find important applications in optical communications, microwave photonics, and quantum photonics.
Current and future challenges
LNOIs have become a promising material platform for integrated photonics, but several challenges remain.
1. Large-size LNOIs. Photonics device fabrication generally requires a photolithography resolution of tens of nanometers. A lithography system for large-size substrate (8 or even 12 inch substrate, for example) has a much better accuracy and resolution than those of small-size substrate (4 or 6 inch substrate, for example), and can be used to fabricate photonic devices with fine structures and good repeatability. In addition, large-size LNOIs can reduce the average cost of device fabrication. Hence, large-size (8 or 12 inch) LNOIs are preferred for better device performance and industry production. Depending on the fabrication process, the LNOI diameter is determined by the LN bulk wafer, which is limited by the crystal boule growth process.
2. Uniform-thickness LNOIs. To achieve a high-efficiency nonlinear-optical process using χ(2), careful design and strict control of the waveguide geometry are needed to meet the phase matching conditions. A thickness variation of several nanometers can have a distinct effect on the joint spectral intensity of SPDC [97].

3. Local rare-earth doping on LNOIs. Many important photonic devices (such as lasers, amplifiers, and optical quantum memories) are enabled by various rare-earth ions, and significant progress in lasers and amplifiers has been made owing, for example, to Er³⁺-doped LNOI [98]. However, the rare-earth-doped areas of an LNOI exhibit transmission absorption. Local rare-earth doping on LNOI platforms is therefore preferred for integrated photonic circuits.
4. Heterogeneous integration. Heterogeneous integration of materials (such as Si or III-V semiconductors) on LNOI, which combines complementary material properties, is an area of interest; it can also provide light sources and detectors for LNOI photonics. Furthermore, LNTFs are usually supported by a SiO2/Si substrate, which can survive a maximum annealing temperature of about 600 °C. Substrate materials such as sapphire, quartz, and SiC are sometimes preferred for features such as MIR transparency, low radio-frequency loss, high acoustic velocity, and high thermal conductivity. One challenge in heterogeneous integration is to overcome the thermal mismatch between materials.
Advances in science and technology to meet challenges
The fabrication steps of semiconductor processing, e.g. direct wafer bonding, ion implantation, and chemical mechanical polishing, are mature technologies that can support large-size LNOI fabrication. LNOI size and physical properties are primarily decided by the bulk LN crystal (LN wafer). Currently, the maximum size of commonly available optical-grade LN wafers is 6 inches. Such LN wafers have low optical absorption and uniform physical properties (such as refractive indices) across the wafer. MgO-doped and rare-earth-doped LN wafers can have diameters of 4 and 3 inches, respectively, and all can be X-cut or Z-cut. The development of large-size bulk LN crystals is driven by market demand, and 8 inch LN and LNOI wafers are being developed. Figure 15(a) shows an 8 inch LNOI wafer; the LNTF is X-cut with a thickness of 400 nm. Figure 15(b) shows the diffraction peaks of the LNOI as measured by high-resolution x-ray diffraction: the FWHM of the LNTF peak is 0.029°, showing that the LNTF is mono-crystalline. The inset shows a schematic of the cross-section. Systematic optimization of the LNOI production process aims to improve the thickness uniformity of LNOI and to solve the thermal mismatch between the LNTF and various substrate materials; LNOI is expected to achieve a thickness uniformity of 1 nm and strong bonding to the substrate. Meanwhile, studies of physical properties, such as the photorefractive effect, dielectric relaxation, and loss mechanisms of LNOI, are essential for many applications.
For nonlinear-optical processes, using the concept of the optical superlattice, periodic poling structures are realized on LNOI by several poling techniques to fulfill the QPM condition. A sub-micrometer poling period was achieved on MgO-doped LNOI using multiple bipolar preconditioning pulses [99]. For industrial application, the techniques and mechanisms of poling must be further explored to improve yield and domain uniformity. Local rare-earth doping of LNOI has been achieved by selective ion implantation and annealing [100]. Heterogeneous integration has also been developed on LNOIs to obtain charge-injection lasers and detectors with excellent performance [101]. All of these processes require optimization of the heterogeneous integration process to overcome the complexity of fabrication and improve yield. Stoichiometric LNTFs are sometimes needed for their low coercive field and high EO coefficient.
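As a rough illustration of the periods involved, the first-order QPM poling period for SHG follows from Λ = λ/[2(n₂ω − nω)]. The sketch below uses representative extraordinary indices of bulk LN and purely illustrative effective indices for a strongly dispersive LNTF nanowaveguide; neither pair is taken from the cited experiments.

```python
# First-order QPM poling period for SHG: Lambda = lam_pump / (2*(n_2w - n_w)).
def qpm_period_um(lam_pump_um, n_fund, n_sh):
    """First-order QPM period in micrometers."""
    return lam_pump_um / (2.0 * (n_sh - n_fund))

# Bulk-like extraordinary indices at 1550/775 nm -> ~18.5 um (typical PPLN scale)
print(qpm_period_um(1.550, 2.138, 2.180))

# Stronger modal dispersion in a nanowaveguide (assumed effective indices)
print(qpm_period_um(1.550, 1.85, 2.05))   # ~3.9 um: why thin-film periods shrink
```

The same formula shows why backward QPM is so demanding: a counter-propagating geometry replaces the small index difference with an index sum, pushing Λ into the sub-micrometer range mentioned above.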
Concluding remarks
Benefitting from its excellent physical properties, large refractive index contrast, and low optical loss, LNOI constitutes a promising material platform for the development of integrated photonics. Furthermore, LN is an industry-proven material with an annual usage of millions of wafers. The gradual industrialization of LNOI devices will in turn strongly guide and support LNOI material development. In the future, multifunctional integrated photonic systems of more diverse types and more complex structures are expected to be realized on the LNOI platform. With further improvement of the fabrication technology, LNOIs will be developed with large size, high uniformity, and heterogeneous integration to meet the needs of academic and industrial applications.
Yuanlin Zheng and Xianfeng Chen State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
Status
LNTF is only in its teenage phase but has been considered a revolutionary platform for integrated photonics. Multifunctional dense integrated photonics is directly accessible on LNTF thanks to the exceptional properties of LN and the high refractive index contrast between LN and the buffering silica. Photonic integrated circuits (PICs) on LNTF, consisting of nanowaveguides, resonators, and cavities for strong light-matter interaction, have shown unprecedented conversion efficiency and superior performance in nonlinear wave mixing. This advancement has rejuvenated LN-based photonics in a plethora of sciences and applications in the EO, nonlinear, and quantum regimes [95, 102, 103]. Nonlinear frequency conversion processes based on second-order nonlinearity, such as SHG, SFG/DFG, OPA/OPO, and SPDC, and on third-order nonlinearity, like FWM, supercontinuum generation, and Kerr OFC, have been successfully demonstrated. These advances have significantly expanded the opportunities to flexibly tailor, manipulate, or convert light in a nonlinear manner with high efficiency. It is thus natural to explore how to further enhance light-matter interaction at the level of fundamental physics and to develop high-performance (nonlinear) photonic devices at the technological level.
Focusing on domestic research, the experimental advances have developed in parallel with the international community. In the past few years, several seminal works on nonlinear frequency conversion on this platform have been reported. As shown in figure 16(a), efficient quadratic wave mixing exploiting a novel broadband QPM mechanism in x-cut LNTF microdisks (Q factor ∼10⁷) has yielded simultaneous second-harmonic and cascaded third-harmonic generation, with a normalized conversion efficiency as high as 9.9% mW⁻¹ for the second harmonic [104]. Later, the Q factor reached a record high of 10⁸, close to the theoretical prediction, further paving the way for cavity-enhanced nonlinearity at weak light levels. By designing an ultra-broadband edge coupler and high-performance PPLN nanowaveguides, efficient SHG with a fiber-to-fiber normalized efficiency of 1027% W⁻¹ cm⁻² has recently been demonstrated [105], which is ready for user applications (figure 16(b)). A heterostructure cavity composed of a guided-mode resonance (GMR) resonator and distributed Bragg reflectors (figure 16(c)) showcases strongly enhanced electromagnetic field localization, boosting SHG by three orders of magnitude [106]. LNTF metasurfaces (figure 16(d)) have been shown to selectively boost SHG at different wavelengths via Mie resonances [107]. These works and others illustrate the excitement in this field and indicate the current research foci in China.
Current and future challenges
Applications of classical and quantum PICs on LNTF call for full manipulation of light in all its properties, yet challenges remain before this ultimate goal is reached. These challenges span theoretical and experimental aspects. On the theoretical side, current coupled-mode theory models can solve most problems in classical parametric processes. OFC generation exploiting cascaded quadratic processes is anticipated to be more efficient than Kerr OFC yet more sophisticated, and requires detailed systematic investigation. When the interaction enters the few-photon or single-photon level, full quantum modelling is needed. Besides, new concepts from related research areas, such as twisted photonics, topological photonics, non-Hermitian photonics, and synthetic dimensions, are to be adopted or considered when this type of nonlinearity is present. On the experimental side, efficient nonlinear wave mixing in LNTF requires perfect phase matching, and QPM remains the best solution; the condition becomes more stringent as dispersion gets stronger in nanowaveguides. On the one hand, strong waveguide dispersion provides more engineering flexibility for dispersion management and wide-bandwidth phase matching. On the other hand, although the electrical poling voltage for LNTF domain reversal is dramatically reduced to less than 1 kV, the required QPM period is also reduced and high uniformity is needed. This makes the fabrication of QPM gratings in LNTF nanowaveguides technically difficult, especially nanoscale QPM gratings for backward SHG, mirrorless OPO, or counter-propagating photon-pair generation.
Further improving the conversion efficiency toward single-photon nonlinearity, so that quantum effects become manifest, would open new avenues for fundamental research on light-matter interaction. Accessing single-photon nonlinear interaction is also a key step toward quantum photonic circuits for flying qubits. High absolute frequency conversion efficiency is needed, because single-photon nonlinearity is extremely weak and quantum optics is intolerant of loss. Besides, operation in a cryogenic environment is important for integration with superconducting nanowire single-photon detectors. Conversely, reaching single-photon nonlinearity for upconversion detection at room temperature is also highly desirable for quantum applications.
For LNTF nonlinear metasurfaces, the nanostructures require higher-precision etching, demanding advances in LNTF nanofabrication. Simultaneous multidimensional manipulation with fast response and high efficiency remains a challenge, although novel mechanisms based on GMR resonators, Mie resonances, and bound states in the continuum (BIC) [108] have been successfully realized. The nonlinear efficiency and tunability of LNTF metasurfaces are still far from satisfactory.
Advances in science and technology to meet challenges
The development of photonics on LNTF relies on scientific and technological advancement of the platform, where the core techniques mainly involve high-precision LNTF fabrication, domain engineering, and heterogeneous integration.
High-precision nanofabrication is important for dispersion management. Dry etching of LNTF is favored and adopted by most research groups, but the sidewall loss must be mitigated by optimized plasma etching with improved surface roughness or reduced material redeposition. Developing wet chemical etching methods to remove the redeposition and smooth the sidewalls is also a good solution. Beyond plasma etching, photolithography-assisted chemo-mechanical etching has proven to be a good method for fabricating meter-long nanowaveguides and scalable high-quality PICs on LNTF, where surface roughness can be easily minimized via chemo-mechanical polishing.
Domain engineering is important for QPM frequency conversion. Generally, poling LNTF is less demanding than poling bulk LN. Direct electric-field poling of LNTF can achieve micrometer-scale QPM periods as defined by ultraviolet or e-beam lithography. Experimentally, the SHG conversion efficiency, over 20 times that of proton-exchanged PPLN waveguides, has approached the theoretical prediction, suggesting high-quality domain matching. Direct electric poling of LNTF with sub-micrometer periods is still difficult; with the assistance of piezoresponse force microscopy, a poling period down to 200 nm has been demonstrated [109], which is feasible for backward QPM in nanocavities or microresonators. Other novel configurations that achieve enhanced nonlinear wave mixing while avoiding domain reversal involve nontrivial mode matching in ultrahigh-Q resonators, GMR resonances, and (quasi-)BIC conditions. These efforts have demonstrated giant, orders-of-magnitude enhancements in experiments.
For systems on a chip, integration is important yet cannot be achieved with pure LNTF. To realize on-chip classical light sources, rare-earth-ion-doped LNTF has been developed, with successful demonstrations of lasing and optical amplification. Double doping has also been realized to improve the energy transfer efficiency. Heterogeneous integration of LNTF with silicon, Si₃N₄, InP, GaN, etc, to form hybrid systems not only provides laser sources and detectors but also enables compatible operation with other platforms. Its implementation will provide full functionality for disruptive applications in photonics, microwave photonics, and quantum optics.
Concluding remarks
The past few years have witnessed exponential growth in integrated photonics on the LNTF platform. The current trend is to improve the interaction efficiency from both physical and technical points of view, a natural direction in nonlinear optics. The motivation to fully incorporate laser sources (both classical and quantum), optical devices (including frequency converters), and photodetectors (by heterogeneous integration) on a monolithic chip, so as to tailor, manipulate, or convert light at will for all-optical processing, propels the research and technological progress in this field. In the future, new concepts, such as twisted photonics, topological photonics, non-Hermitian photonics, and synthetic dimensions, can be combined with nonlinearity to explore new physics, new technology, and new applications on LNTF. Challenges remain on both the physical and technological fronts, but they present new opportunities at the same time. Research on integrated photonics on LNTF is anticipated to grow even more rapidly, providing opportunities for both optical physics and applications.
Status
Along with the rapid development of information technology, the need for ultrahigh-speed, low-energy-consumption information processing chips is becoming increasingly urgent. Using photons as information carriers is a very promising way to reach ultrahigh-speed, low-energy-consumption information processing, and nanoscale photonic devices are the essential basis for constructing integrated photonic chips. Two key metrics for nanoscale photonic devices are an ultrahigh speed of over 10 Tbit s⁻¹ and a low energy consumption of less than 10 fJ/bit [110]. Unfortunately, an ultrafast response combined with a giant third-order optical nonlinear coefficient is difficult to obtain in conventional optical materials and microstructures. This makes it a great challenge to reach weak-light-induced ultrafast and giant third-order optical nonlinearity [111], and has greatly restricted the realization of ultrafast, low-power nanoscale photonic devices. More often than not, nanoscale photonic devices offer either high speed at high energy consumption (ultrafast response of the order of several femtoseconds with energy consumption of the order of several nJ/bit) or low energy consumption at low speed (slow response of several microseconds with energy consumption of the order of several fJ/bit). Recently, great breakthroughs have been achieved in ultrafast, low-power nanoscale photonic devices, as shown in figure 17. In 2020, NTT reported a nanoscale all-optical modulator based on a plasmonic slot waveguide covered with a single-layer graphene flake, with an energy consumption of 30 fJ/bit and a modulation speed of 1 Tbit s⁻¹ [111]. However, great efforts are still needed to reach the target of nanoscale photonic devices with ultrahigh speeds of over 10 Tbit s⁻¹ and energy consumption below 10 fJ/bit [112].
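The two figures of merit are linked by simple arithmetic: the average optical power a switch dissipates equals the energy per bit times the data rate. A quick sketch using the numbers quoted above:

```python
# Average switching power implied by an energy-per-bit figure at a given rate:
#   P_avg = E_bit * R
def avg_power_mW(e_bit_fJ, rate_Tbit_s):
    return e_bit_fJ * 1e-15 * rate_Tbit_s * 1e12 * 1e3

# Reported graphene/plasmonic modulator: 30 fJ/bit at 1 Tbit/s [111]
print(avg_power_mW(30, 1))    # 30 mW
# Roadmap target: <10 fJ/bit at >10 Tbit/s [110]
print(avg_power_mW(10, 10))   # 100 mW
```

This is why the two metrics must be improved together: raising the speed alone, at fixed energy per bit, proportionally raises the power budget of each device.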
Current and future challenges
High-speed, low-energy-consumption photonic information processing is built on light-controlling-light methods based on third-order optical nonlinearity. For example, in the third-order NLO Kerr effect, n = n₀ + n₂I, where n, n₀, and n₂ are the refractive index, linear refractive index, and nonlinear refractive index of the nonlinear material and I is the pump light intensity, the refractive index of the nonlinear material changes with the external pump light intensity. This makes it possible for the propagation state of a signal light to be controlled by the pump light through the pump-intensity-dependent refractive index change of nonlinear materials and nanostructures. For conventional semiconductor and organic materials, non-resonant excitation results in a small third-order nonlinear coefficient and an ultrafast response time of the order of several femtoseconds, while resonant excitation leads to a large third-order nonlinear coefficient and a slow response time. Plasmonic microstructures can be used to enhance the third-order optical nonlinear response through the strong light confinement of plasmonic modes, but their relatively large intrinsic Ohmic losses have limited the large-scale integration of plasmonic microstructures in photonic chips [113]. This material bottleneck has therefore greatly restricted the realization of ultrafast, low-power nanoscale photonic devices, and great efforts are needed to overcome it in the future.
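A minimal numerical sketch of this light-control-light mechanism, using the Kerr relation n = n₀ + n₂I; the values of n₂, pump intensity, and interaction length below are illustrative assumptions, not parameters of the cited devices.

```python
# Kerr-induced index change and the resulting nonlinear phase on a signal:
#   dn = n2 * I,   phi_NL = (2*pi/lam) * dn * L
import math

n2  = 1e-9     # nonlinear refractive index (cm^2/W), assumed illustrative value
I   = 1e6      # pump intensity (W/cm^2), assumption
L   = 1e-2     # interaction length: 100 um, in cm
lam = 1.55e-4  # wavelength: 1550 nm, in cm

dn  = n2 * I
phi = 2 * math.pi * dn * L / lam
print(f"index change  dn = {dn:.1e}")       # 1.0e-03
print(f"nonlinear phase  = {phi:.2f} rad")  # ~0.41 rad
```

The sketch makes the trade-off explicit: at a fixed device length, a smaller n₂ must be compensated by a higher pump intensity (and hence energy per bit) to reach a switching-scale phase shift.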
Advances in science and technology to meet challenges
Various approaches have been proposed to obtain weak-light-induced ultrafast and giant third-order optical nonlinearity in the near-infrared and optical communication ranges. In the wavelength range from the visible to around 800 nm, constructing organic composites and exploiting excited-state charge transfer (or energy transfer) is an effective method to reach a nonlinear refractive index of the order of 10⁻⁹ to 10⁻⁷ esu with a fast response time of the order of several picoseconds [114]; an on-chip all-optical switch with a speed of 60 Gbit s⁻¹ and an energy consumption of 100 fJ/bit has been achieved this way [114]. In the optical communication range, constructing nanocomposite materials and adopting compound reinforcement of the nonlinearity is an effective enhancement method [115]. Not only can the local-field enhancement effect and the quantum size effect of nanoscale crystal grains be used to enhance the third-order nonlinear coefficient, but fast response times of the order of tens of picoseconds can also be achieved, owing to fast carrier recombination induced by lattice defects at the interfaces of the nanoscale grains. The nonlinear refractive index can reach the order of 10⁻¹⁰ to 10⁻⁷ cm² W⁻¹, and an on-chip all-optical modulator with a speed of 50 Gbit s⁻¹ and an energy consumption of 80 fJ/bit has been achieved [115]. Constructing nanocomposites based on epsilon-near-zero materials is another excellent method to obtain a large nonlinear refractive index of the order of 10⁻¹⁰ to 10⁻⁸ cm² W⁻¹ and an ultrafast response time of the order of hundreds of femtoseconds [116-118], as shown in figure 18. Recently, a method in which the signal light modulates the imaginary part of the refractive index to induce parity-time symmetry breaking (or inverse parity-time symmetry breaking) was proposed and used to realize an all-optical modulator with a speed of 1 Tbit s⁻¹ and an energy consumption below 10 fJ/bit [112]. New methods of enhancing ultrafast, giant third-order nonlinearity are still urgently needed to reach the target of nanoscale photonic devices with ultrahigh speeds of over 10 Tbit s⁻¹ and energy consumption below 10 fJ/bit [119].
Concluding remarks
The realization of ultrahigh-speed, ultralow-energy-consumption photonic information processing chips is a long-term goal full of challenges. Ultrafast, low-power nanoscale photonic devices, as the core of such chips, still require novel physics and effects to improve their performance, response time, and energy consumption. Given the great efforts being invested in this field, continued breakthroughs can be expected in the near future.
Status
The trend in surface science has been directed more toward complex systems in recent years [120], in which the microscopic structures are closely related to electrochemistry, catalytic chemistry, biology, and many other disciplines. However, experimental study of such interfaces is challenging, because most surface-specific analytic tools that found successful application in vacuum encounter various difficulties under ambient/liquid conditions. At an interface between two adjacent media, the second-order NLO process is allowed because the inversion symmetry is necessarily broken; thus, SHG and sum-frequency spectroscopy (SFS) are highly surface-specific [121]. These spectroscopic techniques have been used widely to probe the surfaces and interfaces of many materials in various disciplines since their demonstration in 1981 (SHG) and 1987 (SFS). In comparison to other surface analytical tools, SHG/SFS allows in situ, remote, and non-destructive probing of surfaces in real environments with sub-monolayer sensitivity.
Benefiting from continuous advances in laser techniques, SFS/SHG has improved significantly in sensitivity, spectral range, and spatial and temporal resolution. One notable advance is the development of phase-sensitive SFS [122-124], which provides both the real and imaginary spectra of interfacial resonances, avoiding ambiguity in the spectral analysis. The first viable phase-sensitive SFS was developed in the Shen group in 2007 using a green beam and a tunable MIR beam, derived from a narrow-band picosecond laser, as the pump [123]. As depicted in figure 19(a), the pump beams propagate collinearly through the sample and a reference nonlinear crystal; their relative phase is modulated through continuous rotation of a phase plate inserted in between. Alternatively, the phase information can be measured using the multiplex scheme shown in figure 19(b), as demonstrated by the Benderskii and Tahara groups based on broadband femtosecond laser systems [125, 126]. With the acquired complex spectra, one may not only obtain the complete spectral information of an interface, including the absolute orientation of a moiety, but also decompose the different contributions experimentally.
Current and future challenges
In the past 15 years, phase-sensitive SFS has provided new opportunities for surface and interface studies, arousing researchers' interest in revisiting the structural interpretation of some solid and/or liquid interfaces. In particular, our understanding of charged interfaces has been greatly advanced with the help of phase-sensitive SFS. Charged interfaces are ubiquitous, e.g. in lipid membranes, photocatalysis, and electrochemistry. Following the Stern-Gouy-Chapman model, a charged interface is composed of two sub-layers, as depicted in figure 20(a): an atomically thin Stern layer, in which the orientation and arrangement of molecules are directly influenced by bonding to the charged plane, and a diffuse layer away from the charged plane, in which molecules can be reoriented by the dc electric field set up by the surface charge and screening ions. It is the Stern layer that plays the most important role in controlling interfacial properties and functionality. Despite extensive studies in the past, little is known about the molecular structure of the Stern layer; the difficulty lies in the fact that the Stern-layer spectrum is overwhelmed by the large background from the diffuse layer. In 2016, we devised a viable experimental scheme that can separately obtain the vibrational spectra of the two sublayers using phase-sensitive SFS [127]. The method exploits the fact that the two sublayers respond with different sensitivity to variations in surface charge, ion concentration, and other factors. With the help of the Stern-Gouy-Chapman model (or a modified Gouy-Chapman model taking the ion size effect into account) [128], the vibrational spectra of the two sublayers can be separately deduced for a charged interface of known surface charge density, and accordingly the microscopic structure of the Stern layer can be deduced (figure 20(b)). Alternatively, given the SF susceptibility of molecules in the diffuse layer (polarized by the dc field within it), one can quantitatively obtain the surface charge density of specific ions emerging at the interface, as shown in figure 20(c).
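The thickness of the diffuse layer responsible for this background is set by Debye screening. As a short, standard Gouy-Chapman sketch (textbook constants only, no parameters specific to the experiments cited), the Debye length for a 1:1 aqueous electrolyte at room temperature is:

```python
# Debye screening length of the diffuse (Gouy-Chapman) layer for a 1:1
# electrolyte in water at 298 K:
#   kappa^-1 = sqrt(eps_w * eps0 * kB * T / (2 * NA * e^2 * I)),  I in mol/m^3
import math

eps0, eps_w = 8.854e-12, 78.5      # vacuum permittivity, water at 298 K
kB, T = 1.381e-23, 298.0
e, NA = 1.602e-19, 6.022e23

def debye_length_nm(conc_mol_per_L):
    I = conc_mol_per_L * 1e3       # convert M to mol/m^3
    lam_D = math.sqrt(eps_w * eps0 * kB * T / (2 * NA * e**2 * I))
    return lam_D * 1e9

for c in (1e-3, 1e-2, 1e-1):
    print(f"{c:.0e} M -> {debye_length_nm(c):.1f} nm")
# ~9.6 nm at 1 mM, ~3.0 nm at 10 mM, ~1.0 nm at 100 mM
```

The comparison makes the separation problem concrete: at millimolar ionic strength the dc field polarizes roughly 10 nm of liquid, dwarfing the sub-nanometer Stern layer that carries the structural information of interest.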
The technical progress of SFS sheds more light on chemical reactions at various interfaces relevant to many environmental and energy issues. As mentioned above, it is highly desirable to learn the microscopic reaction pathways and the intermediate products at an interface. However, probing reaction radicals is much more difficult than detecting the average interfacial physical structure, because only a very small portion of the surface moieties is chemically active. This demands an SFS sensitivity better than 1%-0.1% of a monolayer, while the current sensitivity reaches around 3% of a monolayer. A revolutionary improvement of the SFS detection sensitivity is therefore called for in order to unravel microscopic chemical processes at interfaces such as those of photocatalytic water splitting in fuel cells.
Another challenge is the extension of the spectral range of SFS. Limited by the serious infrared absorption of most nonlinear crystals used for frequency conversion, the generation of an intense infrared beam with a wavelength longer than 16 µm is difficult. Without intense long-wavelength pulses, the applications of SFS have mainly been limited to systems composed of light elements, for instance H, C, N, O, and Si, whose vibrational resonances lie in the MIR. Many fundamental excitations at the interfaces of complex oxides, which play a pivotal role in catalytic reactions and emergent physics, have hardly been touched thus far. There is therefore an urgent need for a surface-specific NLO spectroscopic technique operating in the wavelength range beyond 16 µm. Such a technique is expected to benefit studies of the chiral vibrations of biomolecules and the collective motions of hydrogen-bonding networks at interfaces as well.
Advances in science and technology to meet challenges
The detection sensitivity of SFS is mainly limited by the damage-threshold intensity for the pump laser pulses. The pump beams derived from a high-power picosecond or femtosecond laser usually carry plenty of pulse energy, but their intensities must be lowered to values at which the properties of the target sample are not affected. Considering that the threshold intensity becomes higher for shorter laser pulses, using an ultrashort pulse as the pump source, e.g. a single- or few-cycle pulse, would allow a stronger intensity to shine on the sample. On the other hand, the optical field in the interfacial region can be further enhanced through proper design of meta-structures that boost the local field.
The family of second-order optical crystals used in DFG to produce infrared light is not large. The obvious advantage of the second-order process is its high conversion efficiency, but the phonon resonances of these crystals often appear in the long-wavelength infrared range, causing severe absorption of the infrared. The third-order optical process is generally considered inefficient in comparison to DFG. However, given femtosecond pump pulses of strong intensity, third-order frequency mixing can also be appreciable, especially when a Raman resonance is involved in the process [129]. Moreover, the choice of third-order optical materials is essentially unlimited; depending on the desired infrared wavelength, one can readily find a transparent material without infrared attenuation. For example, liquid nitrogen could be a good candidate for the generation of mid- and far-infrared beams.
Concluding remarks
We have outlined here some recent progress in and challenges of SFS. Owing to the limited length of this letter, some other advances are not covered, such as sum-frequency scattering spectroscopy [130, 131], SHG/SFG microscopy [132], and ultrafast dynamics at interfaces [133]. Recent state-of-the-art laser technology and novel spectroscopic techniques can be harnessed to improve the capability of NLO spectroscopy and to broaden the scope of SFS applications. With continuous effort, the field of surface-specific NLO spectroscopy may enter a new era, in which increasingly detailed information about the physical structure of, and the chemical reactions at, an interface can be revealed in the spatial, spectral, and time domains.
Controlling the angular momentum of harmonic waves with metasurfaces
Zixian Hu and Guixin Li * Department of Materials Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, People's Republic of China * Email: ligx@sustech.edu.cn
Status
The angular momentum is an important intrinsic property of light, comprising two degrees of freedom: the spin and orbital components (SAM and OAM). SAM is associated with the circular polarization of light and is represented by σ, where σ = ±1 corresponds to left- and right-circular polarization. In comparison, OAM is manifested in the helical wavefront of light: the electric field of an OAM beam carries a factor exp(ilφ), in which l is the topological charge and φ is the azimuthal angle. Light with OAM is a special kind of structured light, the generation, manipulation, and detection of which have been attracting growing interest. In the regime of linear optics, metasurfaces have been utilized to control the angular momentum of light through several phase manipulation mechanisms, such as the propagation phase, resonant phase, and geometric phase. Recently, many metasurface-based devices for practical applications have been reported; they realize various optical functions while manipulating the OAM of light, such as fork gratings [134], Cassegrain systems [135], and holography [136]. Owing to its ability to localize the incident light, the metasurface has proven to be a promising platform for nonlinear optics. Hence, extending into the nonlinear regime, metasurfaces have also been used to control the angular momentum of the harmonic waves generated in NLO processes.
In 2015, plasmonic metasurfaces were used for the generation of second harmonic waves (SHWs), during which a nonlinear geometric phase related to the SAM of light is introduced [137]. Based on the mechanism of the nonlinear geometric phase, the OAM of the harmonic waves can be controlled. As shown in figure 21(a), by judiciously arranging meta-atoms with three-fold (C3) rotational symmetry, nonlinear phase singularities with predesigned topological charges were introduced [138]. Under pumping by circularly polarized fundamental waves (FWs), SHWs with controllable OAM values were generated as a result of spin-orbit interaction in the NLO process. Moreover, if the circularly polarized FWs also carry OAM, the analysis of the angular momentum of the harmonic waves can be extended to a generalized situation [139]. The angular momentum state of light is expressed as (σ, l)ω, where σ = ±1 and l represent the SAM and OAM states and the subscript ω is the angular frequency of the FW or SHW. For a plasmonic metasurface designed as a q-plate, which obeys the OAM conservation law during the NLO process, the angular momentum states of the FW and SHW are (σ, l)ω and (−σ, 2l + 3σq)2ω, respectively, where q is the topological charge of the metasurface (figure 21(b)). Apart from plasmonic metasurfaces, whose meta-atoms are made of noble metals, dielectric metasurfaces have also been applied to control the angular momentum of harmonic waves. By decorating traditional nonlinear crystals with dielectric metasurfaces, on-chip wavefront engineering of the generated harmonic waves can be realized [140]. As shown in figure 22, a metasurface-decorated quartz crystal is first pumped by circularly polarized FWs, and SHWs are generated in the nonlinear process within the crystal. The generated SHWs are then manipulated by the metasurface fabricated on one side of the crystal, which is a linear optical process. Such a hybrid system represents an alternative, ultra-compact way to control the angular momentum of harmonic waves.
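The q-plate selection rule quoted above is easy to encode directly; the following snippet simply evaluates (σ, l)ω → (−σ, 2l + 3σq)2ω for the C3 metasurface described in [139]:

```python
# Angular momentum state of the SHW from a C3 q-plate plasmonic metasurface,
# following the rule in the text: (sigma, l)_w -> (-sigma, 2l + 3*sigma*q)_2w
def shw_state(sigma, l, q):
    assert sigma in (+1, -1), "sigma is the SAM of the fundamental wave"
    return (-sigma, 2 * l + 3 * sigma * q)

# Example: LCP fundamental (sigma=+1) with no OAM (l=0) on a q=1 metasurface
print(shw_state(+1, 0, 1))   # (-1, 3): RCP SHW carrying OAM l = 3
print(shw_state(-1, 2, 1))   # (+1, 1)
```

The rule makes the spin-orbit interaction explicit: flipping the pump's SAM changes both the handedness and the topological charge of the generated SHW, so the output OAM can be switched purely by the input polarization.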
Current and future challenges
Owing to their various phase manipulation mechanisms and their ability to localize light, metasurfaces represent promising platforms for generating and controlling the angular momentum of harmonic waves. For plasmonic metasurfaces, however, the ultrathin-layer character limits the length of the light-matter interaction, resulting in relatively low nonlinear conversion efficiencies compared with those of nonlinear crystals. The low damage thresholds of plasmonic meta-atoms also restrict the potential to improve the nonlinear conversion efficiency by increasing the pumping power.
Advances in science and technology to meet challenges
There are several solutions to the aforementioned challenges. The first is to increase the area of the plasmonic metasurfaces: at the same pumping density, limited by the damage threshold, metasurfaces with a larger area can endure a higher total pumping power, so the nonlinear conversion efficiency can be improved. The second is to replace the metals with other materials that have higher NLO susceptibilities or damage thresholds, such as semiconductors or dielectrics. The last is to employ the crystal-metasurface hybrid platform: the harmonic waves are generated in the nonlinear crystal rather than in the metasurface layer, providing a longer light-matter interaction length, and the higher damage thresholds of the crystals and dielectric metasurfaces ensure that a higher nonlinear conversion efficiency can be obtained.
Figure 21. Controlling the angular momentum of the generated harmonic waves using plasmonic metasurfaces composed of gold meta-atoms with three-fold rotational symmetry. (a) Schematic of the spin-controlled generation of OAM beams during the second harmonic generation (SHG) process; by encoding phase singularities into the metasurfaces, the generated SHWs carry OAM with designed topological charges [138]. (b) A generalized principle to predict the angular momentum state of the SHWs generated on q-plate plasmonic metasurfaces [139].
Figure 22. Schematic of the crystal-metasurface hybrid platform. The wavefront of the harmonic wave generated in the nonlinear crystal can be artificially engineered by the dielectric metasurface [140].
Concluding remarks
By using the concept of the geometric phase in the NLO regime, the phase, amplitude, and polarization of the harmonic waves from plasmonic metasurfaces can be controlled locally. Geometric-phase-controlled plasmonic metasurfaces are therefore versatile platforms for controlling the angular momentum of the generated harmonic waves. Recently, by combining conventional optical crystals with linear optical metasurfaces, the angular momentum of harmonic waves can be generated and manipulated with higher efficiency and versatility.
Yi Hu and Jingjun Xu *
The Key Laboratory of Weak-Light Nonlinear Photonics, Ministry of Education, School of Physics and TEDA Applied Physics Institute, Nankai University, Tianjin 300071, People's Republic of China * Email: jjxu@nankai.edu.cn
Status
The self-accelerating effect of wave packets was initially proposed in quantum mechanics [141]: a quantum particle can self-accelerate without any external field. The effect was later introduced into optics, resulting in the pioneering self-accelerating optical beams, namely Airy beams [142], which brought about many intriguing applications [143]. Their acceleration is sustained by transferring energy from one part of the beam to the part that carries the peak intensity; indeed, the center of mass of the beam still evolves along a straight line. Attempts were made to generate beams with true self-acceleration by means of nonlinearity [144], yet these were limited to non-conservative nonlinearities [145]. During the rapid development of Airy-like self-accelerating beams, a nonlinear effect termed diametric drive self-acceleration was proposed [146, 147]: two optical pulses were designed to undergo an acceleration purely by means of their interaction in optical fibers, and the center of mass of the paired pulses followed a parabolic path in space-time. This is attributed to the counterintuitive dynamics of effective negative mass rather than to a non-conservative nonlinearity: the interaction of the two pulses, subject to inverted dispersion, is analogous to the interplay between two objects with opposite mass signs (figure 23(a)). Such diametric drive self-acceleration was also observed in the spatial domain [148], offering new ways for beam steering.
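The opposite-mass analogy can be made concrete with a toy mechanical model: two particles coupled by a mutual force that respects Newton's third law, but with effective masses of opposite sign, accelerate together at fixed separation. The sketch below is only an analogy to the pulse dynamics, with arbitrary units throughout:

```python
# Toy mechanical analogue of diametric drive: two particles coupled by a
# mutual force obeying Newton's third law, but with opposite-sign masses.
# The pair accelerates together at fixed separation -- the "runaway"
# dynamics underlying diametric drive self-acceleration.
m1, m2 = 1.0, -1.0           # opposite effective masses
k = 1.0                      # linear attraction strength (illustrative)
x1, x2, v1, v2 = 0.0, 1.0, 0.0, 0.0
dt, steps = 1e-3, 5000

for _ in range(steps):
    F = k * (x2 - x1)        # force on particle 1; particle 2 feels -F
    a1, a2 = F / m1, -F / m2 # negative mass accelerates *against* its force
    v1 += a1 * dt; v2 += a2 * dt
    x1 += v1 * dt; x2 += v2 * dt

print(f"separation stays {x2 - x1:.3f}; common velocity {v1:.2f} ~ {v2:.2f}")
# Both particles end up moving with the same, steadily growing velocity.
```

The positive-mass particle chases the negative-mass one, which in turn runs away from the restoring force, so the pair as a whole accelerates without any external drive, just as the paired pulses with inverted dispersion do.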
The Airy-like self-acceleration and the diametric drive self-acceleration have received much attention, yet they were long investigated independently. Quite recently, the two mechanisms were linked together [149]. Using this linkage, self-accelerating optical pulses with tunable numbers of peaks were demonstrated, featuring more precise control of the acceleration. Such structured pulse complexes may be used to encode information beyond the binary coding limit.
Another scheme using the nonlinear interaction of optical fields was proposed to induce the self-accelerating effect. The basic idea is to break Newton's third law in the beam interaction (figure 23(b)), allowing the two optical fields involved to experience forces in the same direction [150]. Theoretically, this nonlinear effect was demonstrated in a liquid crystal (LC).
The self-accelerating effect stemming from light interactions is analogous to the self-propulsion effect, or non-reciprocal interaction, in active or living matter. Given that self-propelled motion is the key to various intriguing collective dynamics in active systems, the nonlinear self-accelerating effect of light is foreseen to bring novel phenomena to many-body physics in optics.
Current and future challenges
In most studies, the two optical fields forming a diametric-drive pair are mutually incoherent, and they can be designed well using the developed theoretical tools. Once their mutual coherence is considered, these theories become invalid for precise design: as a result of the beating between the two beams, it is difficult to transform the associated dynamics into a static problem as is done in the incoherent case. The brute-force approach is to scan the beam parameters in simulations, which is time-consuming and even unachievable for solutions with unknown shapes.
The concept of negative mass brings new ways to realize localized NLO structures. This nonlinear binding effect is reminiscent of the soliton molecules that commonly appear in optical cavities or resonators. It is difficult to realize such soliton molecules in transmission fibers because the soliton interactions are not well balanced; using the negative effective mass of optical pulses may offer opportunities to tackle this problem. Future work will focus on designing nonlinear localized states beyond the two-body model describing diametric drive self-acceleration.
Diametric drive self-acceleration has mainly been discussed in two dimensions, one of which is the light evolution direction. Extending it into three dimensions (for instance, two transverse plus one longitudinal dimension) may bring about novel phenomena that have no counterparts in lower dimensions. In this configuration, however, it is challenging to keep the beam localized in both transverse directions: while light spreading is suppressed by the diametric drive mechanism in one dimension, the beam tends to broaden in the orthogonal direction as a result of a self-defocusing nonlinearity.
By combining the methods for generating Airy-like self-acceleration and diametric drive self-acceleration, localized structured pulses with tunable numbers of peaks can be designed. As the number of peaks increases, the difficulty of their experimental realization grows. Thus far, the common method to produce ultra-short shaped pulses is to encode the spectral information of a target pulse onto a spatial light modulator (SLM). The modulator, generally made of LC, faces a resolution limit when generating pulses with more complex shapes; a device featuring higher-resolution modulation is therefore in demand.
The proposed breaking of Newton's third law during light interactions relies on special nonlinearities in LCs. It would be meaningful to put forward a general scheme for non-reciprocal interaction that is applicable to different nonlinear materials.
Advances in science and technology to meet challenges
In nonlinear light propagation there exist many solutions that change their shapes during evolution, such as rogue waves in fibers or Floquet solitons in periodically driven photonic structures. The theories developed in these areas may be consulted for new tools to design coherent diametric drive self-acceleration.
It is possible to construct nonlinear localized optical fields by involving more waves having both positive and negative effective masses. For instance, in optical fibers, three waves may bind in the following picture: the repulsion of two out-of-phase solitons is balanced by the attraction of a dispersive pulse experiencing normal dispersion, while the paired solitons offer a potential well that confines the dispersive pulse. The method of designing diametric drive self-acceleration and the approach of solving for soliton solutions may be combined to realize nonlinear binding of multiple waves with signed effective masses. To overcome the resolution limit of the pulse shaper in generating such complex pulse structures, the LC may be replaced by a metasurface.
In the design of three-dimensional diametric drive self-acceleration, light localization in both transverse dimensions may be achieved by using special diffraction regions in optical structures. For instance, relying on the co-existence of normal and anomalous diffraction, an optical field can experience self-focusing and self-defocusing nonlinearities in two orthogonal directions: while its self-defocusing spreading is suppressed by the diametric drive mechanism, it can be localized in the orthogonal dimension via the self-focusing effect.
To break Newton's third law in beam interactions, one may simply let the refractive index potentials induced by the two beams be inverted. For this purpose, self-focusing and self-defocusing effects are imparted to the respective beams; in this framework, the forces experienced by the two beams point in the same direction during their interaction. There are various materials in which the two nonlinear effects co-exist. For instance, some photorefractive crystals exhibit a self-focusing/self-defocusing nonlinearity for a positive/negative bias electric field, respectively: applying an alternating field, beams injected into the crystal during different half-periods experience inverted nonlinearities, and they can interact by virtue of the storage property of the photorefractive effect. Atomic vapor is another candidate, in which two optical beams with positive/negative detuning from the resonance line exhibit self-focusing/self-defocusing nonlinearity, respectively. Both materials offer a considerable nonlinear effect at low power levels, which is beneficial for studying optical many-body physics mediated by the self-accelerating effect.
Concluding remarks
The nonlinear self-accelerating effect of light discussed here is the one that truly changes the center of mass of optical beams. It stems from light interactions in two unconventional configurations: one based on the counterintuitive dynamics of negative mass, and the other in the framework of violating Newton's third law. In both cases, two optical waves show a synchronized acceleration purely via their nonlinear interplay. More unexpected phenomena are likely to appear when this kind of interaction is extended to higher dimensions and many-body settings. In particular, the self-accelerating effect can intrinsically alter the interaction scenarios of a multi-wave system by making the photons 'active', and may bring about novel collective dynamics in nonlinear optics, inspired by the crucial role played by self-propelled motion in active matter.
Status
The MIR spectral region possesses favorable features: it supports molecular fingerprint analysis and covers the Earth's atmospheric transparency window, attracting intensive attention in various fundamental and applied fields. Nowadays, MIR imagers based on narrow-bandgap semiconductors such as indium antimonide (InSb) and mercury cadmium telluride usually require a high-cost fabrication process and stringent cryogenic operation, severely limiting their wider application. Recently, breakthroughs in using emerging low-dimensional materials have produced sensitive MIR detectors operating at room temperature, albeit with a noise equivalent power still many orders of magnitude away from the single-photon level. Besides sensitivity enhancement at elevated operating temperatures, parallel challenges for MIR focal plane arrays lie in further increasing the pixel number and shortening the response time to realize high-resolution imaging at high frame rates.
In this context, frequency upconversion imaging has been recognized as a promising alternative for MIR imaging, in which the infrared light is nonlinearly converted to the visible regime while maintaining its coherent properties, including the spatial modes and photon statistics. The resulting upconverted photons can be registered by low-cost, high-performance silicon-based cameras with the desirable features of high quantum efficiency, low dark noise, megapixel sensor matrices, and high readout rates [151]. Consequently, the performance of this indirect infrared imaging is essentially determined by the conversion unit and the silicon imager. Although the seminal implementations of upconversion imaging can be traced back to the 1960s, rapid progress has only been witnessed in the last decade, thanks to technological advances and fabrication maturity for high-efficiency nonlinear crystals, high-power pump lasers, and high-sensitivity silicon cameras. Specifically, ingenious designs of the upconversion system have enabled demonstrations of passive and active MIR single-photon imaging based on cavity-enhanced continuous-wave pumping and mode-locked ultrafast-pulse pumping, respectively [152, 153]. The unprecedented imaging sensitivity opens new possibilities for low-light-level applications. Moreover, the intrinsic phase-matching (PM) requirement makes it possible to engineer the nonlinear conversion process to access unique imaging capabilities, resolving information encoded in the wavelength, phase, polarization, and time domains. The superior imaging performance and novel imaging modalities therefore render the upconversion imager a powerful tool across interdisciplinary fields.
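The wavelength bookkeeping of upconversion follows from energy conservation in sum-frequency generation, 1/λ_up = 1/λ_IR + 1/λ_pump. A quick sketch for a 1064 nm pump (a common choice; the exact pump wavelength depends on the system):

```python
# SFG energy conservation for upconversion imaging:
#   1/lam_up = 1/lam_IR + 1/lam_pump
def upconverted_nm(lam_ir_nm, lam_pump_nm):
    return 1.0 / (1.0 / lam_ir_nm + 1.0 / lam_pump_nm)

for lam_ir in (3000, 4000, 5000):  # MIR signal wavelengths in nm
    print(f"{lam_ir} nm MIR -> {upconverted_nm(lam_ir, 1064):.0f} nm")
# ~785, 840, 877 nm: all within the high-efficiency range of silicon sensors
```

The arithmetic shows why the scheme is attractive: the whole 3-5 µm band folds into a narrow near-infrared window where silicon cameras offer their best quantum efficiency and lowest noise.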
Current and future challenges
The core of the upconversion imaging system is the conversion unit, which typically involves a second-order nonlinear crystal and a high-intensity pump source, as depicted in figure 24(a). Ideally, the conversion process would have unity conversion efficiency for all spectral components, add no noise photons under intense pumping, and preserve imaging information at all spatial frequencies. In practice, these parameters are never perfect, which limits the achievable imaging performance. First, efficient nonlinear conversion usually requires at least 100 W of pump power even with efficient QPM crystals [152]. The required pump power is much higher for broadband operation with a shorter crystal, which poses a great challenge for preparing such a high-power continuous-wave light source; the situation is more severe for the upconversion of longer MIR wavelengths, owing to the limited options for suitable nonlinear materials. Second, the intense pump field inevitably induces severe background noise from parametric fluorescence and Raman scattering. To identify the single-photon-level signal against this large noise, it is imperative to develop a high-efficiency filtering system with a high rejection ratio; moreover, some noise may fall within the spectral band of the upconverted signal, which merits the development of novel filtering techniques beyond the typical spectral domain. Third, the conversion process acts as a soft aperture for the upconversion imaging, resulting in the loss of high-frequency spatial components. Additionally, the stringent PM requirement constrains the monochromatic field of view (FOV) of the imaging system, so it remains a major challenge to simultaneously enlarge the FOV and enhance the spatial resolution. In general, a global optimization over the available parameters is needed to maximize the imaging performance for a specific application. For instance, a larger pump beam within the crystal favors a higher resolution, but the reduced pump intensity leads to a lower conversion efficiency; a longer crystal helps to increase the efficiency, yet at the expense of a reduced operation bandwidth and a smaller acceptance angle.
Figure 24 (partial caption recovered). Reproduced from [156], CC BY 4.0; (d) volume visualizations of ceramic stacks, reproduced from [160], CC BY 4.0; (e) histopathological inspection of biomedical samples, reproduced from [159], CC BY 4.0.
Advances in science and technology to meet challenges
A cavity-enhancement configuration is introduced to provide sufficient pump power in passive MIR upconversion imaging: the optical cavity confines an enhanced field within a resonant Gaussian spatial mode, leading to a substantial improvement in conversion efficiency [152]. Alternatively, the coincidence-pumping scheme is suitable for active MIR imaging, where the pulsed excitation provides a high peak power and a short temporal duration, favoring increased conversion efficiency and reduced background noise [154]. In combination with a chirped-poling nonlinear crystal, broadband MIR imaging is feasible based on efficient adiabatic conversion [155]; the resulting large PM bandwidth also enables access to a wide FOV [156]. Moreover, optical pump gating can be used to implement high-resolution depth imaging by temporally selecting the reflected photons [156]. To reach working wavelengths beyond 5 µm, candidate nonlinear crystals include BaGa₄Se₇, AgGaS₂, GaAs, and GaP [157]; the last two can be fabricated in orientation-patterned configurations for QPM and hence hold great potential for efficient conversion up to 12 µm. As an interesting trend in upconversion imaging, the nonlinear conversion process allows the incident fields to be manipulated by engineering the pump properties in the spectral, temporal, and spatial domains. This unique feature has recently been exploited to implement MIR edge-enhanced imaging [158]. Notably, as exemplified in figures 24(c)-(e), immediate applications are rapidly being driven by the enhanced MIR imaging performance, such as label-free histopathological diagnosis by high-speed hyperspectral imaging [159] and non-destructive defect inspection based on real-time optical coherence tomography [160].
Concluding remarks
The upconversion imager inherits the superior performance of cutting-edge silicon cameras and thus provides an effective route to MIR imaging with single-photon sensitivity, MHz-level frame rates, and a large number of resolvable elements. However, there is still much room for improvement, especially in increasing the spectral bandwidth, reaching longer wavelengths, enlarging the acceptance angle, and enhancing the spatial resolution. Furthermore, the inherent nonlinear process can be engineered to implement multi-dimensional upconversion imaging with resolving power in wavelength, phase, polarization, and time, as illustrated in figure 24(b). In the future, integration and miniaturization are also expected to yield field-deployable devices for widespread use in practical scenarios.
Status
Over the past decades, ultrafast laser manufacturing has attracted extensive attention in various fields, benefiting from its intrinsic three-dimensional (3D) processing capability and broad material applicability. The advantages of ultrafast lasers originate from the extremely nonlinear laser-matter interaction, far beyond linear optics. Owing to the strong nonlinear threshold effect of multiphoton or Zener ionization, laser energy deposition can be confined to a sub-diffraction region of the focal spot without causing any redundant damage or modification along the light propagation path before the focus. Therefore, as shown in figure 25(a), the laser-processed voxel can penetrate deep into the material and enable free-form shaping of 3D structures with a precision of hundreds of nanometers [161]. To date, based on two-photon 3D nanoprinting and femtosecond-laser refractive index modification of transparent materials, a large number of prototype functional devices have been successfully prepared, exhibiting strong application potential in fields including smart micro-robots, optical storage (figure 25(d)), fiber sensing, and integrated diffractive optical elements (figure 25(g)).
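The sub-diffraction confinement rests on the intensity scaling of multiphoton ionization: an N-photon rate scales as I^N, which narrows a Gaussian focal spot by a factor of √N even before any threshold effect is applied. A minimal sketch with an illustrative spot size:

```python
# An N-photon excitation rate ~ I^N turns a Gaussian focal spot
#   I(r) = I0 * exp(-2 r^2 / w^2)
# into an effective profile exp(-2 N r^2 / w^2), i.e. radius w / sqrt(N).
import math

def effective_radius(w_nm, n_photons):
    return w_nm / math.sqrt(n_photons)

w = 500.0  # 1/e^2 focal-spot radius (nm), illustrative for a high-NA objective
for N in (1, 2, 3):
    print(f"{N}-photon: effective radius {effective_radius(w, N):.0f} nm")
# The intensity threshold then shrinks the actually written voxel further.
```

This is why two-photon writing routinely beats the linear diffraction limit, and why the outstanding question discussed below is how far past it the approach can be pushed.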
In addition, the transient electrons excited by intense laser irradiation perturb the atomic potential energy surface [162] and inject a large amount of energy into the lattice through electron-phonon coupling, driving pronounced collective atomic displacements within tens of picoseconds (figure 25(c)). This provides a unique technique for precise phase engineering of materials at the nanoscale (e.g. ablation, evaporation, melting, crystallization, amorphization, or defect-center generation). For example, 3D NPCs [54] can be realized by femtosecond-laser selective erasure of the second-order NLO coefficients in crystals (figure 25(b)). Another interesting and important example is the NV color center in diamond: by femtosecond laser irradiation, single NV centers can be directly introduced into diamond with impressive yields and 3D positioning accuracy [163], which is promising for applications such as quantum sensing and distributed quantum computing.
Meanwhile, since the pulse duration of ultrafast lasers is much shorter than the characteristic time of energy exchange between electrons and phonons (several picoseconds), thermal damage during laser processing can be effectively suppressed. This makes the technique very suitable for modern high-accuracy industrial processing of hard or brittle materials, such as the dicing, grooving, and drilling of silica glass, sapphire, or transparent ceramics. Combined with the currently booming spatial/temporal light modulation technologies, the efficiency and accuracy of ultrafast laser processing can be further improved. It is therefore expected that a cross-scale universal processing platform can be built on ultrafast laser technology.
However, despite these achievements, there are still some tricky problems to be solved for further promoting the application of ultrafast laser processing in science and industry. This roadmap aims to identify current scientific and technical issues and the necessary methods to be developed to address these challenges.
Mechanisms of laser-matter interaction
Exploring the interaction of ultrafast lasers with matter is not only of great significance in physics but can also drive improvements in laser processing technology. Until now, however, these mechanisms have not been fully understood. The fundamental difficulty lies in the fact that ultrafast laser processing is a complex multiphysics process spanning multiple spatial and temporal scales, covering interdisciplinary knowledge from quantum theory, nonlinear optics, elastic mechanics, thermology, and hydromechanics. It is a formidable challenge, for both experimental observation and numerical simulation, to record and resolve the complex physical and chemical processes at nanoscale spatial resolution while maintaining femtosecond temporal resolution, which further hinders the establishment of a systematic theoretical model. A well-known case is the phenomenon of laser-induced periodic structures [164]. Although they were first observed decades ago and have found important applications in surface texturing (figure 25(h)) and optical storage, their formation mechanism is still debated, and this theoretical confusion leads to a lack of methodologies for high-quality structure preparation.
Figure 25 (partial caption recovered). Reproduced from [165], with permission from Springer Nature. (e) Far-field-induced near-field fabrication: the far-field beam is converted into near-field components, facilitating dynamical high-resolution direct laser writing; reproduced from [166], CC BY 4.0. (f) 3D non-Abelian photonic chip fabricated by femtosecond laser direct writing of waveguides in glass; reproduced from [167], with permission from Springer Nature. (g) Diffractive elements integrated with opto-fluidic components; reproduced from Hu et al [170], John Wiley & Sons, © 2021 Wiley-VCH GmbH. (h) Large-area, rapid surface nanotexturing by surface plasmon imprinting: the surface plasmon standing wave forms due to metallization of the fabricated material under femtosecond laser excitation; reproduced from [164], CC BY 4.0.
Fabrication resolution
Another important question concerns the ultimate fabrication resolution that can be stably achieved by laser processing. Technically, the precision of laser processing directly determines the allowable complexity of the fabricated devices. Typically, the accuracy of current ultrafast laser processing is limited to hundreds of nanometers. Most optical metasurfaces/metamaterials and high-dimensional optical storage [165] working at visible wavelengths consist of meta-atoms/bytes with feature sizes around 100 nm, which accordingly cannot be directly fabricated by standard ultrafast laser processing technologies. Various techniques such as stimulated emission depletion (STED) can significantly improve the processing accuracy of two-photon polymerization, but these methods are not applicable to dielectrics or semiconductors. For solid materials, the recently proposed O-FIB technology [166] indicated that polarization-controlled near-field optical components emerging from boundary conditions can play a key role in realizing nanoscale free-form direct laser writing (figure 25(e)). Going further, can ultrafast lasers excite specific electronic states to drive coherent phonon modes for atomic-level processing or manipulation? Chasing the ultimate precision of ultrafast laser processing thus remains a topic of great significance in both physics and technology.
Processing efficiency
Typically, ultrafast laser processing adopts a point-by-point scanning strategy, so the processing time increases steeply: linearly with the device volume and roughly with the inverse cube of the target feature size. It often takes several hours or even tens of hours to complete a single device, which hinders the practical application of ultrafast laser processing in scientific research, medicine and industrial production. Conventional approaches to overcoming this trade-off, including spatial light modulation, multi-focus parallel scanning and projection optics, are limited by the current state of the art in phase/amplitude modulation and by the basic principles of diffractive optics, sacrificing part of the processing accuracy and making it difficult to realize complex device structures.
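To make this scaling concrete, the short sketch below (our illustration, not part of the roadmap; the voxel size, exposure rate and device volume are hypothetical placeholder values) estimates the serial write time under the stated point-by-point assumption.

```python
# Back-of-envelope estimate of serial, point-by-point write time.
# Assumption: one exposure per voxel at a fixed exposure rate.

def write_time_hours(volume_mm3: float, voxel_nm: float, voxels_per_s: float) -> float:
    """Serial write time (hours) for a device of volume_mm3 built from cubic
    voxels of edge voxel_nm, exposed at voxels_per_s. All inputs are illustrative."""
    voxel_mm = voxel_nm * 1e-6             # nm -> mm
    n_voxels = volume_mm3 / voxel_mm ** 3  # time ~ volume / (feature size)^3
    return n_voxels / voxels_per_s / 3600.0

# Example: a 0.1 mm^3 structure at 200 nm voxels and a 1 MHz exposure rate
# already takes ~3.5 h; halving the voxel edge multiplies the time by 8.
print(f"{write_time_hours(0.1, 200.0, 1e6):.1f} h")
```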
Tunable/reconfigurable devices and heterogeneous integration
Three-dimensional in-volume integrated devices directly written by ultrafast lasers are protected from disturbances of the external environment and can therefore work well under high-temperature or high-humidity conditions. However, this also makes it difficult to tune or reconfigure these devices. For example, in-volume optical waveguides fabricated by femtosecond lasers [167] play a crucial role in three-dimensional integrated photonic quantum chips (figure 25(f)). However, limited by the physical properties of glass, only the thermo-optic effect can be used for phase modulation; the modulation speed is still unsatisfactory and waveguides deep inside the bulk cannot be modulated at all. Materials whose optical properties are easily modulated electrically, such as lithium niobate, are in turn difficult to machine, and their ultrafast laser-based processing still needs further improvement before high-performance devices can be realized.
Another solution that may circumvent this challenge is heterogeneous integration. Thanks to the flexibility of two-photon polymerization, structures can be fabricated on optical fibers and electronic chips for direct integration. However, for most inorganic solid materials, comparable integration solutions and methods are still lacking.
Advances in science and technology to meet challenges
To address the significant challenges in establishing a universal, multiscale, efficient fabrication platform based on ultrafast lasers, the following advances are required.
Ultrafast pump-probe techniques specific to laser processing should be further improved to reach higher spatial and temporal resolution for direct observation of the physical processes during laser fabrication. At the same time, simulation techniques spanning the atomic to the mesoscopic scale should be established to better understand the laser-matter interaction, for example the recently flourishing time-dependent density-functional theory and multiphysics simulations based on nonlinear Maxwell equations, Navier-Stokes equations and thermo-elasto-plastic equations, which together can cover the range from energy absorption at the atomic level to structural evolution at the micrometer scale.
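As a deliberately minimal illustration of the nonlinear-Maxwell ingredient mentioned above (a sketch under simplifying assumptions, not a method taken from the roadmap), the loop below advances a 1D FDTD field with an instantaneous Kerr term; the material constants, source parameters and grid are placeholders, and the domain ends simply reflect.

```python
import numpy as np

# 1D FDTD with an instantaneous Kerr nonlinearity: D = eps0*(eps_r*E + chi3*E^3).
c, eps0, mu0 = 2.998e8, 8.854e-12, 4e-7 * np.pi
nx, dx = 2000, 10e-9                        # 20 um domain, 10 nm cells
dt = 0.99 * dx / c                          # Courant-stable time step
eps_r, chi3 = 2.25, 1e-19                   # glass-like placeholder values (m^2/V^2)

Ez, Dz, Hy = np.zeros(nx), np.zeros(nx), np.zeros(nx - 1)
for n in range(4000):
    Hy += dt / (mu0 * dx) * np.diff(Ez)     # Faraday's law
    Dz[1:-1] += dt / dx * np.diff(Hy)       # Ampere's law, updating D
    t = n * dt                              # soft Gaussian source, ~800 nm carrier
    Dz[100] += eps0 * 1e9 * np.exp(-((t - 30e-15) / 10e-15) ** 2) * np.sin(2.35e15 * t)
    for _ in range(3):                      # invert the Kerr relation by fixed point
        Ez = (Dz / eps0 - chi3 * Ez ** 3) / eps_r

print("peak |E| on the grid:", np.abs(Ez).max())
```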
Accordingly, advanced processing techniques that improve both accuracy and efficiency are to be expected. On the one hand, such advances could come from a deeper understanding of the mechanisms of laser-matter interaction. They may originate from cutting-edge concepts in nanophotonics, such as the STED technique or the optical near field mentioned above, or from laser-induced multiphysics effects such as heat (typical heat accumulation or ablation-cooled material removal), liquid-, gas- or plasma-assisted laser processing techniques. For example, recent work has shown that a laser-induced micro-jet in a liquid environment can effectively improve the quality of laser processing, yielding micro-optical elements with nanoscale surface roughness on normally difficult-to-process crystalline materials such as sapphire and yttrium aluminum garnet [168]. On the other hand, technological advances in processing equipment and algorithms matter as well, for example more stable ultrafast lasers with higher repetition rates, SLMs with higher pixel densities and faster refresh rates, and light-shaping algorithms with more degrees of freedom.
Notably, the recent rise of scientific machine learning promises to play an important role in ultrafast laser processing, as it already underpins various areas of science and engineering such as image recognition, automation and structure prediction. Based on artificial intelligence, automatic processing and in-situ inspection are expected to be realized, which could greatly improve the efficiency and yield of processing. At the same time, machine learning has recently been applied to optical field modulation, inverse device design and even physical-field reconstruction [169]; research on ultrafast laser processing mechanisms may therefore also benefit from advances in artificial intelligence.
Concluding remarks
We expect that a universal processing platform based on ultrafast lasers will be established once these challenges are resolved, contributing to the future 3D integration of next-generation functional devices. Correspondingly, the industrialization of ultrafast lasers will be further boosted in the next decade. However, reaching this goal requires efforts across the photonics community, including lasers, nonlinear optics and the associated precision-instrument technologies.
Status
The photoacoustic (PA) effect was discovered by Bell in 1881, but the phenomenon was almost completely forgotten for over 50 years. Its revival started soon after the advent of laser light sources. In the 1970s and 1980s, photoacoustic spectroscopy (PAS), which is based on the PA effect, experienced a significant surge in popularity, primarily through its combination with gas lasers. PAS offers two significant advantages. First, it is excitation-wavelength independent, allowing the same PAS-based detector to be used with any type of laser and at any wavelength, from the ultraviolet to the terahertz range, with identical performance; as a result, a single instrument can realize a multi-gas PA sensor simply by operating at different wavelengths. Second, the sensitivity of PAS is proportional to the incident laser power. This feature enables PAS-based detectors to benefit from the development of high-power semiconductor lasers with high wall-plug efficiency or from enhanced excitation schemes.
Quartz-enhanced photoacoustic spectroscopy (QEPAS) is a variant of PAS that has been shown to be a sensitive spectroscopic technique for the chemical analysis of gases since its initial demonstration in 2002 [171]. In QEPAS, a quartz tuning fork (QTF), as depicted in figure 26(a), is employed as a resonant acoustic transducer, replacing the conventional broadband microphone. Consequently, QEPAS unites the main characteristics of PAS with the advantages of using a QTF, providing an ultra-compact, cost-effective, robust acoustic detection module [172].
The fundamental principle behind QEPAS is quite straightforward. When optical radiation interacts with a trace gas, it generates a weak acoustic pressure wave that excites a resonant vibration of a QTF. The vibration is subsequently converted into an electric signal by the piezoelectric effect. The electric signal, which is proportional to the gas concentration, is then measured by a transimpedance amplifier. Compared to conventional resonant PA spectroscopy, QEPAS has several advantages, such as the sensor's immunity to environmental acoustic noise, a straightforward design for the absorption detection module, and its ability to analyze trace gas samples of ∼1 mm³ in volume.
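To make this proportionality explicit, the first-order scaling of the peak QEPAS signal commonly quoted in the QEPAS literature (our addition; the text above does not spell out the formula) can be written as:

```latex
% k     : instrument constant (transducer geometry, microresonator gain, ...)
% Q     : quality factor of the QTF resonance
% P     : optical excitation power
% \alpha: absorption coefficient, proportional to the trace-gas concentration
% f_0   : QTF resonance frequency
S \;=\; k\,\frac{Q\,P\,\alpha}{f_0}
```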
To date, the QEPAS technique has been employed to measure numerous molecules with well-resolved rotational-vibrational lines in the near-infrared spectral range (e.g. NH3, CO2, CO, HCN, HCl, H2O, H2S, CH4, C2H2, C2H4) as well as in the mid-infrared (MIR) region (e.g. NO, N2O, CO, NH3, C2H6 and CH2O). QEPAS has also been demonstrated for larger molecules with broad, unresolved absorption spectra, such as ethanol, acetone and Freon. Representative results are shown in figure 26(b) [173].
Current and future challenges
Conventionally, the QEPAS technique assumes a linear relationship between the amplitude of the PA signal and the excitation power, as well as between the amplitude of the PA signal and the absorption coefficient. However, it has been found that these linear dependences are not universally applicable: under certain conditions the linear correlation no longer holds and a nonlinear dependence appears. So far, several types of nonlinear mechanisms have been investigated, including absorption-saturation-based nonlinearity [174] and thermally based nonlinearity [175].
As mentioned earlier, one of the advantages of PAS is the proportionality of its sensitivity to the incident optical intensity. However, this relationship only holds when the optical intensity is much lower than the saturation intensity. The absorption coefficient falls off as a reciprocal function of the incident intensity, dropping to half of its low-intensity value when the intensity equals the saturation intensity. With the continual development of the laser industry, the output power of commercial lasers keeps increasing, and fully exploiting these high-power lasers to achieve ultra-high detection sensitivity is now a significant challenge for the QEPAS technique.
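The behavior described above corresponds to the textbook form of homogeneous absorption saturation, added here for clarity:

```latex
% \alpha_0: unsaturated (low-intensity) absorption coefficient
% I       : incident optical intensity;  I_sat: saturation intensity
\alpha(I) \;=\; \frac{\alpha_0}{1 + I/I_{\mathrm{sat}}}
% At I = I_sat the coefficient drops to \alpha_0/2, as stated above, and the
% PA signal, proportional to \alpha(I)\,I, grows only sublinearly at high power.
```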
In PAS, the temperature rise induced by the absorption of light generates thermo-elastic expansion. This expansion causes a pressure change and results in acoustic emission, which can be detected with a transducer. Among the thermodynamic parameters, the thermal expansion coefficient exhibits the most notable temperature dependence. In most cases the local temperature rise induced by the excitation beam is ignored and the thermodynamic parameters are assumed constant; this assumption is no longer valid when the temperature increase is significant, and the temperature dependence must then be taken into account to ensure accurate measurements.
Advances in science and technology to meet challenges
To meet these challenges, it is necessary on the one hand to provide new technological solutions and on the other to improve QTF performance to meet the requirements of nonlinear measurements. In 2015, a power-boosted QEPAS sensor was developed for sub-ppm H2S trace-gas detection in the NIR region, employing a 1.4 W distributed feedback laser as its optical source [176]. The linearity of the sensor response to the laser power and H2S concentration confirmed that no nonlinear effect occurred, since the 0.3 mm gap between the prongs of the commercially available QTF produced a large background noise that limited the usable optical power. Subsequently, a custom QTF with its two prongs spaced ∼0.8 mm apart was used to replace the commercial QTF [177]; although the background noise was completely eliminated, the sensor system still operated in the linear region. In 2019, a PA sensor system for detecting ppb levels of CO from sulphur hexafluoride decomposition was reported using a 10 W fiber-amplified NIR diode laser [178]. A saturated behavior of the PA signal was observed in this high-power system, indicating that further increases in optical excitation power would not benefit the PA signal. To increase the PA signal amplitude under high optical excitation power, additional ground-state molecules were supplied to the PA cell, so that a portion of 'fresh' molecules was constantly excited by the strong optical field; the PA amplitude was improved ∼3 times with the assistance of this gas flow. The observation of nonlinear effects in PAS therefore requires an excitation optical power of the order of 10 W.
A QTF is easily damaged at such high optical powers, especially when the prong gap is too narrow to accommodate the laser beam. These considerations set the directions to be followed for realizing QTFs optimized for nonlinear PAS: the prong spacing should be increased to minimize the noise caused by the fraction of the optical power that hits the internal surfaces of the QTF; the quality factor must be kept as high as possible; and the QTF fundamental frequency should be reduced to a few kHz to increase the QEPAS response for slow-relaxing gases without adding any relaxation promoter. Some new QTF designs along these lines have been reported, as shown in figure 27 [179].
Concluding remarks
PAS can be considered a linear spectroscopy technique at low excitation optical power, but nonlinear effects appear as the excitation power increases. To date, the output power of a single laser diode can reach the watt level; to make full use of such high powers, the nonlinear effects cannot be ignored in the QEPAS technique. Further important breakthroughs in QEPAS can be expected in the near future [180,181].
Runfeng Li, Wenkai Yang and Kebin Shi
State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for Nano-Optoelectronics, School of Physics, Peking University, Beijing 100871, People's Republic of China
Status
Nonlinearly generated optical signals have been utilized as one of the most important contrast mechanisms for bio-photonic imaging. There are increasing demands for higher spatiotemporal resolution, larger FOV, deeper imaging depth and better biochemical specificity in the bio-imaging research field, where NLO microscopy has shown several advantages. One of its most attractive features is that the strong laser intensity required for nonlinear signal generation produces a highly localized excitation volume, leading to intrinsic optical sectioning capability. Nonlinear multiphoton excitation further allows the use of infrared laser sources and consequently better three-dimensional imaging of thick bio-samples. Moreover, the NLO response of bio-samples can intrinsically probe diverse signal characteristics and plays a critical role in label-free imaging modalities, as exemplified by optical harmonic microscopy for probing structural information and nonlinear wave-mixing microscopy with electronic/vibrational resonance for biochemical imaging.
Currently, multiphoton excited fluorescence microscopy is widely employed in biomedical research. Large penetration depth and intrinsic three-dimensional optical sectioning make multiphoton fluorescence microscopy a unique imaging modality for neuroscience as well as for studies of thick fluorescent samples. In contrast to fluorescent emission, signals originating from second- or third-order nonlinear polarization in light-matter interaction have been demonstrated to support various label-free microscopy modalities without fluorescent markers. Second- or third-order nonlinear parametric processes such as SHG, SFG, THG and FWM have been reported for bio-photonic imaging. Yet the imaging modalities based on nonlinear parametric processes often provide only morphological/structural information. Non-parametric nonlinear processes with vibrational or electronic resonance excitation further enable chemically selective imaging modalities such as coherent Raman microscopy and resonance-enhanced SFG microscopy.
Driven by emerging challenges such as in-vivo biomedical research, brain science and tissue-level studies, optical imaging systems are expected to continuously push toward higher spatiotemporal resolution, better signal-to-noise ratio, deeper imaging depth and lower phototoxicity/photo-damage. To this end, on-going efforts in the NLO imaging community can be foreseen, enabled by new NLO physical mechanisms, novel optical technologies and interdisciplinary inspiration from other research fields.
Current and future challenges
NLO signal generation in bio-samples often requires high intensity thresholds, produced by tightly focused beams of ultrafast pulses. Such excitation thresholds result in good rejection of spatial background noise and consequently better optical sectioning capability. Yet intense laser fields lead to photo-damage, phototoxicity and physiological perturbations, which are particularly unwelcome in live bio-sample studies. Label-free NLO imaging avoids fluorophore phototoxicity by using intrinsic parametric or non-parametric NLO signals, namely optical harmonics and nonlinear wave mixing without or with resonant excitation, respectively. Despite the efforts invested in Raman- and electronically enhanced nonlinear wave-mixing microscopes, NLO signals are still limited in mapping diverse biochemical specificities in comparison to fluorescence imaging.
In short, the most challenging tasks in nonlinear bio-photonic imaging center on systematically enhancing photon efficiency and spatiotemporal resolution while achieving less photo-damage and better biochemical specificity (as shown in figure 28), by developing novel excitation/illumination, nonlinear photon conversion, detection and data-analysis methodologies.
Advances in science and technology to meet challenges
The NLO imaging community has made substantial progress in tackling these challenges through different technical approaches, with notable efforts reported by Chinese scientists recently. For example, to optimize the nonlinear illumination efficiency, an adaptive excitation source (AES) was proposed for high-speed multiphoton microscopy in 2018. In this method, only the informative parts of the sample are illuminated, by emitting pulses only within a specified region of interest; this significantly reduces thermal damage to brain tissue, resulting in a 30-fold reduction in the power required for two- or three-photon imaging of brain activity, with an imaging depth of 680 µm and a field of view of 700 µm × 700 µm at 30 frames per second. Imaging of the awake mouse brain at video rate was demonstrated. A high-pulse-energy, wavelength-tunable (1600-2520 nm) femtosecond fiber laser was also used with the AES scheme, showing promising applications for in vivo deep-tissue multiphoton fluorescence imaging [182]. In 2017, improved multiphoton structured illumination microscopy using an adaptive optics scheme was reported, in which a nonlinear guide star was used to determine optical aberrations and a deformable mirror to correct them; the system offers better spatial resolution, optical sectioning and easier sample preparation compared with conventional microscopy [183]. A gradient excitation technique to accelerate three-dimensional two-photon excitation fluorescence imaging was also reported recently, in which the axial positions of the fluorophores are decoded from the intensity ratio of paired images obtained by sequentially exciting the specimen with two axially elongated two-photon beams of complementary gradient intensities. Compared with traditional two-photon excitation fluorescence microscopy, the technique improves volumetric imaging speed (by at least six-fold), decreases photobleaching (less than 2.07 ± 2.89% in 25 min) and minimizes photodamage [184]. As an impressive step toward in vivo multiphoton fluorescence imaging, miniature two-photon microscopy for fast, high-resolution, multi-plane and long-term brain imaging was reported in 2017. In 2021, the same group developed the miniature FHIRM-TPM 2.0, equipped with an axial scanning mechanism and a long-working-distance miniature objective to enable fast volumetric and multi-plane imaging over a volume of 420 × 420 × 180 µm³ at a lateral resolution of ∼1 µm.
As a complementary signal modality to fluorescence, non-parametric NLO signals with molecular resonance excitation, such as coherent anti-Stokes Raman scattering and SRS, have emerged as important label-free chemical imaging contrast mechanisms, with inspiring recent work covering both technology development and biomedical applications. To realize 3D coherent Raman scattering imaging of scattering samples, Yang et al adjusted the spatio-temporal overlap of the pump and Stokes pulses so that the two counter-propagating pulses meet at different depths in the sample, combining this with galvanometer scanning to form a pulse sheet. This technique can image highly scattering bone tissue with a lateral resolution of 16.4 µm and an axial resolution of 24.5 µm over a large FOV [185].
With engineered material design, probes suitable for coherent Raman imaging have attracted great interest recently. Yang et al synthesized a water-soluble and biocompatible Raman probe with 100-fold enhanced vibrational signals in the cellular Raman-silent region (1800-2800 cm⁻¹) compared to conventional alkyne Raman probes; this ultra-strong probe enables functional SRS imaging of specific subcellular organelles [186]. In contrast to such inert Raman probes, Ao et al developed a reversibly switchable vibrational probe. Based on an alkyne-labelled diarylethene, this probe undergoes reversible photoisomerization under ultraviolet or visible light irradiation, and the narrow Raman peak of the alkynyl group shifts accordingly, generating photoactive 'on' and 'off' SRS images [187] and opening an exciting pathway toward super-resolution Raman imaging.
Combined with information science, high-quality coherent Raman imaging can also provide valuable quantitative image analysis for precision biomedical applications. By utilizing a support-vector-machine classification algorithm, the spectral and morphological characteristics of calcified regions of breast tumors were analyzed from Raman images [188].
It can be concluded from this recent progress that interdisciplinary integration of techniques and applications will be a promising strategy for further developing advanced NLO microscopy and its applications.
Concluding remarks
The future development of NLO imaging for bio-photonic research will be a diverse and interdisciplinary field involving optical physics, materials, computing and biomedical sciences. Multiple optical techniques, such as adaptive optics, holographic recording, tomographic detection and computational imaging principles, are potential candidates for integration with NLO imaging systems. For instance, adaptive optical principles can be applied to the laser source, excitation-field engineering and detection optimization. Wide-field holography and tomography can be combined with nonlinearly generated signals to achieve high-speed imaging [189,190]. Rapid progress in neural-network-based computation has enabled various machine-learning tools, which will play promising roles in developing efficient NLO imaging techniques and in segmenting and analyzing large imaging datasets. More applications in clinical practice will also become possible for nonlinear imaging systems with the rapid advance of small-footprint nonlinear fiber laser sources and fiber-optic systems.
"year": 2023,
"sha1": "bb47b8e8d0a66e2c9c848eca98c62ac9f894a19e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2515-7647/acdb17",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cbc1bcaf5b8ce5655ee7141dc74ebcadd9f03e05",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
13161626 | pes2o/s2orc | v3-fos-license | Inhomogeneous soliton ratchets under two ac forces
We extend our previous work on soliton ratchet devices [L. Morales-Molina et al., Eur. Phys. J. B 37, 79 (2004)] to consider the joint effect of two ac forces including non-harmonic drivings, as proposed for particle ratchets by Savele'v et al. [Europhys. Lett. 67}, 179 (2004); Phys. Rev. E {\bf 70} 066109 (2004)]. Current reversals due to the interplay between the phases, frequencies and amplitudes of the harmonics are obtained. An analysis of the effect of the damping coefficient on the dynamics is presented. We show that solitons give rise to non-trivial differences in the phenomenology reported for particle systems that arise from their extended character. A comparison with soliton ratchets in homogeneous systems with biharmonic forces is also presented. This ratchet device may be an ideal candidate for Josephson junction ratchets with intrinsic large damping.
The understanding of ratchet mechanisms is a very active field, of wide interest for its potential application in the design of devices with new transport properties.
The key feature of ratchet devices is their ability to rectify the motion of particles subjected to an external ac force with zero time average. Originally proposed as a toy model for molecular motors, the ratchet concept has in the last decade inspired many proposals for devices that exploit this phenomenon in different applications [1]. Ratchets working with extended objects (solitons) were subsequently studied as a generalization of particle ratchets [2]. Recently, a ratchet device driven by two ac forces was introduced by Savel'ev and coworkers [3], in which the combination of the two drivings produced a variety of interesting phenomena and allowed finely tunable control. In view of the rich behavior demonstrated by this system, a very natural issue is its extension to soliton ratchets, which have important applications that can benefit from this proposal.
Soliton ratchets under two ac forces have been extensively studied [4,5,6] in homogeneous systems, i.e., without an underlying ratchet potential. In this case, it has been shown that the ratchet mechanism works only for asymmetric biharmonic forces, where the appearance of directed translational motion results from the effective coupling between the internal mode (oscillations of the soliton width) and the external driving force. In this paper, we focus on a soliton ratchet device recently studied by us [8,9], based on a nonlinear Klein-Gordon model with a lattice of delta-like inhomogeneities that induces a ratchet potential for the solitons. As we will see, one of the advantages of our model compared to the homogeneous system is that it works irrespective of the symmetry of the ac driving: directional transport takes place for ac forces with commensurate frequencies. Moreover, in the homogeneous system the motion drastically decreases with increasing damping, due to the slowing down of the soliton width oscillations [6], whereas in the present system the ratchet phenomenon persists up to rather large values of the damping.
Our model, first introduced in [8], is given by

φ_tt − φ_xx + β φ_t + [1 + V(x)] sin φ = f(t),    (1)

where f(t) = A1 sin(ω1 t + θ1) + A2 sin(ω2 t + θ2), with A1, A2 the respective amplitudes of the forces, ω1, ω2 the frequencies and θ1, θ2 the phases of the harmonics (see e.g. [3]). Here we focus on the sine-Gordon model, although the same scheme can also be applied to the general nonlinear Klein-Gordon model [9]. For V(x), we choose a spatially periodic potential whose unit cell is an asymmetric array of delta functions (inhomogeneities), in order to produce a ratchet-like phenomenon. Specifically, the unit cell, of length L, is defined by three inhomogeneities of the same intensity ε, the first located at the beginning of the cell, the second at a distance a from the first, and the third at a distance b from the second, i.e.,

V(x) = ε Σ_n [δ(x − nL) + δ(x − nL − a) + δ(x − nL − a − b)],    (2)

where L = a + b + c. The parameters (a, b, c) are chosen to be comparable to the static soliton width l0 in the absence of inhomogeneities. In addition, they should fulfill the conditions a, b < c with a ≠ b.
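To make the model concrete, the sketch below integrates Eq. (1) with an explicit damped leapfrog scheme. This is our illustration rather than the numerical method of the paper: the delta inhomogeneities are smoothed into narrow Gaussians and every parameter value is a placeholder.

```python
import numpy as np

# Explicit finite-difference sketch of Eq. (1):
#   phi_tt - phi_xx + beta*phi_t + [1 + V(x)]*sin(phi) = f(t).
a, b, c, eps, beta = 1.5, 2.5, 6.0, 0.5, 0.1     # unit cell: L = a + b + c, a != b
L = a + b + c
A1, A2, w1, w2, th1, th2 = 0.1, 0.1, 0.03, 0.09, 0.0, 0.0

nx, dx = 4000, 0.05
dt = 0.9 * dx                                     # CFL-stable for the wave part
x = (np.arange(nx) - nx // 2) * dx                # domain [-100, 100]

def smoothed_delta(x0, w=0.2):
    return np.exp(-((x - x0) / w) ** 2) / (w * np.sqrt(np.pi))

V = np.zeros(nx)
for n in range(-8, 9):                            # periodic array of 3-delta cells
    for x0 in (n * L, n * L + a, n * L + a + b):
        V += eps * smoothed_delta(x0)

phi = 4.0 * np.arctan(np.exp(x))                  # static sine-Gordon kink at x = 0
phi_old = phi.copy()
for n in range(100000):
    f = A1 * np.sin(w1 * n * dt + th1) + A2 * np.sin(w2 * n * dt + th2)
    pad = np.pad(phi, 1, mode="edge")             # Neumann boundaries
    lap = (pad[2:] - 2.0 * phi + pad[:-2]) / dx ** 2
    acc = lap - (1.0 + V) * np.sin(phi) + f
    # damped leapfrog: centered second derivative, centered damping term
    phi_new = (2.0 * phi - phi_old + dt ** 2 * acc
               + 0.5 * beta * dt * phi_old) / (1.0 + 0.5 * beta * dt)
    phi_old, phi = phi, phi_new

# Kink center from the topological density: X = (1/2pi) * integral of x*phi_x
X = np.trapz(x * np.gradient(phi, dx), x) / (2.0 * np.pi)
print("kink position after the run:", X)
```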
As a preliminary step, we begin our study by analyzing how the system works in different damping regimes. This is very relevant to applications such as Josephson junctions: while standard Josephson junctions usually work at small damping, junctions with intrinsically high damping, such as superconductor-insulator-normal-conductor-insulator-superconductor LJJs or high-Tc LJJ technology, can also be fabricated [7]. For this purpose, we look at the case of a single harmonic component in Eq. (1). In this case, according to [8,9], we have directed motion whose direction is determined by the position of the inhomogeneities, and the dynamics reduces to a system very much like a rocking ratchet for a single particle [10]. Our results are shown in Fig. 1, where we have chosen a lattice of inhomogeneities whose configuration (a < b) yields a negative current [8].
Hereafter ⟨Ẋ⟩ denotes the time average of the velocity. The figure demonstrates that the efficiency increases as the damping β decreases; however, nonzero velocities are found even for rather large damping values, and for the lower frequency there is still motion even for β > 1.
We also note that for these frequencies, at lower damping values, the dynamics depends on the initial conditions, which may lead to chaotic dynamics.
Let us now move to the situation with two simultaneously acting ac forces. For the time being, we work with sinusoidal forces as in Eq. (1), and we will consider later the case of rectangular pulses as in [3]. With two harmonics present (a doubly rocked ratchet), we have a system in which the symmetry can be broken both spatially (reflection symmetry V(x) = V(−x)) and temporally (time-shift symmetry f(t) = −f(t + T/2) with T = 2π/ω1). As we will now see, it is possible to obtain current reversals irrespective of the symmetry of the biharmonic force. The reason is that the underlying transport mechanism studied for homogeneous systems subjected to biharmonic forces, which requires temporal symmetry breaking [6], is not responsible for the directional transport in our present model, as can be inferred from the fact that in the inhomogeneous system we have directed transport under large damping. We examine the dynamics of Eq. (1) in the partially adiabatic regime ω1/ω2 ≪ 1 with ω1 ≪ 1 and ω2 < 1, setting A1 = A2. In this case, multiple current reversals are possible in the particle ratchet system [3]. Figure 2 clearly confirms the existence of several current reversals in our soliton device. The results show that we can reverse the direction of motion by changing the force amplitudes, which, as in the particle case, opens the possibility of controlling the rectification properties in great detail.
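For intuition about the doubly rocked setting, the point-particle analogue referred to above can be integrated directly; the following sketch uses a generic overdamped rocking ratchet (our illustration of the particle picture of [3], not the soliton model itself; the potential and all parameter values are made up).

```python
import numpy as np

# Overdamped doubly rocked point-particle ratchet:
#   beta * dX/dt = -U'(X) + f(t),  U(x) = sin(x) + 0.25*sin(2x)  (asymmetric).
beta = 1.0
A1, A2, w1, w2, th1 = 0.5, 0.5, 0.01, 0.03, 0.0

def dU(x):                                        # U'(x)
    return np.cos(x) + 0.5 * np.cos(2.0 * x)

def mean_velocity(th2, n_periods=50, steps_per_period=20000):
    T = 2.0 * np.pi / w1                          # period of the slow harmonic
    dt = T / steps_per_period
    x = t = 0.0
    for _ in range(n_periods * steps_per_period):  # explicit Euler steps
        f = A1 * np.sin(w1 * t + th1) + A2 * np.sin(w2 * t + th2)
        x += dt / beta * (f - dU(x))
        t += dt
    return x / t                                  # net drift = time-averaged velocity

for th2 in (0.0, np.pi / 2.0, np.pi):             # the phase controls the direction
    print(f"theta2 = {th2:.2f}  <Xdot> = {mean_velocity(th2):+.4f}")
```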
In the previous analysis we took β = 1, i.e., a rather large damping for which inertial effects are small. However, from the above discussion for one harmonic, we expect changes in the behavior with a biharmonic force as well when the damping is reduced. Figure 3 indeed exhibits a different picture of the dynamics compared with Fig. 2. The main differences are the shift of the windows of motion towards lower force amplitudes and an increase of the absolute value of the mean velocity for some force amplitudes. Interestingly, we note an atypical behavior of the average velocity close to the region where the first current reversal takes place: for some values of the force amplitude the windows of motion are suppressed, while in other cases the absolute value of the average velocity is enhanced or even reversed. This issue can be further examined in Fig. 4, where the dependence of the average velocity on the relative phase of the drivings is plotted. Here we see that by changing the phases we can reverse the direction of motion. This behavior is not restricted to this singular region of Fig. 3; in fact, it can be observed close to all the regions where the currents are reversed in Fig. 3. We stress that we have taken θ1 = 0 and θ2 = θ without loss of generality: although the dynamics obviously changes with both phases θ1 and θ2, one can always map any choice of θ1 and θ2 into this representation through the transformations θ′2 = θ2 − (ω2/ω1)θ1 with θ′1 = 0. A feature of this doubly rocked ratchet system that differs from the homogeneous case is the quantization of the dependence of the velocity on the phase (see Fig. 4 and cf. Fig. 2 in [6]). This is yet another piece of evidence that the ratchet mechanisms at work in the homogeneous and inhomogeneous cases are not the same. Nevertheless, despite this quantized nature, it is feasible to control the direction of motion by tuning the phases of the biharmonic force, as in the homogeneous case.
In view of the similarities with the single-particle ratchet system of [3], it is important to assess the degree of similarity between the two systems. Savel'ev and coworkers focus mainly on the case of rectangular wave signals, for which the asymmetry- and nonlinearity-induced mixing are separable [3]. For such rectangular wave signals in the fully adiabatic regime (ω1, ω2 ≪ 1) with a sawtooth ratchet potential, Savel'ev et al. report changes in the dynamics only for frequency ratios ω2/ω1 = (2m − 1)/(2n − 1). As Fig. 5 shows, in our case the behavior turns out to be much more complicated. The average velocity fluctuates around the value indicated by a horizontal line, but it exhibits very many peaks. Most importantly, unlike in the single-particle picture of [3], the peaks appear not only for fractional harmonics ω2/ω1 = (2m − 1)/(2n − 1), but also for harmonics that fulfill the relations ω2/ω1 = (2m − 1)/2n and ω2/ω1 = 2m/(2n − 1).
The reason for the difference between the particle and the soliton doubly rocked ratchets can be traced back to the extended character of our solitons. It is well known that the soliton dynamics is affected by the deformation of the soliton width that accompanies its motion along an array of inhomogeneities [9]. This in turn modifies the effective potential arising in the description of the soliton as a point particle, as well as the variations of the corresponding effective force. Accordingly, there is a large degree of feedback between the soliton width and the soliton motion. To gain insight into these issues, it is necessary to study the evolution of the degrees of freedom involved. To do so, we resort to the collective coordinate approach [11], which includes the soliton width as an additional degree of freedom. In doing so, we find that the soliton can be described by two collective variables X and l, whose evolution equations are given by Eqs. (4)-(5) in [9], with the difference that f(t) now contains the two rectangular wave signals whose expressions appear in the caption of Fig. 5. This contrasts with the single-particle doubly rocked ratchet of Savel'ev et al. because of the appearance of the width degree of freedom. The collective coordinate approach explains the simulation results in full detail, as shown in Fig. 6 for the fully adiabatic regime. Again, as in the simulations of Eq. (1), we note peaks not only for the frequency ratios ω2/ω1 = (2m − 1)/(2n − 1), but also for frequency ratios fulfilling the relations ω2/ω1 = (2m − 1)/2n and ω2/ω1 = 2m/(2n − 1), which are absent in the single-particle picture [3]. This result is a clear demonstration of the role of the soliton width in the dynamics.
To conclude, we have generalized the results obtained for doubly rocked particle ratchets [3] to extended systems, finding new phenomena that arise from the intrinsic width of the solitons. We have shown that the direction of motion can be modified by changing the frequency ratio, the phases of the harmonic forces and their amplitudes. However, in the fully adiabatic regime of our soliton ratchet we find many more peaks than in the particle ratchet system of Savel'ev et al., with special velocities appearing for several types of frequency ratios. We have shown that this is thoroughly accounted for by the interplay between the soliton width and the soliton motion. We have also compared our system with doubly rocked soliton ratchets in homogeneous systems [4,5,6] and verified that, although the soliton width is involved in both cases, the mechanism for the appearance of the ratchet effect is different. Aside from the fact that homogeneous soliton ratchets arise only for asymmetric biharmonic drivings, further important differences include the damping dependence of the velocity and the quantization of the dependence of the velocity on the relative phase. We emphasize that our ratchet system can be straightforwardly implemented in a Josephson junction device [7]. In that case, the many possibilities for the motion reported here would allow for a highly controllable device that can be tailored to different specific applications. The fact that the ratchet phenomenon is observed even in the presence of large damping makes this system preferable to a homogeneous one driven by a biharmonic force, and much more suitable for real-life applications. Finally, we note that a similar analysis can be extended to incommensurate ac forces with irrational values of the ratio ω2/ω1. However, our preliminary results show that this choice gives rise to new phenomena requiring careful attention, and it will therefore be the object of a future investigation.
"year": 2005,
"sha1": "b6b7b18f6f04ccdcb74c5c7ed0e71e3534360b7d",
"oa_license": null,
"oa_url": "https://e-archivo.uc3m.es/bitstream/10016/15156/1/inhomogeneous_PRE_2006.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b6b7b18f6f04ccdcb74c5c7ed0e71e3534360b7d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
Effect of Reaction Time and Hydrothermal Treatment Time on the Textural Properties of SBA-15 Synthesized Using Sodium Silicate as a Silica Source and Its Efficiency for Reducing Tobacco Smoke Toxicity
Abstract: The synthesis of SBA-15 has been optimized using sodium silicate, an inexpensive precursor of SBA-15. In this work, the influence of the synthesis times of the precipitation and hydrothermal treatment steps on the textural properties developed, as well as on the reduction of the toxic compounds generated in tobacco smoking, has been studied. The hydrothermal treatment proved necessary to obtain materials with adequate performance in this particular application, with 24 h of hydrothermal treatment providing the materials with the best properties. Although the reaction stage usually involves mixing the reagents for 24 h, 40 min is enough to obtain a material with stick-like morphology and typical textural properties; moreover, materials obtained with reaction times between 1 and 2 h showed the best performance for reducing the toxicity of the products generated during the tobacco smoking process. These results are of great significance for an eventual scaling up of the SBA-15 manufacturing process to the industrial scale. Results of a pilot plant experiment on a batch of 4 kg of SBA-15 are reported.
Introduction
According to the IUPAC, mesoporous materials present pores in the range of 2-50 nm. Such pores can have different shapes, such as spheres, cylinders, platelets, sticks or fibers, and can be arranged in varying structures [1]. In 1992, a new family of ordered mesoporous materials was synthesized in the Mobil Corporation laboratories [2], including MCM-41, with hexagonally ordered cylindrical pores, and MCM-48, with a cubic pore structure. These materials are characterized by an ordered porous system with a narrow distribution of diameters, a high surface area and a high pore volume. In 1998, Zhao et al. [3,4] developed a new mesoporous siliceous material with a hexagonal structure. This material was named Santa Barbara Amorphous (SBA)-15, where 15 is a number corresponding to a specific pore structure and surfactant (hexagonally ordered cylindrical pores synthesized with a specific surfactant).
The synthesis of this mesoporous material occurs through a sol-gel process called liquid crystal templating (LCT) [5], in which the structures are formed by the interaction of organic template molecules (surfactant) and inorganic species (silica source). The surfactant used is Pluronic P123, a triblock copolymer of poly(ethylene oxide) and poly(propylene oxide), in aqueous solution. When the concentration of the surfactant in solution reaches a threshold value, called the critical micelle concentration, the molecules form aggregates (micelles). The micelles then group into supramicellar structures, which vary depending on the concentration and temperature; in general, at moderate temperatures, these adopt the two-dimensional hexagonal arrangement that templates SBA-15. The objective of the present work was to synthesize SBA-15 materials using sodium silicate as the silica source (a cheaper and easier-to-handle reactant than TEOS) at different hydrothermal treatment and reaction times. This study was developed to provide more insight into the effect of the SBA-15 synthesis conditions on its performance as a tobacco toxicant reducer. In this case, to maximize the possibilities of reducing the manufacturing costs, we have focused on short times. On the other hand, scaling up of the SBA-15 production process is rarely considered in the literature; very few studies comment on it and report results of such trials, and the scaling-up attempts reported involve batches of tens of grams [43,44]. Thus, another objective of the present work was running a scaling-up experiment at the pilot plant scale on a batch of around 4 kg of SBA-15, making it possible to check the viability of the process at the industrial scale.
Results and Discussion
Figure 1 shows SEM images of samples obtained at different hydrothermal treatment times. As can be seen, when no hydrothermal treatment was applied (H0), the material shows thicker and poorly formed sticks. As the hydrothermal treatment time increases, well-formed sticks of increasing size and length are obtained, yielding larger particles.
The nitrogen adsorption isotherms at 77 K for the four samples are shown in Figure 2. All samples present a type IV isotherm, according to the IUPAC classification, with a hysteresis loop at a relative pressure of 0.6-0.8, which is characteristic of mesoporous materials with cylindrical channels in a hexagonal arrangement [20,22]. All samples present an H1-type hysteresis cycle, typical of solids with a narrow distribution of uniform mesopores. Moreover, the shape of the desorption branch of the isotherm indicates that most of the mesopores are emptied at a single relative pressure, which is characteristic of materials with ink-bottle pores [45,46]. When no hydrothermal treatment was applied (H0), the material presents a smaller hysteresis loop that moves to lower values of P/P0, revealing a loss of properties. As the hydrothermal treatment time increases, the hysteresis loop grows, with the curves of the samples prepared at 6 and 15 h being very similar (H6 and H15 in Table 1) and the H24 sample showing a significant increase in the hysteresis cycle, indicating an improvement of the adsorption properties. Table 1 shows the textural properties derived from the isotherms, i.e., BET surface area, external surface area, specific pore volume and average pore size. As can be observed, the samples prepared at intermediate hydrothermal treatment times (H6 and H15) have similar properties, and an increase in all the properties is observed for the sample prepared after 24 h (H24). As expected, the sample without treatment (H0) has the lowest pore size (4.9 nm), since the hydrothermal treatment promotes the PEO chains becoming more hydrophobic, resulting in pore widening [7]; without this treatment, such widening cannot occur. Table 1 also shows the apparent density, which is related to the morphology of the particles and to the way in which they can be dispersed on the tobacco strands. This parameter has been used by other authors, such as Nagata et al. [47], to characterize similar materials such as MCM-41. As can be observed, the apparent density decreases with increasing hydrothermal time, affecting the performance of this material in the application studied here, as will be shown. The XRD patterns of the samples show the three characteristic peaks of the (100), (110) and (200) diffraction planes [3,20]. The definition and position of these peaks are related to the mesoporous hexagonal symmetry, with lower 2θ values and sharper peaks indicating a more ordered structure [17,48,49]. The (100) peak of sample H0 is shifted to higher values of 2θ, while the (110) peak practically does not appear and the (200) peak moves to higher values, indicating a poorly organized structure. As the hydrothermal treatment time increases, all planes move to lower values of 2θ and the intensity and definition of the (110) and (200) peaks increase, showing that more ordered materials are obtained at longer times in this stage. In addition, from the relative position of the (100) peak it is possible to obtain the d-spacing, while the (110) and (200) peaks are related to the 2D hexagonal organization of the pores [50]. The unit cell parameter was calculated as a0 = λ/(√3 sin θ). The unit cell parameter increases with the hydrothermal treatment time, reaching a maximum value of 13.1 nm for sample H24 (Table 1). Sample H0, where hydrothermal treatment was not applied, shows a unit cell parameter of 11.0 nm.
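For reference, the expression a0 = λ/(√3 sin θ) follows from Bragg's law applied to the (100) reflection of a 2D hexagonal lattice; the short derivation below is a standard one, added here for clarity:

```latex
% Bragg's law for the (100) reflection (first order, n = 1):
d_{100} = \frac{\lambda}{2\sin\theta}
% For a 2D hexagonal (p6mm) lattice the cell parameter and plane spacing obey
a_0 = \frac{2\,d_{100}}{\sqrt{3}} = \frac{\lambda}{\sqrt{3}\,\sin\theta}
```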
Therefore, we can conclude that the hydrothermal treatment time is, as expected, closely related to the structural stability of the resulting material. Figure 4 shows the relation of the textural properties and apparent density with the hydrothermal treatment time. Three different tendencies can be observed in Table 1. The BET area (SBET) and the total pore volume (Vt), shown in Figure 4a as a function of the hydrothermal treatment time (Vmeso and Vmicro show the same tendency), present convex behavior with minimum values at intermediate hydrothermal treatment times. On the other hand, the apparent density (Figure 4b) shows an almost linearly decreasing behavior. Finally, the pore size and the unit cell parameter increase up to stable values after 15 h of treatment (see Table 1).
Table 2 shows the reductions (i.e., the amount of a compound obtained when smoking 3R4F tobacco without a catalyst minus the corresponding amount obtained when smoking tobacco mixed with a catalyst, divided by the amount obtained when smoking 3R4F tobacco without a catalyst, expressed as %) obtained in total particulate matter (TPM), nicotine and CO for the different materials synthesized. As can be seen, when the material is not subjected to hydrothermal treatment (H0), poor results are obtained: this material reduces TPM by only 13.5% and nicotine by 18.4%. Moreover, H0 does not reduce the formation of CO; on the contrary, its yield is increased. As the hydrothermal treatment time increases, an increase in the effective reduction of TPM, nicotine and CO is observed, reaching TPM reductions higher than 60% in the case of sample H24. Taking these results into consideration (Table 2), we can conclude that to obtain high reductions in the products evolved in tobacco smoking it is necessary to apply the hydrothermal treatment, and that the longer this stage lasts, the better the reductions in TPM, nicotine and CO. Figure 5a shows the reductions of TPM, nicotine and CO vs. the hydrothermal treatment time, where a linearly increasing trend can be observed for TPM and nicotine, while CO remains almost constant from 15 h of treatment onward. Figure 5b shows the reductions in nicotine, tar and CO vs. Vt (plots vs. Vmeso and Vmicro, not shown, display similar trends). As can be seen, this graph shows an apparently anomalous behavior for sample H0 (i.e., that with Vt = 0.771 cm³/g). Nevertheless, this sample has a very high apparent density (Figure 5c) and a poorly developed mesopore structure, even though its other textural parameters are relatively high. Additionally, the micrographs of this sample show a different particle morphology compared to the rest of the samples (Figure 1). Consequently, this catalyst involves a poorer contact with the tobacco, thus explaining the poor reductions observed despite the high pore volume. These facts are in good agreement with the "straw spreading" mechanism described by Wei Gang Lin et al. [37]. According to these authors, the apparent density is related to the ability of the material to spread on the tobacco fibers, which shows the importance of considering the apparent density and morphology in the performance of this type of catalyst in smoking applications.
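The reduction metric defined above is simple enough to encode directly; the sketch below (with made-up numbers, not values from the paper's tables) merely restates that definition.

```python
def reduction_percent(without_catalyst: float, with_catalyst: float) -> float:
    """Percent reduction of a smoke compound relative to catalyst-free smoking:
    (amount without catalyst - amount with catalyst) / amount without * 100.
    Negative values mean the compound's yield increased with the catalyst."""
    return (without_catalyst - with_catalyst) / without_catalyst * 100.0

# Illustrative only: 20 mg/cigarette of TPM without catalyst and
# 8 mg/cigarette with catalyst correspond to a 60% reduction.
print(reduction_percent(20.0, 8.0))  # -> 60.0
```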
Moreover, the reductions in nicotine, tar and CO show a linearly decreasing behavior with apparent density, showing again the clear correlation with this variable (Figure 5c). Nevertheless, this variable alone cannot explain the behavior of these catalysts in this application, as will be shown when studying the influence of the reaction time. Table 3 shows the reductions determined for the compounds analyzed and identified in the gas fraction. For sample H0, many compounds increase their yields; only important reductions in the formation of acetonitrile can be observed. As the hydrothermal treatment time increases, the reductions become larger: several compounds, such as toluene, present reductions greater than 80% after 15 and 24 h of treatment (H15 and H24). To facilitate the analysis, the different compounds have been grouped by their functional groups into five families: paraffins, olefins, aromatics, aldehydes and "others". As can be seen in Figure 6, the sample with no hydrothermal treatment (H0) only presents low reductions in aromatic compounds, while the remaining families show negative reductions (i.e., yields higher than those obtained when smoking tobacco alone). When the treatment time is short (H6), no reductions are observed in the aldehydes and "others" families; as the time increases, higher reductions appear in all the families, with sample H24 being the best in all families except the aldehydes group, which presents a somewhat smaller reduction than the H15 sample. Table 4 shows the reductions of the compounds obtained in the condensed fraction. Again, the sample with no hydrothermal treatment (H0) presents the worst results and, as in the case of the gases, many compounds are generated in larger proportion than when smoking tobacco without a catalyst (negative reductions). This effect is especially remarkable in the case of hydroquinone, N-propyl-nornicotine, limonene and farnesol, whose formation is strongly favored. As the hydrothermal treatment time increases, large reductions are observed, with sample H24 presenting the highest ones. It should also be noted that at short hydrothermal treatment times (H6) several compounds, as for sample H0, present no reductions, for example cotinine, farnesol and hexadecanoic acid ethyl ester. In addition, many compounds are no longer detected as a result of the hydrothermal treatment, as is the case of phenol, o-cresol, 1H-indole, hydroquinone and N-propyl-nornicotine (the latter disappearing only for the H24 sample). The compounds analyzed in Table 4 were grouped into six families: nitrogenous (excluding nicotine, which is the major organic compound and has been discussed separately), carbonyls, epoxies, phenols, aliphatics and "others". The corresponding reductions are shown in Figure 7. When no treatment was applied, no reductions are observed in the phenolics, aliphatics and "others" families, and only low reductions in the nitrogenous and carbonyl families. At short times (H6), no reductions are observed in the aliphatics and "others" families. As the hydrothermal treatment time increases, high reductions are observed in all the families, the largest ones being obtained after 24 h of treatment (H24), although very similar to those obtained with sample H15 for carbonyls, phenols, aliphatics and "others".
SBA-15 Characterization
For this study, the hydrothermal treatment time was set at 24 h, since this material showed the best reductions (H24), and the reaction times studied were 20 min (1/3 h), 40 min (2/3 h), 1, 2, 4, 6 and 24 h (samples R1/3, R2/3, R1, R2, R4, R6 and R24). Note that sample R24 is the same as H24, as indicated in Table 1. Figure 8 shows SEM images of the samples obtained at different reaction times. As can be seen, at the shortest reaction time (R1/3) no sticks can be observed. After 40 min (R2/3), short sticks appear, and after 2 h (R2) the resulting material shows long sticks. If the mixing time is increased up to 4 h (R4), the sticks obtained are longer and wider, more similar to those obtained after 6 and 24 h (R6 and R24). The analysis of the SEM images shows that reaction times longer than 2 h yield SBA-15 materials with long stick morphology that become larger at longer reaction times.
The nitrogen adsorption isotherms at 77 K are shown in Figure 9. All samples present the typical type IV isotherm, with an H1 hysteresis loop at a relative pressure of 0.6-0.8, characteristic of mesoporous materials. All isotherms show similar curves, with the exception of R1/3, which shows an isotherm with a low, wide hysteresis loop and a two-step desorption branch. This behavior was analyzed by Van Der Voort et al. [51], who concluded that such adsorption-desorption behavior is consistent with a structure comprising both open (first desorption step) and blocked (second desorption step) cylindrical mesopores, indicating the presence of disordered pore structures. The textural properties of the samples are shown in Table 1. As can be observed, sample R1/3 has the lowest values of BET area, total pore volume and micropore volume, while samples R2/3, R1, R2, R4, R6 and R24 have similar values without a clear trend. In addition, these samples have a pore diameter between 6.0 and 6.2 nm, whereas sample R1/3 presents much narrower pores of only 3.8 nm, showing again that this sample has a lower structural development. The apparent density decreases with increasing reaction time, as observed in the study of the hydrothermal treatment time. Sample R1/3 presents a very high apparent density, far out of the trend observed for the rest of the samples. All these data confirm that such a short reaction time is not enough to obtain a material with well-developed properties.
The XRD spectra of these samples (Figure 10; the patterns of samples R2/3, R1, R2, R4, R6 and R24 are offset vertically by 50,000, 100,000, 150,000, 250,000 and 300,000 counts, respectively) show the three characteristic peaks of the (100), (110) and (200) diffraction planes. The positions of the three peaks are very similar for all samples. Nevertheless, for samples R1/3 and R2/3 the (100) peak is not clearly resolved. As the reaction time increases, the (100) peak moves slightly to lower values of 2θ and the (110) and (200) peaks become better defined and also move to lower values of 2θ. Thus, increasing the reaction time yields a more ordered mesoporous material. The unit cell parameter, calculated from the position of the (100) plane, is shown in Table 1 and increases with the reaction time, reaching an identical value for the R2, R4, R6 and R24 samples. Figure 11 shows the relation of the textural properties and the apparent density with the reaction time. Again, three different tendencies can be observed in Table 1. The BET area and the total pore volume (the micropore and mesopore volumes show the same tendency) show an increasing trend up to a maximum at 1 h of reaction and then decrease to a constant value (Figure 11a). The apparent density presents an exceptionally high value at 20 min of reaction time and then decreases slightly with the reaction time until a stable value is reached at 24 h, this decrease being more moderate than that observed in the study of the hydrothermal treatment time (Figures 4b and 11b). Finally, the pore size and the unit cell parameter (Table 1) grow rapidly and stabilize after a short reaction time (2 h). Table 2 and Figure 12a show the reductions obtained in total particulate matter (TPM), nicotine and CO vs. the reaction time. As can be seen, the reductions in TPM increase with the reaction time up to sample R2 and then decrease slightly. The reduction observed in nicotine follows a similar trend, increasing with the reaction time and reaching a maximum at 2 h, although sample R1 shows practically the same reduction as sample R2 (see Table 2). At reaction times longer than 2 h, the reductions obtained in nicotine decrease slightly. The reduction in CO shows the same behavior, with samples R1, R2 and R4 showing similar values; at longer reaction times (R6 and R24) the reduction in CO decreases. Figure 12b shows the reductions in TPM, nicotine and CO yields vs. the BET area. All the textural properties show a behavior very similar to that of the BET area, except the apparent density, which shows a slight decrease after 20 min of reaction.
The variation of the reductions with respect to the textural properties shows a positive correlation with a certain degree of dispersion. Thus, it can be concluded that the reaction time produces a maximum both in the development of the textural properties of the synthesized material and in the corresponding behavior in the tobacco smoking application, and that there is a positive correlation between such behavior and the developed textural properties. In this case, the apparent densities of all samples are similar, and this property seems not to be as dependent on the reaction time as it is on the hydrothermal treatment time. Consequently, the effect of this property on the smoking application is less marked than in the previous series. Table 3 shows the reductions in the compounds analyzed in the gas fraction. As can be seen, the samples present a similar tendency to that shown in Table 2. The reduction in the total gases increases with the reaction time until a maximum is obtained; the material that shows the greatest reduction in the total gases is R24, but with values very similar to R1. At low reaction times, only low reductions in acetonitrile and crotonaldehyde can be observed. Acetonitrile presents reductions higher than 70% for all samples except R1/3 and R2/3, with a reduction of 85% for the R1 sample. Other compounds that present important reductions are crotonaldehyde, toluene and methanethiol.
Products Generated during the Smoking Process
The reductions obtained for the functional families analyzed in the gas fraction for samples R1/3-R24 are shown in Figure 13. The principal reductions in the gaseous fraction correspond to the aromatics and aldehydes families and, to a lesser extent, to paraffins and olefins. Figure 13 shows that at low reaction times (R1/3 and R2/3) no reductions are observed for any family; rather, the yields of most of the families increase. When the reaction time increases, the reductions in all the families increase, reaching a maximum for all functional groups normally at R1, with the exception of the olefins, which present their maximum for the R24 sample. Table 4 shows the reductions obtained for the compounds retained in the condensed fraction. As can be seen, the presence of all the materials provokes the reduction, in different proportions, of the vast majority of compounds. As in the gaseous fraction, the reduction of the compounds in the condensed fraction increases as the reaction time increases, until reaching a maximum, normally located between samples R2 and R4, then decreases slightly before increasing again at 24 h of reaction time. Many compounds are no longer detected, as is the case for 3-ethyl-2-hydroxy-2-cyclopenten-1-one, 2,4-dimethylphenol, 2,3-bipyridine and 4-ethylphenol. In contrast, other compounds, such as hexadecanoic acid ethyl ester, show smaller reductions and only present good reductions at high reaction times.
The compounds analyzed in Table 4 were grouped by their functional group, and the corresponding reductions are shown in Figure 14. As can be observed, all the synthesized samples, regardless of the reaction time, present important reductions for all the families, reaching reductions higher than 80% for the nitrogenous and epoxy families. As in the case of the gas fraction, a maximum in the reductions is observed at reaction times of 1-2 h, but in this case all samples lead to positive reductions.
Scaling Up Process
This section shows the results obtained with SBA-15 synthesized at the 4 kg pilot plant scale. Table 5 contains the structural properties and the main characteristics of the tobacco smoke obtained when this SBA-15 was included in the tobacco cigarettes (TPM, CO2 and CO). The textural analysis of the samples highlights some differences with respect to the materials synthesized at the laboratory scale: a slight decrease in all the parameters calculated for this sample could be observed with respect to the samples synthesized at the laboratory scale. Moreover, it seems that modifications of the hydrothermal treatment time have a greater influence on the variation of these parameters. The stick morphology of the material was as expected, as shown in Figure 15, but the fibers are shorter, probably due to the reduced time in both steps of the synthesis and because the agitation in the equipment used for such a large quantity of material makes the orientation of the fibers difficult. The results obtained in the smoking tests are acceptable and very promising. The reduction capabilities are lower than those obtained when using TEOS as the silica source [41,42] and also lower than those obtained with sodium silicate at the laboratory scale, thus highlighting the need for tuning of the scaling process.
Synthesis of Materials
Different stick-like morphology SBA-15 samples were synthesized in our laboratory according to the method described by Kosuge et al. [26], with some modifications. The first was the inclusion of the hydrothermal treatment and the variation of the duration of this step. The second was the study of the effect of the reaction (precipitation) time. Pluronic P123 was used as the surfactant and sodium silicate as the source of silicon: 3.7 g of Pluronic P123 from Sigma-Aldrich (Schnelldorf, Germany) were added to a solution of 127.8 g of 2 M HCl (prepared from 11.32 M HCl, Merck). After mixing for 1 h, a solution of 7.84 g of sodium silicate (density 1.37 g/cm3, Scharlau, Barcelona, Spain) dissolved in 74.72 mL of distilled water was added. The solution was stirred at 30 °C for 24 h. Then, the hydrothermal treatment was carried out at 80 °C for different times, i.e., 0, 6, 15 and 24 h (samples H0, H6, H15 and H24, respectively). The white solid product obtained was collected by filtration, dried and then calcined in air at 550 °C for 5 h to remove the organic template. Additionally, several reaction times were studied: 1/3, 2/3, 1, 2, 4, 6 and 24 h (samples R1/3, R2/3, R1, R2, R4, R6 and R24, respectively). All these samples were hydrothermally treated at 80 °C for 24 h.
The scaling process was carried out in a 500 L reactor that allows obtaining up to 4 kg of SBA-15. The reactor has submersible heating elements and is thermally insulated, and the temperature is controlled with a PID controller. The synthesis process was similar to that described for the laboratory scale. To carry out the hydrothermal treatment stage, the excess mother liquor was removed by filtration and the product was transferred to polypropylene containers. Finally, the SBA-15 was washed, filtered, dried and calcined. The scaled-up synthesis was carried out at 35 °C for 6 h in the reaction step and at 80 °C for 15 h in the hydrothermal treatment.
Catalyst Characterization
A JEOL JSM-840 (Tokyo, Japan) scanning electron microscope operating at 15 kV was used to obtain SEM micrographs, after coating the samples with gold in a BALZERS SCD 004 metallizer (Au)/evaporator (C) (Balzers, Liechtenstein). The textural properties were obtained from the N2 adsorption isotherms at 77 K, measured in an automatic Quantachrome AUTOSORB-6. The surface area was obtained from the recorded isotherms according to the BET method. The pore size distributions were obtained by applying the BJH model with cylindrical pore geometry. The total pore volumes were determined from the N2 adsorbed at P/P0 = 0.965 and the micropore volumes were calculated according to the Dubinin-Radushkevich equation. The X-ray diffraction (XRD) patterns were obtained using a Bruker CCD-Apex single-crystal X-ray diffractometer employing Ni-filtered CuKα radiation (λ = 0.15406 nm), between 0.5° and 5° (2θ), with a step size of 0.0131° and a time per step of 18.87 s. The unit cell parameter was calculated as a0 = λ/(√3 sin θ). The apparent density was measured by determining the volume occupied by a given mass of SBA-15.
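As a hedged illustration of the unit cell calculation, the snippet below combines Bragg's law with the 2D-hexagonal lattice geometry (d100 = λ/(2 sin θ) and a0 = 2·d100/√3, which is equivalent to a0 = λ/(√3 sin θ)); the peak position used is a hypothetical value typical of SBA-15, not one taken from Figure 10 or Table 1.

```python
import math

WAVELENGTH_NM = 0.15406            # Cu K-alpha, as used in this work

def unit_cell_parameter(two_theta_deg: float) -> float:
    """Return a0 (nm) from the (100) peak position in degrees 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (math.sqrt(3.0) * math.sin(theta))

# Hypothetical (100) peak position; yields a0 of about 11 nm.
print(f"a0 = {unit_cell_parameter(0.9):.1f} nm")
```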
Smoking Experiments
Reference tobacco 3R4F, supplied by the Center for Tobacco Reference of the University of Kentucky, was used in this study.
Before performing the smoking experiments, 100-200 cigarettes were disassembled, and the tobacco, the filter and the paper were weighed separately. The tobacco was mixed with the SBA-15 materials and the cigarettes were reassembled. Cigarettes were prepared by manually mixing around 96 wt% conditioned tobacco with 4 wt% catalyst. To promote the adhesion of the SBA-15 to the tobacco fibers, EtOH (Montplet, Barcelona, Spain) was added during mixing. Vigorous manual mixing was carried out until the fibers were homogeneously covered with the SBA-15. The cigarettes were then conditioned for at least 48 h at 22 °C and a relative humidity of 60%.
Smoking experiments were performed as described elsewhere [41], following the ISO 3308 smoking regime. The gaseous fraction of the tobacco smoke was collected in a Tedlar bag and analyzed by GC/TCD (CO and CO2) and GC/FID (non-condensed fraction) in an Agilent 6890N chromatograph with a GS-GASPRO column. The total particulate matter (TPM) condensed in the trap located before the Tedlar bag was extracted with isopropanol (Fisher Scientific, Madrid, Spain) following the ISO 4387 standard and analyzed by GC/MS in an Agilent 6890N chromatograph with an HP-5-MS column. The identification of the different compounds was done by comparison with the Wiley MS library. In this work, 30 compounds were analyzed in the gaseous fraction collected in the Tedlar bag and 40 compounds were analyzed in the TPM. Synthesis experiments, materials characterization and smoking experiments were duplicated, and the results presented correspond to the average of the two runs. Reproducibility differed depending on the experiment and the parameters determined, and was within the usual range expected in this type of experiment.
Conclusions
The results obtained show that, despite obtaining a material with acceptable textural properties when the hydrothermal treatment is eliminated, this stage is necessary to obtain good reductions in the compounds generated in the tobacco smoking process. Moreover, when the hydrothermal treatment time increases, a more ordered material with a lower apparent density is obtained. The reductions in the compounds generated in the smoking experiments when the synthesized materials are mixed with tobacco fibers increase with the hydrothermal treatment time, the best results being reached at 24 h of treatment. For this application, the reductions obtained present a positive correlation with the textural properties developed and a negative correlation with the apparent density. Apparent density thus appears to be a useful complementary variable for understanding the correlation between the catalyst performance in the smoking application and its textural properties.
The textural properties present a maximum with reaction time, as do the reductions obtained in TPM, nicotine, CO and most families of compounds, both in the gas and condensed fractions. Short reaction times (20 min) are not enough to obtain adequate textural properties or reductions in the compounds generated in the smoking experiments. Reaction times of around 1-2 h yield materials with fully developed textural properties and the best performance in the tobacco smoking application.
For the first time on record, a process that allows obtaining SBA-15 on the kilogram scale (4 kg) has been implemented. Materials with textural properties and behavior in the smoking process close to those obtained at the laboratory scale can be produced. These results constitute a very promising basis for the development of an industrial-scale process. Optimizing the synthesis time and replacing the TEOS precursor usually employed in the synthesis of SBA-15 with sodium silicate result in energy savings and reduce the cost and toxicity of the precursor, making the synthesis process more environmentally friendly.
"year": 2021,
"sha1": "d4408f20bbbd045fca351e3e4b6b58108211a73f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4344/11/7/808/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "eb5e5c04254da45c9fd5e7ac61221fede80a6e8e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Physical Activity and Physical Fitness of Adults with Intellectual Disabilities in Group Homes in Hong Kong
Adults with intellectual disabilities (ID) typically have a sedentary lifestyle and higher rates of overweight and obesity. This study describes the habitual daily physical activity (PA) and the health-related physical fitness (PF) of adults with mild and moderate ID who resided in four group homes and worked in sheltered workshops. We also assessed the contribution of PF variables to the PA levels and sedentary behavior of this population subgroup. Adults with mild and moderate ID (N = 114) were assessed on PF tests (percent body fat, waist and hip circumferences, 6-min walk (6MWT), arm curl, and sit and reach). PA and sedentary behavior on weekdays were determined using Actigraph accelerometers. Results showed these adults averaged 2% of their daily time (or 10 min) engaged in moderate-to-vigorous PA (MVPA) and 67% of the time (495 min) being sedentary. No significant differences between mild and moderate ID were found for any PA or PF variable. Linear multiple regression analyses showed the 6MWT to be the only significant PF variable contributing to the variance of PA and sedentary behavior. In conclusion, adults with ID residing in group homes have low PA and low fitness levels. Among the fitness variables, the walking test (i.e., cardiovascular fitness) had the highest positive association with participants' daily PA and MVPA, and a negative association with sedentary behavior. Future intervention studies promoting PA and fitness for adults with ID are warranted.
Introduction
Intellectual disability (ID) is a disability characterized by significant limitations, both in intellectual functioning and in adaptive behavior, that begins before adulthood [1]. In Hong Kong, individuals with ID are classified into mild, moderate, severe, or profound grades through clinical assessment before the age of 18. In January 2015, there were 71,000-101,000 people with ID in Hong Kong, approximately 1.0-1.4% of the population [2]. This figure is similar to the estimated prevalence rate for high-income countries (0.9%) [3].
Adults with ID have been associated with multiple health conditions and tend to have higher morbidity rates and a shorter life expectancy than the general population [4-6]. Nonetheless, the lifespans of adults with ID have increased over recent decades [7]. As people with ID often experience health problems associated with aging at earlier ages and at higher rates, researchers have stressed the importance of health promotion interventions for this population subgroup [8,9]. Increasing physical activity (PA), because of its proven association with favorable health outcomes (e.g., reduced risk of heart disease, hypertension, cancer, diabetes, and obesity), is a recommended intervention [10]. The World Health Organization (WHO) (2010) recommends that adults aged 18-64 should do at least 150 min of moderate-intensity PA throughout the week [11]. Meanwhile, most studies of adults with ID show they have very low PA levels and a sedentary lifestyle. A review has shown that only 9% of adults with ID met the PA guidelines based on PA data from objective and subjective measures [12], compared with 30% to 47% of the general population [13].
In addition to engaging in limited PA, adults with ID tend to have low cardiovascular fitness that starts at a young age and deteriorates with age [14]. Additionally, compared to the general population, they tend to perform poorly on physical fitness tests [15,16]. A longitudinal study of physical fitness (PF) showed a greater decline over 13 years for adults with ID than for those without ID: there were greater increases in body mass and percentage of body fat in males and females with ID, and a greater decline in cardiovascular fitness and sit-ups in the females [17].
On the other hand, individuals with ID who are capable of basic self-care but lack adequate daily living skills to live independently in the community may apply to live in group homes [18]. The group home is therefore an important setting for individuals with ID, providing 24-h care and support. However, it is a restrictive environment in which residents have to obtain permission to leave the home, which may reduce opportunities for PA and for maintaining PF. There are currently 37 group homes offering 2092 residential places for adults with mild and moderate ID in different districts of Hong Kong [18]. The majority of the group homes (78%, 1744 residents) are paired with sheltered workshops located nearby or in the same buildings. Sheltered workshops provide full-time employment for individuals with mild and moderate ID after their completion of education in special schools at the age of 15.
To our knowledge, there is a very limited number of studies (N = 2) on the PA of Hong Kong adults with ID, and no study has been conducted on both PA and PF in this special population in Hong Kong. Chan and Chow [19] examined the PA of 30 adults with mild to moderate ID from a group home in Hong Kong. The residents were asked to wear pedometers to record PA for 14 consecutive days. The results indicated that the group home residents' mean PA level on weekdays (7650 ± 2347 steps per day) was below the recommended guideline of the Hong Kong Medical Association (i.e., 8000 steps per day), and that they walked fewer steps on working days than on the non-working days of weekends. Another study found that 38 adults with ID working in a sheltered workshop had a high percentage of obesity and were classified as having a "low active" PA level (5000-7499 steps per day) [20]. However, studies with larger sample sizes and wider scope are needed to provide more information on PA and PF for this population. Therefore, the purpose of the present paper is to describe the PA levels, PF, and sedentary behavior of adults with ID living in group homes (including differences by sex, ID severity, nonoverweight/nonobese vs. overweight/obese status, and low vs. higher risk of central obesity). The secondary purpose is to determine how health-related PF components explain the variance in the PA levels and sedentary behavior of this sampled population.
Methods
Residents from four group homes in Hong Kong associated with sheltered workshops (a convenience sample representing 14% of all such group homes) were recruited for the study. These group homes received Hong Kong government subventions and were managed by non-government organizations. The mean number of residents in the four group homes was 67 adults with ID (standard deviation (SD) = 6.9). Each of the homes had limited indoor space available for physical activities, the smallest having a dining room and an activity room (total area: 135 m2) and the largest having a dining room, an activity room, and a dance room (total area: 438 m2). Inclusion criteria for study participants were: (i) adult aged 18-65; (ii) diagnosed with either mild or moderate ID (based on clinical assessment records); and (iii) able to walk without an assistive device. Those eligible were invited to participate, and informed consent was obtained from their parents or guardians. Approval for the study was provided by the University's Human Ethics Committee (FRG2/14-15/062) and the group home managing organizations. Data collection took place between October 2015 and May 2016.
Participant Physical Characteristics
Measures, which followed the American College of Sports Medicine (ACSM) testing guidelines, included body height (assessed by a portable stadiometer; seca gmbh & co. kg., Hamburg, Germany), body weight (TBF-410 Body Composition Analyzer; Tanita Corp., Tokyo, Japan), body mass index (BMI) derived from weight and height, and waist and hip circumferences [21].
Physical Activity Measures
Physical activity levels were assessed using accelerometry (wGT3X-BT Activity Monitor; Actigraph LLC, Pensacola, FL, USA), which has been used extensively in PA studies of adults with ID [22-28]. In the presence of a test administrator at the beginning of each measurement day, the participants put on an elastic belt holding the Actigraph activity monitor around their waists. Group home staff were instructed to remind them to wear the monitor continually during waking hours, with the exception of bath time and bedtime. Participants wore their monitor for at least five and at most six consecutive weekdays; the monitors were not worn at weekends. Accelerometer data were captured in 60-s epochs, and non-wear time was defined as ≥60 consecutive minutes of zero recording. To be included in the analyses, participants needed four days of accelerometer data with at least 10 h of recording each day [29]. Based on the Freedson et al. cut-offs, moderate-to-vigorous PA (MVPA) was determined at >1951 counts·min−1, while sedentary time was defined as <100 counts·min−1 [30]; any activity in between was classified as light-intensity PA (LPA). Data were summarized as the percent of daily time spent in light PA (LPA %), MVPA %, and sedentary behavior (SB %), as well as min·day−1 spent at each PA level.
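As an illustration of these scoring rules, the sketch below classifies a single day of per-minute counts. It is a simplified, hypothetical implementation (for instance, it ignores the short-interruption allowances used in some wear-time algorithms) rather than the exact software used in the study.

```python
import numpy as np

def score_day(counts):
    """Classify one day of per-minute counts using Freedson cut-points."""
    counts = np.asarray(counts, dtype=float)

    # Non-wear: runs of >= 60 consecutive minutes of zero counts.
    wear = np.ones(len(counts), dtype=bool)
    run_start = None
    for i, c in enumerate(np.append(counts, 1.0)):  # sentinel closes last run
        if c == 0 and run_start is None:
            run_start = i
        elif c != 0 and run_start is not None:
            if i - run_start >= 60:
                wear[run_start:i] = False
            run_start = None

    worn = counts[wear]
    wear_min = len(worn)
    if wear_min < 600:                 # a valid day needs >= 10 h of wear
        return None

    sb = int(np.sum(worn < 100))       # sedentary: < 100 counts/min
    mvpa = int(np.sum(worn > 1951))    # MVPA: > 1951 counts/min
    lpa = wear_min - sb - mvpa         # light PA: everything in between
    return {"wear_min": wear_min,
            "SB_%": 100 * sb / wear_min,
            "LPA_%": 100 * lpa / wear_min,
            "MVPA_%": 100 * mvpa / wear_min}
```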
Health-Related Physical Fitness Measures
Health-related components of PF include body composition, cardiovascular endurance, muscular strength and endurance, and flexibility; measures of health-related PF are closely linked with health promotion and disease prevention [21]. Percent body fat, as one indicator of body composition, was assessed by a body fat analyzer (TBF-410 Body Composition Analyzer; Tanita Corp., Tokyo, Japan). Cardiovascular endurance was assessed using the 6-min walk test (6MWT). The 6MWT is a field test of cardiovascular endurance and functional exercise capacity that measures the distance covered when walking on a flat surface (e.g., a 20-50 m course; American Thoracic Society (ATS) Statement 2002 [31]). It has been used with clinical populations [31] as well as adults with ID [16,32-34], and it has been shown to have acceptable validity and reliability for use with adults with ID [32,33,35,36]. Testing procedures followed the Nasuti et al. [33] protocol, in which each participant completed one trial, walking laps of a 25-m course for 6 min with regular verbal encouragement prompts and a 1:1 pacer. Most studies use a 30-m course for the 6MWT; however, we used 25 m because of the available space. We conducted a pilot study with 27 participants tested on 30-m and 25-m courses separately, seven days apart, and the result showed an acceptable intra-class reliability coefficient (R = 0.8) for the 25-m course. Muscular fitness was assessed using the arm curl test of upper body muscular endurance, which is part of a fitness test battery for Chinese adults with ID [37] and a test item of the Senior Fitness Test that has previously been shown to be valid and reliable with seniors [38,39]. Upper body muscular endurance is needed to perform common everyday physical activities that often become difficult in later years. Each participant performed one trial of the arm curl test, with a test administrator counting the number of complete elbow flexion and extension movements of the dominant arm in 30 s while moving an 8-lb dumbbell (males) or a 5-lb dumbbell (females) [38]. Lastly, flexibility, a component of health-related PF, is important for carrying out the physical activities of daily living [21]. The sit-and-reach test, used to assess the flexibility of trunk flexion, has previously been used in studies with people with disabilities [17,37,40,41]. The testing procedure involved two trials of the sit-and-reach test with bare feet placed against a sit-and-reach box, with the score marked at 23 cm at the level of the feet, based on ACSM testing guidelines [21].
Data Analysis
Data were analyzed using IBM SPSS Statistics for Windows, Version 24.0 (IBM Corp., Armonk, NY, USA). Using criteria for Asian adults, participants were categorized into BMI groups (nonoverweight/nonobese: BMI < 23; overweight/obese: BMI ≥ 23) [11] and into groups at low vs. higher risk of central obesity based on their waist circumference (low risk: men < 90 cm, women < 80 cm; higher risk: men ≥ 90 cm, women ≥ 80 cm) [42]. Independent t-tests were computed to determine differences in PA, sedentary behavior, and PF variables between males and females, mild vs. moderate ID, BMI groups, and low vs. higher central obesity risk groups. In view of conducting multiple statistical t-tests simultaneously, the alpha level was adjusted using the Bonferroni correction. Analysis of covariance was computed to determine differences in PA levels, sedentary behavior, and PF variables between adults with mild ID and moderate ID while controlling for age and sex. Linear multiple regression analyses with two-block entry were computed separately for each dependent variable of PA (LPA %, MVPA %) and sedentary behavior (SB %). The first block of predictor variables included demographic information (i.e., age, sex, ID type, BMI group), all of which (except age) were entered as dummy codes. The second block of predictor variables comprised the PF variables: percent body fat, 6MWT, arm curl test, and sit-and-reach test.
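A minimal sketch of this two-block entry, written with Python's pandas and statsmodels and using hypothetical column names and a hypothetical data file in place of the study's dataset, might look as follows.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("participants.csv")            # hypothetical file

# Block 1: demographics, with categorical predictors dummy-coded.
block1 = pd.get_dummies(df[["age", "sex", "id_type", "bmi_group"]],
                        drop_first=True).astype(float)
# Block 2: physical fitness variables.
block2 = df[["body_fat_pct", "six_mwt_m", "arm_curl", "sit_reach_cm"]]

y = df["mvpa_pct_log"]                          # log-transformed MVPA %

m1 = sm.OLS(y, sm.add_constant(block1)).fit()
m2 = sm.OLS(y, sm.add_constant(pd.concat([block1, block2], axis=1))).fit()

# R-squared change attributable to the fitness block.
print(f"Block 1 R2 = {m1.rsquared:.2f}, final R2 = {m2.rsquared:.2f}, "
      f"delta R2 = {m2.rsquared - m1.rsquared:.2f}")
print(m2.params)                                # coefficients, incl. 6MWT
```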
Results
Participants were 71 males and 43 females (36 mild ID, 78 moderate ID) with a mean age of 41.7 years (SD = 9.5, range = 18-64 years). The mean number of study participants from the four group homes was 28 (SD = 4.2). PA data from 24 individuals were excluded from the analyses (refusal to wear the accelerometer, N = 5; equipment failure, N = 5; <4 valid recording days, N = 14); hence, analyses for PA were based on 90 adults. Data analyses for the linear multiple regression were based on the 81 people with complete datasets (absent on the 6MWT testing day, N = 6; unable to straighten the legs for the sit-and-reach test, N = 3). Prior to the statistical tests, the PA and PF data were checked for normality, with MVPA % and MVPA time being two variables with non-normal distributions (MVPA %: skewness = 5.7, kurtosis = 28.0; MVPA time: skewness = 6.1, kurtosis = 42.0). Therefore, a natural log transformation of MVPA % and MVPA time was conducted.

Table 1 shows the means and standard deviations of the physical characteristics of the participants as well as the PA and PF data. Mean BMI was 24.2 (SD = 5.0) and 55% of the participants (males: 46%, females: 69%; mild ID: 50%; moderate ID: 57%) were overweight or obese (i.e., BMI ≥ 23). Mean waist circumference was 83.8 cm (SD = 11.8) for males and 85.8 cm (SD = 8.9) for females, and 49% (males: 56%, females: 37%; mild ID: 47%; moderate ID: 50%) were at higher risk of central obesity. Results for the anthropometric variables showed significant sex differences in the mean scores of height (p < 0.001), BMI (p = 0.002), hip circumference (p < 0.001), and percent body fat (p < 0.001) (see Table 1). Furthermore, there was a significant difference (p = 0.006) in height between adults with mild ID (mean = 1.59 m, SD = 0.12 m) and moderate ID (mean = 1.52 m, SD = 0.10 m).

For the accelerometer data, the average daily wearing time was 726.2 min (SD = 96.4 min) for the whole sample (see Table 1). Overall, participants spent the largest percentage of their time in sedentary behavior (mean = 67.3%, SD = 12.0%), followed by LPA (mean = 31.1%, SD = 10.8%) and MVPA (mean = 1.6%, SD = 3.4%). Over 80% of the participants (N = 74) had no vigorous-intensity PA (VPA). Those who did engage in VPA accumulated extremely minimal time (mean = 2 min, SD = 1.8) over the whole data collection period. There were no sex differences in either PA levels or sedentary behavior. For the PF variables, with the exception of percent body fat, there were no sex differences in the 6MWT, arm curl, or sit-and-reach tests. In addition, there were no significant differences in PA levels, sedentary behavior, or PF variables between nonoverweight/nonobese (BMI < 23) and overweight/obese (BMI ≥ 23) participants, or between those at low and higher risk of central obesity (based on waist circumference). Similarly, an analysis of covariance controlling for age and sex showed no mean differences in any PA, sedentary behavior, or PF variables between adults with mild ID and moderate ID (see Table 1).
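The normality screening and natural log transformation described above might look like the following sketch; the data values are illustrative, and the small offset guarding against log(0) is an assumption here, since the paper does not state how participants with zero MVPA were handled.

```python
import numpy as np
from scipy.stats import skew, kurtosis

mvpa_pct = np.array([0.5, 1.0, 1.2, 2.5, 0.3, 14.0, 0.8])  # illustrative

print(f"skewness = {skew(mvpa_pct):.1f}, "
      f"kurtosis = {kurtosis(mvpa_pct):.1f}")

# Natural log transform; the small constant is an assumed guard
# against log(0) for participants who recorded no MVPA at all.
mvpa_log = np.log(mvpa_pct + 0.01)
print(f"skewness after transform = {skew(mvpa_log):.1f}")
```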
The regression analyses showed that the variance explained (R2) in the final block for LPA %, MVPA %, and SB % was 0.20, 0.36, and 0.24, respectively (see Table 2). Moreover, the significant PF predictor variable associated with all PA levels and with sedentary behavior was the 6MWT, after controlling for demographic variables. Apart from the 6MWT, age was found to be inversely associated with MVPA %; in other words, the older the individual, the less daily time engaged in MVPA.
Discussion
This study focused on the PA and PF status of adults with ID residing in group homes. Obesity is associated with adverse health outcomes related to metabolic syndrome [42], and half of the participants were overweight or obese (based on BMI data). This is much higher than in the general population of Hong Kong (39%) [43] and is consistent with a Hong Kong study of 332 adults with ID [44]. This prevalence rate is close to the obesity rates of 30% to 50% reported for this special population in the USA [45,46]. Females had a higher BMI than males, consistent with Barnes's findings [23]. In addition to BMI, we used waist circumference as a measure of metabolic and cardiovascular risk. Our results showed that 49% of the participants were at risk, which is higher than the 32% of adults at risk in the general population of Hong Kong [47]. Previous findings have suggested that obesity rates for adults with ID living in restrictive environments such as group homes are higher than for those living in institutional or natural family settings [46], which may explain the higher percentages of overweight or obesity and of central obesity risk in this study.
We found no sex differences and no differences between ID severity groups in PA levels, which contradicts other studies that have shown males with ID to be more active than their female counterparts [48,49] and adults with mild ID to be more active than those with moderate ID [50,51]. We found no differences in PA levels and sedentary behavior between the nonoverweight/nonobese and overweight/obese groups. This contrasts with Barnes et al. [23], who found differences in time spent in MVPA by BMI status (BMI < 25 vs. BMI ≥ 25) in adults with ID. The Barnes study, however, included adults with ID living in diversified settings, while all our participants lived in group homes. Previous research has shown that living in common care settings and not having daytime PA opportunities were independent predictors of low PA [48]. Our participants were all residents of group homes with limited indoor activity space, and all attended sheltered workshops on weekdays (9:00 a.m. to 4:30 p.m.) where most of their work was done while sitting. Living in similar, restrictive environments may explain our finding of no sex or ID severity differences in PA levels, because all residents share similar daily lifestyle patterns.
Our study found that, on average, the participants spent little time in MVPA (10 min daily), lower than that reported for adults with ID in previous studies (from 13 to 36 min·day−1) [24,52,53], and over 80% of them had no VPA. Although of similar mean age (44 years old), participants in Oviedo's study did not reside in group homes, and that study used different cut-off criteria for MVPA [52]. The very low MVPA % in our study could possibly be due to most residents walking only a very short distance to the sheltered workshop, which was in the same building, or because measurements were taken only on weekdays. It was not known whether adults with ID would engage in more PA during weekends, particularly those spending weekends away from their group homes; we have no data on how active the participants were when away from the group homes on weekends. Nevertheless, averaging 10 min of MVPA per weekday suggests the participants fall far short of the WHO recommendation of accumulating at least 150 min of MVPA in a week.
The participants spent 67.3% of their awake time in sedentary behavior (495 out of 726 min·day−1); they are mostly sitting during work in the sheltered workshop, at meal times, and in leisure time after dinner. Overall, this amount was lower than the 522 min to 643 min assessed by objective measures in other studies of adults with ID [54]. Nonetheless, the proportion of time participants were sedentary while wearing accelerometers (67.3%) was within the range of 63.0% to 87.5% reported by a review paper on sedentary behavior in adults with ID [54]. In particular, this result was similar to the two studies (63-64%) that also adopted Freedson's cut-off point for sedentary behavior (<100 counts·min−1) [22,55]. When comparing time in sedentary behavior with USA National Health and Nutrition Examination Survey (NHANES) data involving adults without ID, participants in this study sample had a slightly higher sedentary time than those without ID (495 min vs. 479 min); however, the NHANES data had 14 h of accelerometer wear time, or 2 h more than the present study [56]. This high proportion of sedentary time is substantiated by anecdotal accounts from group home staff that the majority of residents were not interested in engaging in physical activity or exercise during their leisure time.
The health-related components of PF have a strong relationship with overall health, including a lower prevalence of chronic diseases [57]. The participants with ID performed much worse on the health-related PF components than their counterparts without ID did in previous studies [21,58]. Additionally, on the 6MWT (walking capacity), the participants scored lower than the adults with ID in Nordstrøm's study [22] (434 m vs. 481 m). This difference could perhaps be explained by the participants in our study being older (18-64 years vs. 16-45 years). Results for the arm curl test were similar to another Hong Kong study of 166 adults with ID [37] but were at the 25th percentile for older (65-69 years) Hong Kong people without ID [58]. The study participants are inflexible in trunk flexion, and their mean value is comparable to the lowest category ("needs improvement") for 40-49-year-old adults from the general population [21]. Poorer cardiovascular fitness, upper body muscular endurance, and trunk flexion flexibility may affect the participants' ability to perform certain physical activities of daily living.
In determining the PF variables contributing to the percentage of time spent at each PA level and in sedentary behavior, the 6MWT, a measure of cardiovascular fitness, was found to be the only significant predictor. Cardiovascular fitness has an inverse relationship with the risk of all-cause premature death [59,60], and higher levels of cardiovascular fitness are associated with higher levels of habitual physical activity in the general population [61]. The results from the multiple regression analyses were consistent with Williams' conclusion [61] of a direct positive relationship between the 6MWT (cardiovascular fitness) and more time spent in habitual daily PA. Walking has been shown to be the most common PA engaged in across all levels of ID [23,48]. However, adults with ID do not habitually walk fast or for long periods of time [48]. Encouraging and providing support for adults with ID to walk more has potential as an intervention strategy. A multi-component weight loss intervention study with adults with ID found that providing daily step targets and pedometers significantly increased LPA and decreased sedentary behavior [54].
In view of our findings of low PA and low fitness levels in this population subgroup, intervention studies can set goals to increase both PA and PF for adults with ID. Indeed, interventions targeting PA and PF have found significant improvements in PA and in fitness outcomes such as aerobic fitness and muscular strength [62,63] in adults with ID. Considering that the study participants were residents in a group home setting, where they spend most of the daytime sitting, interventions might target lower intensity PA, for which adults with ID seem to have a higher preference than MVPA [4]. Although the WHO's PA recommendation focuses on 150 min of at least moderate-intensity PA per week, a recent meta-analytical study concluded that LPA could improve adult cardio-metabolic health and reduce mortality risk [64]. Emerging evidence suggests that LPA can convey health benefits and that starting to participate in PA of lower than moderate intensity could yield positive health benefits for inactive individuals [65,66]. Nonetheless, it is still not known how well adults with ID may adhere to either low- or moderate-intensity PA programs. Further study is needed to determine the efficacy of interventions at different intensity levels for this subgroup.
A strength of this study is its assessment of PA using accelerometry; nonetheless, it has some limitations. Participants were a convenience sample from four group home locations in Hong Kong. PA data were limited to weekdays, and about half the participants did not stay at the group homes during weekends, meaning their PA could have varied on those days. The Freedson cut-points for PA levels were used in the study, as in a previous study of adults with ID [23]; however, there is no consensus on accelerometer cut-off criteria. During testing, a few younger participants (e.g., around 20 years of age) inquired about running rather than walking for the 6MWT, so future studies involving younger adults might consider a field test requiring higher intensity (e.g., the shuttle run test) [54] for assessing cardiovascular fitness. The implications of this study are that (i) administrators and practitioners working in care settings for people with ID should be made aware of low PA and PF and employ effective strategies to address them, and (ii) researchers should conduct future studies to further investigate the activity patterns of adults with ID during weekends and to determine whether activity patterns differ between those living in group homes of different physical sizes.
Conclusions
This study provides PA and fitness profiles of a sample of adults with ID residing in group homes. It shows that adults with ID are extremely sedentary in their homes during weekdays and that they have lower fitness levels compared with previous studies of adults without disabilities. Low PA and fitness levels could be attributed to their lifestyle, including a seated job at a sheltered workshop, a relatively restrictive living environment in a group home, and the close proximity of the workplace to the group home. Future PA interventions should target effective strategies, such as PA promotion during breaks at work and leisure time after work, that can increase time in LPA and MVPA and decrease time in sedentary behavior for adults with ID. Furthermore, future studies should determine whether intervention programs targeting increased physical fitness in adults with ID can also improve their habitual PA levels.
"year": 2018,
"sha1": "ac6eb31a82114753f46d67b8bc130061bf1d526c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/15/7/1370/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac6eb31a82114753f46d67b8bc130061bf1d526c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Model-based Verification: Guidelines for Generating Expected Properties
Introduction
Model-Based Verification (MBV) is a systematic approach to finding defects (errors) in software requirements, designs, or code [Gluch 98]. The approach judiciously incorporates mathematical formalism, in the form of models, to provide a disciplined and logical analysis practice, rather than a "proof of correctness" strategy. MBV involves creating essential models of system behavior and analyzing these models against formal representations of expected properties.
The artifacts and the key processes used in Model-Based Verification are shown in Figure 1. Model building and analysis are the core parts of Model-Based Verification practices. These two activities are performed using an iterative and incremental approach, where a small amount of modeling is followed by a small amount of analysis. A parallel compile activity gathers detailed information on errors and potential corrective actions.
An essential model is a simplified formal representation that captures the essence of a system, rather than providing an exhaustive, detailed description of it. Through the selection of only critical (important or risky) parts of the system and appropriately abstracted perspectives, a reviewer using model-based techniques can focus the analysis on the critical and technically difficult aspects of the system. Driven by the discipline and rigor required in the creation of a formal model, simply building the model, in and of itself, uncovers errors.
Once the formal model is built, it can be analyzed (checked) using automated model-checking tools. Within this analysis, the user identifies potential defects both while formulating claims about the system's expected behaviors and while formally analyzing the model using automated model-checking tools. Model checking has been shown to uncover the especially difficult-to-identify errors: the kind of errors that result from the complexity associated with multiple interacting and interdependent components [Clarke 96]. These include embedded as well as highly distributed applications.
A variety of different formal modeling and analysis techniques can be employed within Model-Based Verification (e.g., [Clarke 96]). The choices are based upon the type of system being analyzed and the technological foundation of the critical aspects of that system. This decision on the technique(s) involves an engineering trade-off among the technical perspective, formalism, level of abstraction, and scope of the modeling effort.
The specific techniques and engineering practices of applying Model-Based Verification to software verification have yet to be fully explored and documented. A number of barriers to the adoption of Model-Based Verification have been identified, including the lack of good tool support, expertise in organizations, good training materials, and process support for formal modeling and analysis.
In order to address some of these issues, the Software Engineering Institute (SEI) has created a process framework for Model-Based Verification practice. This process framework identifies a number of key tasks and artifacts. Additionally, the SEI is working on a series of technical notes that can be used by Model-Based Verification practitioners. Each technical note is focused on a particular Model-Based Verification task, providing guidelines and techniques for one aspect of the Model-Based Verification practice. Currently, the technical notes that are planned address abstraction in building models, generating expected properties, generating formal claims, and interpreting the results of analysis. This technical note addresses the search and definition processes (generation process) for expected properties. Expected properties are natural language statements about the characteristics of a system's behavior, characteristics that are consistent with user expectations. The generation activity involves the participation of domain experts as well as software engineers and requires a systematic approach not unlike requirements elicitation techniques. Once generated, expected properties are expressed as formal claims. These formal claims are compared against a model of the system using model-checking tools.
Characterizing Expected Properties
Building essential models and analyzing models against expected properties are fundamental practices in Model-Based Verification. The automated analysis process of model checking is shown in Figure 2. The results of the analysis are a confirmation of the correctness of the properties or a statement that the properties are not true for the model. In many cases when a property is not true, a counterexample showing a violation of the property is included in the output. Model-checking tools (e.g., the Symbolic Model Verifier [Clarke 95, McMillan 92]) leverage the formal aspects of models to automatically analyze them, determining whether or not specific system properties, expected properties, are valid for the model.
Figure 2: Model Checking
Expected properties are natural language statements about the behavior of a system. In model checking, models can be thought of as formal representations of behavior or structure, and expected properties as the characteristics of that behavior and structure. (We have focused on behavioral models; models that portray the structure of a system can be checked for the integrity of their structure as well as for the behavioral implications of the model, e.g., Alloy [Jackson 00] and NitPick/Ladybug [Jackson 96a, 96b].) Expected properties are complementary to models of a system and often describe properties that reflect the requirements but are not explicit in a model.
As an example of the distinction between models and expected properties, consider a system that involves the sharing of resources among different functional components. A requirement may be that only one of these components at a time should access a given resource. A design approach may be to use an algorithm that defines how processes should coordinate their access to a shared resource. In a Model-Based Verification analysis of the design, a model that reflects this algorithm may be built. The model portrays the behavior that is dictated by the algorithm in some formal modeling representation. But nowhere in the model is there an explicit statement that (1) only one process should have access to a shared resource at a time, or that (2) if a process requests access, it will eventually get access to the shared resource. Nevertheless, mutual exclusion and fairness would be expected properties of such a system.
Through model checking, the model can be shown to demonstrate these properties in its portrayal of the system's behavior.
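To make this concrete, the following sketch checks the mutual-exclusion property against a toy two-process, lock-based model by brute-force exploration of its reachable states. This is an illustrative sketch only, not the algorithm of any particular tool: symbolic model checkers such as the Symbolic Model Verifier handle far larger state spaces and can also verify liveness properties such as fairness, which plain reachability analysis cannot.

```python
# A toy two-process mutual-exclusion model checked by brute-force state
# exploration. Illustrative only; the protocol and state encoding are
# assumptions made for this sketch.
IDLE, WAITING, CRITICAL = "idle", "waiting", "critical"

def replace_at(procs, i, value):
    return tuple(value if j == i else p for j, p in enumerate(procs))

def successors(state):
    """Yield all successor states under a simple lock-based protocol."""
    procs, lock = state
    for i, p in enumerate(procs):
        if p == IDLE:                        # a process may request access
            yield (replace_at(procs, i, WAITING), lock)
        elif p == WAITING and lock is None:  # acquire the free lock
            yield (replace_at(procs, i, CRITICAL), i)
        elif p == CRITICAL:                  # release the lock, go idle
            yield (replace_at(procs, i, IDLE), None)

def check_mutual_exclusion(initial):
    """Explore all reachable states, reporting any state that violates
    the expected property 'no two processes are critical at once'."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        procs, _ = state
        if sum(p == CRITICAL for p in procs) > 1:
            return False, state              # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

ok, witness = check_mutual_exclusion(((IDLE, IDLE), None))
print("mutual exclusion holds" if ok else f"violated in {witness}")
```

Running the sketch confirms that no reachable state of this particular protocol places both processes in their critical sections at once; the fairness property, being a statement about infinite executions, would instead be expressed in a temporal logic and checked by a dedicated tool.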
Characterizing expected properties is not simple. Expected properties complement models in that expected properties portray system behavior in a different, often more general or abstract way. The next subsections discuss the special nature of expected properties and how they complement models. This characterization will help identify expected properties for a given essential model.
Implicit vs. Explicit Statements
The implications of an explicitly stated requirement are good candidates to become expected properties. These implied attributes of a system are often not recorded in any document. Some of these attributes involve general engineering common sense (the system should not crash every five minutes). Others are related to the use of standards (Internet Inter-Object Request Broker Protocol of the Object Management Group) or underlying technology (Message Oriented Middleware), which are omitted just to make the specification readable.
Finally, there are domain-specific properties that are deemed not necessary for inclusion, as they are well known by all developers.For example, in a transactional information system, transactions must be fully committed or fully rolled back; a state in which part of the transaction is committed and part rolled back is not acceptable.
As another example, consider the functional specification for caching information among processes [Clarke 95]. The requirements specification may state that when a process accesses the same cache entry as other processes, it will always indicate to the other processes when it modifies its local copy of that entry. The specification may also include a statement that all of the other processes sharing that line must invalidate their copies when another process has signaled its modification. In addition, other statements about invalidating, and so forth, may be included in the requirements specification. For a complex distributed system, these requirements may exist in very different parts of a large document or in multiple documents. Collectively, these requirements imply that "there is no case where a process thinks it has an unmodified copy when one or more other processes have modified the cache line." But a statement reflecting this condition (which can be considered an implied consequence) may not appear anywhere in the documentation. It can be useful to rely on personnel experience and engineering expertise to identify properties that may not be explicitly stated in the requirements or other specifications, but are important properties to maintain.
Another dimension to this issue is that the expected-property-generation process can uncover requirements that are not stated appropriately, are implied, or are sometimes omitted, but nevertheless should be included in a requirements document. For example, statements about and responses to exception conditions during operation are often implied rather than explicitly stated in a requirements specification. In a requirements specification, there may be a statement that a data update process shall read the integer value between 0 and 10, inclusive, and modify it in some way. The value is sent by another process, but nowhere is it stated what to do if the value is outside the range or even not an integer. Similarly, in the mutual-exclusion example described earlier, the requirements document might explicitly state that "no more than one process should access a resource at one time." But a statement of fairness (e.g., that both processes must be able to gain access to resources when requested) might be omitted. This omission, as well as the omission of the exception cases in the previous example, should be viewed as deficiencies in a requirements specification. (In current practice, these implied requirements are often not included in a requirements document.)
Application Domain Considerations
A particularly rich source of expected properties is application domain knowledge. These properties are founded upon general characteristics of systems that are common across the operational environment for the broad application domain of the system.
An interesting application domain for MBV techniques is the concurrent system. Concurrent systems have multiple processes running in parallel or interleaved. Each process has a particular behavior that may or may not conflict with the expected properties of the system as a whole. The following is a list of typical (and often implicit) global expected properties of concurrent systems.
• Fairness -All expected functions can execute or all processes have a chance to execute.
• Absence of deadlock - Deadlock is the condition where no progress is being made; for example, where processes are waiting for resources that are held by other non-executing processes and, as a result, none of the processes can execute (a small illustration of checking this follows the list).
• Absence of livelock - Livelock involves the condition where execution is occurring but no progress is seen; for example, a system does something but is not interacting with the environment.
• Absence of starvation - Starvation is the condition where one process dominates, not allowing any others to progress. The others are starved out.
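As a hedged illustration, the reachability sketch shown earlier extends naturally to checking the absence of deadlock: flag any reachable state with no outgoing transitions. Livelock and starvation, by contrast, are liveness properties that require temporal-logic checking under fairness assumptions, which dedicated model checkers provide.

```python
def check_deadlock_freedom(initial):
    """Report any reachable state with no outgoing transitions, reusing
    successors(), IDLE, etc. from the mutual-exclusion sketch above."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        nexts = list(successors(state))
        if not nexts:
            return False, state             # a deadlocked state
        for nxt in nexts:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

ok, witness = check_deadlock_freedom(((IDLE, IDLE), None))
print("deadlock-free" if ok else f"deadlock in {witness}")
```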
Generic vs. Detailed Statements
As a software project evolves, the development team generates increasingly concrete specifications through successive refinements (or "realizations" in Object Management Group [OMG] terminology) at different levels of abstraction [OMG 01]. The user's needs are refined into requirements; the requirements are refined into designs and so on. Model-Based Verification is used at different project phases to verify different artifacts. In each development phase, in addition to other sources such as requirements, users, customers, and so forth, expected properties should be generated from the source on which the modeled artifact is based. The design is used as the source of expected properties when verifying the code; the requirement specification is used to verify the design; and input from the users is used to verify the requirements.
In addition to the antecedent documents, general statements within a single specification can be used for generating expected properties. A general statement on which one or more detailed "refined" statements are based is not necessarily found in a predecessor document.
Often, there are multiple levels of abstraction coexisting in the same document. For example, the very generic statement "no unauthorized user will be granted access to the system" found in the introduction of a requirements specification can be expanded into one or more sections of that specification with detailed descriptions of authentication protocols and standards.
Typically, the detailed statements can be used as a source for models and the generic statements as a source for expected properties.
Whole vs. Part Statements
Modeling an entire system in detail is extremely difficult and generally impractical. For this reason, systems are normally decomposed into parts and each part is modeled separately. In modeling for a requirements specification, for example, different models are built for different parts of the system; each individual model represents the requirements belonging to that particular piece of the puzzle. Thereafter, we can argue that the models represent the individual parts and not the whole of the subject artifact.
If individual attributes (from parts) are the source for the models, we can use global attributes (for the whole) as well as implied characteristics of the individual parts as sources for expected properties. This approach verifies that the individual parts contribute to reaching the global objectives. Returning to the security requirements specification example, we could verify that the individual security requirements of the different subsystems comply with the global security requirements for the system.
This strategy is particularly useful for Component-Based Software Engineering (CBSE) [Wallnau 00]. Component-based systems are ensembles of potentially numerous individual components. As the number of components increases, so does the number of possible interactions, which makes it very difficult to infer the properties of the ensemble from the properties of its components. MBV can be used to model how each individual component contributes to the behavior of the ensemble, thereby helping to verify that the expected properties of the ensemble hold.
Generating Expected Properties
The generation of expected properties is part of an overall verification and validation strategy for the product. Expected properties focus model-based analyses on the critical system considerations: those aspects that are of special interest to the client, user, or other stakeholders, or those aspects of the system that are risky (e.g., safety or special reliability or other quality requirements).
Focusing the Generation Process
In the context of Model-Based Verification, the identification of expected properties should be coordinated with the generation of the statement of scope, formalism, and perspective [Gluch 01].
• Scope: the portion of a system that is to be modeled and analyzed. The critical (important and/or risky) aspects of the system and its development, including both programmatic and technical issues, are used to define the scope.
• Formalism: the modeling approaches and tools to be used. Modeling techniques that can be employed in Model-Based Verification include state machine representations, process algebras, and rate monotonic modeling.
• Perspective: the context for representing the behavior and characteristics of a system. A perspective could be the user's view of a system or it could be the representation of a specific feature, function, or capability of a system.
The perspective and scope focus the analysis efforts on the important or risky elements of the system. Expected properties specifically identify the behaviors associated with those aspects and parts of the system highlighted in the perspective and scope.
Scope and perspective can be used to make the expected-properties-generation process iterative. Initially, expected properties for a small component (scope) related to a particular perspective can be created. Later, more components can be added or more perspectives can be considered. This incremental and iterative approach can be followed until a satisfactory coverage of the system is achieved.
Partitioning of Expected Properties
To further structure expected properties and their elicitation and capture process, the system can first be examined with respect to 1) what is expected or desired of its behavior, and 2) what is not desired in its behavior.
1. Desired (what should happen): Examine the behavior scenarios of the system and identify how it should be working and what must happen for the system to work properly, capturing specific statements of this type of behavior.

2. Undesired (what should not happen): Examine the behavior scenarios of the system and explicitly identify what things should not happen and what could happen to result in undesirable system behavior.
Capturing Expected Properties
The process of capturing expected properties resembles that of capturing requirements [Siddiqui 96]. Part of the elicitation flow for generating expected properties is shown in Figure 3. Expected properties can be collected directly from stakeholders employing techniques similar to those of requirements elicitation. Alternatively, expected properties can be extracted directly from requirements documents and related specifications. In addition to a similar pattern, both processes involve multiple stakeholders, can be facilitated by team processes (consensus-based practices), and are directed towards getting the key aspects of the system identified. As with requirements elicitation and capture, it is important with expected properties to ensure that a variety of views and stakeholders are represented in the process, regardless of the specific method used. The following list enumerates different sources for expected properties.
1. Requirements documents: If the project has followed sound engineering practices, requirements should be the most reliable source of expected properties, as they are the consensus of what the system is supposed to do. One important value of generating expected properties from requirements specifications is that it can uncover requirements that are not stated appropriately, are implied, or are omitted.
2. Users of the system, customers, and operations and maintenance personnel: An excellent source of information about what the system is supposed to do is the collective knowledge of stakeholders including users, customers, and operations and maintenance personnel. Members of these groups can provide first-hand insight into behavioral aspects of systems. If sound engineering practices were used in the original development effort, all of these groups would have had a role in defining the requirements documentation.
3. Domain experts and quality assurance personnel:
We have grouped in this category sources that are not specific to the system. There are expected properties common to individual application domains. For example, every entry in the assets column of a balance sheet should be followed by an entry in the liabilities column. Other expected properties represent quality attributes common to any software system. For example, all internal errors should produce some defined and identifiable external manifestation.
4. Developers and implementation technology experts:
In theory, technical details of the implementation are not needed to define the expected properties of the system. In practice, however, an understanding of the internals of a potential or actual implementation of the system can be useful in generating effective expected properties.
5. MBV experts:
The activity of translating specifications into models can result in errors.
Even if these errors do not affect the quality of the system, they do hamper the capability of models to faithfully reflect the behavior of the system. There are expected properties that address the correctness of the models, in contrast to the correctness of the system. For example, every state in a state machine should be reachable under some sequence of valid input. If a state is not reachable, it is cluttering the model without reflecting any feasible system behavior. MBV experts are software engineers who are trained and experienced in applying Model-Based Verification techniques in a variety of application domains. They can help to craft expected properties that are more readily expressed as formal statements, thereby facilitating and reducing errors in the translation of expected properties into claims.
6. Standards:
There are expected properties induced by the standards with which the system must comply. For example, if the system uses Common Object Request Broker Architecture (CORBA) Messaging v. 2.4.2 with the order policy set to ORDER_TEMPORAL, the messages from the client should be guaranteed to be processed in the same order in which they were sent.
Expected Properties and Claims
To be used in automated model checking, the natural language statements of expected properties must be expressed as formal statements: claims. The various translations among requirements and claims are shown in Figure 4. There are three basic paths to formal claims.
In most cases natural language statements of expected properties, based primarily upon domain expertise, are generated first. Software engineers who are experts in the relevant formal language translate these statements into formal expressions for model checking. (For more information on the issues associated with the translation of expected properties into formal representations, see the technical note by Comella-Dorda and associates [Comella-Dorda 01].) This translation is often not a one-to-one mapping of statements. The richness and ambiguities of natural language, in contrast to the precision and constraints of a formal style, often make the translation difficult and error prone. Consequently, the translation process should also involve domain experts and others who helped to develop the expected property statements. Their involvement can be either in direct support of the process or as active reviewers. In an active reviewer role, these individuals interact with software engineers to establish a shared interpretation of the formal statements, one where all parties agree on the consistency of the intent between the natural language statements and their formal expression(s). This collective involvement will help to ensure that subtle differences between the languages do not result in formal statements that misrepresent the expected property statement.
Occasionally, it can be valuable to restate the expected property in a form that is more amenable to direct translation. This approach can be used as a method to clarify the meaning for domain experts who are not conversant with the formal language. This may be a semiformal structured expression. An intermediate representation can facilitate a review process. It can also enhance the understanding of the domain issues by software engineers involved in the process, and of the formal expressions by other domain participants in the process. For example, consider an expected property: "All alarms will be displayed at the system console." This may be broken into an intermediate statement: "If an alarm condition is detected then it must be displayed." This is readily translated into the Computation Tree Logic (CTL) expression AG(alarm -> AF display). In addition, it exposes the need to confirm that, in the model, the alarm can eventually occur (i.e., EF alarm).
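To make the check of such a claim concrete, the following sketch evaluates AG(alarm -> AF display) over a toy model using the textbook fixpoint semantics of AF and AG. The four-state Kripke structure, its labeling, and the hand-rolled routines are illustrative assumptions only; a real project would use a model checker such as SMV or SPIN rather than this code.

```python
# Minimal explicit-state check of the CTL claim AG(alarm -> AF display)
# on a hypothetical four-state Kripke structure (illustration only).

states = {0, 1, 2, 3}
succ = {0: {0, 1}, 1: {2}, 2: {3}, 3: {0}}      # total transition relation
labels = {0: set(), 1: {"alarm"}, 2: set(), 3: {"display"}}

def sat_AF(goal):
    """Least fixpoint: states from which every path eventually reaches `goal`."""
    sat = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states - sat:
            if succ[s] and succ[s] <= sat:       # all successors already satisfy AF goal
                sat.add(s)
                changed = True
    return sat

def sat_AG(good):
    """Greatest fixpoint: states where `good` holds along every path forever."""
    sat = set(good)
    changed = True
    while changed:
        changed = False
        for s in set(sat):
            if not succ[s] <= sat:               # some successor escapes the good set
                sat.remove(s)
                changed = True
    return sat

af_display = sat_AF({s for s in states if "display" in labels[s]})
# "alarm -> AF display" holds in s iff s has no alarm, or s satisfies AF display.
implication = {s for s in states if "alarm" not in labels[s] or s in af_display}
print("AG(alarm -> AF display) holds in initial state:", 0 in sat_AG(implication))
```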
The most direct path is where formal claims are developed from the sources used to generate expected properties (e.g., a system description or requirements document). In this case software engineers, expert in the relevant formal language (e.g., CTL or Linear Temporal Logic [LTL]), formulate the claims, often in cooperation with domain experts. This approach can work if the software engineer generating the claims is also an expert in the domain. As is the case with the other paths, it is important that other individuals be involved, either directly or as reviewers of the natural language statements and their formal representations, to help to ensure the correctness of the claims.
Figure 4: Expected Property Transformations
This technical note addresses the creation of the natural language statements (expected properties). Since expected properties are natural language expressions, they can have arbitrary structures and semantics. The transformation of expected properties into claims is not covered in this report. However, we do include an overview of the relation between expected properties and claims since understanding this relation is important in the effective analysis of formal models. For additional information on this relationship and the generation of formal claims consult the technical note: Model-Based Verification: Claim Creation Guidelines [Comella-Dorda 01].
The Relationships Between Expected Properties and Claims
General statements of what is expected of the system are often not readily translatable into meaningful claims. Any kind of expression that defines what is good (or bad) for a system can be considered an expected property. For example, the expression "the system works properly under any condition" is a valid expected property; unfortunately it cannot be immediately formalized into a realistic claim to be verified by a model-checking tool. Only by defining specifically, in the context of the model, what the correct operations are under each specific condition can a useable expected property and resultant claim be produced.
Other variations that can be realized between expected property statements and claims involve the practice of iteratively exploring expected properties by using the flexibility in the form of the claims. For example, in analyzing a model it is often useful to start with a claim that is weaker than the expected property and create additional stronger claims until reaching the semantic level of representation of the expected property. For example, suppose that an expected property is "The value of variable E must always decrease." In checking a model we can start with the claim

EF(E.decreasing) = The value of variable E must be able to decrease at least once.

After verifying the weak claim, we can strengthen it to read

AG AF(E.decreasing) = The value of variable E must be able to decrease multiple times (an infinite number of times).

If this is confirmed as true, we will reach the semantic level of the original expected property:

AG(E.decreasing) = The value of variable E must always decrease.
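For clarity, the strength ordering of these three claims can be stated as a chain of implications. This is a standard consequence of CTL semantics over models with a total transition relation, added here as a reading aid rather than taken from the technical note:

```latex
% For any model M with a total transition relation and any state s:
\[
  M,s \models AG(E.decreasing)
  \;\Longrightarrow\;
  M,s \models AG\,AF(E.decreasing)
  \;\Longrightarrow\;
  M,s \models EF(E.decreasing)
\]
```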
Classifying Expected Properties and Claims
When trying to classify expected properties, it is useful to focus on the range in which the property (and the claim) applies. According to this classification, we identify the following types of expected properties.
• invariants (global and local) - conditions that remain unchanged throughout (globally) or conditions that remain unchanged in a particular portion or under specific circumstances (locally)
• assertions - conditions that apply at a single point in an execution (see Footnote 4)
• pre- and post-conditions - condition pairs that apply before and after something occurs in a system. These can be viewed as assertion pairs.
• constraints - statements that apply across all changes or a subset of changes that occur in the system

This classification and the process of generating expected properties are influenced, in part, by the realization that the natural language statements must be translated into formal representations: claims. Consequently, while the generation process should rely on the guidelines outlined earlier, using the four classifications defined above during the identification process can help to tailor the expression of expected properties into forms that more readily translate into formal expressions. Invariants, assertions, pre- and post-conditions, and constraints are propositions, in that they can be evaluated as either true or false when applied to a model. Thinking about the system's behavioral characteristics in the context of these propositional forms can help in phrasing natural language statements that are more amenable to direct translation into a formal representation. For example, in a traffic light control system that encompasses a large metropolitan area, it is clear that the system should control all intersections so that only one traffic flow direction is permitted through that intersection at a time. In thinking about what should not happen with respect to this property, one can identify an invariant stating that in any intersection in the system: "It will never be the case that the north-south traffic light and the east-west traffic light are both simultaneously green." This is readily expressed as a formal claim in CTL: AG !(N_S = green & E_W = green). Employing a usage-based approach complemented by considerations of the ultimate need to formalize the expected properties can facilitate the model-checking process.
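A common way to check such an invariant without full CTL machinery is a breadth-first sweep of the reachable state space that tests the predicate in every state reached. The sketch below does this for a deliberately simplified two-light intersection; the state encoding and transition function are illustrative assumptions, not part of the guidelines above.

```python
from collections import deque

# Hypothetical two-light intersection: a state is (north_south, east_west).
def next_states(state):
    """Toy transition function: one light cycles only while the other is red."""
    ns, ew = state
    cycle = {"green": "yellow", "yellow": "red", "red": "green"}
    out = set()
    if ew == "red":
        out.add((cycle[ns], ew))      # north-south light advances
    if ns == "red":
        out.add((ns, cycle[ew]))      # east-west light advances
    return out or {state}             # stay put if no move is enabled

def invariant(state):
    ns, ew = state
    return not (ns == "green" and ew == "green")

# BFS over the reachable state space, checking the invariant everywhere.
initial = ("red", "red")
seen, queue = {initial}, deque([initial])
while queue:
    s = queue.popleft()
    assert invariant(s), f"invariant violated in {s}"
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)
print(f"invariant holds in all {len(seen)} reachable states")
```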
Invariants
Global invariants remain unchanged throughout all possible behaviors of the system. These forms can be used to express characteristics of the system that remain constant throughout all executions [Bensalem 96]. Often these are not explicitly stated in a requirements specification. For example, in a flight control system, the forward loop gain may take on many values. These may depend on the airspeed or some other factor, but whatever the value, it is never negative or equal to zero. Thus, an invariant of the system is that the "forward control loop gain is always greater than zero." Another example, in a complex system consisting of multiple operating modes, might be that it is always possible for the system to return to the idle mode.
As another example, consider a process control system and the statement that while the system is in mixing mode, the secondary and primary flow control systems are operational. The entire statement is true throughout all executions. It is a global invariant, but it can be viewed as consisting of a local invariant with its restricting condition. Local invariants are statements that apply only at certain times, over a range of states or executions of the system.
In the example, the local invariant is the statement "the secondary and primary flow control systems are operational," which is restricted by the condition that "the system is in mixing mode." One approach for identifying invariants is to look for distinct conditions or collective (related) conditions among components that must be true no matter what happens. This may involve looking at system operational scenarios (e.g., detailed use cases in O-O representations) and questioning what characteristics should remain unchanged. This questioning is embedded in the two-fold usage-based procedures of iteratively considering what is desired and what is undesired in the behavior.
Assertions
Assertions are statements that apply at some specific point (instant) in the execution. For example, when the airspeed is 100 knots and the altitude is 1000 feet, the forward loop gain variable is 0.8. Assertions are widely used in programming languages to check the values of variables during the execution [Drabent 98, Rosenblum 95]. As an example, consider the simple assertion "assert(speed >= 0)" inserted in a flight navigation program. It checks that the variable "speed" is non-negative at that point of execution.
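In code, such point assertions are written directly at the execution point they constrain. The minimal sketch below is a hypothetical illustration in the spirit of the flight navigation example; the function, its parameters, and the gain formula are invented for demonstration only.

```python
def update_navigation(speed_knots: float, altitude_ft: float) -> float:
    """Hypothetical navigation step illustrating point assertions."""
    # Assertion: at this point of execution, speed must be non-negative.
    assert speed_knots >= 0, "speed must be non-negative"
    if speed_knots == 100 and altitude_ft == 1000:
        gain = 0.8                     # the specific operating point cited in the text
    else:
        gain = max(0.1, min(2.0, 80.0 / max(speed_knots, 1.0)))
    # The global invariant "gain is always greater than zero" checked as an assertion.
    assert gain > 0, "forward loop gain must stay positive"
    return gain

print(update_navigation(100.0, 1000.0))   # 0.8
```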
Pre- and Post-Conditions
Pre- and post-conditions are the best-known kind of assertions. They are used to assess the impact of some computational or execution element in a system. Their semantics are the following: "If the pre-condition is true before the execution, then the post-condition must be true after the execution. If the pre-condition is not true, the result is undefined." As a simple example, consider driving. A pre-condition of leaving a parking space would be that the engine is running. A post-condition for leaving a parking space would be that the car is not parked anymore.
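Pre- and post-condition pairs can be made executable with a small wrapper. The sketch below is a generic illustration of this idea; the `contract` decorator and the driving example are ad hoc constructions, not a standard library API.

```python
import functools

def contract(requires, ensures):
    """Wrap a function with an executable pre-/post-condition pair."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            assert requires(*args, **kwargs), "pre-condition violated"
            result = fn(*args, **kwargs)
            assert ensures(result, *args, **kwargs), "post-condition violated"
            return result
        return wrapper
    return decorate

# Pre-condition: engine running and car parked; post-condition: no longer parked.
@contract(requires=lambda car: car["engine_running"] and car["parked"],
          ensures=lambda result, car: not result["parked"])
def leave_parking_space(car):
    return {**car, "parked": False}

car = {"engine_running": True, "parked": True}
print(leave_parking_space(car))   # {'engine_running': True, 'parked': False}
```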
Constraints
Constraints are similar to invariants but they operate over the transitions of the system rather than over the states. They describe restrictions on changes that may occur in the system. For example, a constraint for a telecommunications system is that when the number of clients increases, the number of active server connections must increase. As with invariants, constraints can be applied globally or locally over executions of the system.
One approach for identifying constraints is to look for limits on the variations that are possible for critical system parameters throughout the system's execution. As an example, consider a chemical-process-control-system specification. The specification requires that while the system is in the mixing mode, a key feedback parameter affecting process concentrations must never decrease. A constraint over all mixing-mode transitions would be that the feedback parameter must always remain constant or increase.
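Because a constraint speaks about changes rather than states, it can be checked by a monitor that inspects consecutive pairs of observations in an execution trace. A minimal sketch, assuming a trace is simply a list of (time, value) samples of the feedback parameter:

```python
def check_monotone_constraint(trace, in_mixing_mode):
    """Check: over all mixing-mode transitions, the feedback parameter never decreases."""
    for (t0, value0), (t1, value1) in zip(trace, trace[1:]):
        # Only transitions taken while in mixing mode are constrained.
        if in_mixing_mode(t0) and in_mixing_mode(t1) and value1 < value0:
            return False, (t0, t1)
    return True, None

# Hypothetical trace of (time, feedback_parameter) samples.
trace = [(0, 1.0), (1, 1.2), (2, 1.2), (3, 1.5)]
ok, violation = check_monotone_constraint(trace, in_mixing_mode=lambda t: True)
print("constraint holds:", ok)
```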
Conclusions
Defining the characteristics of behavior in natural language is a difficult task. Often, no single individual has a clear idea of all of the expected properties of the system being developed. While requirements specifications are one of the most prominent sources of expected properties, important system properties can be poorly defined in, or completely missing from, a requirements specification. Consequently, effective expected-property-generation processes require

• collaboration among all of the stakeholders involved
• recognition of various levels of refinement and scope throughout a development effort
• considerations of what might not be explicitly documented
• consideration of what is generally implied by the application environment or a specific user need

The inherent challenge associated with identifying expected properties is further complicated by the need to eventually represent them as formal statements for an automated model-checking tool. Consideration of this need by thinking about formal expression categories and forms within the generation process can facilitate the entire model-checking activity. These considerations will enable easier translation and can help to guide the identification process (e.g., looking for what is true always).
Throughout all of the activities involved in defining expected properties, the main assets for the practitioner are a thorough understanding of both the system and the domain, and a reliance on sound engineering principles that guide the development of any successful software system.
Taking the time to define expected properties can provide deeper insight into a system and its design, and identify potential defects, even if no formal model checking is performed. This technical note presented a number of guidelines and techniques to facilitate the critical activity of expected property definition. As Model-Based Verification practices mature, modifications and extensions to the guidelines introduced in this technical note will be developed.
Figure 1: Model-Based Verification Process and Artifacts

Figure 3: Requirements and Expected Properties Elicitation Processes
Footnote 4: A point invariant is equivalent to an assertion and is a single-line local invariant. A local invariant may span more than one state or point of execution, e.g., a loop invariant in a program. Invariants talk about things that don't change; assertions are about what is true at a single point in an execution.
"year": 2002,
"sha1": "9561e16ad55b71983399ffed60570d57d5eb0bc1",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/report/Model-Based_Verification_Guidelines_for_Generating_Expected_Properties/6575624/1/files/12062225.pdf",
"oa_status": "GREEN",
"pdf_src": "CiteSeerX",
"pdf_hash": "9561e16ad55b71983399ffed60570d57d5eb0bc1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Theoretical Modeling of Magnetoactive Elastomers on Different Scales: A State-of-the-Art Review
A review of the latest theoretical advances in the description of magnetomechanical effects and phenomena observed in magnetoactive elastomers (MAEs), i.e., polymer networks filled with magnetic micro- and/or nanoparticles, under the action of external magnetic fields is presented. Theoretical modeling of magnetomechanical coupling is considered on various spatial scales: from the behavior of individual magnetic particles constrained in an elastic medium to the mechanical properties of an MAE sample as a whole. It is demonstrated how theoretical models enable qualitative and quantitative interpretation of experimental results. The limitations and challenges of current approaches are discussed and some information about the most promising lines of research in this area is provided. The review is aimed at specialists involved in the study of not only the magnetomechanical properties of MAEs, but also a wide range of other physical phenomena occurring in magnetic polymer composites in external magnetic fields.
Introduction
Magnetoactive elastomers (MAEs) are composite materials consisting of micro- or nanometer-sized magnetic particles embedded into a compliant elastomeric matrix [1][2][3][4][5][6][7][8][9][10]. They belong to the class of smart (or intelligent) materials because their physical properties or macroscopic response can be significantly changed in a controlled fashion by the application of moderate (a few hundred mT) magnetic fields [1,8,9]. Specifically, these are mechanical properties (e.g., the static and dynamic Young's and shear moduli) and different electromagnetic properties (e.g., magnetization reversal curves, magnetic permeability, electrical conductivity and dielectric permittivity) [8,9]. The most prominent effect is the magnetorheological (MR) effect, which is a significant change of the shear storage and loss moduli in external magnetic fields. Due to this, MAEs are also known as magnetorheological elastomers [4]. Furthermore, MAE samples show pronounced deformations both in uniform and non-uniform external magnetic fields. If an MAE sample is placed in a uniform magnetic field, the corresponding changes in its shape or dimensions are usually referred to as magnetostriction, although the physical mechanism is different from that of magnetostriction in conventional solid magnetic materials [8,9]. In non-uniform magnetic fields, one speaks about the magnetodeformation of MAE samples, which can reach 200-300% [9]. Application of a uniform magnetic field also causes huge changes in the dielectric properties of MAEs; in particular, the relative increase in the effective dielectric permittivity reaches 1000% in moderate magnetic fields up to 0.6 T [9,11]. A detailed description of the wide range of magneto-responsive properties of MAEs can be found in recent reviews [8,9].
The main physical reason for all these effects is believed to be the restructuring of the ferromagnetic filler particles. This is their mutual re-arrangement in external magnetic fields (a change in their relative positions or, equivalently, change in the microstructure of a composite material) [8]. This argument is analogous to the effect observed in MR fluids where particles rearrange along the magnetic field lines forming elongated aggregates. A noticeable re-arrangement is only possible if the polymeric matrix is soft with the shear modulus below 100 kPa [9].
The interest in MAEs is determined by their prospective applications as active vibration absorbers, vibration isolators for mechanical engineering applications, base isolators for civil engineering applications, sensors and actuators [2,4,[12][13][14][15][16]. Magnetically controlled dielectric and electric properties of MAEs open an opportunity to use MAEs as sensors of magnetic fields, as well as to consider them as tunable dielectrics [17,18], which find numerous applications as tunable filters, phase shifters, passive microwave components, or in phased array antennas, etc. Hitherto, the majority of fundamental and applied research on MAEs has focused on the utilization of bulk properties of these materials. However, it has recently been understood that MAEs are very promising materials for rapid and reversible control of various surface properties, in particular wettability [19][20][21][22], surface roughness [20,23], adhesion [24,25], and friction [26]. This opens up new opportunities for applications of MAE-based smart surfaces in various areas, e.g., droplet-based microfluidics, liquid transporters/distributors, fog harvesters and soft-robot locomotion.
The field of MAE studies is developing rapidly. According to a Google Scholar search, the total number of papers published in this field since 2010 would exceed 1400 by the end of 2022 (Figure 1). Several comprehensive reviews are available that focus on fabrication, characterization and applications of these materials [2,4,5,27,28]. As far as the theoretical description of the behavior of MAEs in external magnetic fields is concerned, the latest review is about 6 years old [7], although several aspects of theoretical modeling have also been discussed in recent papers [28,29]. We believe that enough notable works have been published in the last five years to warrant an up-to-date review of advancements in theoretical modeling of MAEs. It is worth noting that we do not attempt to compile all the published theoretical works in the field of MAEs, but rather to overview the actual development in the field. Existing trends and those lines of research which would benefit greatly from increased activity in the future are identified. The focus of this review is on the theoretical description of the relationships between the external magnetic field and the resulting mechanical properties (elastic moduli, viscoelastic properties) and phenomena (magnetostriction and magnetodeformation). These effects are referred to as magnetomechanical coupling. This is the field where the majority of published theoretical works is concentrated. Obviously, the reason for that lies in the most promising application area of MAEs. Additional highly interesting physical effects (magnetic properties, magneto-electric effect, magnetoconductivity, surface properties, etc.) are mentioned in the framework of utilized approaches for the description of magneto-mechanical coupling but are not considered in detail. The theoretical works on non-mechanical and surface properties of MAEs deserve a separate review paper.
The paper is organized as follows: In Section 2 the underlying mechanisms that cause magnetomechanical coupling in MAEs are analyzed. In Section 3 the main approaches for modeling of MAEs are presented. The proposed classification of these approaches is based on the concept of different spatial scales that are utilized for modeling these composites. The (combined) multi-scale theoretical approaches are considered in Section 4. Advantages and disadvantages of the existing theoretical methods are discussed and the most promising lines of research in each section are identified. The results are summarized in the concluding section.

Figure 1 (caption): The search is done according to the following terms in the title: "magnetoactive elastomer" OR "magnetoactive elastomers" OR "magnetoactive polymer" OR "magnetoactive polymers" OR "magnetorheological elastomer" OR "magnetorheological elastomers". The results for the year 2022 are linearly extrapolated from the available data on 15 September 2022.
Basic Mechanisms behind Magneto-Mechanical Coupling
Restructuring of the ferromagnetic filler is commonly accepted as the underlying physical phenomenon for the majority of the effects discussed in Section 1; however, a unified theoretical approach suitable for describing and predicting the wide spectrum of characteristics and responses of mechanically soft MAEs has not been developed yet. This can be attributed to the large variability in the material composition and the necessity to take into account nonlinear properties of constitutive materials. For example, the ferromagnetic particles can be either soft magnetic (e.g., carbonyl iron) or hard magnetic (e.g., NdFeB), and they can have different shapes, e.g., spherical or flake-like. Furthermore, the MAE samples can be cross-linked either in the absence of a magnetic field, which results in an almost isotropic distribution of magnetizable filling particles, or in an external DC magnetic field, which creates an anisotropic filler particle distribution. The magnetization of ferromagnetic particles demonstrates a nonlinear dependence on the internal magnetic field, and, for hard magnetic particles, the magnetic hysteresis cannot be neglected. When ferromagnetic particles are displaced (translated and/or rotated) in an applied magnetic field, the surrounding polymeric matrix is deformed. It should be noted that the matrix can be chemically attached (grafted) to the particles via the functional particle/matrix interface or be physically adsorbed on the particle surface. As a result, magnetic interactions (both between individual magnetized particles as well as between each particle and the external magnetic field) and elastic forces arising due to matrix deformations compete when a magnetic field is applied to an MAE specimen. In general, elastomer matrices exhibit nonlinear viscoelastic behavior, which further complicates theoretical description. It is obvious that a large variety of synthesis conditions, material compositions, specimen shapes and excitation conditions (magnitude, direction and temporal behavior of an external magnetic field) leads to the need for a comprehensive multi-scale model for MAE materials. Figure 2 schematically shows different scales which should be addressed in the theoretical description of MAE composites. These scales are defined here as follows: microscopic (polymer network, multidomain magnetic structure of µm-sized particles, etc.), mesoscopic (granularity and filler particles as separate physical objects) and macroscopic (larger than the correlation length, specimen scale). It should be mentioned that the mesh size of the polymer network can be comparable to the particle diameter only in the case of magnetic nanoparticles. Typically, the particles are larger than the length of network subchains (Figure 2a). This scale difference can reach several orders of magnitude in the case of µm-sized particles which are commonly used as MAE filler, so that the approach in which the polymer matrix is viewed as a continuum medium (Figure 2b, magnifying glass) is fully justified. At the macroscopic scale (Figure 2b), the MAE sample can be considered as a continuum medium with given magnetic and elastic characteristics.
Figure 2. Schematic representation of MAE's multiscale structure: (a) a magnetic particle in a polymer matrix resolved at the nanoscale; (b) a macroscopic MAE sample with a random distribution of magnetic particles in a viscoelastic medium.
In spite of obvious simplicity, MD models are able to describe the main feature the magnetic filler restructuring within the polymer matrix under the influence of magnetic field to explain the microscopic origin of the experimentally obser phenomena. Inherent complexity and nonlinearity of the fully coupled magnetomechanical problem call for various approximations, which will be discussed in the following sections. An obvious simplification is to provide theoretical description for a particular spatial scale. Therefore, it was decided to classify the theoretical approaches to modeling of MAEs on the basis of the scale considered in each work.
Microscopic and Mesoscopic Modeling
Modeling the microscopic structure of the material and its evolution in the magnetic field is the most fundamental approach to MAE behavior description. In so-called "bottom-up" models, local behavior of individual particles (microscopic modelling) is calculated and then employed to obtain the material response via different homogenization procedures. Ferromagnetic filler particles are usually resolved explicitly or as parts of particle aggregates. Polymer chains can be resolved explicitly (microscopic modeling) or can be represented by an effective medium (mesoscopic modeling). Mesoscopic modeling is employed more frequently as the defining feature of MAE internal structure is the presence of ferromagnetic filler particles, and filler restructuring is the underlying process for the changes in macroscopic characteristics of MAEs. The main aspects of the modeling are the interparticle interactions, equations of motion and collective energy of the system of filler particles and the surrounding polymer. Magnetic interactions are usually described within the framework of the dipole approximation, but some works aim to take higher orders of the multipole expansion into account. Usually, microscopic/mesoscopic models study an element of the material volume to either understand the processes on the scale of a few filler particles or obtain a representative volume element.
Molecular Dynamics Simulations
A special place among combined microscopic/mesoscopic description of MAEs is occupied by molecular dynamics (MD) simulations. This field of study has recently experienced active development.
In spite of their obvious simplicity, MD models are able to describe the main features of the magnetic filler restructuring within the polymer matrix under the influence of the magnetic field and to explain the microscopic origin of the experimentally observed phenomena.
MD modeling is based on solving the equations of motion of particles that make up the system under study. General patterns and characteristics of the material are derived using the laws of particle motion by calculating the integral properties or considering a representative volume element of the material. To handle the intrinsically multiscale structure of MAEs, namely, the fact that magnetic nanoparticles and especially microparticles are one or even two to four orders of magnitude larger than monomer units of polymer chains, coarse-grained models are employed in which the translational and rotational dynamics of each particle i obey Langevin equations of the form

$$ m_i \frac{d\vec{v}_i}{dt} = \vec{F}_i + \vec{F}_{i,R} - \xi_T \vec{v}_i, \qquad \hat{I}_i \frac{d\vec{\omega}_i}{dt} = \vec{T}_i + \vec{T}_{i,R} - \xi_R \vec{\omega}_i, $$

where $\vec{F}_i$ and $\vec{T}_i$ are the total force and torque acting on the particle i due to its interaction with other particles, the external magnetic field and the polymer matrix; $m_i$ and $\hat{I}_i$ are the mass of the particle and its inertia tensor; $\vec{F}_{i,R}$ and $\vec{T}_{i,R}$ are a Gaussian random force and torque, respectively. The last terms account for the translational and rotational friction forces, which are proportional to the particle linear, $\vec{v}_i$, and angular, $\vec{\omega}_i$, velocities, with the friction coefficients $\xi_T$ and $\xi_R$, respectively.
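As an illustration of how such equations are integrated numerically, the sketch below performs a plain Euler-Maruyama step for the translational part only. All parameter values are arbitrary reduced units, the deterministic force is left as a stub, and the setup is not taken from any cited model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters in reduced units (assumptions, not cited values).
m, xi_T, kT, dt = 1.0, 1.0, 1.0, 1e-3
n_steps = 10_000

v = np.zeros(3)                       # particle velocity
r = np.zeros(3)                       # particle position
for _ in range(n_steps):
    F_det = np.zeros(3)               # deterministic force (dipolar, elastic, ...) omitted here
    # Fluctuation-dissipation: random-force variance tied to friction and temperature.
    F_rand = rng.normal(0.0, np.sqrt(2.0 * xi_T * kT / dt), size=3)
    a = (F_det + F_rand - xi_T * v) / m
    v += a * dt                       # explicit Euler-Maruyama update of velocity
    r += v * dt                       # and of position
print("final position:", r)
```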
Magnetic particles are usually modeled as beads bearing point magnetic dipoles located in their centers and either freely rotating [30] or firmly connected with the particle body so that the particle rotates as a whole to orient its magnetic moment [31][32][33]. Additionally, it is assumed that the modulus of the magnetic moment is fixed; this approximation works well for either magnetically isotropic monodomain nanoparticles or magnetically hard ones. Due to the presence of a permanent magnetic moment, the particles interact via the dipole-dipole potential

$$ U_{dd} = \frac{\mu_0}{4\pi}\left[\frac{\vec{m}_i\cdot\vec{m}_j}{r_{ij}^3} - \frac{3\,(\vec{m}_i\cdot\vec{r}_{ij})(\vec{m}_j\cdot\vec{r}_{ij})}{r_{ij}^5}\right], $$

where $\vec{r}_{ij}$ is the center-to-center vector between the i-th and j-th particles bearing magnetic moments $\vec{m}_i$ and $\vec{m}_j$ (the corresponding force is added to $\vec{F}_i$). In addition, the Zeeman energy $U_H = -\mu_0\,\vec{m}_i\cdot\vec{H}$ and the associated torque should be taken into account in $\vec{T}_i$ when an external magnetic field $\vec{H}$ is applied. The latter forces the hard-magnetic particles to rotate in order to orient their magnetic moments along the field lines. In the simulation model proposed in [34], a finite magnetic anisotropy is taken into account via introducing the additional energy of uniaxial magnetic anisotropy depending on the angle between the magnetic moment and the easy axis of the particle. In this case, the rotation of the particle magnetic moment under the influence of the applied magnetic field is affected by both the polymer matrix and the internal magnetic anisotropy. To model a pure repulsion between all beads in the system due to excluded volume, the truncated and shifted Lennard-Jones potential (the so-called Weeks-Chandler-Andersen potential [35]) is commonly used.
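These standard energy expressions translate directly into code. The helper functions below (plain NumPy, in SI units, not tied to any particular simulation package from the cited works) evaluate the dipole-dipole and Zeeman energies; the numerical values in the usage example are illustrative only:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_dipole_energy(m_i, m_j, r_ij):
    """Point-dipole pair energy U_dd for moments m_i, m_j separated by vector r_ij."""
    r = np.linalg.norm(r_ij)
    return MU0 / (4 * np.pi) * (
        np.dot(m_i, m_j) / r**3
        - 3.0 * np.dot(m_i, r_ij) * np.dot(m_j, r_ij) / r**5
    )

def zeeman_energy(m_i, H):
    """Energy of a dipole m_i in an external field H: U_H = -mu0 * m_i . H."""
    return -MU0 * np.dot(m_i, H)

# Two co-aligned moments in head-to-tail configuration attract (negative energy).
m = np.array([0.0, 0.0, 1e-14])                                # A*m^2, illustrative
print(dipole_dipole_energy(m, m, np.array([0.0, 0.0, 1e-6])))  # prints a value < 0
```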
In the coarse-grained MD models (Figure 3), the polymer matrix is usually represented by elastic forces acting on magnetic particles, in addition to magnetic forces and excluded volume interactions. In [36,37], the magnetic particles are connected by elastic springs only to some anchoring points in space, fixing initial positions of the particles. Within this approach, either only translations of the particles can be constrained (one-spring model, Figure 3a) or both translations and rotations of the particles are hindered by the polymer matrix (two-spring model, Figure 3b). In [23], the mechanical constraints acting on magnetic particles due to the presence of a polymer matrix are represented by elastic springs connecting the centers of nearest-neighbor particles. The rigidity of the matrix in these cases can be controlled by the value of the elastic constant in the harmonic spring potential. In more detailed approaches, polymer chains are explicitly modeled as beads on springs, forming either a regular network (Figure 3c) with magnetic particles occupying all [31][32][33] or a certain fraction of crosslinks [34,38], or a non-regular network (Figure 3d) with some beads acquiring magnetic moments and thus mimicking magnetic particles [39][40][41]. The fraction of magnetic beads can be varied but the total amount of particles in this case increases considerably (due to additional beads representing segments of the chains), making the simulations much more time consuming but, in a sense, more realistic, in particular in the description of the magneto-responsive behavior of magnetic gels capable of a large variation of volume.
MD Simulations of Magnetic Gels
A lot of effort has been directed at applying the MD simulation technique, within the framework of the simple approaches described above, to the study of the structural and conformational behavior of so-called magnetic macro-, micro- or nanogels, i.e., polymer networks swollen with a solvent and containing some fraction of magnetic nanoparticles. Magnetic gels, or ferrogels, are very promising for biomedical applications, in particular, as drug delivery systems [42]. First models of magnetic gels were quite simple. They were constructed by placing the magnetic nanoparticles on a regular spatial lattice (square in 2D or simple cubic or diamond cubic in 3D) and connecting them by bead-spring polymer chains attached to specific spots on the surface of the magnetic particles. Periodic boundary conditions were used while the box size was settled in the course of the system equilibration. In this simple approach, in addition to the excluded volume, only dipole-field interactions were taken into account while dipole-dipole interactions were neglected owing to low concentrations of the magnetic particles. As a result, the gel deformations in a magnetic field were explained by a direct coupling of the orientational degree of freedom of the magnetic moments of the nanoparticles to the polymer chains, whose ends were firmly connected with the particle surface, creating stress in polymer chains due to rotations of the magnetic particles. It was found that in 2D the particle rotation causes isotropic shrinkage of the gel [31,32] while in 3D the deformations are anisotropic: a strong shrinkage was observed in the direction parallel to the field while the shrinkage in the perpendicular directions was either small or not present at all, depending on the network topology [32,33].
MD simulations of single magnetic nanogels (MNG) with a small fraction of magnetically anisotropic nanoparticles occupying some crosslink beads of a regular polymer structure with equal length of the network subchains were carried out in [34,38,43]. In these papers, not only dipole-field but also dipole-dipole interactions were calculated explicitly owing to the finite size of the system. Besides, the magnetic moment was coupled inside the particle with the easy magnetization axis. The calculated radial distribution functions for varying strength of interparticle dipolar interaction, concentration and temperature clearly indicated the structuring of magnetic particles in the magnetic field. The effect of the particle magnetic anisotropy on the magnetic structures and volume changes of MNGs in magnetic fields was elucidated.
In a series of publications [39,40], the model of an irregular polymer network with a fraction of magnetically hard nanoparticles with "frozen-in" permanent magnetic moments was used to investigate the equilibrium structural properties of not only a single magnetic nanogel [39], but also MNG suspensions in the absence [40] and in the presence [41] of an applied external field. It was found that inside a single MNG, magnetic nanoparticles form small clusters whose shape is largely affected by polymer elasticity, in particular, the number of crosslinks [39]. In suspension, MNGs can aggregate due to magnetic interactions, leading to the formation of magnetic nanoparticle bridges between MNGs [40]. Such self-assembling behavior is largely enhanced when an external magnetic field is applied. Furthermore, it was found that suspensions of MNGs have a larger susceptibility to magnetic fields than suspensions of magnetic nanoparticles at the same mean concentration due to a high local concentration of the latter in regions inside the gels [41]. On the other hand, a gel itself has a lower susceptibility than the suspension of magnetic particles of the same concentration due to elastic constraints acting on the particles within the gel. In [44], the behavior of an MNG in a shear flow is studied with the use of the same model.
Refined MD Models of MAEs
In general, harmonic spring potentials acting on magnetic particles can describe qualitatively well the elastic deformations arising upon particle movements under the action of the external magnetic field and elucidate the role of magnetomechanical coupling in the resulting magnetic structures and some features of MAE magnetization. In [36], a simple model of a magnetoactive elastomer filled with magnetically hard particles was proposed to study the role of inelastic microstructural matrix deformations induced by magnetic fields. This work was inspired by experimental observations of a substantial change in the magnetic response of MAEs containing hard magnetic particles after their first exposure to external fields: the initial magnetization curve of these materials differs substantially from the subsequent ones in consecutive measurements of conventional magnetization loops. It was proposed to model irreversible relaxations of elastic constraints during the first magnetization of the sample simply by shifting the anchoring points of the elastic springs undergoing large deformations upon particle movements. This shift reduces the extension of the spring constraining the particle and facilitates its movement during the second magnetization-demagnetization loop. It was shown that only the model taking into account both translational and orientational irreversible constraints is able to describe the experimental observations qualitatively well.
A special approach to studying structural transformations in MAEs filled with non-spherical flake-like NdFeB particles was proposed in a recent paper [45]. In the developed MD model, the magnetic particles are represented by 14 spherical beads rigidly connected to a central bead, thus forming an anisotropic ellipsoid-like (or flake-like) aggregate. The central bead acquires a magnetic moment which is directed perpendicularly to the flake plane. Anisotropy in mechanical response, i.e., in translation and rotation of the anisometric particles along long and short axes, arises due to different values of the elastic constants for the harmonic springs connecting four non-magnetic beads (two beads each on the long and short axes) in each flake-like aggregate to anchoring points located in space. Furthermore, irreversible deformations under the influence of the magnetic field were modelled by shifting anchoring points. Computer simulations were performed for a fixed value of the volume fraction (0.08) of magnetic particles corresponding to the experiments but different values of magnetic moments of particles and rigidity constants of harmonic springs. In spite of the model simplicity (the particles are regular and monodisperse), the model is able to capture the main features of MAE response to moderate and strong fields which are observed experimentally [45]. In particular, it was shown that in a sample pre-magnetized in a strong magnetic field for a few minutes, further application of a moderate magnetic field leads mainly to flake rotations that are fully reversible. In contrast, in an initially non-magnetized sample, translations and rotations of the flake-like particles in a moderate magnetic field cause non-reversible formation of chain-like structures.
In recent papers [30,46], the MD model of a multiferroic material, namely an elastomer matrix filled with both ferromagnetic (FM) and ferroelectric (FE) microparticles, was proposed. In comparison with previous approaches, the polydispersity of FM and FE particles was taken into account with a lognormal distribution of sizes, and the magnetic and electric moments prescribed to the corresponding particles were scaled according to their size. The polymer matrix was modeled in the simplest way via introducing elastic springs connecting each particle with FM or FE particles inside a sphere of a given radius (Figure 4a). Dipole-dipole interactions were calculated only between particles in close vicinity. It was shown that when a magnetic or electric field is applied, the corresponding FM or FE particles are moved from their initial positions, causing mechanical stresses in the polymer matrix to be transferred to the particles of the other type. This kind of particle coupling through the polymer matrix was shown to be the fundamental mechanism of the multiferroic behavior of the composite, i.e., a magnetization causes an electric response while an electric polarization leads to a magnetic response. The simulation results were confirmed experimentally for a polymer-based dispersion of iron and lead zirconate micrometer-size particles (Figure 4b).

Figure 4 (caption, partial): ... magnetizations in normalized units measured without electric bias and under an electric bias of 5 MV/m; experimental data (black) and simulation data (red) with error bars (grey) [30]. H denotes the magnetic field strength.
The MD approach makes it possible to study the rearrangement of magnetic particles not only in the bulk but also on the surface of MAE films. In [23], a coarse-grained MD model was applied to study the structure of a 3D thin film of magnetoactive elastomer adsorbed on a solid substrate. Within this model, the MAE film was represented as soft-core spherical magnetic particles, carrying point dipoles, connected by elastic springs. The concentration of magnetic particles as well as the rigidity of the polymer matrix (i.e., the values of the elastic constants of the harmonic spring potentials) were varied. The magnetic field was applied perpendicular to the film surface. The equilibrium structures formed by the magnetic particles in magnetic fields resulted from the competition between dipole-dipole, elastic and dipole-field interactions. It was shown that the surface roughness increases strongly with growing magnetic field due to the alignment of magnetic aggregates with the field and the formation of mountain-like profiles on the film surface. The effects of the concentration of magnetic particles and the rigidity of the polymer matrix were elucidated. The obtained results provide some guidelines for the fabrication of MAE coatings with tunable surface topology.
Stress-Strain Behavior and Elastic Modulus of MAEs via DPD Simulation
The MD models of MAEs mentioned above can capture the structural and magnetization features of magnetoactive polymer materials. In a recent paper [47], the so-called dissipative particle dynamics (DPD) was applied for the first time to study the mechanical properties of MAEs. DPD is a coarse-grained molecular dynamics simulation method widely used in the modeling of various polymer systems, including elastomers [48,49]. This mesoscale method makes it easy to cover much larger time and length scales than conventional MD and to achieve an equilibrium state even for very large systems. In [47], to capture the size difference between magnetic nanoparticles and the monomer units of polymer chains, the nanoparticles were represented by sets of beads bearing fixed co-oriented magnetic moments, not connected to the polymer matrix. During mechanical deformations, such particles can transfer the mechanical load only through excluded-volume interactions with the polymer. The polymerization of monomers into a network mimicking epoxy resin was performed using reactive DPD, in the absence and in the presence of an external magnetic field. The developed approach made it possible to estimate the densities of load-bearing chains in the polymer matrix and to correlate them with the Young's modulus of the material, obtained from stress-strain curves, for isotropic and chain-like distributions of the magnetic particles. The proposed model also allowed the role of the particle/polymer interface to be elucidated by calculating the elastic modulus while tuning the interaction parameters between the magnetic beads and the monomer units of the polymer chains. Although the magnetic nature of the particles came into play only at the stage of preparation of the system with ordered filler, the developed model lays the foundation for simulations of the mechanical properties of MAEs in magnetic fields.
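The three pairwise DPD forces (conservative, dissipative and random) on which such simulations rest can be written compactly. The sketch below uses the common textbook parameter choices (repulsion amplitude, friction coefficient, cutoff, time step), which are assumptions rather than the values of [47]:

```python
import numpy as np

# Minimal sketch of the standard pairwise DPD force; parameter values are
# the usual textbook choices, not those of the cited work.
A_IJ, GAMMA, KT, RC, DT = 25.0, 4.5, 1.0, 1.0, 0.01
SIGMA = np.sqrt(2.0 * GAMMA * KT)      # fluctuation-dissipation relation

def dpd_pair_force(r_ij, v_ij, rng):
    """Total DPD force on particle i from particle j (r_ij = r_i - r_j)."""
    r = np.linalg.norm(r_ij)
    if r >= RC:
        return np.zeros(3)
    e = r_ij / r                        # unit vector from j to i
    w = 1.0 - r / RC                    # linear weight function w(r)
    f_c = A_IJ * w * e                                  # soft repulsion
    f_d = -GAMMA * w**2 * np.dot(e, v_ij) * e           # friction (w_D = w^2)
    f_r = SIGMA * w * rng.standard_normal() / np.sqrt(DT) * e  # random kicks
    return f_c + f_d + f_r

rng = np.random.default_rng(1)
print(dpd_pair_force(np.array([0.5, 0.0, 0.0]),
                     np.array([0.0, 0.1, 0.0]), rng))
```

The soft conservative potential is what allows the large time steps and system sizes mentioned above, while the dissipative and random forces together act as a momentum-conserving thermostat.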
To summarize, MD calculations are a powerful tool for studying changes in the material microstructure. They have been particularly successful in investigating the structural and conformational behavior of magnetic macro-, micro- or nanogels in magnetic fields, for example in calculating their magnetic properties and volume changes. The merit of very simple MD models is that they allow one not only to describe the main features of filler restructuring and to explain the microscopic origin of the experimentally observed phenomena, but also to establish the foundation for the development of useful approximations.
As far as calculations of MAE properties and behavior are concerned, such theoretical investigations have so far been rather limited, probably because they require large computational resources. However, we believe that MD calculations will gain more importance in the future, with the DPD variant appearing the most promising. In particular, MD models can be generalized to more complicated cases, for example anisometric and/or soft-magnetic particles.
Mesoscopic Structure Modeling: Analytical and Numerical Approaches
The simplest models work with the approximation of a uniform lattice network of ferromagnetic filler particles and the magnetic dipole approximation for interparticle interactions [50–55]. The use of the magnetic dipole approximation in modeling, however, leads to noticeable errors when the distances between particles are small relative to their size. This corresponds to magnetoactive elastomers with a high filler concentration (more than 20% by volume), where the average distance between ferromagnetic particles inside the polymer matrix is of the same order of magnitude as the particle size. To describe the pairwise interparticle interaction with a higher degree of accuracy, a model with a more complex interpolated interaction potential obtained in the multipole approximation was also proposed [56].
In other cases, the polymer network is modeled as a continuous mechanical medium, the elastic properties of which are described by either linear or nonlinear elasticity theories.
Calculation of Elastic Moduli
In [50], the dynamic response of a magnetoactive elastomer in various magnetic fields was described using a coarse-grained model with a cubic lattice containing filler particles (magnetic dipoles) as the nodes. The particles are connected by linear elastic springs. The model assumes a uniform isotropic distribution of filler particles in the polymer matrix, as well as the limiting case of weak magnetic fields, which do not lead to the rearrangement of the filler into chain structures. The Langevin-type equations of motion of the filler particles are linearized with respect to a small parameter, the particle displacement from the equilibrium position. In this paper, the relaxation spectrum of the cubic lattice was calculated, and expressions for the dynamic elastic moduli of the material were obtained for various mutual orientations of the magnetic field and the direction of shear deformation. It was shown that the dependences of the dynamic moduli on the magnitude of the magnetic field at low fields can be represented by quadratic functions.
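The quadratic low-field scaling has a simple origin: for linearly magnetizable particles the induced moment is proportional to H, so any dipolar correction to an effective stiffness is proportional to H squared. A minimal numerical illustration for a single spring-connected particle pair follows; all parameter values are assumed and the setup is a toy version of the lattice model, not the model of [50] itself:

```python
import numpy as np

# Toy check of the quadratic field dependence: spring plus head-to-tail
# dipole energy of two linearly magnetizable beads. All values assumed.
MU0 = 4e-7 * np.pi
CHI, D = 3.0, 1e-5                     # susceptibility, particle diameter (m)
V = np.pi * D**3 / 6.0                 # particle volume

def pair_energy(r, H):
    """Matrix spring + head-to-tail dipole-dipole energy at separation r."""
    m = CHI * V * H                    # induced moment, proportional to H
    k0, r0 = 1e-2, 1.2 * D             # assumed matrix spring constant/length
    return 0.5 * k0 * (r - r0)**2 - MU0 * m**2 / (2.0 * np.pi * r**3)

def stiffness(H, r=1.2 * D, dr=1e-9):
    """Effective stiffness = second derivative of the pair energy."""
    return (pair_energy(r + dr, H) - 2.0 * pair_energy(r, H)
            + pair_energy(r - dr, H)) / dr**2

for H in [1e3, 2e3, 4e3]:              # doubling H quadruples the change
    print(f"H = {H:7.0f} A/m   dk = {stiffness(H) - stiffness(0.0):+.3e} N/m")
```

Because the dipolar energy carries a factor of m squared, the printed stiffness change scales as H squared, which is the behavior reported in [50] for the dynamic moduli at low fields.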
As far back as 1996, the authors of [51] laid the theoretical foundation for studying chain structures of magnetically active particles and their effect on the surrounding elastic medium in the presence of a magnetic field. It was suggested that the shear modulus of the material is a superposition of the modulus in the absence of a magnetic field and an additional modulus induced by the magnetic field. The pairwise interaction of spherical magnetoactive filler particles, with magnetic dipoles located at their geometric centers, was considered assuming that their relative displacement is caused by the shear deformation of the sample. An expression for the magnetically induced part of the shear modulus ∆G was derived in the limit of small deformations:

$$\Delta G = \frac{\varphi M^{2}}{2 \mu_{1} \mu_{0} h^{3}}, \qquad (4)$$

where ϕ is the volume fraction of particles in the composite, M is the particles' magnetization (expressed as magnetic polarization, in T), µ1 is the relative permeability of the medium, µ0 is the magnetic permeability of vacuum, and the parameter h = r0/d characterizes the gap between particles in a chain; d denotes the particle diameter and r0 stands for the distance between the centers of neighboring particles in a chain. The maximum possible value of ∆G for a typical MAE filled with iron particles can be estimated by taking ϕ ≈ 0.29, saturation magnetization Ms ≈ 2.1 T, µ1 = 1, h = 1. This evaluation of (4) gives ∆G_max ≈ 5 × 10^5 Pa, which is about one order of magnitude lower than experimentally observed values. The primary origin of this discrepancy is clear: the solitary-chain model ignores possible magnetic interactions between particles in different chain-like aggregates and changes in the mutual positions of particles in an external magnetic field. This paper also considered the problem of spatially inhomogeneous magnetization of particles: the influence of the field produced by particles on the magnetization of neighboring particles was characterized by the average magnetization. This took into account the ratio of the magnetically saturated part of the particle volume to the entire particle volume, which led to a nontrivial dependence of the additional shear modulus on the magnetic field. It was concluded that the field-induced part of the shear modulus depends quadratically on the average magnetization of the filler particles.

In [52], a generalization of this model to the case of interacting chain structures of magnetic filler was considered using the magnetic dipole approximation. The interaction energy and the field-induced shear modulus were also calculated for distributions of filler particles corresponding to simple cubic and body-centered lattices.

The authors of [57] obtained expressions for the elastic modulus and shear modulus of an MAE sample with an isotropic cubic lattice of filler particles using the linear elasticity model, the magnetic dipole approximation, and the magnetization model described by the empirical Fröhlich-Kennelly law:

$$\mu_{Fe}(H) = 1 + \frac{(\mu_{ini} - 1)\, M_{s}}{M_{s} + (\mu_{ini} - 1)\, H},$$

where µFe is the relative magnetic permeability of the filler, µini is its initial relative magnetic permeability, Ms is the saturation magnetization, and H is the magnetic field strength.
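Both expressions are straightforward to evaluate numerically. The sketch below reproduces the ∆G_max ≈ 5 × 10^5 Pa estimate quoted above and tabulates the Fröhlich-Kennelly permeability for an assumed filler; the values µ_ini = 100 and Ms ≈ 1.7 × 10^6 A/m (iron) are illustrative choices, not taken from [51] or [57]:

```python
import numpy as np

MU0 = 4e-7 * np.pi                     # magnetic permeability of vacuum

def delta_G(phi, M, mu1=1.0, h=1.0):
    """Field-induced shear modulus of the single-chain model, Equation (4).

    M is the magnetic polarization in tesla, so the result is in pascal.
    """
    return phi * M**2 / (2.0 * mu1 * MU0 * h**3)

# Values quoted in the text: phi ~ 0.29, Ms ~ 2.1 T, mu1 = 1, h = 1.
print(f"dG_max = {delta_G(0.29, 2.1):.2e} Pa")      # ~ 5e+05 Pa

def mu_fe(H, mu_ini, M_s):
    """Froehlich-Kennelly relative permeability (H and M_s both in A/m)."""
    return 1.0 + (mu_ini - 1.0) * M_s / (M_s + (mu_ini - 1.0) * H)

for H in [0.0, 1e4, 1e5, 1e6]:          # field strength (A/m)
    print(f"H = {H:9.0f} A/m   mu_Fe = {mu_fe(H, 100.0, 1.7e6):8.2f}")
```

The permeability falls from its initial value toward unity as the filler saturates, which is the qualitative behavior the Fröhlich-Kennelly model is meant to capture.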
An alternative approach to explaining the significant MR effect in magnetically and mechanically soft MAEs has been proposed in the works of Kalita et al. [58,59]. By the MR effect we mean the relative change of the shear storage modulus of an MAE in an applied magnetic field. The explanation is based on the so-called single-particle mechanism of magnetostriction, in which the total magnetic anisotropy energy of the filler particles in the matrix is the sum of single-particle energy terms [60]. An additional magnetoelastic contribution to the mechanical stress, created by the induced magnetic anisotropy, counteracts the shear and increases the effective shear modulus of the magnetoactive elastomer when the latter is magnetized. Numerical estimates of the magnitude of the magnetorheological effect (almost two orders of magnitude) were in good agreement with experimental data [59].
In a series of works, an attempt was made to describe the MR effect in MAEs [61] and magnetic ferrogels [62] quantitatively. The concept of primary aggregates of magnetic particles, first put forward in [63] to explain the strong concentration dependence of the shear modulus of alginate ferrogels, was used to capture the high values of the experimentally measured increase of the elastic and loss moduli of these materials in magnetic fields, which could not be described properly on the level of single magnetic particles dispersed in an elastic medium. The isotropic spherical agglomerates of magnetic particles introduced in the proposed model (Figure 5a) have stronger magnetic properties than isolated particles and, in an applied magnetic field, can more easily overcome the elastic forces of the polymer matrix and aggregate into chain-like structures (Figure 5b,c). Furthermore, the volume fraction of magnetic agglomerates (including trapped rubber) was argued to be higher than that of isolated magnetic particles, a fact that also favors magnetic attraction and chaining of the agglomerates in an external magnetic field. To calculate the equilibrium aggregation number of chains, a lattice representation was used and a special hierarchical model of aggregation was applied (Figure 6a), taking into account magnetic interactions of agglomerates only within single lines oriented along the field axis. To estimate the magnetization of the aggregates, they were approximated by ellipsoids of revolution (Figure 6b). In the developed model, it was assumed that the primary agglomerates have the same size and that their chain aggregates are monodisperse. Even this crude approximation provided rather good agreement with experimental results; in particular, it made it possible to describe theoretically the high MR response of alginate ferrogels [62] as well as of MAEs based on a permalloy filler [61].
The work [64] provided an overview on how to build a bridge from the mesoscopic positioning of the particles relative to each other to the overall, possibly macroscopic behavior of the entire system. To address the MR effect, reduced dipole-spring models were employed. It was found that whether the mechanical moduli increase or decrease under the influence of magnetic interactions depends on the particle configuration and on the orientation of the magnetization direction. Various regular lattices, randomized particle configurations, as well as real particle arrangements extracted from experimental samples by X-ray tomography were evaluated. Upon strong magnetization, it was found that a restructuring of the filler takes place. During this process, against the elastic restoring forces of the springs, particles collapse toward each other into virtual contact and form chain-like aggregates. This effect is accompanied by a significant increase in the mechanical stiffness, in qualitative agreement with corresponding experimental observations [65]. The dynamic moduli, quantifying the storage and loss parts of the dynamic response of the systems, were evaluated as a function of the magnetization and for different particle arrangements as well [66,67].
Calculation of Magnetostriction
The theory of magnetostriction of MAE samples has received considerable attention in the literature. The reason is that this phenomenon is important for a number of applications (e.g., actuators for soft robotics), while a comprehensive description of the underlying physics is challenging from the fundamental point of view, even in the case of a spherical MAE sample [68]. If the MAE is considered to be a continuous isotropic medium (macroscopic scale), an MAE sphere must stretch along the direction of a uniform magnetic field. On the other hand, taking into account the internal structure of the composite material (mesoscopic scale), one comes to the conclusion that an MAE sphere must contract along the direction of the field, because the magnetized particles interact with each other. As a result, two composites with the same matrix/filler content may behave very differently depending on their mesoscale structure [68].
A qualitative description of the behavior of an elementary spherical cell, consisting of a hard magnetic (HM) particle in its center surrounded by an elastic incompressible shell containing a number of uniformly distributed soft magnetic (SM) particles, was presented and validated by two complementary theoretical approaches in [69]: a continuum analytical description of the magnetoelastic system and coarse-grained MD simulations within a minimal spring-bead model. The main approximations were a linear elastic response and negligible mutual magnetization between the magnetically soft particles. Both models demonstrated that when an external magnetic field is oriented antiparallel to the magnetic moment of the HM particle, the deformational response of the elementary cell is nonmonotonic in the field strength. In weak antiparallel fields, local microscopic particle rearrangements cause shrinking of the cell in the field direction, while in stronger fields elongation along the field axis takes place. The MD simulations also provided the distributions of SM particles and elastic stresses in the shell depending on the field orientation and strength.
A theoretical analysis of the effect of magnetic particle concentration on the magnetostriction (elongation vs. contraction) of an ellipsoidal ferrogel sample in applied magnetic fields was performed in [70]. The change of the magnetic free energy under small sample deformations was estimated taking into account both the change of the demagnetizing factor and of the magnetic susceptibility. The magnetic susceptibility was calculated assuming linear particle magnetization and the pair interaction approximation. It was shown that at particle concentrations below the critical value ϕ_crit ≈ 0.162, contraction of the sample in the field direction can occur. The possibility of this effect had been predicted earlier [71]; however, the more accurate account of the pair distribution function in [70] showed that the range of sample aspect ratios R0 where this effect can take place is rather narrow: the samples should be either strongly prolate or strongly oblate. This makes experimental observation of the effect rather rare. In a wide range of R0, as well as at particle concentrations ϕ > ϕ_crit, sample elongation is more favorable, in accordance with experimental data.
In the work [64], the deformation of an MAE sphere in a magnetic field was considered. The particles were assumed to be embedded in a linearly elastic sphere of finite size. When the particles are magnetized, they distort the surrounding elastic material through the resulting pairwise magnetic attraction or repulsion. Superimposing the contributions of all magnetized inclusions, the overall deformation of the system was calculated [72]. The underlying mathematical expressions were analytical and therefore contained an infinite number of degrees of freedom involved in the distortion of the elastic sphere. The appearance of the global deformation was strongly related to the internal particle arrangement. The shear modulus of the sphere was kept fixed at 1.67 kPa; therefore, the Young's modulus of the sphere varied according to the well-known relations between the elastic moduli and the Poisson's ratio. Whether the sphere was elongated or contracted along the magnetization direction depended significantly on the mutual particle positioning, on the orientation of the magnetization axis, and on the value of the Poisson ratio quantifying the compressibility of the elastic material [72]. For randomized particle configurations, a tendency of the sphere to elongate parallel to the magnetization direction was found, in agreement with corresponding experimental observations [73,74]. A more accurate description of the magnetostriction phenomenon was achieved using a combined micro/meso/continuum approach and is described in the corresponding section below.
Mesoscopic Cell Modeling
Because the number of filler particles in a real MAE is very large even at low concentrations, direct calculation of material behavior is limited by the computational power of modern computers. One way to address this problem from a modeling point of view is to consider the properties of a material cell that contains a reasonable number of ferromagnetic inclusions and then to calculate or estimate material properties based on the behavior of this mesoscopic cell. Such smaller systems include single-particle cells, which help to understand how the presence of ferromagnetic filler influences MAE properties; two-particle cells, which additionally take into account pairwise particle interactions in the simplest form; and multi-particle cells, which allow for the introduction of filler-distribution-related factors into a model. A notable way of transitioning from a mesoscopic material cell to a macroscopic sample is to construct a representative element of the volume or surface of the material, that is, an element small enough that its behavior can be calculated in a reasonable amount of simulation time, but large enough that its properties and behavior can be related to those of the entire macroscopic sample within a specified margin of error. Thus, when studying MAEs within the framework of the representative volume element approach, it is necessary to construct an element of the polymer medium containing a number of ferromagnetic inclusions corresponding to the filler concentration. At the same time, such an element can be regarded as an effective "period" of the general internal structure of the magnetopolymer composite. A large number of theoretical studies of MAEs are dedicated to understanding the processes occurring in mesoscopic material cells in the presence of an external magnetic field and/or mechanical load.
The authors of [56] studied the applicability of the magnetic dipole approximation to describing the magnetic interaction of filler particles in magnetoactive elastomers. In order to create a more realistic theoretical model of the processes occurring in MAEs, the interaction of a pair of linearly magnetizable spherical particles was studied. In that work, an effective interaction potential for small interparticle distances, as well as the resulting magnetic interaction force, were obtained. The suggested interaction potential is an approximation of the multipole expansion for the interaction of the particles. The equilibrium positions of the two-particle system were found by minimizing the energy functional, with the elastic energy defined by the Mooney-Rivlin model. Hysteresis-type behavior of the equilibrium interparticle distance was demonstrated under cyclic variation of the external magnetic field. A similar modeling approach was also used in [75], where the polymer medium was described as a classical medium with properties corresponding to the Kelvin-Voigt rheological model.
Yu. L. Raikher et al. [76] developed an approach to describing processes on the mesoscopic scale that makes it possible to calculate the magnetomechanical behavior of a volume element of a magnetoactive elastomer in the approximation of linear elasticity of the polymer medium and magnetic dipole interactions between filler particles. In this approach, it is assumed that a magnetic dipole is placed at the geometric center of each spherical magnetically soft particle, and the field inside the particle is determined taking into account the demagnetization effect. The magnetic dipole moment of each particle depends on the collective magnetic field created by the remaining particles in the selected volume element of the material. Using the particle displacement vectors obtained by solving the finite element problem, the energy of the MAE element was calculated, and the equilibrium state of the system was determined via energy minimization. It was shown that the simulated system exhibits pseudoplasticity under a constant external magnetic field and a cyclic mechanical load. Figure 7 demonstrates the calculated pseudoplasticity effect in the loading cycle (a)→(b)→(c)→(d). The initial configuration (a) corresponds to the unloaded sample. The assembly of magnetized particles, when forced to rearrange under pressure, finds a more favorable configuration: under zero mechanical load the total energy of configuration (d) of Figure 7 is lower than that of configuration (b) [76].
In [77], a boundary value problem (BVP) for a composite with mixed filling was considered on the mesoscopic scale: a period of a hard magnetic particle chain surrounded by a polymer matrix and soft magnetic particles was modeled using the finite element method. In this case, the Langevin function was used to describe the magnetic properties of the magnetically hard particles, and the Fröhlich-Kennelly function was used to describe the magnetically soft medium. The relationship between the mesoscopic model and the macroscopic magnetic characteristics of an ellipsoidal MAE sample was also considered.
The work [78] can serve as an example of the microcontinuum approach, with the weak form of the Maxwell and mechanical equilibrium equations determining the behavior of a mesoscopic cell. The authors of [78] calculated the size of a mesoscopic cell with an isotropic filler particle distribution that is sufficient for the cell to be a representative volume element. The problem was solved both analytically and using FEM modeling. A further discussion of this work is provided in Section 4 in the context of the homogenization procedure.
Considerable effort has been directed towards understanding the physical foundations of magnetization features of MAEs based on HM particles and a mixture of HM and SM fillers (so-called hybrid MAEs). HM particles are usually composed of multiple magnetic domains, and magnetization of mechanically soft MAEs containing HM particles includes two processes: the intrinsic motion of the atomic magnetic moments of the particles caused by their interaction with an applied magnetic field and mechanical rotation of the magnetic moments together with the particle body.
A model that takes into account the complex structure of micrometer-sized HM particles and couples the processes of particle intrinsic magnetization and rotation within a soft viscoelastic medium was proposed in [79]. A spherical HM particle was supposed to consist of a densely packed solid assembly of identical single-domain nanograins with an isotropic distribution of the nanograin easy axes. Magnetization of the nanograins was described using the Stoner-Wohlfarth model, according to which the energy of a single nanograin can be written in the following way:

$$E = -K V \left(\vec{e} \cdot \vec{n}\right)^{2} - \mu_{0}\, m\, H \left(\vec{e} \cdot \vec{h}\right) + E_{mech},$$

where $\vec{e}$, $\vec{h}$ and $\vec{n}$ are the unit vectors of the magnetic moment $\vec{m}$ (of magnitude m), the magnetic field strength $\vec{H}$ and the easy axis of the nanograin magnetization, respectively; K is the energy density constant of magnetic anisotropy, V is the grain volume and E_mech is the mechanical energy attributed to each grain (equal to zero in the discussed model). The total potential energy of a multigrain particle in an elastic medium included the elastic contribution due to particle rotation, which was accounted for within the linear Hookean approximation, and the magnetic contributions, namely the magnetic anisotropy energy, the Zeeman interaction with the magnetic field, and the pairwise dipole-dipole interactions between all the nanograins. It was shown that, due to the magnetomechanical coupling, the magnetic hysteresis loop of a particle composed of highly coercive grains progressively shrinks with increasing matrix elastic modulus. The developed model was applied to describe the magnetization curves of MAEs based on HM NdFeB particles [80]. The results of the theory are consistent with experimental observations; the proposed theory is able to describe the training effect, negative bias, and reduction of coercivity.
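As an illustration of the single-grain energy above, the equilibrium orientation of the moment unit vector can be found by scanning the in-plane angle between the moment and the field (restricting the moment to the plane spanned by the field and the easy axis). The values of K, V, m and the 45 degree easy-axis angle below are assumed for illustration and are not the parameters of [79]:

```python
import numpy as np

# Brute-force minimization of the single-grain Stoner-Wohlfarth energy
# E(theta) = -K V cos^2(theta - psi) - mu0 m H cos(theta), E_mech = 0.
# theta is the angle between the moment e and the field h; psi is the
# angle of the easy axis n. All numerical values are assumed.
MU0 = 4e-7 * np.pi
K, V = 5e5, 4e-24          # anisotropy density (J/m^3), grain volume (m^3)
M_GRAIN = 1e-18            # grain magnetic moment magnitude (A m^2)
PSI = np.deg2rad(45.0)     # easy-axis angle relative to the field

def grain_energy(theta, H):
    return (-K * V * np.cos(theta - PSI)**2
            - MU0 * M_GRAIN * H * np.cos(theta))

theta = np.linspace(-np.pi, np.pi, 20001)
for H in [0.0, 2e5, 1e6, -1e6]:        # field strength (A/m); sign = direction
    t_eq = theta[np.argmin(grain_energy(theta, H))]
    print(f"H = {H:9.0f} A/m   equilibrium theta = {np.rad2deg(t_eq):8.2f} deg")
```

With increasing field the global minimum pulls the moment away from the easy axis toward the field direction; tracking local (rather than global) minima under a field sweep is what produces the hysteresis loops discussed above.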
Using the same model for HM multidomain particles, the authors of [81] proposed a generalized model of hybrid magnetic elastomers filled with a mixture of HM and SM microparticles. The magnetization of the SM particles was described by the Fröhlich-Kennelly equation while the interaction between the two types of particles was accounted for within the mean-field approach. First-order reversal curve (FORC) diagrams were calculated for different values of the elastic modulus of the polymer matrix. It was demonstrated that the FORC diagrams display specific new features due to interactions between HM and SM phases and matrix elasticity.
To summarize, significant progress in understanding the underlying physical phenomena in MAEs has been achieved using numerical and analytical approaches to micro/mesoscopic structure modelling. While the early models accounted only for the magnetization process, recent models take additional effects into account, allowing one to explain the significant changes in the elastic moduli of soft MAEs (with soft magnetic, hard magnetic and mixed filling) with values much closer to the experimental ones. As far as the deformation of MAE bodies in an external magnetic field is concerned, a general understanding of the factors affecting the deformation of the simplest bodies (e.g., ellipsoids of revolution) has been reached. In general, the approaches discussed in Sections 3.1.1 and 3.1.3 allowed us to establish the origin of the observed magneto-mechanical phenomena on the level of the restructuring of particles. From the results obtained by many scientific groups and the challenges they faced, it follows that additional work is required to move beyond the dipole-dipole approximation in modeling magnetic interactions in highly filled MAEs. Complex microstructures (non-uniform, anisotropic) and filler particle clustering also require more rigorous and comprehensive research. Although the effect of geometric and magnetic anisotropies of filler particles on the MAE response to external stimuli has received considerable attention in recent years, the variety of possible particle shapes and crystal structures of ferromagnetic particles makes it very difficult to reach a reasonably complete scientific understanding in this area. Thus, it is expected that future research will focus on cluster-like filler structures, polydisperse anisotropic fillers and more complex forms of interparticle interactions (both magnetic-field- and matrix-mediated).
Continuum Modeling
The most mathematically rigorous approach, with well-developed fundamentals, is the continuum approach. In this framework the composite is described as a whole using field equations. Instead of the internal structure of the material, the emphasis is put on its macroscopic response and properties. The underlying theoretical foundation consists of the theory of elasticity, the physics of magnetic materials, and thermodynamics. The main result obtained through continuum modeling is a relation between the macroscopic stress and strain tensors that takes material magnetization into account. The free energy of the system, used to obtain the constitutive relations, is described as a function of the Cauchy-Green tensor invariants as well as various convolutions of the Cauchy-Green tensor with the magnetic field vector. To construct a continuum model of MAEs it is necessary to obtain expressions for the magnetic field inside the ferromagnetic phase and for the free energy of the material. Analytical solutions of the corresponding magnetomechanical BVPs usually cannot be obtained; therefore, the finite element method (FEM) is frequently employed instead. Simpler limiting cases, such as small deformations and weak magnetic fields, can be studied rigorously.
There are two main ways of creating a continuum model: direct modeling and homogenization-based modeling. The first path requires deriving a full system of field equations that describe the mechanical, magnetic and thermodynamic characteristics of the entire sample based on its material properties and behavior; these are the so-called "top-down" models. The second path involves averaging the local characteristics of the medium and takes into account the internal structure of the composite. Obtaining explicit analytical solutions is very difficult for both approaches, especially in the general case of arbitrary deformations and magnetic fields. Custom FEM models can provide numerical solutions of the continuum equations; however, more rigorous and universal theoretical frameworks require a mathematical description of the material behavior. The most prominent approach to describing material behavior found in the scientific literature involves explicitly characterizing the thermodynamic potentials of the MAE sample, specifically the Helmholtz free energy.
Mechanical Engineering Approach
The most natural way of describing a sample's behavior on the macroscopic scale is the direct solution of equations that describe the displacement of each point of the sample under external load and the influence of the magnetic field. The sample can be described as a solid body (or a system of smaller material volumes) governed by classical mechanics. Most often, the direct approach is based on solving Newtonian equations of motion within the linear theory of elasticity together with Maxwell's equations. Alternatively, the information about the magnetic part of the problem is contained in expressions for the forces acting on each volume element or point. This approach does not capture the fundamentals of magnetomechanical coupling or the mechanisms of filler restructuring in MAEs; however, it is useful for practical applications, especially in soft robotics, which has developed rapidly in recent years.
One of the most common tools for analyzing the motion of MAE samples is Newtonian mechanics, namely the equations of translational and rotational motion within the framework of linear elasticity. The displacement of individual small elements of the sample can be described by taking into account the influence of gravitational forces, viscous or dry friction forces depending on the surrounding medium, lifting forces in the liquid, magnetic forces, as well as forces created by the shift of adjacent small elements. Calculation of each of the listed forces usually requires additional modeling considerations, experimental data, or numerical analysis.
Another common tool for describing deformation in MAE samples of simple shapes is the Euler-Bernoulli quasi-static theory of beam bending (or the more general Timoshenko-Ehrenfest beam theory [82,83]). Within this theory, an elongated object is treated as one-dimensional, and a fourth-order differential equation relating the external load to the bending at each point of the object is derived:

$$E J \frac{d^{4} w}{d x^{4}} = q,$$

where E is the modulus of elasticity of the sample, J is the area moment of inertia of the cross-section, w is the deflection at a given point, and q is the external force per unit length of the sample. In this case, this force is of a magnetic nature, so its distribution along the length depends on the distribution of magnetization in the robot. Depending on the chosen approximations, the basic equation of the Euler-Bernoulli theory reduces to a differential equation of the third or fourth order. The Euler-Bernoulli equation (or the definition of the bending moment from which it follows) is also used as one of the terms in Newton's equation of motion to obtain a more complete picture of the displacements of the robot elements. The Euler-Bernoulli theory is simple and transparent, and is therefore often used in modeling that does not require a fundamental theoretical study of the processes under consideration.

In [84], the Euler-Bernoulli theory was used to explain the bending of MAE cantilever beams with hard-magnetic particles, initially magnetized perpendicularly to the beam's plane. The magnetic field was applied in the beam's plane, perpendicular to the initial direction of the particles' magnetization (before bending). Modeling the effect of the magnetic field on the cantilever as a generalized distributed moment worked well as a phenomenological approach [84]. In particular, using an expression for the linear magnetic energy density [85], an ordinary differential equation for the beam deflection was obtained in the small-deflection-angle approximation. This equation could be solved analytically. An explicit expression for the field-induced beam stiffness showed that it is proportional to the square of the applied magnetic field strength.

A more precise, more general and more complex theoretical tool is the Cosserat rod theory [86]. This theory makes it possible to take into account tension, shear, torsion and bending of an oblong body. Cosserat's theory combines the evolution of the rod geometry (a nonlinear process) and the evolution of the mechanical characteristics of the rod (a linear process). In the Cosserat model, the rod is a quasi-one-dimensional system described by the curve $\vec{r}(s,t)$ passing through the centers of the longitudinal cross-sections, parametrized by the geodesic parameter s of the rod and evolving in time. The cross-section of the rod at each point is described by an orientation quaternion consisting of the local axes of the Lagrangian coordinate system, indicating the direction of the axis of rotation of the section, and the angle of rotation around this axis. The position of the center of the section $\vec{r}(s,t)$ evolves under the influence of the forces arising in the rod, and the orientation of the section evolves under the influence of the torques arising in the rod. The orientation quaternion of the cross-section is, of course, also related to the rotation of the magnetic moments of the filler particles in the rod.
To calculate the necessary forces and torques, the momentum balance equations are used at each point of the rod with the magnetic field serving as an external stimulus. It should be noted that the elastic properties of the rod in the Cosserat theory are described by the linear theory of elasticity. Thus, when using the Cosserat theory, it is necessary to solve a closed system of 13 equations that determine the behavior of small elements of the rod. The analytical solution of such a system is often difficult or even impossible due to the nonlinearity of the geometric relationship between the local Lagrangian and Euler coordinates, so numerical methods are used to obtain results within the framework of the Cosserat theory.
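For orientation on the scales involved, the Euler-Bernoulli equation above admits a closed-form solution for a cantilever under a uniform distributed load, which can stand in for a simple field-induced load. The geometry and material values below are assumed for illustration and are not taken from any of the cited works:

```python
import numpy as np

# Worked example of E J w'''' = q for a clamped-free cantilever under a
# uniform distributed load; all numerical values are assumed.
E = 1e5                      # elastic modulus of a soft MAE (Pa, assumed)
B_W, T = 5e-3, 1e-3          # beam width and thickness (m)
L = 30e-3                    # beam length (m)
J = B_W * T**3 / 12.0        # area moment of inertia of the cross-section
q = 1e-3                     # distributed load per unit length (N/m, assumed)

def deflection(x):
    """Classic solution of E J w'''' = q with w(0) = w'(0) = 0 (clamped end)
    and w''(L) = w'''(L) = 0 (free end)."""
    return q * x**2 * (x**2 - 4.0 * L * x + 6.0 * L**2) / (24.0 * E * J)

x = np.linspace(0.0, L, 5)
for xi, wi in zip(x, deflection(x)):
    print(f"x = {xi * 1e3:5.1f} mm   w = {wi * 1e3:6.3f} mm")
print(f"tip deflection q L^4 / (8 E J) = {q * L**4 / (8 * E * J) * 1e3:.3f} mm")
```

For a genuinely magnetic load, q would become field- and orientation-dependent, which is exactly where the phenomenological treatments of [84] and the more general Cosserat framework enter.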
Kalita et al. [87] used the expression for the elastic energy of a deformed thin elastic beam to explain the so-called critical bending of a soft-magnetic MAE induced by a magnetic field. This phenomenon is characterized by a critical exponent for the bending magnitude, and the derivative of the function characterizing the bending has a singularity in the critical region.
An important basic functional element of many actuating devices is an active soft membrane. Such membranes are used as pumps, filters and elements of devices that allow for remote-controlled handling of liquids. Membranes are a specific case of thin systems and, as such, admit theoretical descriptions of membrane-based devices that include analytical solutions of the boundary value problems of MAE behavior in external magnetic fields. The work [88] made use of both coarse-grained MD simulations and continuum modeling to study the influence of a precessing magnetic field on the magnetodeformation of a membrane consisting of a single layer of superparamagnetic colloidal particles, for varying precession angle of the magnetic field. It was shown that the ratio of the magnetic constant to the elastic constant defines the deformation mode in the system under study. The work [89] developed the membrane theory for MAE-based devices. An asymptotic expansion of the variational equations of the 3D continuum theory was used to obtain an effectively two-dimensional theory of membrane deformation. Both stress and deformation profiles of circular and annular membranes were obtained for different magnetoelastic loading conditions. The model was also validated using existing data from the literature.
Finally, another generally accepted approach to describing MAE behavior is finite element modeling using the linear continuum field equations of mechanics and magnetostatics [14,90–95]. This is the most direct macroscopic approach to the description of the physical processes. The stresses arising in the sample are divided into mechanical (of an exclusively mechanical nature) and magnetomechanical (induced by a magnetic field). The latter are calculated by solving Maxwell's equations. Then the balance equation for the total stress is solved while taking into account the influence of external forces. Since finite element calculations for complex systems in three-dimensional space require significant time and computational resources, they are usually limited to the study of two-dimensional models that qualitatively describe the real movement of the sample. The use of linear theory is likewise motivated by the long computation times of nonlinear models. The advantages of this approach are its clarity and the ability to specify an arbitrary configuration of the magnetic field, as well as the geometric characteristics of the sample, and thus to determine which system has the properties required for the intended practical application. Another advantage of finite element modeling is the existence of ready-made software packages that implement the computational foundations of the method; a specific model can then be built and optimized without the need to create new software from scratch.
Thin MAE rods are a type of system suitable both for direct modeling and for various simplified models. A dimensional reduction procedure can be carried out for such systems, which reduces the complexity of the problem by modeling the MAE as a one-dimensional system. The main modeling assumption in this case is that any material vector that was normal to the rod centerline in the undeformed configuration remains normal to it and does not experience stretching after deformation occurs. This naturally limits the model to describing simple bending but allows for a much easier analytical study of MAEs. The works [96,97] studied MAE rods with a saturated magnetically hard filler in the presence of both uniform and gradient magnetic fields. The virtual work principle and Kirchhoff-like equations of motion for rods were used, modified to include magnetic torques and forces. Long-range magnetic interactions were neglected, and the absolute value of the magnetization of different parts of the rod did not change. MAE deformation and displacement were modeled. In [96], results obtained for simple beams were extensively compared with both experimental data and full-field 3D FEM modeling. In [97], regular rods and helical MAEs were considered analytically, numerically and experimentally. The deformational behavior of the material in magnetic fields obtained theoretically was shown to be in good agreement with experimental data. Models obtained through dimensional reduction were thus shown to describe simpler MAE systems adequately and can be used to study prolate MAE samples with a high degree of symmetry more efficiently.
In [98] field-induced vibrations of a rod-shaped MAE sample fixed at one end were studied. A numerical solution of vibration equations was obtained using commercial FEM software ANSYS ® Workbench 16.2, analysis system "Modal".
In [90], thin elastomer samples containing magnetically hard particles were studied, in which different areas of the samples had different preferred directions of magnetization. Finite element modeling was performed in the ANSYS® software package with MATLAB® scripts, dividing the sample into sections, each considered to be a magnetic dipole, with the deformation of each section described using the Euler-Bernoulli beam theory. In [91], worm-like MAE samples were considered by dividing them into segments, each with its own direction of magnetization; here, samples with both hard magnetic and soft magnetic filling were studied. The proposed theoretical model resembles a simple polymer chain model in which the elastic and magnetic moments at the ends of the segments are balanced using an iterative process. The predicted material behavior largely coincided with the experimentally observed one, although the system was not described in detail.
In [99], a cuboid MAE sample was studied. A silicone elastomer was used as the polymer matrix, and NdFeB particles were used as the filler. The MAE under study had an inhomogeneous magnetization profile: the distribution of the magnetic moment direction along the length of the sample was described by a harmonic function. The authors of this paper proposed to use an oscillating magnetic field with spatially homogeneous components B_x, B_y, B_z to rotate the sample and change its shape. This was used to create movement of the MAE sample in a surrounding liquid medium or to allow it to bypass various obstacles, thus effectively creating a remotely controlled soft robot. Gradient magnetic fields were also considered. The bending of the MAE was described by solving the equations of the Euler-Bernoulli theory for a rod with free ends, where the distributed magnetic moment acted as the stimulus inducing the bending of each section of the rod.

Using energy conservation, the kinematic parameters of the sample's movement caused by successive controlled changes in its shape were calculated: the maximum height of a "jump" from a flat surface, the speed of rolling along the surface, the speed of horizontal movement ("walking"), the magnetic field required for climbing onto a water meniscus, and the swimming speed in a liquid medium. When studying the floating of a sample in a liquid, the mechanical natural frequencies of the sample were analyzed by solving the equations of Newtonian mechanics for a rod element using separation of variables. As a result, within the framework of classical mechanics and the quasi-static Euler-Bernoulli theory of beam bending, equations describing the motion of a simple MAE-based soft robot in a liquid medium and in air were obtained and solved (analytically for linear and numerically for nonlinear cases). Dependences of the kinematic parameters of various types of motion on the dimensions of the sample, as well as on the amplitude and frequency of the external magnetic field, were provided. Experimental video measurements of the characteristics of the shape and movement of the robot for various geometric parameters of the sample were carried out. Comparison of the simulation results with experimental data demonstrated the adequacy of the proposed models for all types of motion except swimming.
In [92], FEM was used for active origami-inspired designs, which incorporated active materials such as electroactive polymers and MAEs into self-folding structures. Constitutive relations were developed for both electrostrictive and MAE materials to model the coupled behaviors explicitly. Shell elements were adopted for their capacity for modeling thin films, relatively low computational cost, and ability to model the intrinsic coupled behaviors in the active materials under consideration. The electrostrictive coefficients were measured and then used as input in the constitutive modeling of the coupled behavior. The magnetization of the MAE was measured and then used to calculate the magnetic torque as a function of the spatial orientation, which led to spatial deformation of the MAEs. Through quantitative comparisons, the simulation results showed good agreement with experimental data.
The authors of [93] studied the behavior of a jellyfish-like device consisting of a magnetoactive polymer core and "tentacles" with non-magnetic ends. The device was placed in a liquid medium in the presence of an oscillating magnetic field. Based on video analysis of the movement of the device, the kinematic characteristics of the jellyfish were determined. A simple dynamic simulation of the movement of the tentacles, treated as the rotation of a sequence of small elliptical cylinders around the attachment point of each tentacle, was also carried out, and the speed of the device was calculated by integrating the Newtonian equation of motion. The calculated average velocity was consistent with the experiment for brief periods of motion. The work also evaluated the influence of the geometric dimensions of the device's components on its behavior using the Euler-Bernoulli theory and two-dimensional finite element modeling via the COMSOL Multiphysics® 5.3a software package.
In [94,95], systems of cilia-like samples were studied: soft cylinders made of a magnetically active material, fixed at one end on a specific surface. The collective motion of an array of cilia in a liquid medium and in the presence of a magnetic field was studied, taking into account their hydrodynamic interaction. In [94], magnetite was used as a magnetoactive filler, and cilia had sizes of the order of tens of micrometers; in [95], NdFeB particles acted as filler, and cilia had sizes of the order of millimeters. Such systems are capable of generating flow and waves in a fluid both for the purpose of moving external objects and for the purpose of moving the device to which they are attached. In [94], the behavior of the system was described by calculating the configuration of the magnetic field and fluid flow via the finite element method and using the obtained data to calculate the deformation of cilia according to the Euler-Bernoulli theory. In [95], the finite difference method and the Cosserat rod theory were used instead.
To summarize, the approach based on technical mechanics is pragmatic and application-oriented: it is not focused on revealing the underlying physical phenomena within the composite material but addresses the actuation of MAE-based functional elements. The properties of the constituent materials have to be known. We believe that the combination of microscopic and mesoscopic modeling, as described in the preceding sections, with the methods of technical mechanics will lead to the rapid development of fit-for-purpose MAE material design.
Invariant Theory
The fundamentals of the continuum approach to the theoretical description of magnetoactive elastomers were comprehensively described in [100–103]. All basic equations were provided in their general form, and the additional conditions and material relations were also given. These papers described the mathematical structure of the desired functions corresponding to the energy, mechanical, and magnetic characteristics of the material in terms of Lagrangian and Eulerian coordinates, the magnetic field, and the Cauchy stress tensor invariants (Figure 8). The results were obtained both directly from the balance equations for mass, momentum and energy in the Eulerian configuration, and from the minimization of the energy functional in the Lagrangian configuration. The invariant theory for tensor fields in continuum mechanics is described in great detail in the book [104].
Figure 8. Visualization of different descriptive approaches for the movement and deformation of a continuum body in space by two selected configurations in time (t = 0 is the reference configuration, t > 0 is the current configuration). Describing a movement or deformation relative to the coordinates of a reference configuration (undeformed) is called the Lagrangian description, while describing it relative to the coordinates of a current configuration (deformed) is called the Eulerian description [105].
The kinematics are described by the relations

F = ∂→x/∂→X, C = F^T F.

Here F is the deformation gradient tensor, →x are the coordinates in the current (Euler) configuration, →X are the coordinates in the reference (Lagrangian) configuration, and C is the right Cauchy-Green deformation tensor.
The Maxwell equations for a stationary case with no free currents (classic magnetostatics) read

∇ · →B = 0, ∇ × →H = 0.

Here →B is the magnetic flux density (or B-field) and →H is the magnetic field strength (or H-field).
The magnetomechanical balance equation reads

Div P + →f = 0,

where P is the reference total stress (both mechanical and magnetic), represented by the first Piola-Kirchhoff stress tensor, and →f is the total force acting on the material volume. Magnetoelastic free energy is considered as a function of the magnetoelastic invariants: there are six invariants in the case of an isotropic internal structure of the composite and there are ten invariants (I1, I2, . . . , I10) when structural anisotropy (transverse isotropy) is taken into account [102].
Additionally, the free energy function is composed of several terms corresponding to the isotropic and anisotropic mechanical contributions, the purely magnetic contributions and the coupled magnetomechanical contributions:

Ψ = Ψ(I1, . . . , I10) = Ψiso(I1, I2, I3) + Ψaniso(I7, I8) + Ψmag(I4) + Ψcouple(I5, I6, I9, I10). (11)

The coupling terms depend on the coupled invariants and are the most sophisticated aspect of this approach. Another way of constructing the free energy expression is to divide it into the polymer matrix energy, the filler particle energy and the energy corresponding to the interaction between them. The simplest case of small deformations and weak magnetic fields would lead to a free energy function with a quadratic dependence on the magnetic field and on the Cauchy strain. In general, the correspondence principle should be satisfied for the free energy expression, as it should reduce to classic elasticity and magnetostatics in the limits of infinitesimal strains and weak magnetic fields, respectively, since in those limits the coupling effects practically vanish. The purely mechanical part of the energy stored in the medium most often takes either a linear elastic form or a hyperelastic form (neo-Hookean, Mooney-Rivlin, Gent, etc.), while the magnetization energy function is usually based on the linear, Langevin, hyperbolic tangent or Fröhlich-Kennelly models. The arguments of these model functions are themselves functions of the magnetomechanical invariants. The limiting cases of small deformations as well as weak and saturation-level magnetic fields provide the asymptotically correct forms of the energy expressions, and the derivatives of the mechanical and magnetic parts of the energy correspond to the linear elastic modulus and the magnetic permeability, respectively.
Mathematical expressions for the invariants, in one commonly used notation, are as follows:

I1 = tr C, I2 = (1/2)[(tr C)² − tr(C²)], I3 = det C,
I4 = →H · →H, I5 = →H · C→H, I6 = →H · C²→H,
I7 = →N · C→N, I8 = →N · C²→N, I9 = (→N · →H)², I10 = (→N · →H)(→N · C→H),

where →N is the unit vector describing a specific preferred direction that exists in the material due to its internal structure. This set of invariants describes a transversely isotropic material. If there exists another preferred direction with its own unit vector, then additional similar invariants are introduced.
The constitutive equations for the material (as obtained via the Coleman-Noll procedure [106]) are as follows:

P = ∂Ψ/∂F, →B = −∂Ψ/∂→H.

Finally, one needs to add the material equation for magnetic materials:

→B = µ0(→H + →M),

where µ0 is the magnetic permeability of vacuum and →M is the magnetization.

A simple continuum model based on invariant theory was presented in [107]. It utilized finite deformation theory and the free energy of a neo-Hookean solid with a saturated magnetically hard filler, and was tested in ABAQUS 2016 via uniaxial prism deformation and beam bending. The remanent B-field used in the free energy was measured experimentally for all the samples. The results for small bending were obtained both analytically and numerically, and good agreement between them was demonstrated. Finite-element modeling for more complex 2D and 3D structures was compared with experimental results, and the general material behavior was captured by the model correctly. The main advantages of this model are its relative simplicity and the ability to predict the bending-type behavior of hard-magnetic MAE-based regular structures to a reasonable extent.
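For illustration, a minimal numerical sketch of these relations is given below; it is not the model of [107], and the energy coefficients, field values and the simple coupling term are arbitrary assumptions. It evaluates the invariants I1-I8 for a given deformation gradient, assembles a toy free energy of the form (11), and recovers the first Piola-Kirchhoff stress as P = ∂Ψ/∂F by finite differences, in the spirit of the Coleman-Noll relation above.

```python
import numpy as np

def invariants(F, H, N):
    """Standard invariants I1-I8 of C = F^T F with reference field H and direction N."""
    C = F.T @ F
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    I3 = np.linalg.det(C)
    I4 = H @ H
    I5 = H @ C @ H
    I6 = H @ C @ C @ H
    I7 = N @ C @ N
    I8 = N @ C @ C @ N
    return I1, I2, I3, I4, I5, I6, I7, I8

def psi(F, H, N, G=1.0e5, kappa=1.0e6, a=1.0e-6, b=1.0e-7):
    """Toy free energy: compressible neo-Hookean + simple magnetic/coupling terms
    (all coefficients are illustrative assumptions, not material data)."""
    I1, I2, I3, I4, I5, I6, I7, I8 = invariants(F, H, N)
    J = np.sqrt(I3)
    psi_mech = 0.5 * G * (I1 - 3.0 - 2.0 * np.log(J)) + 0.5 * kappa * (J - 1.0)**2
    psi_mag = -0.5 * a * I4        # purely magnetic contribution
    psi_couple = -0.5 * b * I5     # simplest magnetomechanical coupling
    return psi_mech + psi_mag + psi_couple

def piola_stress(F, H, N, eps=1e-7):
    """P = dPsi/dF via central finite differences (Coleman-Noll-type relation)."""
    P = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, j] += eps
            Fm[i, j] -= eps
            P[i, j] = (psi(Fp, H, N) - psi(Fm, H, N)) / (2.0 * eps)
    return P

if __name__ == "__main__":
    F = np.eye(3) + 0.05 * np.random.default_rng(0).standard_normal((3, 3))
    H = np.array([0.0, 0.0, 1.0e4])   # reference H-field, A/m (illustrative)
    N = np.array([0.0, 0.0, 1.0])     # preferred (chain) direction
    print("Invariants:", np.round(invariants(F, H, N), 3))
    print("P (Pa):\n", piola_stress(F, H, N))
```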
In [102], an analytical form of the stress-strain relation was derived under the condition of a linear dependence of the MAE free energy on the stress tensor invariants. In [100,103], the case of a polynomial dependence of the free energy on the invariants of the Cauchy tensor was considered. In [108], a polymer matrix containing cylindrical ferromagnetic inclusions was studied, and the strain energy density function was described by the Gent hyperelastic model. Additionally, various specific forms of the free energy expression were proposed and obtained within the phenomenological framework [102,109,110]. In [105], a much more complex form of free energy was used in combination with finite element modeling.
MAE magnetostriction was modeled in [111], where the governing equations of the stationary magnetic and the coupled mechanical BVPs were provided. The stationary magnetic BVP was given by Maxwell's macroscopic equations. The coupled mechanical BVP contained an additional body force density in the coupled magnetomechanical case; this approach comes from the book of de Groot and Suttorp [112] and results from a distinction between long- and short-range contributions of atomic interactions, which is not free from arbitrariness. The constitutive model was based on the works of Dorfmann and Ogden [103], Eringen and Maugin [113], Spieler et al. [114] and Vogel et al. [115]. Since small strains and nonlinear magnetizations were considered, restrictive conditions were introduced on the magnetization and the total stress tensor. The mechanical part of the specific free energy was characterized by an isotropic Hookean material law. Magnetization was described phenomenologically by the hyperbolic tangent model. The computational homogenization used a macro-homogeneity condition. Idealized lattices as well as compact and wavy chains were considered, along with random microstructures comprising both unstructured (random heterogeneous) and structured (chain-like) arrangements. Qualitative agreement between theory and experiments was demonstrated.
In the work [116], the resulting shape of an initially spherical MAE sample was studied. The sample was considered to be linearly magnetizable, isotropic and hyperelastic. Magnetostriction under the influence of an external uniform magnetic field was simulated using FEM. The results were compared with the sample elongation predicted by perturbation theory. It was noted that the leading factor that affects the sample behavior is the sample shape, and that it was possible to predict complex shape changes without considering the microstructure of the sample. The results obtained through modeling indicated that a spherical sample changes into an ellipsoid and then into a spindle-like object as the external field strength increases (Figure 9). An interesting result presented in this paper is that even simple, initially spherical MAE samples obtain a complex shape due to magneto-deformation in uniform fields, which is an important factor to consider for further research.

Figure 9. Field-induced shape evolution of an initially spherical MAE sample; the normalized field strength is defined as h = H/√G, where G is the MAE's shear modulus [116]. Note that the normalization is performed in the cgs system of units.
The works [117,118] studied the material response to an external magnetic field and deformation within the framework of invariant theory in addition to the dipolar mean-field theory that combines microstructural and macrocontinuum models and is discussed in Section 4.3 in more detail. The microscopic effects were not discussed in [117,118]; instead, the authors focused on the role of the initial (undeformed) shape of the sample in the material behavior. MAEs with a random isotropic filler particle distribution were considered, and one of the main assumptions of the proposed model was that, under the influence of an applied magnetic field, the initially isotropic MAE becomes transversely isotropic through the formation of anisotropic filler particle structures. Thus, the free energy of the sample was divided into an isotropic term, an anisotropic term and a magnetic term, with the magnetic part of the energy depending on the sample shape. Uniaxial deformations were studied in [117], while ref. [118] focused on shear deformations. The strong effect of the MAE shape on the sample behavior was demonstrated even within the modeling limitations of an ellipsoidal sample.
The works [119,120] tackled the problem of the analytical description of MAEs with two types of magnetic filler: rigid iron particles and soft ferrofluid particles. An explicit free energy function for isotropic MAEs with such fillers was constructed for the two-dimensional and three-dimensional cases in [119] and then used in [120]. The polymer matrix was considered to be incompressible and non-Gaussian. The filler particles were described using classic nonlinear magnetization functions (the Langevin model and the Brillouin model). The variational problem for the free energy was considered using analogies with results obtained before for electroelastic systems by the same authors in [121,122]. An approximate solution of the problem was obtained, and it was demonstrated that it is asymptotically exact for the case of small deformations and weak to moderate magnetic fields. The macroscopic magnetoelastic response (namely, the deformation gradient tensor) of suspensions of circular (2D) and spherical (3D) particles was obtained in [119] and compared with full-field FEM simulations. Spherical and cylindrical MAE samples were studied. It was noted that MAEs with ferrofluid filler exhibit a stronger magnetostriction effect than their iron-filled counterparts. Discrepancies between theoretical results and experimental data were noted for the case of cylindrical elastomer samples. In [120], the authors proposed an approximate solution that is asymptotically exact for both weak and strong magnetic fields and compared the obtained results with FEM simulations for spherical MAEs with iron and ferrofluid particles. The comparison proved the adequacy of the suggested model based on invariant theory.
The authors of [123] suggested modifying the isotropic part of the mechanical energy and provided a constitutive model with an exponential-logarithmic dependence of the energy on the first invariant I1. Both isotropic and transversely isotropic materials were modeled, and for the sake of simplicity the purely magnetic response of the material was modeled as an additional shear modulus depending exponentially on the applied magnetic field. This approach is somewhat similar to rheological modeling, and it simplifies the calculations. Stress-strain curves and the magnetorheological response of MAE samples were obtained; while the results agree with experimental data for a filler concentration around 10 vol% and weak to moderate magnetic fields, a noticeable discrepancy between modeling predictions and experimental data is observed for a filler concentration of ~20 vol% and a magnetic field of about 1 T.
The work [124] presented an alternative continuum modeling framework based on the usage of spectral invariants of the magnetomechanical medium instead of the classic invariants. These spectral invariants consist of the eigenvalues of the Cauchy-Green tensor (which can be interpreted as the principal stretches of the system) and traces of different tensors obtained as the products of vectors that define the preferred directions of the material: the external magnetic field direction, the initial filler structure anisotropy and the Cauchy-Green tensor eigenvectors. There are ten independent spectral invariants for a transversely isotropic material, and they can be expressed in terms of the classic invariants. The main advantage of this framework is the ability to vary each spectral invariant independently in a triaxial stretch test and obtain the dependence of the free energy of the system on each invariant directly. This approach presents a promising and logical direction for further theoretical studies of MAEs and other magnetosensitive solid materials.
To summarize, the theoretical approach based on invariant theory has been intensively developed in recent years. It still faces some challenges as far as the coupled invariants are concerned, but the corresponding works are under way. We believe that this approach will become more important in the coming years because it has a clear physical basis that can be conveniently transferred into computational code.
Effective Medium Theory
Since MAEs can be considered as composite materials, it is natural to employ effective medium theory (EMT) or effective medium approximations to calculate their macroscopic properties from the known physical properties of the constitutive materials. This approach has been developed in the works of Snarskii et al. [125][126][127][128]. The initial MAE microstructure was assumed to be random heterogeneous, in particular, randomly located spherical inclusions of the first phase (carbonyl iron) dispersed in a continuous polymer matrix (second phase). Contrary to the conventional EMT, the microstructure (i.e., the mutual arrangement of filler particles) of the composite material changes under the influence of an external field, while the dimensions and the shape of the sample remain constant. The following physical model was proposed: when the particle concentration ϕ is less than the critical threshold value ϕc, there is an assortment of finite clusters in the composite material (called a pre-cluster); with an increase in the concentration of inclusions ϕ→ϕc, parts of the pre-cluster connect and form an infinite cluster. When a magnetic field is applied to an MAE specimen and the elastic matrix is compliant, the particles with the attached matrix can move within the specimen (until the elastic force from the matrix stops them) and thereby increase the relative number of particles (i.e., their concentration) in the pre-cluster. The main idea of the proposed theoretical description of effective properties in the case of the field-induced rearrangement of inclusions was that the increase in their concentration in the pre-cluster can be interpreted as a decrease in the percolation threshold ϕc. This means that the reduction of the difference (ϕc − ϕ) is not attributed to a (local) increase in ϕ, but to a decrease in ϕc. With such a description, the percolation threshold ϕc is no longer a constant, but a function of the magnetic field that decreases with increasing ⟨H⟩: ϕc = ϕc(⟨H⟩), where ⟨. . .⟩ = (1/V)∫V . . . dV, V is the averaging volume, wherein the characteristic dimensions of the averaging region should be much larger than the correlation length ξ. The dependence of the percolation threshold on the external magnetic field introduced by Mitsumata et al. [129] was adopted (Equation (15)), where Hc is the characteristic magnetic field strength and ϕc0 is the percolation threshold in the absence of a magnetic field. Comparisons with experiments showed that Hc has the order of magnitude 10^5-10^6 kA/m. To describe the magnetodielectric effect in MAEs, the authors of [125] used the modified Bruggeman-Landauer (BL) [130,131] approximation, whose classical form reads

ϕ(ε1 − εe)/(ε1 + 2εe) + (1 − ϕ)(ε2 − εe)/(ε2 + 2εe) = 0,

where εe is the effective relative permittivity and ε1, ε2 are the relative permittivities of the constitutive materials. In the modified approximation, addends proportional to c(ϕ, ϕc) appear in the denominators; these represent the generalization of the BL approximation by Sarychev and Vinogradov (SV) [132]. According to [132], this renormalization is related to the additional contribution to the local field by the inclusions. In a later paper [126], this method was generalized to describe the anisotropy of the magnetically induced changes of the effective permittivity. A reasonable agreement between theory and experiment was observed.
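For illustration, the classical BL condition can be solved numerically for the effective permittivity. The sketch below (with arbitrary illustrative permittivities) uses the unmodified form, in which the percolation threshold is fixed at ϕc = 1/3:

```python
import numpy as np
from scipy.optimize import brentq

def bruggeman_eps(phi, eps1, eps2):
    """Effective permittivity from the classical Bruggeman-Landauer condition:
    phi*(eps1-e)/(eps1+2e) + (1-phi)*(eps2-e)/(eps2+2e) = 0."""
    f = lambda e: (phi * (eps1 - e) / (eps1 + 2 * e)
                   + (1 - phi) * (eps2 - e) / (eps2 + 2 * e))
    lo, hi = min(eps1, eps2), max(eps1, eps2)   # the root is bracketed by the phases
    return brentq(f, lo, hi)

# Illustrative values: eps1 for the (highly polarizable) filler, eps2 for the matrix.
for phi in (0.1, 0.2, 0.3, 0.4):
    print(phi, round(bruggeman_eps(phi, eps1=1000.0, eps2=3.0), 2))
```

In this classical form the effective permittivity rises sharply near ϕ = 1/3; the SV addends c(ϕ, ϕc) together with the field dependence ϕc(⟨H⟩) of Equation (15) shift this threshold, which is what renders εe field-dependent in the model of [125].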
To describe the magnetorheological effect, a modified system of the classical self-consistent EMT equations for the elasticity problem [133,134] was proposed, in which Ge and νe denote the effective shear modulus and the Poisson's ratio, respectively, while G1, G2, ν1, ν2 are the values of these moduli in the first and second phases.
The new term s(ϕ, ϕc), which plays the same role in the elastic problem as c(ϕ, ϕc) does in the dielectric one, involves ϕc as the function of the external magnetic field given by Equation (15) [127]. The concept of the field-dependent percolation threshold allowed one to describe in a unified manner the magnetodielectric effect and the non-monotonic field dependence of the magnetic permeability in MAEs [125], and to explain the order of magnitude of the giant or colossal MR effect. Therefore, the restructuring of the filler has to be taken into account when considering the MR effect in MAEs. Recently, Chougale et al. [135] pointed out that the hydrodynamic reinforcement factor k plays a key role in the drastic increase in the MR effect due to the restructuring of the filler and argued that the divergence of k is equivalent to the definition of the percolation threshold by Snarskii et al. Interestingly, the model of the movable percolation threshold predicts a significant change in the Poisson's ratio of compliant MAEs in external magnetic fields. It was proposed to use the measurement of the Poisson's ratio as a verification test for this theoretical model. The model does not exclude alternative mechanisms, which may be present simultaneously and should also contribute to the field-stiffening or magnetorheological effects (by further enhancement).
If a percolating structure comes into play in a composite material, its existence must be observed in several physical properties (cross-property relations) [128]. An important next step should be a theoretical explanation of the empirical relationship (15) proposed in [129]. In particular, the relation of the critical magnetic field Hc to the physical properties of a composite material and its constitutive components has to be established [128].
To summarize, the EMT is capable of offering a unified theoretical approach for the description and explanation of different physical phenomena in MAEs caused by the restructuring of particles. Describing the field-induced anisotropy of the mechanical properties of MAEs is currently a challenge because a suitable formulation of the self-consistent EMT for the elasticity problem is not available.
The overall challenges facing the field of MAE continuum modeling stem from the very foundations of this approach. The equations describing the material behavior are much more complex than the classic equations of particle motion; those equations are frequently nonlinear and include coupled magnetomechanical material relations. Various aspects of such models require further refinement: taking into account the viscoelastic properties of the polymer matrices, rigorously incorporating nonlinear magnetization models and magnetic anisotropy into the constitutive equations, and analyzing the coupling terms of the material energy function in more detail to better capture the behavior of soft MAEs in the presence of strong magnetic fields. As discussed in Section 3.2, several attempts to tackle these problems have been made recently; however, a more complete continuum model of MAE behavior that combines all of the mentioned factors has not been developed yet. On the other hand, it should be mentioned that considerable progress has been made in regard to the formalization of the fundamentals of MAE physics. Several works that provide a deep analysis of the thermodynamics of magneto-dielectric-mechanical systems as well as the general properties of the corresponding energy functions have emerged. One can expect that more rigorous and general solutions of the magnetomechanical continuum problems that do not rely on the small-deformation and weak-magnetic-field approximations will appear in the scientific literature in the coming years.
Rheological Modeling
Rheological modeling is the simplest approach to the description of MAE behavior, and it is intrinsically tied to experimental studies. The dynamic material behavior is described phenomenologically on a macroscopic scale using a system of mechanical elements connected in series or in parallel. This approach is similar to describing the behavior of an electronic circuit with a circuit diagram. Each mechanical element in the equivalent circuit has its own stress-strain relation that can depend on the external magnetic field, and the resulting stress-strain relation of the equivalent circuit corresponds to the material behavior. Model parameters are calculated via error functional minimization by comparing the modeling output with data from dynamic mechanical loading experiments on MAEs in the presence of an external magnetic field. Typical elements include linear elastic springs, Newtonian dashpots, plastic elements, friction elements, nonlinear springs and fractional viscoelastic elements. If the trends of the model parameters' dependences on the material composition and magnetic field are established and approximated, the model can have predictive value.
Dynamic mechanical analysis is a phenomenological approach, so any rheological model is adjusted to describe specific objects, experiments and load cases. The parameters of the model elements in this approach are obtained by fitting the dependences of the characteristics of a typical sample on the external stimuli obtained from experimental measurements. The multicomponent composition, the presence of particle-matrix interfaces, magnetic and elastic memory effects as well as magnetomechanical coupling lead to a very complex transient rheological response of MAEs. It was shown that in temporally stepwise changing magnetic fields and oscillation amplitudes, at least three exponential functions are required to reasonably describe the time behavior of the storage shear modulus of MAEs on long time scales exceeding tens of minutes [136]. The corresponding time constants of the three identified structuring processes differ by one order of magnitude [136]. The presence of different relaxation mechanisms at different time and length scales requires the construction of rheological schemes with several relaxation and retardation times. Attempts to use classical rheological schemes to describe the viscoelastic behavior of magnetoactive elastomers lead to significant complication of the model [137], which is especially noticeable when it is required to describe the behavior of the material over a wide range of magnetic field strengths. The viscoelastic behavior of magnetoactive elastomers can change significantly with a change in the magnetic field; therefore, a sufficiently flexible rheological model is required. The discussed approach was also implemented for MAEs using nonclassical rheological elements [137][138][139][140][141]; however, the dependence of the model parameters on the external field makes constructing rheological models without a priori knowledge of the magnetic properties of the material rather challenging. The information about MAE magnetization curves must then be obtained using some other experimental or theoretical research methods. In particular, in [142] the magnetization model of MAEs was combined with magnetic dipole theory and a quantitative description of the frequency-dependent shear modulus in various magnetic fields; as a result, the authors proposed a generalized Maxwell model connected in parallel with a magneto-induced modulus model, which was able to describe the frequency and magnetic field dependencies of the dynamic shear modulus of isotropic MAEs based on silicone rubber filled with carbonyl iron microparticles. The parameters of the magnetization model were obtained from fitting experimental magnetization curves.
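For illustration, such a multi-exponential description can be fitted with standard tools. The sketch below generates synthetic storage-modulus data of the three-exponential form reported (in form) in [136], with all amplitudes and time constants invented for demonstration, and recovers the three well-separated time constants:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_transient(t, g_inf, a1, tau1, a2, tau2, a3, tau3):
    """Three-exponential relaxation of the storage modulus after a stepwise
    change of the magnetic field (functional form only; constants are invented)."""
    return (g_inf + a1 * np.exp(-t / tau1)
                  + a2 * np.exp(-t / tau2)
                  + a3 * np.exp(-t / tau3))

rng = np.random.default_rng(1)
t = np.linspace(1.0, 3000.0, 400)                    # time, s
true = (150.0, 40.0, 5.0, 25.0, 60.0, 15.0, 700.0)   # kPa / s, illustrative values
data = g_transient(t, *true) + rng.normal(0.0, 0.5, t.size)

p0 = (100.0, 30.0, 2.0, 30.0, 30.0, 30.0, 300.0)     # rough initial guess
popt, _ = curve_fit(g_transient, t, data, p0=p0, maxfev=20000)
print("fitted time constants (s):", sorted(popt[2::2]))
```

Note that the recovered time constants differ by roughly one order of magnitude each, consistent with the separation of structuring processes described above.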
There exists a different class of rheological elements intrinsically suitable for viscoelasticity modeling by virtue of the mathematical definition of their stress-strain relation and possessing sufficient flexibility to describe a wide variety of processes without complicating the rheological model. This class of elements is called fractional rheological elements and, as the name suggests, the stress-strain relation for such elements is represented by a fractional-order differential equation. Fractional elements exhibit memory effects and not only generalize rheological models, but also fundamentally enrich them. There exist several forms of fractional differential operators; one of them, the left-handed Riemann-Liouville fractional derivative of order α (0 < α < 1), which was used, for instance, in [143,144], can be expressed as follows:

D^α ε(t) = (1/Γ(1 − α)) (d/dt) ∫0^t ε(τ)/(t − τ)^α dτ.

Here Γ(x) is Euler's gamma function. The stress-strain relation for this fractional element has the following form:

σ(t) = c D^α ε(t).

Thus, it is characterized by two parameters, the fractional order α and the viscoelasticity coefficient c, which makes it more flexible than the classical elements. For the limiting cases α → 0 and α → 1, one has the following relations connecting fractional viscoelasticity with classical viscoelasticity:

σ(t) = c ε(t) (α → 0), σ(t) = c dε(t)/dt (α → 1).

It means that the fractional element becomes a Hookean spring in the limit α → 0, and it turns into a Newtonian dashpot in the opposite limit α → 1.
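Under harmonic loading, the relation σ = c D^α ε yields the complex modulus G*(ω) = c(iω)^α, so the fractional element interpolates smoothly between the spring and dashpot limits. A minimal sketch (with arbitrary parameter values) for a fractional Maxwell branch:

```python
import numpy as np

def springpot_modulus(omega, c, alpha):
    """Complex modulus of a fractional element ("springpot"): G*(w) = c*(i*w)**alpha.
    alpha=0 recovers a Hookean spring (G*=c), alpha=1 a Newtonian dashpot (G*=i*w*c)."""
    return c * (1j * omega) ** alpha

def fractional_maxwell_modulus(omega, G, c, alpha):
    """Spring (G) in series with a springpot (c, alpha): 1/G* = 1/G + 1/G*_springpot."""
    g_sp = springpot_modulus(omega, c, alpha)
    return 1.0 / (1.0 / G + 1.0 / g_sp)

omega = np.logspace(-2, 3, 6)            # angular frequency, rad/s
for a in (0.0, 0.3, 0.7, 1.0):
    gstar = fractional_maxwell_modulus(omega, G=1.0e5, c=1.0e4, alpha=a)
    print(f"alpha={a}: storage modulus G' = {gstar.real.round(1)}")
```

At α = 1 the scheme reduces to the classical Maxwell model, which makes the extra flexibility of the fractional order directly visible in the frequency dependence of G'.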
In recent years, works describing the MAE behavior using fractional rheological models have emerged. It has been shown that even the simplest classical models, namely, the Maxwell and the Kelvin-Voigt models of a viscoelastic medium, can in principle describe the viscoelastic behavior of an MAE sample if the classical dashpot is replaced by the fractional element (Figure 10a,b). It has been shown that these simplest one-fractional-element rheological schemes can adequately describe the viscoelasticity of MAEs with various concentrations of magnetic filler in small magnetic fields, where the major contribution to the dynamic response comes from the polymer network [143]. They can also be used in saturating magnetic fields, where the dynamic response is dominated by the response of a strong magnetic network formed by the filler particles [143]. In intermediate magnetic fields, when the magnetomechanical coupling plays an important role and restructuring of magnetic particles takes place, one needs to use more sophisticated fractional schemes. The works [144][145][146][147] used the Zener model that corresponds to the standard linear solid model (Figure 10c). In [147], the four-parameter Zener model with a fractional element instead of a classical dashpot was used to describe the stress relaxation behavior of anisotropic MAEs. The model was in good agreement with experimental data obtained in both single- and multi-step relaxation tests. In [146], the Zener model was modified not only by replacing the dashpot with the fractional element but also by using springs with field-dependent spring stiffness. It can describe well the dynamic mechanical behavior of isotropic MAEs in various magnetic fields, both in the time and frequency domains. In [143], a more accurate model was proposed; it corresponds to the generalized Maxwell model with two branches (Figure 10d). Fitting the experimental data with the results from the generalized Maxwell model has demonstrated that the evolution of the fractional order parameters for this model follows the three scenarios of MAE viscoelastic property changes observed in small, intermediate and strong magnetic fields, and this was interpreted in terms of the MAE microstructure evolution. The authors of Ref. [148] compared the performance of two models: a classical Maxwell model with six fitting parameters and a fractional Maxwell model with five fitting parameters, and concluded that the latter is better in terms of accuracy, simplicity and flexibility. The works [149,150] proposed combined phenomenological models consisting of a fractional rheological circuit that describes viscoelasticity, as well as elements corresponding to other aspects of material behavior. In [149], elements describing the friction and the dipole-dipole interaction of filler particles were considered. In [150], elements that reproduce the nonlinear magnetization of the material, nonlinear elasticity and elastoplasticity were used. In [151], the fractional Maxwell model was coupled with a stochastic linearized Bouc-Wen component. This eight-parameter model described the viscoelastic as well as the magnetic field- and strain-dependent behavior of MAEs with a high accuracy exceeding 91%. In [152], the elastic-plastic model with linear hardening was adopted. To summarize, rheological modeling is actively developing nowadays.
The main trend is to combine rheological schemes with models of field-dependent and MAE-composition-dependent dynamic shear modulus and to enhance the predictive capacity of the rheological models in a wide range of loads and magnetic field strengths. In spite of a lack of physical meaning in this type of approach, the rheological models can be very effective in modeling dynamic response of MAE elements used in various practical applications, in particular, for vibration control and isolation. The prospect of combining rheological modeling with continuum and/or microscopic models seems very promising. It could allow one to create models that are reasonably easy to use and have clearly defined physical meaning inherent to the two other approaches mentioned in this article. Rheological modeling is also being used in MAE hysteresis models, which is another promising field of MAE research that would be better discussed in a separate review. Rheological models found currently in the literature do not describe anisotropic response of MAEs and utilize simple approximations for the dependences of the rheological elements' properties on the applied magnetic field. Thus, it follows that such flaws should be addressed in order to create rheological models with higher degrees of generality.
Multi-Scale Modeling Approaches
The most widely spread and the most physically consistent combined modeling framework found in the literature is the combination of microscopic (or mesoscopic) and macroscopic scales. Both the microstructure and the sample shape effects are taken into account in combined modeling, leading to a more holistic description of material behavior. Local characteristics are used to define macroscopic characteristics in these approaches. Naturally, solving multiscale problems is a complex task that requires a lot of theoretical considerations and computational resources. Direct characterization of the material as a composite medium is difficult, and an alternative to such an approach is the homogenization of the micro-scale medium, which creates combined micro-macroscopic models. Micro-scale models can be based on solving equations of motion for filler particles or treating them as a microcontinuum. The scale of the problem under study is of utmost importance for the theoretical description of MAEs as not only are the mechanical and magnetic phenomena connected in such materials, but the processes occurring on the scales of filler particles and the entire sample are closely connected as well. Combined microstructural and macrostructural models (Figure 11) bridge the gap between the different scales using various homogenization procedures through constructing representative volume elements (RVEs) and averaging with volume integration. RVE characteristics are used to obtain macroscopic parameters that are then used in a sample-scale model. An important assumption employed in most multi-scale models is the length scales separation hypothesis, according to which the different spatial scales in the material are geometrically decoupled, so any microscopic or mesoscopic structural element is seen as a material point on a macroscopic scale. The processes occurring on different scales influence each other by iteratively transferring field variable data between scales (for example, the average magnetization of a microscopic element is assigned to a single point in a macroscopic model). There are several approaches to microscopic element modeling, microscale homogenization, sample modeling and description of the surrounding volume.
Figure 11. (a) Sketch of the decomposition of an MAE sample into short- and long-range effects; (b) Formal discretization of the sample volume Vs into mesoscopic portions Vα, α ∈ [1, N]. On such scales any particle microstructure appears as a homogeneous continuous distribution [153].
Representative Element Homogenization Approaches
Multi-scale modeling is usually based on the internal microstructure of the material. Thus, the homogenization procedures are based on calculations of macroscopic sample behavior using averages of microscopic quantities in representative elements. The representative elements can be obtained using periodic boundary conditions, ordered filler structure models or statistical procedures that calculate the element size based on the deviation of the resulting element characteristics from the macroscopic average.
A framework for analytical magnetoelastic homogenization in MAEs was given and discussed in [154] for a static two-dimensional case. The approach was based on microscopic volume averaging and partial decoupling of the variational magnetomechanical problem. Uniaxial loading in the presence of an external magnetic field for a sample containing elliptic particles of various sizes was studied in [155] in a quasi-static regime using analytical considerations in tandem with FEM modeling. This variational approach was further developed by Danas in [156], where it was described as periodic homogenization. The local homogenization problem was substituted with a simpler periodic filler structure that consisted of single-particle cells, and the variational problem was changed correspondingly to account for perturbations arising due to the employed periodic approximation. The influence of the filler volume concentration, distribution, particle shape and orientation on the magnetization and magnetostriction of two-dimensional MAEs was studied. Recently, numerical homogenization was carried out in [157] for the case of isotropic three-dimensional MAEs with magnetically hard filler. The periodic homogenization procedure was improved to include RVEs with several particles of varying sizes and evolution in time (incremental periodic homogenization). The proposed model also included a magnetic dissipation potential. An explicit analytical model based on invariant theory was also developed and compared with the homogenization results. The analytical model was verified by solving the MAE cantilever beam deflection problem. It was found that for moderately stiff and stiff polymer matrices (with a shear modulus higher than 150 kPa) the model's predictions were in line with the homogenization results as well as experimental data found in the literature.
In [78], a procedure for constructing a representative volume element of an MAE for the case of spherical filler particles was presented. The authors compared two approaches to averaging and homogenization of the properties of the medium. The first approach involved deriving the weak form of the continuum equations of the medium using the variational representation and a numerical solution of the resulting equations. The second approach was based on generating random distributions of filler particles as part of an iterative process, with each iteration checking whether the current element qualifies as a representative volume element using a statistical procedure. The calculation of the parameters of a real representative element was then performed on the basis of the convergence of the physical characteristics of the system (elasticity modulus and effective magnetic permeability) with an increase in the number of statistical realizations. The calculation of these characteristics was carried out using the finite element method. The authors calculated the dimensions of a representative volume element for the case of a two-dimensional system, a polymer matrix with a strain energy density of the neo-Hookean form, a fixed filler concentration, and specific mechanical and magnetic properties of the system's constituents. It was shown that the results obtained using both of the considered approaches to RVE modeling largely coincided.
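A hedged toy version of such a statistical procedure is sketched below: random non-overlapping disks are generated in a 2D cell, a cheap Maxwell-Garnett estimate stands in for the FEM evaluation of each realization used in [78], and the element is accepted once the running mean of the effective property stabilizes. All tolerances and material parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_disks(n, r, L=1.0, max_tries=100_000):
    """Random sequential addition of up to n non-overlapping disks of radius r."""
    centers, tries = [], 0
    while len(centers) < n and tries < max_tries:
        p = rng.uniform(r, L - r, size=2)
        if all(np.linalg.norm(p - q) >= 2 * r for q in centers):
            centers.append(p)
        tries += 1
    return np.array(centers)

def mg_estimate(phi, mu_p=100.0, mu_m=1.0):
    """Maxwell-Garnett estimate of the effective 2D permeability: a cheap
    stand-in for the per-realization FEM evaluation."""
    beta = (mu_p - mu_m) / (mu_p + mu_m)
    return mu_m * (1 + phi * beta) / (1 - phi * beta)

r, vals = 0.03, []
for k in range(1, 501):
    n_k = rng.poisson(40)                  # particle count fluctuates per realization
    c = random_disks(n_k, r)
    phi = len(c) * np.pi * r**2            # realized area fraction
    vals.append(mg_estimate(phi))
    sem = np.std(vals) / np.sqrt(k)        # standard error of the running mean
    if k > 20 and sem < 5e-3 * np.mean(vals):
        print(f"accepted after {k} realizations, mu_eff ~ {np.mean(vals):.4f}")
        break
```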
A microstructure-based constitutive model for hard-magnetic MAEs was considered in [158]. Under stress-free conditions of the post-cured MAEs, the composite was assumed to reach an equilibrium state where the polymeric network balances the dipole-dipole interactions of magnetic particles. Within a single framework, the model described the overall magneto-mechanical response of the MAEs with magnetically hard filler considering the specific contributions of its phases. The numerical results revealed that a pre-deformation of the polymeric network is required to reach a consistent mechanical balance in the presence of magnetized particles. The change in the distances between particles during the MAE deformation led to changes in the dipole-dipole interactions affecting the overall response of the composite. This effect was noted to be particularly important in the absence of an external magnetic field.
In [159], the hard-magnetic, compliant MAE was modeled as a three-dimensional micropolar continuum body subjected to external magnetic stimuli. From the angular momentum balance law, it was deduced that the Cauchy stress tensor in these materials cannot be symmetric. Therefore, the micropolar continuum theory [160], with its inherently asymmetric stress tensor, was chosen as a rational candidate for modeling the deformation of these materials. In micropolar continuum theory, each material particle is associated with a microstructure that can undergo only rigid rotations, independently from the surrounding medium. Therefore, each particle possesses six degrees of freedom: three translational ones, which are assigned to the macro-element, and three rotational ones, which are related to the micro-structure. From the kinetic point of view, the interaction between two adjacent surface elements was considered via a couple vector in addition to the traditional traction vector, which led to the definition of the couple stress tensor. It was shown that the presented formulation can successfully predict the deformation of hard-magnetic soft materials under various loading and boundary conditions.
FE2-Approach
The FE2 method is a robust multi-scale FEM modeling approach that assumes the existence of two classical continuum scales: microscopic and macroscopic. Each macroscopic node of the FE mesh corresponds to a microscopic element of the material. In the case of so-called weakly coupled scales, this microscopic element is a representative volume element (RVE) of the sample. For the sake of simplicity, weakly coupled scale models are more widely used than their strongly coupled counterparts, which do not assume any kind of statistical representation of the material microstructure. Another important assumption of the FE2 method is the separation of scales: the scale of an RVE is much smaller than the basic scale of the macroscopic continuum.
In order to describe the material behavior, BVPs on both scales must be solved. This is an iterative process, with each iteration consisting of several steps. First, the values of macroscopic variables (such as stress, strain and magnetic field) are assigned to each node, starting from the initial material state. These values are then used as boundary conditions for the microscopic BVP for an RVE, solved using the weak formulation. The obtained results are then averaged via a homogenization procedure, and the averaged values are assigned to the nodes of the macroscopic mesh. Finally, the macroscopic BVP is solved, and the next iteration is prepared. This process is carried out until the changes between iterations become negligible. An MAE is a nonlinear material, so obtaining a solution is demanding in terms of computational resources. Various linearization methods can be employed to decrease the resource requirements.
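The structure of this iteration can be sketched in the simplest imaginable setting: a 1D bar whose every macro element owns a micro cell with a strain-dependent two-phase stiffness (an illustrative stand-in for magnetic stiffening, not an actual FE2 formulation). The homogenized tangent from each micro solve feeds the macro solve, and the loop repeats until the macro strains stop changing.

```python
import numpy as np

def micro_tangent(strain, E_m=1.0e5, E_p0=1.0e6, a=50.0, phi=0.3):
    """Micro 'RVE' solve: two phases in series (Reuss bound), where the particle
    phase stiffens with the macro strain handed down from the macro element."""
    E_p = E_p0 * (1.0 + a * strain**2)
    return 1.0 / (phi / E_p + (1.0 - phi) / E_m)

def macro_solve(E_el, L=1.0, F=2.0e3):
    """Macro solve: 1D bar, left end fixed, axial force F at the right end.
    With a constant axial force the strain in element e is F / (A * E_e), A = 1."""
    n = len(E_el)
    strains = F / E_el
    u = np.concatenate(([0.0], np.cumsum(strains * (L / n))))
    return u, strains

n_el = 10
strains = np.zeros(n_el)                                  # initial macro state
for it in range(50):
    E_el = np.array([micro_tangent(e) for e in strains])  # one micro solve per element
    u, new_strains = macro_solve(E_el)                    # macro solve with homogenized moduli
    if np.max(np.abs(new_strains - strains)) < 1e-12:
        break
    strains = new_strains
print(f"converged in {it} iterations; tip displacement u_L = {u[-1]:.6f}")
```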
The work [161] presented a framework for solving magnetomechanical BVPs using FEM on microscopic and macroscopic scales for large strains. Homogenization theory was applied to the BVP on the microscale to create a self-consistent model, where macroscopic quantities (deformation gradient and H-field) were used to obtain the average stress and B-field in microscopic elements by employing the generalized Hill-Mandel condition [162,163] in a microscopic BVP:

⟨P : δF⟩V − ⟨→B · δ→H⟩V = P̄ : δF̄ − B̄ · δH̄,

where the line above a symbol denotes a macroscopic physical quantity, δ is the variation or increment of a physical quantity and ⟨∗⟩V denotes the volume average of ∗. This condition provides a connection between macroscopic quantities and microscopic averages. These averages were then used in a linearized macroscopic BVP to calculate the new deformation gradient and H-field. Microscopic elements containing a single filler particle were considered with varying microstructure orientation, which translates into varying rotation angles for the microscopic cells. Material stiffening in a uniform magnetic field under shear load was studied, and the obtained results were noted to be in accordance with experimentally observed phenomena. Magneto-electric-mechanical coupling was considered in [164]. The effects of the sample shape were additionally studied in [165]: two-dimensional rectangular and elliptic samples were considered in order to compare the results with analytical predictions. Continuum theory based around the influence of the Maxwell stress on the sample boundary shape was used to obtain analytical estimations. The effects of the magnetic properties and the shape of the sample on the Maxwell tractions on its boundary were discussed. It was shown that information about the internal stress state of the sample can be obtained using the tractions measured on its boundary.
The Fourier transform method can be used in order to reduce the computational complexity of the microscopic problem. In [166], a comprehensive step-by-step algorithm of the fast Fourier transform (FFT) method for nonlinear magnetoelasticity was provided along with its mathematical justification. It was used to simulate the material response to external load and magnetic field in 2D and 3D for a neo-Hookean polymer medium and a hyperbolic magnetization model. This work provided a rigorous framework that allows one to incorporate Fourier-space-based homogenization into multiscale MAE modeling.
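As a hedged illustration of the idea, the sketch below implements the basic fixed-point FFT scheme for a linear magnetostatic analog (∇ × →H = 0, ∇ · (µ→H) = 0 on a periodic 2D cell with a prescribed average field), which is much simpler than the nonlinear magnetoelastic algorithm of [166]; the grid size, contrast and reference permeability are arbitrary choices.

```python
import numpy as np

N = 64                                    # grid points per side of the periodic 2D cell
x = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(x, x, indexing="ij")
mu = np.where((X - 0.5)**2 + (Y - 0.5)**2 < 0.2**2, 10.0, 1.0)  # circular inclusion

H_bar = np.array([1.0, 0.0])              # prescribed average H-field
mu_ref = 0.5 * (mu.max() + mu.min())      # reference permeability of the Green operator

k = np.fft.fftfreq(N) * N                 # integer spatial frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                            # avoid 0/0 at the mean (handled separately)

H = np.stack([np.full((N, N), H_bar[0]), np.full((N, N), H_bar[1])])
for it in range(1000):
    tau = (mu - mu_ref) * H               # polarization w.r.t. the reference medium
    t_hat = np.fft.fft2(tau, axes=(1, 2))
    k_dot_t = KX * t_hat[0] + KY * t_hat[1]
    g_hat = np.stack([KX, KY]) * k_dot_t / (mu_ref * K2)  # Green operator applied to tau
    g_hat[:, 0, 0] = 0.0                  # the fluctuation field has zero mean
    H_new = H_bar[:, None, None] - np.fft.ifft2(g_hat, axes=(1, 2)).real
    if np.max(np.abs(H_new - H)) < 1e-8:
        break
    H = H_new

mu_eff = np.mean(mu * H[0])               # <B_x> for H_bar = e_x gives mu_eff
print(f"{it} iterations, effective permeability mu_eff = {mu_eff:.4f}")
```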
Mean Field Approach
Another way of taking into account both microscopic and macroscopic effects was proposed by Ivaneyko et al. [167], who refer to it as the mean field theory. It is obvious that, even when using the dipole approximation for magnetic interactions, calculating the contributions to the magnetic field and energy from every single filler particle in an MAE sample is an incredibly demanding task. In the dipole theory, the magnetic field inside the sample can be expressed as a superposition of dipolar contributions from every particle at a given point in space. A dimensionless shape factor f, built as a normalized sum of these dipolar contributions over all particle pairs, can then be introduced, where ϕ is the volume concentration of the ferromagnetic filler, →r_ij = r_ij →e_r is the vector connecting the centers of particles with numbers i and j, and →e_m is the unit vector denoting the direction of the magnetic moment of particle j. The shape factor represents the distribution of filler particles in the sample.
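The structure of such dipolar sums can be made concrete with a truncated lattice sum. The sketch below evaluates Σj (1 − 3cos²θj)/(rj/d)³ for moments aligned with the field along z, comparing a simple cubic arrangement inside a spherical cutoff with a single particle chain; the normalization and sign convention are assumptions, since they differ between papers.

```python
import numpy as np

def dipolar_sum(points):
    """Sum of (1 - 3*cos^2(theta)) / (r/d)^3 over lattice nodes (d = 1),
    with theta measured from the magnetization direction e_z."""
    r = np.linalg.norm(points, axis=1)
    cos2 = (points[:, 2] / r) ** 2
    return np.sum((1.0 - 3.0 * cos2) / r**3)

R = 15                                        # spherical cutoff, in lattice units
g = np.arange(-R, R + 1)
P = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T
P = P[np.any(P != 0, axis=1)]                 # drop the particle at the origin
sphere = P[np.linalg.norm(P, axis=1) <= R]    # simple cubic lattice in a sphere

chain = np.array([[0, 0, z] for z in range(1, 1000)] +
                 [[0, 0, -z] for z in range(1, 1000)])

print("simple cubic (sphere):", round(dipolar_sum(sphere), 4))  # ~ 0 by cubic symmetry
print("single chain along z :", round(dipolar_sum(chain), 4))   # ~ -4*zeta(3) ~ -4.81
```

The two limiting cases illustrate why the shape factor encodes the microstructure: the isotropic cubic arrangement contributes essentially nothing, while a field-aligned chain gives a large negative (energetically favorable) contribution.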
Another important aspect of the modeling approach in question is the decomposition of the problem into two parts: a mesoscopic sphere surrounding the chosen point in the material and the rest of the sample. Due to the distance dependence of the dipolar magnetic field, the particles located far enough from a given particle can be considered to be independent from it. The mesoscopic part of the magnetic field heavily depends on the microstructure, while the sample (or macroscopic) part of the magnetic field depends on the sample shape and average magnetization:

f = f_micro + f_macro,

where f_micro is the sum over particles inside the mesoscopic sphere and f_macro is the sum over particles outside of it. The macroscopic sum can be replaced by an integral due to the fact that differences between contributions from different particles become negligible far away from the center of the mesoscopic sphere. The macro part of the function f can be calculated more easily due to further homogenization of the field and density variables, and it is related to the demagnetization factor of the sample. The micro part's complexity depends on the local distribution of filler particles. This approximation was dubbed by the authors the dipolar mean field theory. Bulk magnetization can then be obtained using a chosen magnetization model for the filler particles and will depend on f. In that work, the equilibrium deformation and magnetization for simple cubic, body-centered cubic, hexagonal close-packed and tetragonal lattices representing filler particle distributions were calculated using the simplest form of the elastic and magnetic energy of the sample. The results of the direct summation in f were compared to the results obtained within the approximation framework, and the agreement was found to be good. This theoretical approach to describing the properties of magnetoactive elastomers was further developed in [168][169][170]. The magnetic field inside the material can be represented as a combination of a local field, which is determined by the mutual arrangement of particles in the region near the selected particle, and a macroscopic field, which is determined by the average characteristics of the sample and its form factor. The filler particles were described as linearly magnetizable magnetic dipoles, and the total magnetization was calculated for an ellipsoid-shaped MAE sample with a magnetic filler concentration not exceeding 20% by volume. Using the linear theory of elasticity, the total energy of an ellipsoidal sample of a magnetoactive elastomer was also calculated. The result of both of these assumptions was an integral equation for the magnetization, which was solved iteratively. The short-range effects were ignored. Sample magnetization and deformation were obtained for random filler particle distributions and cylindrical structures. In [168], the dipolar mean field approach was compared with full-field FEM simulations, and it was concluded that for systems with sufficiently low filler concentrations both approaches provide qualitatively and quantitatively the same results. In [169,170], based on the principle that filler particles tend to form elongated structures inside the polymer matrix in the presence of a magnetic field, an assumption was made within this framework that the filler structures could be modeled as magnetized continuous medium areas within the sample volume.
With that assumption in effect, the magnetic field inside the material can be calculated as an integral of the dipolar contributions from each particle using a density function instead of a direct sum of those contributions. This can be equated to ensemble averaging over all possible configurations of the microstructure. Additionally, the contribution from the particle located at a given point must be excluded, so the integration area is truncated. This assumption allows the model to take into account the microstructural effects arising from dipole-dipole interparticle interactions while at the same time building a continuous model that is more appropriate for analytical studies and easier to solve numerically. In [169], within the framework of this approach, the influence of the initial distribution of filler particles on the energy of a magnetoactive elastomer was taken into account under the assumption of physically small local deformations of the material and weak magnetic fields. It was shown that the initial distribution of particles affects the mechanical behavior of the composite, in particular, the type of material magnetostriction: compression or stretching. A modification of such a model using the mean field theory was proposed in [170].
In [171], another interpretation of the mean field approach was offered: magnetization was considered to be a superposition of the average filler magnetization and a local perturbation. The theoretical framework was generalized by introducing operator formalism. Nonlinear magnetization models were also considered. The work [153] further generalized this approach by discretizing the sample volume into a set of mesoscopic volumes, with the microstructure in each of them not directly affecting the other mesoscopic volumes (akin to the FE2 approach). The sample was characterized by a macroscopic average magnetization; each mesoscopic volume, by a mesoscopic average magnetization that deviated from the macroscopic average; and each point inside a mesoscopic volume, by a local deviation from the mesoscopic average. Each mesoscopic volume can also be modeled as a dipole with characteristics corresponding to the averages of the local fields, and thus f_macro is analogous to f_micro, but represents the dipolar structure on a different scale without taking deviations into consideration. The Taylor linearization of the general nonlinear magnetization with respect to the deviation of the local magnetic field from the average field served as the basis for obtaining a self-consistent magnetization equation. This work presented a way to decouple and explicitly calculate the leading magnetic effects corresponding to different scales of the MAE internal structure.
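The self-consistent character of such magnetization equations can be illustrated with a zero-dimensional toy: with a Langevin law and a mean dipolar feedback through a structure/shape factor f, the reduced magnetization must satisfy m = L(β(h + f m)), which is solved below by damped fixed-point iteration (all parameters are dimensionless and illustrative).

```python
import numpy as np

def langevin(x):
    # L(x) = coth(x) - 1/x, with the small-argument limit x/3 handled explicitly
    return np.where(np.abs(x) < 1e-4, x / 3.0, 1.0 / np.tanh(x) - 1.0 / x)

def self_consistent_m(h, f=0.8, beta=2.0, tol=1e-12, damping=0.5):
    """Solve m = L(beta * (h + f*m)) by damped fixed-point iteration.
    f plays the role of the shape- and structure-dependent mean-field factor."""
    m = 0.0
    for _ in range(10_000):
        m_new = float(langevin(beta * (h + f * m)))
        if abs(m_new - m) < tol:
            return m_new
        m = (1.0 - damping) * m + damping * m_new
    raise RuntimeError("fixed point did not converge")

for h in (0.1, 0.5, 1.0, 3.0):
    print(f"h = {h}: m = {self_consistent_m(h):.5f}")
```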
In [172], several theoretical models were used to describe MAE behavior and, in particular, the influence of microstructural effects on the magnetic-field-induced deformations of MAEs. The macroscopic behavior was described using invariant theory and free-energy-based constitutive equations. The microscopic/mesoscopic effects were modeled using two different approaches: micro-continuum modeling with invariant theory as well as FEM simulations, and dipolar particle interaction theory together with the matrix-mediated two-body and three-body interaction theory proposed by [173]. The results obtained using the two microscopic approaches were compared for 3D helical filler chains, and good agreement was demonstrated for interparticle distances corresponding to the applicability limits of magnetic dipole theory. This work aimed to establish a computationally efficient (compared to full-field micro-scale FEM models) algorithm for calculating the local MAE response to external magnetic fields within a multiscale modeling framework on the basis of dipolar mean field theory and the classic linear theory of elasticity for media containing hard inclusions.
The work [135] further developed the approach proposed in [170]; namely, chain-like and plane-like structures of the filler were modeled as continuous rods and discs, respectively (Figure 12). MAE sample deformation and its elastic modulus were then described using invariant theory for transversely isotropic materials to derive the mechanical part of the free energy, and dipolar mean field theory with a smeared filler particle density function to derive the magnetic part of the free energy.
Figure 12. Smearing of magnetic particles (with total volume fraction ϕ) into columnar and disk-like structures. ϕ_p is the volume fraction of magnetizable particles inside a smeared structure, and ϕ_f = ϕ/ϕ_p represents the volume fraction of smeared structures inside an elastomer matrix. MAEs, in both cases, exhibit transverse isotropy along a unit vector e_1 [135].
It follows that the current promising trend is the opportunity to move away from describing ferromagnetic filler microstructures as uniform lattices towards more complex distributions that resemble the real material structure. Using regular lattices in theoretical descriptions of MAEs leads to undesirable artifacts appearing in the obtained material behavior, so introducing new analytical approaches to approximating filler clusters with simpler shapes or otherwise reducing their structural complexity is one of the requirements for more universal and less computationally intense multiscale modeling of MAEs.
To summarize, multi-scale theoretical approaches seem to be the most promising line of research because they are capable of solving very complex problems while keeping the advantages of simplified theoretical description at a single scale. The inherent nonlinearity of the problems obtained within coupled multi-scale frameworks seems to constitute the main challenge for the practical implementations of such methods, which leads to high computational costs. Various multi-scale approaches are currently being actively developed and improved in order to understand the connection between microstructural changes and material sample response to external magnetic fields and mechanical loads. Multi-scale modeling shows the importance of both the filler structure inside an MAE sample and the sample shape, as well as the interplay between these factors for both understanding fundamentals of MAE behavior and achieving desired performance of MAE-based devices in practical applications. The most challenging aspect of the multi-scale approaches discussed in this section is their analytical and computational complexity. The work on creating and utilizing less computationally demanding algorithms for multi-scale MAE modeling would greatly benefit the scientific community and would naturally accelerate progress in obtaining a comprehensive and reasonably complete model of magnetopolymer composite materials.
Conclusions and Outlook
The above considerations clearly demonstrate that significant progress has been achieved in the theoretical modeling of MAEs in the past five to ten years. Prior to that, the research was mostly focused on experiments revealing new physical effects and on the experimental elucidation of their dependence on material composition and excitation conditions. Advances in the understanding of the underlying physical phenomena in MAEs have been made in all of the theoretical approaches described above.
What should the future directions of theoretical research on MAEs be? In our opinion, the answer is determined by the observable trends in experimental work on MAEs and their potential applications. Current MAE fabrication technology allows for the synthesis of materials with more sophisticated compositions. The filler is becoming more complex, with particles of different physical natures (e.g., soft-magnetic, hard-magnetic, and non-magnetic inclusions), different particle sizes (e.g., nm, sub-µm, and µm-sized particles) and shapes (e.g., spherical, rod-like and plate-like particles). Modern methods of additive manufacturing also allow fabrication of elastomers with specific (sophisticated) particle distributions (e.g., ordered, randomly or nonuniformly distributed filler particles) in different spatial regions of an MAE-based functional element. This is done to achieve the desired response of a functional element (e.g., an actuator). The resulting compounds may even include a composite material inside another composite material (e.g., a ferrofluid in an MAE) or several polymers combined to form a matrix. Future theoretical works will face the challenge of describing the physical effects in magnetoactive polymers of increasing compositional complexity not only qualitatively but also quantitatively. To do this, the nonlinear effects in both the mechanical and magnetic properties of the constituent materials have to be treated without linearized approximations. The theoretical research on MAEs has not yet reached its goals. On the contrary, the subject of this review will flourish in the coming years as theoretical modelling becomes a practical tool for designing MAE materials for functional applications.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
| 2022-10-02T15:07:43.874Z | 2022-09-29T00:00:00.000 | {
"year": 2022,
"sha1": "af90b19a1639a42c0b8a6c3366301f0ad19c2f07",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/14/19/4096/pdf?version=1665280938",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c19be0daf7ea14bbeda4ff0e7f0d5a83748dce64",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221822495 | pes2o/s2orc | v3-fos-license | Electric field induced magnetization reversal in magnet/insulator nanoheterostructure
ABSTRACT Electric-field control of magnetization reversal is promising for low-power spintronics. Here, in a magnet/insulator nanoheterostructure, which is the fundamental unit of the magnetic tunneling junction in spintronics, we demonstrate the electric-field-induced 180° magnetization switching through a multiscale study combining first-principles calculations and finite-temperature magnetization dynamics. In the model nanoheterostructure MgO/Fe/Cu, with insulator MgO, soft nanomagnet Fe and capping layer Cu, through first-principles calculations we find its magnetocrystalline anisotropy varying linearly with the electric field. Using finite-temperature magnetization dynamics informed by the first-principles results, we disclose that a room-temperature 180° magnetization switching with a switching probability higher than 90% is achievable by controlling the electric-field pulse and the nanoheterostructure size. The 180° switching can be realized within 5 ns. This study is useful for the design of low-power, fast, and miniaturized nanoscale electric-field-controlled spintronics.
Introduction
Nowadays magnetic storage plays a critical role in the development of fast, high-density, nonvolatile memory technology [1]. The typical example is the magnetic random access memory (MRAM) device, which relies on the magnetic tunneling junction (MTJ) to store bit information. An MTJ usually consists of an insulator, a free magnetic layer in which the magnetization can be switched by an external field, and a fixed magnetic layer in which the magnetization direction is firmly pinned. For example, if the configuration with the magnetization in the free layer antiparallel to that in the fixed layer represents bit '0', switching the magnetization in the free layer during the writing process to achieve a parallel configuration leads to bit '1'. A huge number of MTJ units realize the information storage. Writing in an MTJ is intrinsically a magnetization reversal process in the free layer. Therefore, developing new strategies to switch the magnetization in the free layer is indispensable for revolutionizing spintronics.
Generally, three methods have been proposed to switch magnetization. Firstly, the switching or writing process can be driven by an external magnetic field. In MRAM, built-in wires in every memory cell are required for the switching of a nanomagnet, i.e., the magnetic field must be generated by passing a current through a wire. The extra wires not only make the device circuit complicated, hindering high density, but also carry current that results in energy dissipation and overheating. Secondly, spin-transfer torque and spin-orbit torque can be used to switch magnetization. They allow for high density but are not low-power methods because of the high current densities required. Thirdly, the electric-field control of magnetization has recently been extensively explored as a means of switching magnetization. Since this method is free of electric currents, it is very promising for future extremely low-power next-generation spintronic memory devices.
Generally, multi-field coupling provides more freedom for the design of nanoscale functional structures [2-9]. In detail, the electric-field control of magnetization can be realized either in multiferroic materials, which possess more than one ferroic order and the coupling between two of them, or in insulator/magnet heterostructures whose interface magnetic properties can be modulated by an external electric field. For example, through the magnetoelectric (ME) coupling between the electrical polarization of a ferroelectric material and the magnetization of a ferromagnet, the magnetization can be controlled by applying an electric field. However, the ME coupling is weak in single-phase systems. Such a control is usually implemented in ferroelectric/ferromagnetic heterostructures through strain-mediated elastic coupling, interface bonding [33-36], and exchange coupling [37-40]. On the contrary, in insulator/magnet heterostructures, the electric field is found to induce a charge change of the magnetic atoms at the insulator/magnet interface and thus tune the spin-orbit coupling and magnetocrystalline anisotropy of these interfacial atoms. Although this effect is limited to the interface, it fits well with high-density devices, in which the MTJ thickness is nanoscale and the interfacial effect is strong enough to induce magnetization reversal.
In this work, we study the electric-field control of magnetization reversal in the nanoscale magnet/insulator nanoheterostructure MgO/Fe/Cu, with a focus on the multiscale scenario for predicting the electric-field-induced magnetization reversal behavior in the free layer of an MTJ. First-principles calculations are carried out to reveal the dependence of the magnetocrystalline anisotropy of Fe on the electric field applied to MgO/Fe/Cu. Informed by the first-principles results, magnetization dynamics simulations without and with temperature-induced thermal fluctuations are performed to identify the thermal energy barrier of the equilibrium state, the necessary conditions for magnetization switching, the conditions for 180° switching, and the temperature-related probabilistic switching events. Figure 1(a) shows a typical MTJ unit, which is a Cu/Fe/MgO/Fe/Cu nanoheterostructure. Since the magnetization of the top Fe layer is firmly pinned in the plane, only the bottom Fe layer, which is free to switch, is of interest. By applying a voltage V to the MTJ system, no electric current is generated in Cu and Fe; an electric field is generated only in MgO. The atomic structure for the first-principles calculations is illustrated in Figure 1(b). For a good lattice match between MgO and Fe, the MgO unit cell is rotated by 45 degrees. At the interface, Fe atoms are placed on top of O atoms to lower the total energy of the supercell. The supercell is constructed along the [001] direction (x axis), containing nine layers of Fe, nine layers of MgO, four layers of Cu, and 10-Å-thick vacuum. The electric field is imposed by the dipole layer method [41], with the dipole placed in the middle of the vacuum region.
Electric field control of magnetocrystalline anisotropy
The first-principles calculations are carried out within density functional theory and the framework of the projector augmented-wave formalism, as implemented in the Vienna ab initio simulation package (VASP) [42]. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional in the generalized gradient approximation (GGA) is employed. An energy cutoff of 500 eV and a Monkhorst-Pack k-mesh of 21 × 21 × 1, at which good convergence of the magnetocrystalline anisotropy energy is achieved, are utilized. The in-plane lattice parameter of the supercell is set as that of Fe (0.287 nm). The MgO layers, except for the three layers close to Fe, are fixed. The atomic positions of all other layers in the x direction are relaxed. The convergence criteria for the structure relaxation are set as 10⁻⁶ eV and 2 meV/Å for the energies and forces, respectively. Using the self-consistent charge density, non-self-consistent calculations with spin-orbit coupling are performed to get the total energy as a function of the orientation of the quantization axis. The total energy difference between the different quantization axes is used to determine the magnetocrystalline energy. The magnetocrystalline anisotropy constant (K) of the Fe layer is evaluated as the difference of the total energy per unit Fe volume (nominal thickness 1.2 nm) when the magnetization is along the (100)/(010) (z/y) and (001) (x) directions. Positive and negative K indicate perpendicular and in-plane magnetocrystalline anisotropy, respectively. Due to the dipole layer placed in the vacuum region, a voltage jump appears there. From the slope of the voltage distribution in MgO, the electric field (E) can be estimated as 0.625 V/nm, which is usually related to the real electric field measured in experiments. Repeating a similar calculation procedure for K at different E, the dependence of K on E can be obtained, as shown in Figure 2(b). Without an applied electric field (E = 0), K is around 1.4 MJ/m³. If an additional electric field is applied, K is found to increase linearly with E. By linearly fitting the data in Figure 2(b), we obtain the relationship K(E) = K₀ + 0.5257·E, with K₀ ≈ 1.4 MJ/m³ and E in V/nm. The large slope of 0.5257 MJ/m³/(V/nm) indicates remarkable magnetoelectric coupling in the MgO/Fe/Cu system, and means that K can be modulated over a wide range by applying an electric field.
Static analysis of energy
The magnetization state in a single-domain thin film with the geometry of an elliptic cylinder can be described by two angles, θ and ϕ, as illustrated in Figure 1(c). The associated total energy density is composed of the magnetocrystalline anisotropy energy density and the demagnetization energy density (Eq. 2) [14],
in which the electric-field-dependent K(E) is obtained from the first-principles calculations (Figure 1(b)) and the saturation magnetization of Fe is taken as μ₀M_s = 2.15 T. The demagnetization factors (i.e., N_x, N_y, and N_z in Eq. 2) of an elliptical cylinder can be calculated following [43], in terms of the second aspect ratio τ = t/(2b) and the eccentricity e = √(1 − (b/a)²), with a, b, and t (1.2 nm) as the semi-major axis, semi-minor axis, and thickness, respectively. The coefficients v_n and u_n entering these factors can be calculated as functions of the eccentricity and the complete elliptic integrals of the first and second kind. For the detailed formulations of v_n and u_n, one is referred to the literature [43]. It should be noted that in principle the stray field and interlayer exchange can also be included in Eq. 2, but they are ignored here.
From the viewpoint of information storage, such as binary logic and memory applications, the energy barrier at the equilibrium state (E = 0) is usually required to exceed 40 k_B T to make the magnetization sufficiently stable. The energy barrier ΔG can be calculated as the energy difference between the magnetically easy (z) and hard (x and y) axes. On the one hand, ΔG should be high enough to stabilize the magnetization state at T = 300 K (i.e., ΔG > 40 k_B T). On the other hand, ΔG should not be too high, so that applying an electric field can switch the magnetization state. According to Figure 3(a), the geometry a = 2.5b is chosen as an example to analyze the necessary condition for a possible magnetization reversal, and the result is presented in Figure 3(b). In general, an electric field is applied to increase K and thus make x the easy axis, indicating a 90° switching. For example, when there is no electric field, the energy density landscape in Figure 4(a) shows an easy axis along z. If an electric field of E = 0.7 V/nm is applied, the easy axis is changed to x (Figure 4(b)). The shaded region in Figure 3(b) points out the condition (electric field and film size) under which the magnetization reversal is possible. It is clear that for a = 2.5b, an electric field higher than 0.6 V/nm is required for a magnetization reversal, which is still lower than the typical dielectric breakdown field strength of MgO (2.4 V/nm [44]).
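As a numerical illustration of the stability condition above, the sketch below checks whether an assumed energy-density barrier for the elliptic-cylinder nanomagnet exceeds 40 k_B T at room temperature. The barrier value dG is a placeholder; in the paper it follows from Eq. 2 with K(E) and the demagnetization factors.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant (J/K)
T = 300.0                   # temperature (K)

a = 50e-9 / 2               # semi-major axis (m), for a = 50 nm
b = 20e-9 / 2               # semi-minor axis (m), for b = 20 nm
t = 1.2e-9                  # film thickness (m)
V = np.pi * a * b * t       # volume of the elliptic cylinder

dG = 2.0e5                  # assumed energy-density barrier (J/m^3), placeholder value

ratio = dG * V / (k_B * T)  # total barrier in units of k_B T
print(f"Delta_G * V = {ratio:.1f} k_B T -> {'stable' if ratio > 40 else 'not stable'}")
```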
From the above static analysis of energy, it can be deduced from Figures 3(b) and 4(b) that an electric field can change the easy axis from the z axis to the x axis and thus switch the magnetization by 90°. However, a 180° switching is highly desired for magnetic storage. In particular, a 180° switching purely by an electric field is critical for the design of low-power spintronics. In the following, the switching dynamics will be explored to realize the 180° switching, and a film with a = 50 nm and b = 20 nm will be taken as an example to elucidate the basic idea.
Electric field induced 180° switching
The static analysis results in Figure 3(b) only depict the necessary (not the sufficient) condition for a 180° switching. In contrast, the electric-field-induced magnetization dynamics provides more freedom to control the reversal process. With the temperature effect considered as thermal fluctuations, the magnetization dynamics of a single-domain object at finite temperature is governed by the stochastic equation of motion (Eq. 5) [14,45], in which γ₀ is the gyromagnetic ratio constant, α = 0.01 is the damping coefficient of Fe, Δt = 0.2 ps is the time step, and P_i (i = 1, 2) is a stochastic process with a Gaussian distribution, zero mean value, and completely uncorrelated behavior in time. P_i is generated by the Box-Muller method [46]. The characteristic time τ_N is related to the volume V and temperature T as τ_N⁻¹ = 2αγ₀k_B T/[M_s(1 + α²)V]. It is obvious from the random terms in Eq. 5 that higher temperature and smaller volume will result in more intensive thermal fluctuations. A miniaturized MTJ unit with small volume is good for high density but possibly suffers from thermal-fluctuation-induced randomness. It should be mentioned that all the analysis is based on the single-domain assumption. To verify this assumption, a 3D micromagnetic simulation [47] has been carried out on the elliptical cylinder with a = 2b = 50 nm and t = 1.2 nm. The micromagnetic simulation results confirm that this cylinder is small enough to maintain a single-domain configuration when an electric field is applied to stimulate the magnetization dynamics.
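The sketch below shows the two ingredients named above in schematic form: Gaussian noise generated with the Box-Muller transform, used as an uncorrelated thermal fluctuation field inside a damped-precession update loop. This is not the paper's exact Eq. 5 (the precise stochastic terms and units depend on τ_N); the noise amplitude and the effective field are illustrative placeholders.

```python
import numpy as np

def box_muller(n, rng):
    """Return n standard-normal samples from pairs of uniform deviates."""
    u1 = 1.0 - rng.random((n + 1) // 2)   # in (0, 1], avoids log(0)
    u2 = rng.random((n + 1) // 2)
    r = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate([r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)])
    return z[:n]

rng = np.random.default_rng(0)
dt = 0.2            # time step in ps (0.2 ps, as in the text), toy scaling
alpha = 0.01        # damping coefficient of Fe
noise_amp = 0.02    # dimensionless thermal-field amplitude (placeholder; in the
                    # paper it is set by tau_N, i.e., by temperature T and volume V)

m = np.array([0.0, 0.0, 1.0])       # unit magnetization, initially along easy axis z
h_eff = np.array([0.0, 0.0, 1.0])   # static effective field direction (arbitrary units)

for step in range(10000):
    h = h_eff + noise_amp * box_muller(3, rng)   # deterministic + thermal field
    prec = -np.cross(m, h)                        # precession term
    damp = -alpha * np.cross(m, np.cross(m, h))   # damping term
    m = m + dt * (prec + damp)                    # schematic explicit update
    m /= np.linalg.norm(m)                        # renormalize |m| = 1

print("final m =", m)   # fluctuates within a few degrees of the easy axis
```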
In contrast to the static analysis, here we apply an electric-field pulse to trigger the 180° switching by utilizing the precessional magnetization dynamics described by Eq. 5. As a first step, we investigate the case without thermal fluctuations (T = 0). Figure 5(a) presents the typical switching trajectory triggered by an electric field with a magnitude of 0.8 V/nm and a pulse duration (t_s) of 0.6 ns. After the removal of the electric field at t_s = 0.6 ns, the magnetization component m_z further reaches −1 and a 180° switching is realized. The switching time is around 3 ns, and the switching is deterministic at T = 0. It should be noted that in order to achieve a 180° switching, the pulse duration has to be precisely controlled. For successful 180° switching, the switching time (t_switch) as a function of E and t_s is shown in Figure 5(b). It can be found that precise control of E and t_s within a reasonable range can achieve a fast 180° switching within 2 ns. It should be mentioned that a long pulse does not ensure a fast switching here. The main reason is the precession dynamics of Fe with a low damping coefficient of 0.01. If a long pulse is applied, the magnetization will precess for a long time, and so the switching time will not decrease as expected.
However, if finite temperature is considered (i.e., T > 0 in Eq. 5), the magnetization dynamics is intrinsically altered. Even in the equilibrium states with E = 0, the magnetization is not exactly aligned along the easy axis z. As shown in the histograms in Figure 6, the finite-temperature effect makes the magnetization fluctuate within several degrees around the easy axis. When the temperature is increased from 300 K in Figure 6(a) to 400 K in Figure 6(b), the magnetization fluctuation around the easy axis becomes more intensive and the distribution of the magnetization components m_i becomes wider.
In addition, finite temperature turns the electric-field-induced switching into probabilistic events. The deterministic switching at T = 0 becomes nondeterministic at T > 0. For example, under the same condition E = 0.8 V/nm and t_s = 0.6 ns as in Figure 5(a) with T = 0, the 180° switching can either succeed (Figure 7(a)) or fail (Figure 7(b)) at T = 300 K. This kind of probabilistic behavior means that previous studies of the 180° switching at 0 K need to be re-examined.
For the switching at finite temperatures, we calculate the switching probability (the percentage of successful 180° switchings) as a function of E and t_s at 300 K, as shown in Figure 7(c). It can be seen that even though room temperature (300 K) makes the 180° switching probabilistic, it is still possible to achieve a switching probability above 90% by carefully designing the magnitude and the pulse duration of the electric field. The wide region with high switching probability (>90%) in Figure 7(c) indicates the design flexibility of electric-field-induced 180° switching at room temperature. Unquestionably, increasing the switching probability as much as possible is desired. However, for memory applications where different on-chip error detection and correction schemes exist, the achieved switching probability above 90% is still practicable. For an estimation of the switching time at room temperature, as an example in Figure 7(d) we present 1,000 switching trajectories with a switching probability of ≈92.6% at E = 0.76 V/nm and t_s = 0.61 ns. The switching time is found to be approximately 5 ns at room temperature. It should be noted in Figure 7(c) that a long pulse does not ensure a high switching probability. The main reason may be related to the slow precession dynamics of Fe. Under a long pulse, the magnetization will slowly precess for a long time, during which the accumulated effects of thermal fluctuations will be strengthened and the switching probability reduced.
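A minimal sketch of how such a switching-probability map can be estimated: run many independent finite-temperature trajectories per (E, t_s) point and count the fraction that end with m_z near −1. The simulate_trajectory function here is a hypothetical stand-in (a toy success model, not physics); a real implementation would integrate the stochastic dynamics of Eq. 5 once per run.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trajectory(E, t_pulse, rng):
    """Placeholder: return the final m_z of one finite-temperature trajectory.

    A toy outcome model whose success rate grows with field strength; a real
    implementation would integrate the stochastic magnetization dynamics.
    """
    p_success = min(0.99, 0.5 + 0.8 * (E - 0.6) / 0.3)   # toy model, not physics
    return -1.0 if rng.random() < p_success else 1.0

def switching_probability(E, t_pulse, n_runs=1000):
    finals = [simulate_trajectory(E, t_pulse, rng) for _ in range(n_runs)]
    # a run counts as a successful 180-degree switch if m_z ends near -1
    return float(np.mean([mz < -0.9 for mz in finals]))

print(switching_probability(E=0.76, t_pulse=0.61e-9))   # ~0.93 with these toy settings
```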
Conclusions
The electric-field-induced 180° magnetization switching in a magnet/insulator nanoheterostructure has been demonstrated by combining first-principles calculations and finite-temperature magnetization dynamics simulations. In the model nanoheterostructure system MgO/Fe/Cu, with the electric-field-dependent magnetocrystalline anisotropy (K) from first principles as input, a static analysis of the total energy density of an elliptic Fe nanomagnet is performed to identify the conditions: (1) the energy barrier at the equilibrium state (E = 0) should be larger than 40 k_B T to overcome the temperature-induced thermal fluctuations for practical device applications; (2) the voltage-induced K change should exceed the energy barrier and make the magnetization reversal possible. Magnetization switching dynamics at zero temperature indicates that precisely controlling the electric-field pulse and the nanoheterostructure size can achieve the 180° switching within several nanoseconds. Moreover, considering the thermal fluctuations at room temperature as random fields, we find the magnetization reversal to be a probabilistic event and calculate the switching probability for 180° switching. The minimum switching time is found to be around 5 ns, which is less than that in traditional STT-MRAM, MRAM, and DRAM. The present study provides valuable insight into the rational design of electric-field-controlled and miniaturized nanoscale spintronic devices, where temperature-induced thermal fluctuations have a great impact.
Disclosure statement
No potential conflict of interest was reported by the authors. | 2020-09-10T10:24:32.440Z | 2020-07-02T00:00:00.000 | {
"year": 2020,
"sha1": "2bb90a41858ec389e04e2e3c966f1eea679be97c",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19475411.2020.1815132?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "e90094802bd0c3859657d303b678ccafaaa2cd2b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
208248157 | pes2o/s2orc | v3-fos-license | Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks
The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model. Previous work shows that the standard neural machine translation (NMT) model, trained on mixed-domain data, generally captures the general knowledge but misses the domain-specific knowledge. In response to this problem, we augment the NMT model with additional domain transformation networks to transform the general representations into domain-specific representations, which are subsequently fed to the NMT decoder. To guarantee the knowledge transformation, we also propose two complementary supervision signals by leveraging the power of knowledge distillation and adversarial learning. Experimental results on several language pairs, covering both balanced and unbalanced multi-domain translation, demonstrate the effectiveness and universality of the proposed approach. Encouragingly, the proposed unified model achieves comparable results with the fine-tuning approach, which requires multiple models to preserve the particular knowledge. Further analyses reveal that the domain transformation networks successfully capture the domain-specific knowledge as expected.
Introduction
In multi-domain translation, a unified neural machine translation (NMT) model is expected to provide high-quality translations across a wide range of diverse domains. The main challenge of multi-domain translation lies in learning a unified model that simultaneously 1) exploits the general knowledge shared across domains, and 2) preserves the particular knowledge that represents the distinctive characteristics of each domain. Unfortunately, standard NMT models trained on mixed-domain data generally capture the general knowledge while ignoring the particular knowledge, rendering them sub-optimal for multi-domain translation (Koehn and Knowles 2017).
A natural approach to this problem is fine-tuning, which first trains a general model on all data and then separately fine-tunes it on each domain (Luong and Manning 2015). However, the fine-tuning approach requires maintaining a distinct NMT model for each domain, which makes it unwieldy in practice. Towards learning a unified multi-domain translation model, several researchers have augmented the NMT model to learn domain-specific knowledge. For example, Kobus et al. (2016) introduced a special domain tag to the source sentence, and Britz et al. (2017) and Zeng et al. (2018) guide the encoder output to embed domain-specific knowledge via an auxiliary objective. However, all these approaches require the encoder representations to embed both the general and the particular knowledge at the same time. Recent studies have shown that such overloaded usage of hidden representations makes training the model difficult, and that this problem can be mitigated by separating these functions (Rocktäschel et al. 2017; Zheng et al. 2018).
In this work, we explicitly model the domain-specific functionality for multi-domain translation by introducing domain transformation networks (DTNs). More specifically, the DTNs transform the general knowledge learned by the encoder into domain-specific knowledge, which is subsequently fed to the decoder. In this way, the encoder learns general knowledge in the standard fashion, and the newly added DTNs learn to preserve the particular knowledge. We employ a residual connection on the DTNs to enable the decoder to exploit both the general and the particular knowledge. To guarantee the knowledge transformation, we also propose two supervision strategies: 1) domain distillation, which encourages the unified model to learn domain-specific knowledge in a teacher-student framework; and 2) domain discrimination, which guides the encoder output and the transformed representation to embed the required knowledge with adversarial learning.
We conduct experiments on three language pairs: Chinese⇒English, German⇒English and English⇒French, covering balanced, unbalanced and large-scale multi-domain data. Experimental results show that our model significantly and consistently outperforms both the TRANSFORMER baseline (by +3.35 BLEU points) and previous multi-domain translation models (Kobus et al. 2016; Britz et al. 2017; Zeng et al. 2018) (by +1.0∼2.0 BLEU points) on different data, demonstrating the effectiveness and universality of the proposed approach. Encouragingly, our unified model is on par with the fine-tuning approach, which requires multiple models to preserve the particular knowledge. Further analysis reveals that the domain transformation networks successfully capture the domain-specific knowledge while maintaining the specificity of each domain.
Contributions. Our main contributions are: 1. Our study demonstrates the necessity of explicitly modeling the transformation from the general to the particular for multi-domain translation. 2. We exploit two supervision signals to simultaneously and incrementally encourage the transformation of domain knowledge. 3. We construct several multi-domain datasets across languages, on which we empirically validate a variety of existing approaches.
Background
Neural Machine Translation
A standard NMT model directly optimizes the conditional probability of a target sentence y = y_1, ..., y_J given its corresponding source sentence x = x_1, ..., x_I: P(y|x; θ) = ∏_{j=1}^{J} P(y_j | y_{<j}, x; θ), where θ is a set of model parameters and y_{<j} denotes the partial translation. The probability P(y|x; θ) is defined on the neural-network-based encoder-decoder framework (Sutskever et al. 2014; Cho et al. 2014), where the encoder summarizes the source sentence into a sequence of representations H = H_1, ..., H_I with H ∈ R^{I×d}, and the decoder generates target words based on these representations. Typically, this framework can be implemented as a recurrent neural network (RNN) (Bahdanau et al. 2015), a convolutional neural network (CNN) (Gehring et al. 2017) or a Transformer (Vaswani et al. 2017). The parameters of the NMT model are trained to maximize the likelihood of a set of training examples. The training corpus generally consists of data from various domains, which are not distinguished by the NMT model. This may pose difficulties for multi-domain translation.
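For concreteness, here is a toy sketch of this token-level factorization: the sentence log-likelihood accumulates log P(y_j | y_<j, x) over target positions. The next_token_probs function is a hypothetical placeholder for a trained encoder-decoder.

```python
import numpy as np

def next_token_probs(x_tokens, y_prefix, vocab_size=8):
    """Placeholder model: a fixed softmax over the vocabulary."""
    logits = np.linspace(1.0, 0.0, vocab_size)          # arbitrary toy logits
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sentence_log_likelihood(x_tokens, y_tokens):
    """log P(y|x) = sum_j log P(y_j | y_<j, x) under the autoregressive factorization."""
    ll = 0.0
    for j, y_j in enumerate(y_tokens):
        p = next_token_probs(x_tokens, y_tokens[:j])    # P(. | y_<j, x)
        ll += np.log(p[y_j])
    return ll

print(sentence_log_likelihood(x_tokens=[3, 1, 4], y_tokens=[2, 0, 5, 1]))
```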
Multi-Domain Translation
This task aims to build a unified model on the mixed-domain data by maximizing performance across all domains. Formally, there are N subsets D_1, ..., D_N from different domains, where the n-th subset D_n contains the training examples of the n-th domain. Accordingly, the training objective is L(θ) = Σ_{n=1}^{N} L_n(θ), which maximizes the likelihood over the training examples in each domain (i.e., L_n(θ)). As seen, there are no explicit signals in the learning objective to guide the model to learn domain-aware information. As a result, the parameters of a standard NMT model generally capture the general knowledge while ignoring the domain-specific knowledge.
Approach
Our goal is to build a unified model that can achieve good performance on all domains. As shown in Figure 1, we augment the encoder with domain transformation networks whose output is combined with the encoder output through a residual connection, H′ = H + F(H, W_n), where W_n denotes the parameters related to the n-th domain and F(·) is the functional mapping, which can be implemented by different types of neural networks such as feed-forward networks (FNN), CNNs and self-attention networks (SAN). Subsequently, the output representations H′, which encode both the general knowledge H and the domain-specific knowledge F(H, W_n), are fed to a shared decoder for generating the target sentence y.
The differences between domains are usually subtle and uncertain, which makes directly fitting the desired underlying mapping inefficient. Recently, the "residual" concept from deep neural networks (He et al. 2016; Sohn et al. 2019) has been successfully applied to extract feature differences in image classification (Sohn et al. 2019) and speech recognition (Van Den Oord et al. 2016), achieving remarkable performance improvements. In our preliminary experiments, we investigated two different implementations of the transformation networks: stacked feed-forward networks and multi-head attention networks. We found that the multi-head attention mechanism performs better at capturing such domain-aware characteristics for the multi-domain translation task. In this study, we also parameterized F(·) based on domain symbols, so that each transformation module maintains its own domain-aware parameters.
Domain-Aware Batch Learning
To train the distinct parameters of each domain, we propose a domain-aware learning strategy, in which one batch contains training examples from a single domain only. One straightforward implementation is to alternately or randomly feed domain-aware batches into the proposed model. However, in preliminary experiments, we found severe overfitting problems when using unbalanced multi-domain data. To overcome this, we propose a more balanced method, which heuristically selects a certain domain batch by considering its distribution over the entire training corpus. Formally, domain batches are sampled according to a multinomial distribution with probabilities q_i = n_i^α / Σ_{j=1}^{N} n_j^α (i = 1, ..., N), where n_i is the number of batches of the i-th domain and α = 0.7 is the balance factor, which increases the number of tokens associated with low-resource domains and alleviates the bias towards high-resource domains.
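A small sketch of this sampling scheme, assuming the standard form q_i ∝ n_i^α with α = 0.7; the batch counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_batches = np.array([900, 50, 30, 20])   # batches per domain (illustrative counts)
alpha = 0.7                               # balance factor from the paper

q = n_batches ** alpha
q = q / q.sum()                           # q_i = n_i^alpha / sum_j n_j^alpha

print("raw share :", (n_batches / n_batches.sum()).round(3))
print("sampling q:", q.round(3))          # low-resource domains are up-weighted

# draw the domain index for each training step; every batch then contains
# examples from a single domain only
domains = rng.choice(len(n_batches), size=10, p=q)
print("sampled domains:", domains)
```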
Domain Supervision
Domain Distillation
The generalization ability of the teacher model can be transferred to the student by using the class probabilities produced by the cumbersome model to train the small model (Hinton et al. 2014). Recent studies on speech recognition show that training student networks with multiple teachers achieves promising empirical results (You et al. 2019).
Inspired by these observations, we propose to teach a unified model with multiple teachers trained on different domains. Specifically, we employ the soft targets produced by fine-tuned models as the supervision signal to train our unified model, with the benefits of exploiting more information from the data while reducing interference across domains.
For the learning objective, we linearly interpolate the soft target distribution produced by the corresponding domain teacher with the hard labels, so that the training target for each token position is (1 − λ) times the one-hot label plus λ times the teacher distribution over the |V| target words, where λ is a hyper-parameter shared across domains, |V| is the vocabulary size of the target language, and P(·) denotes the soft target distribution.
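A sketch of the interpolated word-level target described above; λ and the teacher distribution are illustrative values.

```python
import numpy as np

def interpolated_target(hard_index, teacher_probs, lam=0.1):
    """Return (1 - lam) * one_hot + lam * teacher distribution over the vocabulary."""
    vocab_size = len(teacher_probs)
    one_hot = np.zeros(vocab_size)
    one_hot[hard_index] = 1.0
    return (1.0 - lam) * one_hot + lam * np.asarray(teacher_probs)

teacher = np.array([0.05, 0.7, 0.1, 0.1, 0.05])   # soft targets from a domain teacher
target = interpolated_target(hard_index=1, teacher_probs=teacher, lam=0.1)
print(target)    # [0.005 0.97 0.01 0.01 0.005]
# the student is then trained with cross-entropy against this mixed target
```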
Domain Discrimination
Adversarial and discriminative learning can effectively distinguish between different types of features (Ganin and Lempitsky 2015; Chen et al. 2017b; Sun et al. 2018; Adams et al. 2019). In this work, we augment the transformation networks with the ability of domain discrimination. Specifically, an adversarial domain classifier is deployed at the input of the DTNs, namely P(d|x) = softmax(W_D · H̄), where d is the domain symbol, W_D is the weight matrix of the softmax classifier, and H̄ is a weighted summary of the encoder representations H, calculated as H̄ = Σ_i α_i H_i, where the computation of α_i is similar to self-attention (Lin et al. 2017), with the query being a trainable vector. Furthermore, we apply a domain classifier to the output of the DTNs, H′, to guide it to embed domain-specific knowledge: P(d|x; γ), where γ is the set of parameters of this domain classifier and the pooled representation of H′ is obtained according to Equation (8).
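A sketch of the attention-pooled domain classifier described above; the shapes, the trainable query, and the weights are random illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
I, d_model, n_domains = 6, 16, 4

H = rng.normal(size=(I, d_model))            # encoder representations H_1..H_I
query = rng.normal(size=d_model)             # trainable query vector
W_D = rng.normal(size=(n_domains, d_model))  # softmax classifier weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

alpha = softmax(H @ query)                   # attention weights over source positions
H_bar = alpha @ H                            # pooled representation, shape (d_model,)
p_domain = softmax(W_D @ H_bar)              # P(d | x)

print("alpha  :", alpha.round(3))
print("P(d|x) :", p_domain.round(3))
```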
Overall, the training objective is a linear interpolation of the likelihood and the domain discrimination terms (Eq. 10).

All the data were tokenized and then segmented into subword symbols using byte-pair encoding (Sennrich et al. 2016b) with 30K merge operations to alleviate the out-of-vocabulary problem. We used the 4-gram BLEU score (Papineni et al. 2002) as the evaluation metric, and bootstrap resampling (Koehn 2004) for statistical significance.
Model. For fair comparison, we implemented all the proposed and competing approaches on the advanced Transformer model (Vaswani et al. 2017) using the open-source toolkit Fairseq (Ott et al. 2019). We followed Vaswani et al. (2017) in setting the configuration of the NMT model, which consists of 6 stacked encoder/decoder layers with a layer size of 512. All the models were trained on 8 NVIDIA P40 GPUs, each allocated a batch size of 4,096 tokens. We trained the baseline model for 100K updates using the Adam optimizer (Kingma and Ba 2015), and the proposed models were further trained with the corresponding parameters initialized by the pre-trained baseline model. We fixed the hyperparameters λ and δ to 0.1.
Baseline Comparisons
To make the evaluation convincing, we re-implemented and compared with five previous models for multi-domain adaptation, which can be divided into two categories with respect to the number of models. The multiple-model approaches require maintaining a dedicated NMT model for each domain, as listed below. (Table 2 notes: we also list the results of Zeng et al. (2018) on RNN-based NMT; "#M" denotes the number of required models and "#Para." the number of parameters; "+" denotes appending new features to the above row; "↑" indicates a statistically significant difference (p < 0.01) from "Transformer" in the corresponding domains.)
• Fine-tune (Luong and Manning 2015), which first trains a model on the entire data and then fine-tunes multiple models separately on the in-domain datasets.
• Mixed Fine-tune (Chu et al. 2017), which extends the fine-tune approach by training on out-of-domain data and then fine-tuning on a mix of in-domain and out-of-domain data.
The unified-model methods handle adaptation to multiple domains within a single NMT model:
• Domain Control (Kobus et al. 2016), which introduces a domain tag into the source sentence.
• Domain Discrimination (Britz et al. 2017), which adopts domain classification via multitask learning.
• Domain Context (Zeng et al. 2018), which incorporates word-level context for domain discrimination.
Our work falls into the unified-model category, where the above three related approaches are comparable to ours. Our work is not directly comparable to the fine-tuning approaches due to the different numbers of required models.
Results
Table 2 and Table 3 respectively show the results on the small-scale balanced Zh⇒En data used by Zeng et al. (2018) and on our newly built large-scale corpus. In addition, Table 4 shows the results on the De⇒En and En⇒Fr multi-domain data. As seen, the proposed models significantly and incrementally improve the translation quality in all cases, although there are considerable differences among the different scenarios.
Baselines. In Table 2, the Transformer model (Row 3) greatly outperforms the results of the RNN-based models reported by Zeng et al. (2018) on the same data (Rows 1-2), which makes the evaluation in this work convincing. The fine-tuning approaches (Rows 4-5) achieve significant improvements over the Transformer baseline. We attribute this to the facts that 1) fine-tuning maintains a distinct model for each domain, and 2) there are sufficient data in each target domain. The unified models (Rows 6-8 in Table 2) consistently improve translation performance, and the "+Domain Context" method achieves the best performance at the cost of introducing additional parameters. The unified models are directly comparable to our approach.
Our Models. As shown in Table 2, the proposed models (Rows 9-10) outperform not only the Transformer baseline (Row 3) but also the comparable approaches (Rows 6-8). Introducing the transformation networks (Row 9) improves translation performance over the Transformer baseline by +1.44 BLEU points, indicating that the DTNs can effectively capture domain-aware knowledge. Moreover, adding the two supervision signals (Row 10) outperforms the baseline by +3.35 BLEU points. Surprisingly, the performance of our unified model is on par with fine-tuning, which requires four separate models (34.25 vs. 34.92 BLEU). This is encouraging, since the fine-tune approach catastrophically increases the overhead of deployment in practice, while our approach avoids this problem without a significant decrease in translation performance.
Translation Quality on Other Scenarios
To validate the robustness of our approach, we also conducted experiments on a large-scale Zh⇒En corpus (Table 3) and on other language pairs (Table 4). As seen, the superiority of our approach holds across different data sizes and language pairs, demonstrating the effectiveness and universality of the proposed approach. Furthermore, our unified model surprisingly outperforms the fine-tuning (multiple-model) approach on the unbalanced De⇒En corpus.
Analysis
We conducted extensive analyses on the small-scale Zh⇒En data to better understand our model in terms of the effectiveness of domain transformation and supervision.
Effects of Domain Transformation
Domain Transformation. Using the dimension-reduction technique t-SNE (Maaten and Hinton 2008), we visualized the general and domain-specific representations of the source sentences in the test set. As shown in Figure 2, the representation vectors of different domains are centered in different regions. Furthermore, the distribution of the encoder representations is concentrated, preserving the shared knowledge, while the transformed representations are diverse, keeping the domain-specific characteristics. This confirms that our approach is able to distinctively transform the source-side domain knowledge from the general to the particular.
Domain-Specific Translation
We further examined whether each specialized transformation module acquires its specific domain knowledge. Figure 3 shows the translation results of the test set in each domain decoded by the four different domain-specific transformation modules. As seen, each transformation module performs best on its corresponding domain. Domains with more distinctive characteristics (e.g., Law) achieve more significant gains.
Effects of Domain Supervision
Contribution Analysis. Table 5 lists the translation results when the baseline or our model uses either domain distillation or domain discrimination, or both signals. As seen, adding a supervision signal consistently improves the performance over the "Domain Transformation" model (Rows 6-7), and combining both signals cumulatively achieves the best performance (+1.9 BLEU, Row 8). This confirms the hypothesis in the domain supervision section that the effects are reflected in three aspects: 1) weak supervision encourages the model to exploit both shared and domain-aware knowledge across domains; 2) strong supervision guides the model to learn distinct features; 3) their combination makes them complementary to each other. It is also interesting to investigate the effect of domain supervision without the transformation networks (Rows 1-3), which still improves performance (31.45 vs. 30.90), demonstrating the effectiveness and universality of domain supervision.
Concerning the distillation approach (Rows 2-3 and 5-6), we revisited word-level and sequence-level distillation methods for Transformer-based NMT. Different from the results reported by Kim and Rush (2016) on RNN-based models, we found that word-level distillation marginally outperformed its sequence-level counterpart (31.51 vs. 31.45 on top of "Transformer", and 33.05 vs. 32.70 on top of "+ Domain Transformation"). Through case studies, we found that word-level distillation produced more fluent outputs, possibly due to providing smoother target labels. This explains why word-level distillation is a widely used implementation in multilingual and multi-domain tasks on top of Transformer-based models (Tan et al. 2019; You et al. 2019). Therefore, we applied word-level distillation in our work.

Fine-tuning is the conventional way of performing domain adaptation (Luong and Manning 2015; Sennrich et al. 2016a; Freitag and Al-Onaizan 2016). Chu et al. (2017) extended the fine-tune strategy by training the model on out-of-domain data, which is then fine-tuned on a mix of in-domain and out-of-domain data. The two approaches can easily be applied to multi-domain translation by separately maintaining a fine-tuned model for each domain. In this study, we empirically compared with the fine-tune strategies and found that our unified model achieves comparable results with the fine-tuning approaches.
Case Study
Multi-Domain Translation
Multi-domain machine translation aims to construct an NMT model with the ability to translate sentences across different domains. Kobus et al. (2016) introduced embeddings of a source domain tag into the encoder, which can perform domain-adapted translations in multiple domains. Britz et al. (2017) presented various mixing paradigms for multi-domain settings and demonstrated their efficacy across multiple language pairs. Zeng et al. (2018) explored utilizing word-level domain contexts and jointly modeled the multi-domain NMT and domain classification tasks. Our work is different in that 1) we learn the domain-specific knowledge by transforming it from the general knowledge, while Zeng et al. (2018) split the encoder representation into general and domain-specific representations with two separate gates; and 2) we maintain a distinct transformation network with its own parameters for each domain, while Zeng et al. (2018) used a shared set of parameters across domains. In addition, we exploit more domain supervision techniques (e.g., domain distillation) to further improve multi-domain translation performance.
Furthermore, Gu et al. (2019) maintained a distinct encoder-decoder pair for each domain. This is analogous to the fine-tuning strategy, which maintains multiple models rather than a unified model for multi-domain translation. In addition, our approach also benefits from capturing the correlations between the general and domain-specific knowledge with the introduced transformation networks.
Conclusion and Future Work
In this paper, we propose to explicitly transform domain knowledge from the general to the particular for a multi-domain NMT model. In order to guarantee the knowledge transformation, we also exploit two kinds of supervision signals to further improve the translation quality. Empirical results on a variety of language pairs demonstrate the effectiveness and universality of the proposed approach. We also conducted extensive analyses to demonstrate the necessity of explicitly modelling the transformation of domain knowledge for multi-domain translation.
The proposed approach significantly improves translation performance at the cost of increased computational complexity. Network compression would be a promising direction to alleviate this problem. In future work, we plan to exploit model compression techniques such as knowledge distillation (Hinton et al. 2014) and network pruning (Han et al. 2016) to make the deployment of our approach more practical.
Figure 1: Architecture of the proposed multi-domain translation model, which consists of two key components: 1) domain transformation, which transforms the general representations into domain-specific representations (we maintain a distinct transformation network for each domain); 2) domain supervision, which contains two sub-components: domain distillation and domain discrimination. Domain distillation learns a domain-specific model guided by domain teachers, which are fine-tuned on the corresponding training corpora. Domain discrimination guides the two types of representations to embed the required content. In this example, the data of Domain 1 ("D1") are used to train the model, and the solid line denotes the information flow.
Figure 2: Visualization of the encoder (left) and transformed (right) representations. Dots in different colors denote sentences from different domains.
Figure 3: Translation results of the test set in each domain decoded by the four domain-specific transformation modules. As seen, each specialized transformation module performs best on its corresponding domain.
Following Chen et al. (2017c), the overall objective (Eq. 10) combines the likelihood L(θ), the domain-classifier term log P(d|x; γ), and the domain-adversarial term log P(d|x; ψ) + δ × H(P(d|x; ψ)), where δ is the balance factor and H(P(·)) is the entropy of the probability distribution of the adversarial domain classifier over the N domain labels. Following Zeng et al. (2018) and Chen et al. (2017c), we also employed a two-phase training strategy, in which we alternately optimized L(θ, γ, ψ) with respect to {θ, γ} and ψ. Besides, we discarded the component log P(d|x; ψ) when training the {θ, γ} parameter set.

Experiments
Setup
Data. We conducted experiments on four different corpora, as listed in Table 1. For Chinese⇒English (Zh⇒En) translation, we used both a small-scale and a large-scale corpus. The small one is the same as that used by Zeng et al. (2018) and consists of four evenly distributed domains: law, oral, thesis and news. The large corpus is collected from CWMT2017 Lingosail, TVSub (Wang et al. 2018) and LDC, and consists of four balanced domains: law, news, patent and subtitle. For the German⇒English (De⇒En) and English⇒French (En⇒Fr) translation tasks, we used a large amount of training data extracted from OPUS. They respectively contain four and two unevenly distributed (unbalanced) domains, including law, medical, information technology and Koran, and European Parliament. The validation and test sets are the officially provided ones where available, and otherwise were randomly selected from the corresponding training corpora.

Table 1: Training corpora: "D" and "|S|" indicate the domain and the number of sentences (in millions). As seen, Zh⇒En can be regarded as "balanced data", as the number of training samples is similar across domains, while De⇒En and En⇒Fr are "unbalanced data", as the numbers of sentence pairs are very different.
Table 2: Translation results on the small-scale balanced Zh⇒En multi-domain data used by Zeng et al. (2018).
Table 3: Translation results on the large-scale balanced Zh⇒En multi-domain data built in this work. "+" denotes appending new features to the above row. "↑" indicates a statistically significant difference (p < 0.01) from "Transformer" on different domains.
Table 4: Translation results on the unbalanced De⇒En and En⇒Fr multi-domain data. "+" denotes appending new features to the above row. "↑" indicates a statistically significant difference (p < 0.01) from "Transformer" on different domains.
Table 6: An example of Zh⇒En translation sampled from the Thesis test set. Domain-specific words, phrases and patterns are highlighted with colors. Our "+Trans.", "+Distill." and "+Discri." models are consistent with Table 5.

Table 6 shows a translation example randomly selected from the test set in the Thesis domain. As seen, augmenting the transformation networks into NMT can generate more domain-specific words but with low fluency. Adding supervision signals can incrementally generate more fluent domain-specific phrases and patterns. | 2019-11-22T08:15:52.000Z | 2019-11-22T00:00:00.000 | {
"year": 2019,
"sha1": "c9a263db329887827a367020229e88a51d889317",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/6461/6317",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "c9a263db329887827a367020229e88a51d889317",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4552843 | pes2o/s2orc | v3-fos-license | Optimization of the dosimetric leaf gap for use in planning VMAT treatments of spine SABR cases
Abstract The dosimetric leaf gap (DLG) is a beam configuration parameter used in the Varian Eclipse treatment planning system to model the effects of rounded MLC leaf ends. Measuring the DLG using the conventional sliding‐slit technique has been shown to produce questionable results for some volumetric modulated arc therapy (VMAT) treatments. This study therefore investigated the use of radiochromic film measurements to optimize the DLG specifically for the purpose of producing accurate VMAT plans using a flattening‐filter‐free (FFF) beam, for use in treating vertebral targets using a stereotactic (SABR, also known as SBRT) fractionation schedule. Four test treatments were planned using a VMAT technique, to deliver a prescription of 24 Gy in 3 fractions to four different spine SABR treatment sites. Measurements of the doses delivered by these treatments were acquired using an ionization chamber and radiochromic film. These measurements were compared with the doses calculated by the treatment planning system using a range of DLG values, including a DLG identified using the conventional sliding‐slit method (1.1 mm). An optimal DLG value was identified, as the value that produced the closest agreement between the planned and measured doses (1.9 mm). The accuracy of the dose calculations produced using the optimized DLG value was verified using additional radiochromic film measurements in a heterogeneous phantom. This study provided a specific initial DLG (1.9 mm) as well as a film‐based optimization method, which may be used by radiotherapy centers when attempting to commission or improve an FFF VMAT‐based SABR treatment programme.
| INTRODUCTION
Stereotactic ablative radiosurgery (SABR, also known as stereotactic body radiotherapy or SBRT) has been shown to be effective for treating tumors in and around the vertebra. 1,2 These "spine SABR" treatments require the use of a small number of treatment fractions (typically 1-4) to deliver a relatively high dose of radiation (typically 12-27 Gy). 3 In order to minimize the time taken to deliver these high-dose fractions, especially for patients who may be suffering pain and discomfort due to vertebral metastases, treatments can be delivered using high-dose-rate beams, such as flattening-filter-free (FFF) beams.

Spine SABR target volumes are generally irregular in shape due to the location and geometry of the targeted vertebra as well as the importance of sparing the spinal cord, which abuts or penetrates the target volume. Treatment planning using inversely optimized volumetric modulated arc therapy (VMAT) techniques can result in very steep dose gradients (greater than 12%/mm 4 ) between the targeted vertebra and the spinal cord. To produce the complex dose distributions required to achieve that spinal cord sparing while adequately treating the targeted vertebra, VMAT uses moving multileaf collimators (MLCs), with simultaneously varying dose rates and gantry speeds. 5 These complex, dynamic systems present numerous opportunities for dose uncertainties.
The AAPM Task Group-101 report highlighted the accuracy required in treatment planning for SABR treatments, 6 and recommended rigorous testing of the TPS dose calculation accuracy, including end-to-end tests. Accurate calculations of dose and dose gradients are especially important for treatments where ablative doses of radiation are delivered to targets in close proximity to critical structures, such as spine SABR treatments. Dose calculation accuracy is known to be detrimentally affected by the use of suboptimal beam configuration data in the radiotherapy treatment planning system (TPS) 7,8 and by the inappropriate handling of simplifications in the TPS model. [9][10][11] For example, the Varian Eclipse TPS (Varian Medical Systems, Palo Alto, USA) simplifies the modelling of the physical geometry of the MLC leaves, omitting physical characteristics such as the rounded leaf ends.
To overcome this, Eclipse allows the user to define a specific parameter, the dosimetric leaf gap (DLG), which defines the difference between the physical rounded leaf end and the straight-edge model of the TPS. 9 The value of the DLG is applied when calculating dose for modulated radiotherapy (including VMAT) treatment plans, as a retraction between the planned and calculated MLC positions. The DLG parameter is one of a few values that need to be modified by the user when configuring the Varian Eclipse anisotropic analytical algorithm (AAA). 10 Measurement of the DLG is performed using the sliding-slit test (as described in Varian Medical Systems' documentation 10 ). This method produces a single DLG value per energy, which is applied in the Eclipse TPS to all leaf pairs irrespective of MLC leaf width. 9 While some studies have identified good agreement between planned and measured doses when using the DLG value measured using the standard sliding-slit test, 12,13 other authors have identified substantial discrepancies. 11,14 For example, Szpala et al. 11 and Kielar et al. 14 elected to optimize the DLG value using clinical VMAT plans after they observed that the DLG value measured using the sliding-slit test produced unreliable results when used to calculate clinical VMAT plans. These authors recommended careful testing for dosimetric accuracy when irradiating small targets, especially those used for SABR.
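How a DLG value acts on the MLC model can be pictured with a small sketch. The mechanism shown here is an illustrative assumption, not Varian's implementation: each leaf tip is retracted by half the DLG, so every leaf-pair opening is effectively widened by the full DLG when fluence is computed with a straight-leaf-end model.

```python
# Illustrative sketch: applying a DLG as a symmetric leaf-tip retraction.
import numpy as np

def apply_dlg(bank_a, bank_b, dlg_mm):
    """bank_a/bank_b: planned leaf-tip positions (mm) of the opposing banks,
    with each opening spanning [bank_a, bank_b]. Returns retracted positions."""
    a = np.asarray(bank_a, dtype=float) - dlg_mm / 2.0
    b = np.asarray(bank_b, dtype=float) + dlg_mm / 2.0
    return a, b

# Example: a 5 mm planned slit becomes a 6.9 mm effective opening with DLG = 1.9 mm.
a, b = apply_dlg([-2.5], [2.5], dlg_mm=1.9)
print(b - a)  # [6.9]
```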
Similarly, both Szpala et al. 11 and Kumaraswamy et al. 9 found that the single DLG value used in Eclipse should be considered an estimate only; the optimal DLG for each MLC leaf varies with the distance from the central-axis and with the position of the opposite leaf. Due to the differences between the field sizes and complexity of MLC motion required when treating different anatomical sites, 15 the DLG can be expected to vary with anatomical treatment site and treatment modality.
Previous examinations of the Varian DLG have focused on treatments with standard (nonstereotactic) fractionation, planned for the brain, 9,11,14 prostate, 9,12 head and neck, 9,12 and AAPM Task Group 119 standard volumes (average prostate and simplified spine). 13,14 Some of these studies have suggested that the DLG values that are required to accurately calculate dose for FFF modalities are especially different from the DLG values that are obtained using the sliding slit method. 14,16 It is therefore important to specifically evaluate and optimize the DLG that is used when calculating dose for hypofractionated SABR treatments that use FFF VMAT beams.
This study therefore demonstrates the use of radiochromic film measurements to investigate the optimal DLG for use when treating spine SABR cases using a VMAT technique, with an FFF beam, in order to provide a specific initial DLG as well as a film-based optimization method, which may be used by radiotherapy centers when attempting to commission or improve an FFF VMAT-based SABR treatment programme.
2.A | Test treatment plans
The prescription used for the clinical test spine SABR treatment plans was 24 Gy, to be delivered in 3 fractions of 8 Gy. This prescription was selected with reference to the literature. 16
2.C | DLG verification: Heterogeneous phantom
The suitability of the optimized DLG value was evaluated in an inhomogeneous phantom, the IMRT Thorax phantom (CIRS Inc, Norfolk, USA), using a fine (1 mm) dose calculation grid resolution. Only two DLG values were used when calculating the spine SABR plans on the IMRT Thorax phantom: the initial 1.1 mm value and the optimal 1.9 mm value. As these measurements in the transverse plane were used to evaluate the sparing of the spinal cord region as well as the accurate treatment of the planned high-dose (vertebral) region, both arcs from each treatment were delivered to each piece of film. This represents a single-fraction treatment dose.
3.A | DLG optimization: Homogeneous phantom
The DLG value for the 6 MV FFF beam with the Millennium-120 MLC was found to be 1.1 mm using the sliding-slit method, as shown in Fig. 1.
Using the DLG value identified with the sliding-slit method did not produce good agreement between planned and measured doses for the test plans. From these results, the optimal DLG from the film measurements for the FFF spine SABR plans is in the range 1.9-2.1 mm. The chamber measurements are shown in Fig. 3(b); from these, the optimal DLG is in the range 1.6-1.9 mm.
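The selection step can be summarised in a short sketch: for each candidate DLG, the test plans are recalculated in the TPS and compared against measurement, and the DLG giving the smallest overall disagreement is chosen. The numbers below are purely illustrative and are not the measured data from this study.

```python
# Illustrative sketch of choosing the optimal DLG from plan-vs-measurement data.
import numpy as np

candidate_dlg_mm = np.array([1.1, 1.3, 1.5, 1.7, 1.9, 2.1])
# Rows: the four test plans; columns: planned-minus-measured dose difference (%)
# when each plan is recalculated with the corresponding candidate DLG.
dose_diff_pct = np.array([
    [-3.1, -2.4, -1.6, -0.9, -0.2, 0.6],
    [-2.8, -2.1, -1.4, -0.6,  0.1, 0.9],
    [-3.4, -2.6, -1.9, -1.1, -0.3, 0.5],
    [-2.9, -2.2, -1.5, -0.8,  0.0, 0.8],
])
mean_abs_diff = np.abs(dose_diff_pct).mean(axis=0)
optimal = candidate_dlg_mm[np.argmin(mean_abs_diff)]
print(f"Optimal DLG: {optimal} mm")  # 1.9 mm with these illustrative numbers
```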
A value of 1.9 mm was therefore selected as the optimal DLG for use when planning FFF VMAT spine SABR treatments. The Eclipse AAA beam model used in this study was commissioned using data for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². 10 Although data for smaller field sizes are usually measured during linac commissioning, they are not required for commissioning of the beam model within Eclipse. 12 The DLG is used in the Varian Eclipse treatment planning system as an approximation factor to reduce the dosimetric calculation uncertainty arising from the use of a simple MLC model with straight leaf ends. Conventionally, the DLG is measured using vendor-supplied DICOM plans that produce a sliding slit with 13 control points,
3.B | DLG verification: Heterogeneous phantom
where the MLC leaves move at the same speed, in one direction, with a constant dose rate. 10 This broadly approximates an IMRT delivery, where the MLC leaves move in the same direction, from one side of the field to the other, albeit at different speeds.
By contrast, VMAT treatment deliveries are much more complex.
Each VMAT arc typically uses 178 control points, with MLC leaves undergoing frequent changes in direction. Adjacent MLC leaves may move in opposite directions to each other and at different speeds.
The dose rate is also modulated and defined for each control point.
A single point measurement using the sliding slit method does not replicate the complex MLC movements such as those in a VMAT treatment for a spine SABR case.
It is therefore unsurprising that determination of the appropriate DLG value for clinical use in planning VMAT treatments should require the use of more complex plans than the sliding-slit, evaluated using more sophisticated measurements than a point dose.
TABLE 2. Gamma agreement indices (percentage of points passing a gamma evaluation using 3%, 1.5 mm criteria) resulting from comparing the dose measured using film in a transverse plane through the heterogeneous (thorax) phantom against the dose calculated in the same plane using the treatment planning system with the sliding-slit-based DLG (1.1 mm) and the optimization-based DLG (1.9 mm).

Optimization of the DLG for VMAT treatments should involve the use of treatment plans that are representative of the intended clinical use of the beam model, with measurements completed using accurate, high-resolution two-dimensional dosimeters. 9,11,13,14 In this study, radiochromic film was shown to produce results that were sufficiently sensitive to DLG variation for use in DLG optimization, although verification using a second dosimeter (such as an ionization chamber) may be advisable (see Fig. 3). The radiochromic film used in this study also provided accurate, high-resolution measurements that allowed the suitability of the optimized DLG value to be verified, when dose was calculated at a high resolution and the test treatments were delivered to a heterogeneous phantom (see Fig. 4). Estimated measurement uncertainties affecting the use of radiochromic film for radiotherapy dosimetry range from 0.55% 21 to 4%. 22 It is therefore important to independently evaluate uncertainties when commissioning any radiochromic film dosimetry system that is used to optimize beam configuration values, such as the DLG. The results shown in Fig. 3 confirm the importance of optimizing the DLG using a range of clinically likely test treatments. For this study, the test treatment volumes were thoracic and lumbar vertebral bodies, with and without left and right pedicles, and the corresponding treatments were designed with a range of different field sizes and collimator angles. Figure 3 shows that the particular values of the DLG that gave the closest agreement between the planned and measured doses differed between plans and between measurement techniques. We have adopted the optimal DLG of 1.9 mm for the 6FFF beam model for use in our clinic, for treatment of spine SABR cases. We have not yet investigated the application of this optimal DLG to SABR planning for other anatomical sites. The identification of a DLG value that is optimal for an entire class of plans (for a specific treatment modality, used to treat a specific anatomical site) evidently requires the use of different examples of the specific anatomical site to be treated.
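For reference, a brute-force version of the gamma evaluation used in Table 2 (3% dose difference, 1.5 mm distance-to-agreement, global normalization) can be sketched as follows. Clinical film QA software adds interpolation, low-dose thresholds and calibration handling; this simplified version is illustrative only.

```python
# Simplified 2D global gamma evaluation (3% / 1.5 mm) on co-registered dose grids.
import numpy as np

def gamma_pass_rate(measured, calculated, pixel_mm, dose_pct=3.0, dta_mm=1.5):
    norm = calculated.max()                       # global normalization dose
    ny, nx = measured.shape
    search = int(np.ceil(dta_mm / pixel_mm)) + 1  # search window half-width (pixels)
    yy, xx = np.mgrid[-search:search + 1, -search:search + 1]
    dist2 = ((yy * pixel_mm) ** 2 + (xx * pixel_mm) ** 2) / dta_mm ** 2
    passed = np.zeros_like(measured, dtype=bool)
    for i in range(ny):
        for j in range(nx):
            y0, y1 = max(0, i - search), min(ny, i + search + 1)
            x0, x1 = max(0, j - search), min(nx, j + search + 1)
            dd2 = ((calculated[y0:y1, x0:x1] - measured[i, j])
                   / (0.01 * dose_pct * norm)) ** 2
            d2 = dist2[y0 - i + search:y1 - i + search,
                       x0 - j + search:x1 - j + search]
            passed[i, j] = np.min(dd2 + d2) <= 1.0  # gamma^2 <= 1
    return 100.0 * passed.mean()

# Toy example: two nearly identical dose planes on a 1 mm grid.
meas = np.random.default_rng(0).normal(2.0, 0.01, (50, 50))
calc = meas + 0.01
print(f"{gamma_pass_rate(meas, calc, pixel_mm=1.0):.1f}% of points pass")
```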
| CONCLUSIONS
This study used an evaluation of DLG suitability for four spine SABR test treatment plans to confirm that the DLG identified using the conventional sliding-slit method does not produce clinical treatment plans that show good agreement between planned and measured doses for VMAT treatments delivered using a FFF beam.
Based on the results of this study, the following general recommendations can be made, for optimizing the DLG for use in planning spine SABR (or any other) VMAT treatments:
CONFLICT OF INTEREST
The authors have no conflicts of interest to disclose. | 2018-04-03T05:01:56.764Z | 2017-06-02T00:00:00.000 | {
"year": 2017,
"sha1": "00a9f3629c2e7c647b4ed6f3f25f80e9769f57a1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/acm2.12106",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00a9f3629c2e7c647b4ed6f3f25f80e9769f57a1",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51934072 | pes2o/s2orc | v3-fos-license | A case report of infective endocarditis in a 10-year-old girl
Infective endocarditis is a rare disease in children, and it can result in significant morbidity and mortality. The epidemiology of infective endocarditis in children has shifted in recent years, with less rheumatic heart disease, more congenital heart disease survival, and increased use of central venous catheters in children with chronic illness. Less commonly, infective endocarditis occurs in children with no preexisting cardiac disease or other known risk factors. We present the case of a 10-year-old girl with no known cardiac disease or any other risk factors who was diagnosed with infective endocarditis according to the modified Duke criteria. Blood cultures grew Haemophilus parainfluenzae. She had prolonged fever for 2 weeks after starting antibiotics, even though her blood culture became sterile 48 hours after treatment. We emphasize the importance of maintaining a high index of suspicion for endocarditis in febrile children, even those without cardiac anomalies or other apparent risk factors.
Case Report
A 10-year-old previously healthy girl presented to the hospital with bacteremia. The patient was initially admitted to the hospital 2 days prior with 3 days of fevers (up to 40°C), emesis and fatigue that were thought to be related to a viral illness. She was discharged home 24 hours later after her condition slowly improved. A blood culture, which was obtained during that hospitalization, resulted positive at 48 hours for gram-negative rods, so the family was called and the patient was re-admitted. The patient had no chronic diseases, and took no medications. She was born full-term with no complications, and her growth and development were age-appropriate.

On admission she had a temperature of 38.5°C, a pulse of 96 beats/minute, a blood pressure of 98/65 mmHg, a respiratory rate of 20 breaths/minute, and an oxygen saturation of 98% in room air. Her physical examination was significant for an alert and oriented child who was mildly dehydrated. She had a regular heart rate and rhythm, and normal first and second heart sounds without murmurs. Her lungs were clear to auscultation bilaterally, and her abdomen was soft, non-distended, and non-tender without hepatosplenomegaly. She had normal dentition with no tooth decay. She had no skin rash or petechiae. The rest of her examination findings were unremarkable.

A complete blood count revealed a normal white blood cell count of 7.3×10³/mm³ (4-12), anemia with a hemoglobin and hematocrit of 10 g/dL (11-14) and 29.1% (32-42) respectively, and thrombocytopenia with platelets of 103×10³/mm³ (140-440). Electrolytes showed a sodium of 135 mmol/L (138-145), a potassium of 3.1 mmol/L (3.7-5.6), a blood urea nitrogen of 11 mg/dL (10-18), a creatinine of 0.5 mg/dL (0.4-1), and a calcium of 8.3 mg/dL (9-11). Her C-reactive protein (CRP) was elevated at 10.8 mg/dL (normal <1 mg/dL). Her urinalysis showed no hematuria. A second blood culture was obtained before she was started on intravenous (IV) ceftriaxone. Her first and second blood cultures (48 hours apart) were identified later as beta-lactamase-negative Haemophilus parainfluenzae. She developed a new regurgitant heart murmur two days after admission, so a transthoracic echocardiogram (TTE) was obtained due to concern for infective endocarditis (IE). This showed a vegetation on the mitral valve. She had a transesophageal echocardiogram, which confirmed a 13 × 10 mm vegetation below the posterior leaflet of the mitral valve and resultant mild to moderate mitral valve regurgitation (Figure 1).
High-dose ceftriaxone, 100 mg/kg/day divided every 12 hours, was continued. She had daily fevers (38.3-39.1°C) during her entire hospital stay. Blood cultures became sterile 48 hours after starting ceftriaxone. However, they were obtained daily for a week due to persistent fevers. They remained negative for bacterial growth. Abdominal ultrasound was normal with no hepatic, renal, or splenic abscess. A TTE was repeated twice and showed a stable-sized vegetation, stable mitral regurgitation, and normal cardiac function. Computed tomography of the head showed no evidence of embolic stroke, and the patient's neurological status and examination remained normal. It was determined that her daily fevers were due to the large size of the vegetation and the difficulty of eradicating the organism, not due to treatment failure or complications. Prophylactic surgery to prevent a primary embolic event was not indicated in this case per American Heart Association (AHA) guidelines. 1 She was discharged home despite persistent fevers after 11 days of hospital stay to continue IV ceftriaxone therapy at home for 4 weeks, with close follow-up with her primary care physician, pediatric infectious disease specialist, and cardiologist. She became afebrile at home two weeks after treatment started. A repeat echocardiogram a month after discharge showed stable mild to moderate mitral valve regurgitation with a small echodensity attached to the posterior leaflet of the mitral valve, likely representing fibrinous material and not an active vegetation.
Discussion
Infective endocarditis (IE) is a rare disease in children, and it can result in significant morbidity and mortality. The epidemiology of IE in children has changed in recent years, as congenital heart disease (CHD) has become the main predisposing factor in the developed world and rheumatic heart disease has become much less frequent. [1][2][3][4][5] There is an increased incidence of IE in children with no preexisting heart disease, likely due to increased use of central venous catheters (CVC), especially in premature children with chronic illness. 1,3 However, in up to 10% of cases, IE is seen in children with no known structural heart disease or other risk factors, 1 similar to this case. Viridans streptococci and Staphylococcus aureus remain the most common pathogens responsible for pediatric IE with or without CHD. [1][2][3][4][5][6][7] On the other hand, a small percentage (5%) is caused by a group of fastidious gram-negative organisms known as HACEK (Haemophilus species, Aggregatibacter species, Cardiobacterium hominis, Eikenella corrodens, and Kingella species). 1,5 Culture-negative IE, which is estimated to occur in 5% of cases as well, has been described in patients with clinical and echocardiographic evidence of IE whose blood cultures yield no organisms. 1,5 Damaged cardiac endothelium and transient bacteremia are believed to be the two main factors in the pathogenesis of IE. Damaged endothelium, usually resulting from turbulent blood flow in CHD, causes a sterile platelet-fibrin thrombus (nonbacterial thrombotic endocarditis). It is then the transient bacteremia (from a dental procedure or daily activities like toothbrushing) that colonizes this thrombus and replicates to form the infected vegetation. 1,5,8 The pathogenesis of IE in children with no preexisting cardiac disease and no CVC or other known risk factors (as seen in this case) is not fully understood. These children might have asymptomatic, undiagnosed mild structural cardiac anomalies. 3 The clinical presentation of IE has traditionally been classified as subacute or acute. Subacute IE presents as prolonged low-grade fever for weeks or even months with other symptoms like fatigue, chills, myalgia, and weight loss. 1,5,7 Acute IE, on the other hand, presents with high fever and rapid deterioration if not recognized in a timely manner. 1,5 Patients might have mixed features, as in this case: the patient presented acutely but was clinically stable overall and did not deteriorate. The diagnosis is based on the well-known modified Duke criteria (Tables 1 and 2). This case met the criteria for a definite diagnosis (2 major clinical criteria). Laboratory abnormalities that can be seen in IE are elevated acute phase reactants (erythrocyte sedimentation rate and C-reactive protein), anemia, thrombocytopenia, hematuria, and positive rheumatoid factor. 1,5 Cardiac echocardiography is essential for the diagnosis and for monitoring vegetation size and cardiac function. It is important to note that the absence of vegetation on echocardiography does not necessarily rule out IE. 5 Patients usually require a prolonged course (4-6 weeks) of intravenous antibiotics. The blood culture in this patient was positive for beta-lactamase-negative Haemophilus parainfluenzae; she was already on IV ceftriaxone for her bacteremia and was continued on a high dose of 100 mg/kg/day divided every 12 hours. Ceftriaxone is the recommended drug for HACEK per the AHA guidelines. 1
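The modified Duke classification logic referenced above (Tables 1 and 2 are not reproduced here) can be summarised in a short sketch. The thresholds follow the commonly cited scheme (definite IE: 2 major criteria, or 1 major plus 3 minor, or 5 minor; possible IE: 1 major plus 1 minor, or 3 minor) and are stated as general background rather than taken from this report.

```python
# Illustrative sketch of the modified Duke criteria classification thresholds.
def duke_classification(n_major: int, n_minor: int) -> str:
    if n_major >= 2 or (n_major == 1 and n_minor >= 3) or n_minor >= 5:
        return "definite IE"
    if (n_major == 1 and n_minor >= 1) or n_minor >= 3:
        return "possible IE"
    return "criteria not met"

# This patient met 2 major criteria (persistently positive blood cultures with
# a typical organism, and echocardiographic evidence of a vegetation):
print(duke_classification(n_major=2, n_minor=0))  # definite IE
```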
It is important to mention that caring for these children with IE should be a collaboration between the pediatric hospitalist, infectious disease specialist, cardiologist, and cardiac surgeon. The AHA released new guidelines in 2015 with detailed antibiotic regimens and surgical indications. 1 Cardiac complications include congestive heart failure, valvular dysfunction, intra-cardiac abscess, and heart block. 1,5,7 Extra-cardiac complications include, among others, sepsis, extra-cardiac infections (e.g. osteomyelitis and renal abscess), immune complex depositions (e.g. glomerulonephritis), and embolization (e.g. stroke). 1,5,7 Finally, the AHA recommends focusing mainly on oral and dental hygiene rather than antibiotic prophylaxis in preventing IE. Antibiotic prophylaxis is recommended before high-risk dental procedures for cardiac conditions with the highest risk of adverse outcome from IE, and these include: 1 i) cardiac valve repair with a prosthetic valve or material; ii) previous IE; iii) certain CHD (e.g. unrepaired cyanotic CHD, and repaired CHD with prosthetic material or device during the first 6 months after the procedure); iv) recipients of cardiac transplants who develop cardiac valvulopathy.
Conclusions
The following conclusions should be considered:
- Infective endocarditis should always be suspected in febrile children, even without known cardiac disease or other apparent risk factors like central venous catheters.
- Infective endocarditis might cause prolonged fever after starting treatment.
- Congenital heart disease is the principal predisposing factor for infective endocarditis, with more cases in children without pre-existing heart disease due to widespread use of central venous catheters.
- Viridans streptococci and Staphylococcus aureus remain the main culprit pathogens. | 2018-08-14T20:08:23.086Z | 2018-07-10T00:00:00.000 | {
"year": 2018,
"sha1": "cb105a3bfef153f02c731da5ec99b053e3a5c559",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4081/cp.2018.1070",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b07ddec1497a830214b5606710e0632838103e7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14931946 | pes2o/s2orc | v3-fos-license | A unique natural human IgG antibody with anti-alpha-galactosyl specificity.
A new natural anti-alpha-galactosyl IgG antibody (anti-Gal) was found to be present in high titer in the serum of every normal individual studied. The antibody was isolated by affinity chromatography on a melibiose-Sepharose column. The reactivity of the antibody was assessed by its interaction with alpha-galactosyl residues on rabbit erythrocytes (RabRBC). The specificity was determined by inhibition experiments with various carbohydrates. The anti-Gal interacts with alpha-galactosyl residues, possibly on glycolipids of human RBC (HuRBC), after removal of membrane proteins by treatment with pronase. In addition, the anti-Gal binds specifically to normal and pathologically senescent HuRBC, suggesting a physiological role for this natural antibody in the aging of RBC. The ubiquitous presence of anti-Gal in high titers throughout life implies a constant antigenic stimulation. In addition to the theoretical interest in the antibody, the study of the anti-Gal reactivity seems to bear immunodiagnostic significance. A decrease in the antibody titer was found to reflect humoral immunodeficiency disorders.
It is well accepted at present that the majority of antibodies in human serum are produced as a result of natural immunization. However, the information about the nature and specificity of natural IgG antibodies is relatively scarce. The classical natural antibodies against blood group antigens, which display a well-defined anti-carbohydrate specificity, are mostly of the IgM class and are present in only part of the population, according to the blood type (1,2). The anti-T or anti-Thomsen Friedenreich antibodies, which interact with β-galactosyl groups usually penultimate to terminal sialic acid residues on various cell membranes, are similarly mostly of the IgM class (3,4). In a recent study, Guilbert et al. (5) demonstrated, in pooled human sera, low activity of natural IgG antibodies to a variety of evolutionarily preserved proteins such as thyroglobulin, actin, or myoglobin. Although some cross-reactivity was noted, these antibodies could be inhibited mainly by their respective antigens. The structure of the epitopes involved could not be determined.
In the present study we report on the identification and isolation of a new natural IgG antibody with a distinct anti-α-galactosyl reactivity. The anti-galactosyl (anti-Gal) 1 antibody was found to be of interest for the following reasons: (a) The anti-Gal is the only natural IgG antibody found to be present in high titers in the serum of every normal individual. (b) The anti-Gal seems to display a physiological role in the senescence of human erythrocytes (RBC). The anti-Gal-binding site on the human RBC differs from any known galactose-containing RBC antigen, and is possibly an epitope on membranal glycolipids. (c) Since anti-Gal is constantly produced throughout life, the determination of its titer may serve as a potential tool for the assessment of the humoral immune response in individual patients.
Materials and Methods
RBC and Sera. Whole blood was obtained from normal donors and from β+ thalassemic patients, using sodium citrate or heparin as anticoagulant. Sera were obtained from clotted blood of normal individuals. All blood samples studied were screened for ABO (H) specificities by regular blood banking methods.
Isolation of Anti-Gal IgG from Normal Serum. The natural anti-Gal IgG was isolated from heat-inactivated sera of normal individuals of AB blood group by affinity chromatography, using as immunoadsorbent melibiose-Sepharose (Sigma Chemical Co., St. Louis, MO), expressing terminal α-galactosyl residues. Sera samples were applied in 100-ml batches onto a 10-ml melibiose-Sepharose column at 37°C at a flow rate of 10 ml/h. The unbound serum proteins were eluted with 200 ml phosphate-buffered saline (PBS). The bound antibody was eluted by 20 ml of 0.5 M D-galactose and passed through a 5-ml column of protein A-Sepharose (Sigma Chemical Co.). This second column, which binds the IgG molecules complexed with galactose eluted from the affinity chromatography column, was washed with 500 ml PBS. The IgG was eluted with 7 ml of 0.1 M glycine-HCl buffer, pH 2.6, and immediately neutralized with an equal volume of Tris-HCl buffer, pH 8.4. The eluate was dialyzed against two changes of PBS. To remove any possible residual anti-T (Thomsen Friedenreich) antibody activity, the dialyzed eluates were absorbed on Vibrio cholera neuraminidase (VCN)-treated human RBC at a 3:1 ratio (3).
For this purpose 10 ml of 10% O type RBC were incubated with 0.1 U/ml VCN (Behringwerke, Federal Republic of Germany) for 30 min at 37°C in PBS containing 3 mM CaCl2. The absorption was confirmed by demonstrating a complete lack of IgG binding to VCN-treated human RBC. The anti-Gal reactivity was assessed by the binding of the antibody to the terminal α-galactosyl groups on rabbit RBC (RabRBC) glycolipids (6), followed by agglutination or a rosetting antiglobulin test with K562 cells (7). The anti-α-galactosyl specificity of the antibody was determined by inhibition experiments with various carbohydrates (Sigma Chemical Co.).
Hemagglutination Assay and Inhibition by Carbohydrates. Hemagglutination activity of the isolated anti-Gal was titrated by mixing twofold serial dilutions of the antibody sample with an equal volume of 0.5% RabRBC suspension in the wells of microtiter tray. The diluent was PBS, pH 7.4. Agglutination was evaluated after the RBC were settled at room temperature for 2 h. Titers were expressed as the greatest dilution of sample that caused complete agglutination.
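The titration rule described above lends itself to a one-function sketch: with twofold serial dilutions, the titer is the reciprocal of the greatest dilution that still causes complete agglutination. The well readings below are illustrative, not data from this study.

```python
# Illustrative sketch of titer determination in a twofold serial-dilution assay.
def hemagglutination_titer(complete_agglutination):
    """complete_agglutination: booleans for wells at dilutions 1:2, 1:4, 1:8, ...
    Returns the titer as the reciprocal of the last fully agglutinating dilution."""
    titer = 0
    for i, agglutinated in enumerate(complete_agglutination):
        if not agglutinated:
            break
        titer = 2 ** (i + 1)
    return titer

# Example: complete agglutination through the 1:512 well only.
wells = [True] * 9 + [False] * 3
print(f"Titer: 1:{hemagglutination_titer(wells)}")  # Titer: 1:512
```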
To assess the capacity of a given carbohydrate to inhibit hemagglutination, the antibody at a titer of two agglutinating units was mixed with various concentrations of the carbohydrate in an iso-osmotic solution in titration wells. After 30-min incubation of the mixture at 37 °C, the 0.5% RabRBC suspension was added and scored for agglutination as described above.
EA Rosetting Direct Antiglobulin Test (DAT) with K562 Cells. The test is based on the high affinity between the Fc portion of RBC-bound IgG molecules and the Fc receptors on the myeloid cell line K562. This interaction leads to the formation of erythrocyte-antibody (EA) rosettes. The proportion of the K562 cells forming EA rosettes is related to the amount of the RBC-bound antibody molecules (8). A quantity of 0.1 ml of a 1% suspension of washed RBC was mixed with an equal volume of antibody-containing solution and incubated for 30 min at 37°C. Thereafter, the RBC were washed twice and resuspended in 0.1 ml of rabbit anti-human serum (broad spectrum, containing anti-human IgG, IgM, IgA, and anti-complement; Ortho Diagnostics, Raritan, NJ) or, if stated, in specific rabbit anti-human IgG serum (Ortho). The suspension was incubated for 30 min at 24°C. Thereafter, the RBC were washed twice, mixed with 0.1 ml K562 cell suspension in PBS (10⁶ cells/ml), spun for 5 min at 200 g, and incubated for 30 min at 4°C. The pellet was resuspended and the percentage of K562 cells binding the IgG-coated RBC and forming EA rosettes was scored in a hemocytometer. This antiglobulin test was found to be 20-40-fold more sensitive than the regular antiglobulin test and highly specific, since RBC lacking bound Ig formed no rosettes with K562 cells (8). Inhibition of the rosetting antiglobulin test by various carbohydrates was performed by their addition to the antibody solution before the incubation with RBC as described above.
Sensitization of Staphylococcus aureus (Staph A) Bacteria with Anti-Gal. Staph A strain
Cowan 1, containing protein A and prefixed with 0.1% glutaraldehyde, were incubated with the purified anti-Gal at a 5% final concentration of the bacteria, for 24 h at 4°C. The sensitized bacteria were washed and tested for binding to galactosyl residues on RabRBC by mixed hemagglutination. Staph A sensitized with anti-Gal agglutinated 1% RabRBC at a concentration of bacteria as low as 0.01%. For visualization of anti-Gal binding to various RBC populations, 1% RBC suspensions were mixed with 1% sensitized Staph A, spun for 5 min at 400 g, and incubated for 1 h at room temperature. Thereafter, the pellets were gently resuspended and fixed with 0.1% glutaraldehyde. These samples were processed for scanning electron microscopy according to procedure previously described (9).
Separation of RBC Age-related Subpopulations. Senescent and young normal RBC were separated on the basis of age-dependent differences in density, on a discontinuous Percoll gradient (Pharmacia Fine Chemicals, Piscataway, N J) according to the method of Alderman et al. (10). The subpopulation of senescent RBC with density of >1.11 g/ml (1-2% of total RBC) and the subpopulation of young RBC with density of <1.08 g/mi (1-2% of total RBC) were isolated, washed twice in PBS, and used for further analysis.
Enzymatic Treatment of HuRBC. The interaction of the anti-Gal with HuRBC was assessed after treatment with the following enzymes: (a) Pronase (from Streptomyces griseus; Sigma Chemical Co.): HuRBC were brought to a 10% suspension in PBS containing 0.1% pronase and incubated for 1 h at 37°C. Thereafter, the RBC were washed twice and adjusted to 10% concentration in PBS for further studies. (b) α-Galactosidase (from coffee beans; Sigma Chemical Co.): pronase-treated RBC were suspended in 0.1 M citric acid, 0.2 M Na2HPO4 buffer, pH 5.0, containing 3% glycerol, to a concentration of 10%. α-Galactosidase was added to a final concentration of 5 U/ml. The suspension was incubated for 3 h at 37°C and washed, thereafter, twice with PBS. Reactivity of the enzyme was confirmed by parallel elimination of the agglutinability of B-type RBC by anti-B antibodies (1,2). (c) β-Galactosidase (from E. coli; Sigma Chemical Co.): the pronase-treated RBC were resuspended in 0.01 M Tris-HCl buffer containing 0.01 M MgCl2, 0.01 M mercaptoethanol, and 0.1 M NaCl, pH 7.5. 500 U of β-galactosidase were added to the 1-ml RBC suspension. The mixed suspension was incubated for 3 h at 37°C and washed twice with PBS. Reactivity of the enzyme was confirmed by the hydrolysis of o-nitrophenyl β-D-galactoside to o-nitrophenol and galactose.
Results
Characteristics and Specificity of the Anti-Gal Isolated by Affinity Chromatography. The normal AB sera contained anti-Gal reactivity at titers of 1:800 to 1:1,600 (Table I), as assessed by the binding to RabRBC in the rosetting antiglobulin test. Due to its IgG nature, the unseparated anti-Gal failed to agglutinate other RabRBC in the presence of the whole serum IgG. After fractionation of 100-ml AB serum samples on a 10-ml melibiose-Sepharose column, the anti-Gal titer decreased in the effluent from 1:800 to 1:200, suggesting that 70-80% of the antibody activity was retained on the column. The specificity of the column absorbing the antibody was demonstrated by the finding that anti-Rh titer in one of the sera tested was not altered after fractiontion (Table I). The antibody obtained after specific elution with 0.5 M galactose, passage through the protein A column, and adsorption on VCN-treated RBC, was found to produce 100% rosettes with RabRBC at a titer of 1:500 decreasing to 10% rosettes at the endpoint ranging between 1:1600 and 1:6400. The antigalactosyl specificity of the antibody is described below (Table II). The antibody preparation directly agglutinated RabRBC at titers ranging between 1:64 and 1:256, depending on the donor of the AB serum. Sepharose column without melibiose did not bind antibody.
The IgG nature of the antibody obtained after the protein A column was further confirmed by the single identical precipitin line obtained in Ouchterlony double immunodiffusion assay against broad spectrum rabbit anti-human Ig and a-Galactosyl-a-galactosyl-a-glucosyl-fl-0 1. 5 3 fructose (stachyose) * The assay was carried out in anti-Gal concentration yielding two agglutination units.
the specific rabbit anti-human IgG serum. A Mancini radial immunodiffusion assay with the isolated antibody indicated that the antibody concentration within the serum ranged from 30 to 70 µg/ml. Isoelectric focusing followed by immune fixation with anti-IgG antibodies showed the isolated anti-Gal to be a polyclonal antibody with pI values ranging from 4.0 to 8.5. The specific interaction of the anti-Gal with α-galactosyl residues on the RabRBC was shown by the rosetting antiglobulin test as well as by hemagglutination (Table II). Galactose at a concentration of 100 mM completely inhibited binding of the anti-Gal to the RabRBC and formation of rosettes, whereas 50% rosette inhibition was observed at a galactose concentration as low as 2 mM. Agglutination of the RabRBC by the antibody was noted at a galactose concentration not higher than 6 mM. The α-galactosyl-containing carbohydrates melibiose, stachyose, and α-methyl-galactoside inhibited the anti-Gal reactivity more potently than did galactose. In contrast, β-galactosyl-containing disaccharides did not affect the binding even at 100 mM. Accordingly, β-methyl-galactoside was 30-fold less effective than α-methyl-galactoside in inhibiting anti-Gal reactivity.
Other carbohydrates tested, with the exception of D-fucose, failed to inhibit the binding of the anti-Gal to RabRBC even at the concentration of 100 mM. The capacity of D-fucose to partially inhibit the anti-Gal reactivity is probably due to the identical arrangement of hydrogen and hydroxyl groups at positions C-2, C-3, and C-4 as in D-galactose.
Binding of Anti-Gal to Human RBC. Since anti-Gal is present in high titers in normal sera, it is not surprising that the antibody does not bind to freshly isolated human RBC (Table III). Proteolytic treatment of human RBC by 0.1% pronase for 60 min at 37°C resulted in extensive binding of the anti-Gal, as indicated by the high proportion of rosettes obtained after incubation of treated RBC with the antibody. This interaction was readily inhibited by galactose and α-galactosyl-containing carbohydrates, but not by β-galactosyl-containing carbohydrates, fructose, glucose, or mannose (not shown). Furthermore, incubation of the pronase-treated RBC with α-galactosidase eliminated the capacity of these RBC to interact with the anti-Gal, whereas incubation with β-galactosidase did not affect the capacity to bind the antibody. As expected, the anti-Gal was devoid of antibodies binding to the β-galactosyl units penultimate to terminal sialic acid residues, since the isolated antibody was adsorbed on VCN-treated RBC (see Materials and Methods). Accordingly, no antibody binding was detected after incubation of VCN-treated RBC with the anti-Gal.

[Figure 1 captions: (b) Thalassemic RBC, ×6,600; note the surface deformation of this pathologically aged RBC. (c) Human normal senescent RBC, with density of 1.11 g/ml, obtained from a Percoll density gradient, ×9,500.]
In addition to binding to RabRBC and pronase-treated human RBC, the anti-Gal bound to thalassemic RBC, which are known to be prematurely aged RBC (11). The binding, as assessed by the rosetting antiglobulin test (Table III), could be visualized by scanning electron microscopy using Staph A sensitized with anti-Gal (Fig. 1 b). In a previous study we have reported that thalassemic RBC bind in situ IgG antibodies with anti-galactosyl specificity (12). Thus, the thalassemic RBC were incubated with 0.1 M galactose for 60 min at 37°C, for the elution of the autologous antibodies. After this incubation, thalassemic RBC formed 5-15% rosettes with K562 cells, whereas the binding of the natural anti-Gal to these RBC resulted in 55-100% rosette formation. In accordance, the Staph A sensitized with the anti-Gal readily bound to these deformed pathological RBC (Fig. 1 b). Nonsensitized Staph A did not bind to thalassemic RBC. In extension of this finding, experiments have shown that there is specific binding of anti-Gal to normal senescent, but not young, RBC (Table III, Fig. 1 c). To detect the relatively small amount of anti-Gal molecules bound to normal senescent RBC, the K562 cells were pretreated with 0.04 U/ml of VCN. The VCN cleaved the sialic acid units from the K562 cells and diminished the zeta potential between the myeloid cell and the RBC. Thus, the affinity of the Fc portion of the RBC-bound IgG molecule for the Fc receptor on the K562 cell was increased. While half of the K562 cells formed rosettes with the senescent RBC following their incubation with anti-Gal, only a few rosettes were detected with the young RBC.
Anti-Gal in Normal Sera of Donors of Various Age Groups. The titer of anti-Gal
was assessed in the sera of 300 individuals of varying ages. In >95% of the normal adult population, anti-Gal titers ranged between 1:800 and 1:1,600, irrespective of the blood group (Fig. 2). The anti-Gal, like other IgG antibodies, crosses the placenta and was detected in cord blood at titers only slightly lower than those found in the maternal blood (Fig. 2). The anti-Gal titer decreases to its lowest level at the age of 3-6 mo, correlating with the decrease in the total IgG level at this age. The antibody titer was found to increase gradually thereafter, reaching the adult level by the age of 2-4 yr. The anti-Gal titer in 20 elderly individuals (70-90 yr) was found to be within the same range as that of young adults. The binding of nonpurified anti-Gal to RabRBC at a titer yielding 100% rosettes was inhibited by D-galactose- and α-galactosyl-containing carbohydrates, to the same extent as the inhibition observed using the purified antibody. Other carbohydrates, including β-galactosyl-containing disaccharides, failed to affect this interaction (not shown).
Anti-Gal Reactivity in Immunodeficient Individuals. The ubiquitous presence of
anti-Gal in high titers throughout life implies a constant antigenic stimulation. It was thus assumed that the titration of anti-Gal may serve as a useful method for the assessment of humoral immunodeficiency disorders. The serum of an infant with Bruton-type agammaglobulinemia contained only 20% of the anti-Gal reactivity observed in the serum of an age-matched healthy individual (Table IV). A similar difference was observed in the serum IgG concentration of the two infants. After administration of a γ-globulin preparation to the immunodeficient patient, a 10-fold increase in the anti-Gal titer was observed. Reduced activity of the anti-Gal was similarly found in acquired immunodeficiencies. The sera of the multiple myeloma patients tested contained a four- to fivefold increase in the serum IgG concentration; however, anti-Gal titers were 40-80-fold lower than those in normal sera. This is due to the abnormal clone of IgG comprising most of the serum immunoglobulin entity. Advanced chronic lymphocytic leukemia is another type of secondary immunodeficiency state where most of the antibody-producing lymphoid tissue is replaced by the malignant lymphocytes. Accordingly, the anti-Gal titer was found to be very low. Active immune suppression caused by glucocorticoid treatment was similarly reflected in the titer of anti-Gal, as seen in the serum of a 6-yr-old patient suffering from hemophagocytic lymphohistiocytosis, receiving a daily dose of 40 mg prednisolone for the suppression of antibody production. It should be noted that a decrease of anti-Gal reactivity to a 1:400 titer was observed only after 30 d of treatment, whereas an additional 30 d of administration resulted in a further decrease of the anti-Gal titer to 1:100 (Fig. 3). Prednisolone administration for 75 d resulted in a decrease of the anti-Gal titer to 1:50. This titer did not alter upon continuation of the immunosuppressive treatment for an additional 45 d.
Discussion
The natural anti-Gal IgG that is described in the present study seems to be the same antibody previously described by us to be present in situ on thalassemic RBC (12). In that study we isolated the antibody through binding to RabRBC and elution by galactose. In the present study, we demonstrate the isolation of the antibody by affinity chromatography with a chemically defined antigen and show the distinct anti-α-galactosyl specificity of the antibody. The anti-Gal differs from all human natural antibodies with known anti-galactosyl specificity. The anti-blood group B antibody, which also displays an anti-α-galactosyl specificity, is present only in A type and O type individuals and is mostly of the IgM class (2). The anti-Gal is present in sera of all blood groups and is mainly an IgG antibody, as indicated by the almost similar titers observed in maternal and fetal blood. IgM antibodies do not cross the placenta. The anti-Gal differs from the anti-T antibody, which is mostly of the IgM class and interacts specifically with β-D-Gal(1→4)GlcNAc residues naturally present on cortical thymocytes or exposed on other cell types after VCN treatment for removal of terminal sialic acid units (3,13,14). It should be stressed that, unlike anti-T antibodies, which are found in the serum of all mammals tested (3), the anti-Gal reactivity as assessed by its binding to RabRBC could be demonstrated only in human or baboon serum, but not in the serum of mice, rats, guinea pigs, and rabbits. From the structural components known to be present on RabRBC, it is most likely that the anti-Gal binds to the glycosphingolipid Galα(1→3)Galβ(1→4)GlcNAcβ(1→3)Galβ(1→4)Glc-O-ceramide, which is found to be present on these RBC (6). The assumption that anti-Gal binds to glycolipids on RabRBC, while not yet proven, is supported by the finding that pronase-treated RabRBC bind the anti-Gal four- to eightfold more than do nontreated RabRBC (not shown). It is possible that the IgM moiety of the anti-Gal, which was not investigated in the present study, may reflect the heterophilic antibodies to RabRBC described long ago by Schiff (15) to be present in normal human sera.
The specific binding of anti-Gal to normal and pathologically senescent RBC may imply a physiological role for this antibody in the aging of human RBC. In situ binding of autologous IgG to normal senescent RBC has been reported (10, 16).
The observed selective in vitro binding of the isolated natural anti-Gal to senescent but not young RBC possibly reflects an in vivo mechanism for the labeling of the aging RBC to be recognized by the macrophages of the reticuloendothelial system.
The binding site of anti-Gal on human RBC is still under study. The extensive binding of the anti-Gal to pronase-treated RBC suggests that the antibody interacts with a glycolipid rather than a glycoprotein. The elimination of the antibody binding as a result of treatment with α-galactosidase further implies the α-galactosyl structure on the binding epitope. The only known glycolipid to be present on all human RBC and to bear terminal α-galactosyl residues is the trihexose ceramide molecule Galα(1→4)Galβ(1→4)Glc-O-ceramide (17), which is related to the Pk antigen (18). The possibility that trihexose ceramide is the antigenic determinant, which in the course of senescence becomes accessible to anti-Gal binding through the removal of membranal proteins (including those bearing sialic acid), is currently under investigation.
The anti-Gal was found to be present in sera of individuals above the age of 4 yr at a remarkably high titer ranging between 1:800 and 1:1600. A marked decrease in the anti-Gal titer was found only in infants of 3-6 mo. This is compatible with the total decrease of maternal IgG and the initiation of self IgG synthesis. The unaltered production of the antibody throughout life implies a constant antigenic stimulation which, as with the anti-blood group antibodies, may originate in the intestinal flora (1). E. coli as well as Shigella were reported to bear α-galactosyl groups on the cell wall (19) and thus may be the source of such antigenic stimulation. Mixed agglutination experiments between the anti-Gal-sensitized Staph A and various intestinal bacterial strains may help to establish this issue.
The observed invariable high titer of the anti-Gal may have diagnostic significance in providing information on the general function of the humoral immune system. This is demonstrated by the finding that primary and secondary humoral immune deficiencies, as well as active immune suppression, are reflected in the anti-Gal titer of the individual patient. The assessment of immune reactivity by determination of the anti-Gal titer has an advantage over the commonly used titration of anti-blood group antibodies, since the anti-Gal is present in all individuals. In addition, blood group antibodies decrease with age (20).
Further investigation of the natural anti-Gal antibody may not only provide clues to the mechanism of senescence of RBC, but may also prove that this antibody takes part in the nonspecific amplification of local immune responses, since the trihexose ceramides are lipids present in the membranes of a large number of cell types, including fibroblasts (17) and epithelial and kidney cells (21,22). These cells, which normally are not exposed to serum antibodies due to physiological barriers, may be found to bind the antibody following local inflammatory reactions resulting from primary specific immune reactions.
Additional study of this unique human natural IgG antibody may thus prove to be of immunological theoretical as well as practical interest.
Summary
A new natural anti-α-galactosyl IgG antibody (anti-Gal) was found to be present in high titer in the serum of every normal individual studied. The antibody was isolated by affinity chromatography on a melibiose-Sepharose column. The reactivity of the antibody was assessed by its interaction with α-galactosyl residues on rabbit erythrocytes (RabRBC). The specificity was determined by inhibition experiments with various carbohydrates. The anti-Gal interacts with α-galactosyl residues, possibly on glycolipids of human RBC (HuRBC), after removal of membrane proteins by treatment with pronase. In addition, the anti-Gal binds specifically to normal and pathologically senescent HuRBC, suggesting a physiological role for this natural antibody in the aging of RBC. The ubiquitous presence of anti-Gal in high titers throughout life implies a constant antigenic stimulation. In addition to the theoretical interest in the antibody, the study of the anti-Gal reactivity seems to bear immunodiagnostic significance. A decrease in the antibody titer was found to reflect humoral immunodeficiency disorders. | 2014-10-01T00:00:00.000Z | 1984-11-01T00:00:00.000 | {
"year": 1984,
"sha1": "c551bbedd5b68fa565baf2ee4fc9728e6dc1893f",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/160/5/1519.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c551bbedd5b68fa565baf2ee4fc9728e6dc1893f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269715514 | pes2o/s2orc | v3-fos-license | Midlife health crisis of former competitive athletes: dissecting their experiences via qualitative study
Sports participation confers many health benefits yet greatly increases injury risk. Long-term health outcomes in former athletes and transition to life after competitive sports are understudied. Ending a sport may pose physical and psychosocial challenges. The purpose was to determine the lived experiences of former competitive athletes and how their sports participation impacted their long-term health and well-being. Former college varsity athletes participated in semistructured interviews focusing on their experiences, including past and current health, the impact of injuries, activity, exercise, diet and transition to life after competitive sport. Thematic analysis was completed using a collaborative, iterative process. Thirty-one (16 female, 15 male) former college athletes aged 51.3±7.4 years were interviewed. Six themes emerged: (1) lifelong athlete identity; (2) structure, support and challenges of the college athlete experience; (3) a big transition to life beyond competitive sport; (4) impact of competitive sport on long-term health; (5) facilitators and barriers to long-term health after sport and (6) transferable life skills. Continuing sports eased the transition for many but often delayed their postathlete void. Challenges included managing pain and prior injury (eg, If I didn't have my knee injury, I would definitely be more active), reducing energy needs and intake (eg, When I was an athlete, I could eat anything; and unfortunately, that’s carried into my regular life), lack of accountability, changed identity and lost resources and social support. Participants suggested a programme, toolkit, mentoring or exit course to facilitate the transition. While former athletes benefit from transferrable life skills and often continue sports and exercise, they face unique challenges such as managing pain and prior injury, staying active, reducing energy intake and changing identity. Future research should develop and evaluate a toolkit, programme and other resources to facilitate life after ending competitive sports under ‘normal’ conditions (eg, retirement) and after a career-ending injury.
BACKGROUND
Former elite athletes live longer than nonathletes 1 2 yet also have higher rates of musculoskeletal injury, [3][4][5][6][7][8] osteoarthritis 2 9 and joint replacement. 2 10 Limited evidence to date presents an unclear picture of the long-term implications of sports participation on cardiometabolic health, body composition, function and overall wellness in ageing former athletes. 11 12 Studies on mid-twentieth century
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Former elite athletes live longer than non-athletes yet also have high rates of musculoskeletal injury, osteoarthritis and joint replacement. Limited evidence presents an unclear picture of the long-term implications of sports participation on cardiometabolic health, body composition, function and overall wellness in ageing former athletes. In short, participating in high-level sports does not make athletes immune to health challenges as they age. Former athletes may face unique challenges as they age that could be targeted in potential future intervention studies.
WHAT THIS STUDY ADDS
⇒ While midlife former competitive athletes experienced many benefits from sport (eg, transferrable life skills and social connections), they also faced unique challenges transitioning to life after sport that impacted their long-term health and well-being. Physical challenges included managing prior injuries, modifying diet to accommodate lower energy needs and finding new or different exercise(s) and activities. Psychosocial challenges included a changed identity, losing the scheduled and structured team environment, lack of accountability and no longer having such strong social support. While many continued to participate in sports as athletes and/or coaches, which eased the transition and delayed the postathlete void, others made clean breaks.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Midlife former competitive athletes suggested a programme, toolkit, mentoring or exit course to help facilitate the transition to life after sport for current competitive athletes. Future research should design and evaluate these resources to address athletes' unique physical and psychosocial challenges as they age. These resources may ultimately facilitate long-term health and wellness in current and former athletes.
former elite athletes suggest that they have better health and function as they age [13][14][15][16] despite high osteoarthritis prevalence, 14 but sports have changed dramatically in recent decades. 8 More contemporary data indicate that midlife former collegiate athletes have poorer physical fitness and health outcomes than recreationally active controls. 17 [18][19][20] In short, participating in high-level sports does not make athletes immune to health challenges as they age, and many athletes have long periods of reduced health and function. 11 17 18 Despite the common knowledge of struggles athletes face after competitive sports, 21 limited research has examined former athletes' challenges as they transition to life after competitive sports. 22 23 Athletes who suffer a career-ending injury often experience loss of identity, lack of external support and/or mental health decline. 24 Athletes after injury 24 and those who recently retired from sport 25 often struggle with engaging in sufficient physical activity, a well-known contributor to overall health. 26 27 To our knowledge, no research has investigated the health experiences of midlife former athletes, who have a unique perspective on how sports participation impacts their ageing and long-term health.
Determining the impact of competitive sports participation on long-term health may guide how healthcare providers and coaches counsel athletes and elucidate areas for future research. Our purpose was to determine the lived experiences of midlife former competitive athletes and how their sports participation impacted their long-term health and well-being. Specifically, we interviewed midlife former college athletes to characterise their lived experiences, including their previous and current physical, mental and emotional health; the impact of sports participation on current health and function; activity, exercise and dietary patterns; and transition to life after competitive sport.
Study design
The current study is part of an ongoing mixed-methods clinical study investigating determinants of the impact of injury history and sports participation on health outcomes and physical activity patterns in former collegiate athletes (NCT05344001). We used a qualitative description methodology, an interpretive methodology yielding results with a low level of abstraction that characterise phenomena experienced by participants. [28]
Participants
Participants were former collegiate athletes 40-64 years old interviewed between May 2022 and February 2023. Participants were required to have participated in a collision, contact and/or jumping, cutting or pivoting sport (eg, basketball, football, soccer, softball, volleyball) at the collegiate varsity level. Collegiate varsity sports in the USA are the highest level of competition, excluding professional sports, for many types of sports. Collegiate varsity athletes often train and compete for approximately 20 hours (or more) per week at high intensity and have many additional team obligations, including travel, team meals, film sessions and other activities. Individuals were excluded for the following reasons: neurologic (eg, stroke, Parkinson's) and/or degenerative disease that impairs function, pregnancy and lower extremity joint replacement. Participants provided written informed consent to participate in this IRB-approved study (Marquette University IRB Protocol #3967). They signed an additional section on the informed consent document explaining the qualitative study and their willingness to participate. Every former athlete from the parent study (DP5-OD031833) was invited to participate during the in-person informed consent process until the targeted sample size for each gender was met. No participants dropped out.
Data collection
Former college athletes participated in semistructured interviews (online supplemental appendix 1). Interviews were conducted by the first (JJC) and last (LBP) authors. JJC is an assistant professor of physical therapy who has been a physical therapist for 9 years and is a clinician-scientist and former college athlete (National Collegiate Athletic Association (NCAA) Division I and III basketball); he received training in qualitative study methodology from the last author (LBP). LBP is an associate professor and nurse practitioner with over 10 years of qualitative research experience. The interviewer(s) interacted briefly with participants during the in-person assessments at a state-of-the-art athletic research facility where the interviews were conducted. Participants were informed of the overarching purpose of the study and the general types of questions they would be asked. Open-ended questions probed previous and current physical, mental and emotional health; the impact of injuries; activity, exercise and diet; and transition to life after college sport (figure 1). Interviews were conducted in a private conference room by the first and/or last authors; occasionally, another research team member (eg, student research assistant) observed. Demographic data and sports participation history (ie, collegiate sport[s], competition level) were collected. Anthropometrics were measured, including height, weight and BMI. Addendums were added when participants thought of other ideas to share, but no other repeat interviews were conducted. Interviews were audio recorded. Interviewers recorded field notes immediately following each interview.
Data analysis
Audio recordings were transcribed verbatim using an online speech-to-text application (otter.ai) and checked for accuracy. Identifying information was redacted; transcripts were not returned to participants. Coding and thematic analysis were completed online (Dedoose) [29] using a collaborative, iterative process. Three research team members (JJC, TLW and LBP) coded transcripts separately and met frequently to review coded transcripts and create a comprehensive list of codes. Themes were identified from the codes until all experiences were represented. Conflicts between team members were resolved via discussion. Interviews were conducted until additional data did not add new topics or insights.
Study rigour was ensured through multiple methods, including independent coding and collaborative development of final codes and themes via consensus. Bias was limited by having transcripts reviewed by multiple researchers from differing disciplines (ie, recent elite collegiate athlete, physical therapist researcher and nurse practitioner-researcher) and frequent discussions. Credibility was ensured through establishing rapport with participants. [28] Intercoder reliability was ensured through frequent coding meetings and had substantial to excellent agreement (Kappa = 0.76-0.90). [30,31] Direct participant quotes in the study findings support the themes and allow the readers to consider the validity and transferability of the data.
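The agreement statistic quoted above can be reproduced from two coders' label sequences over the same excerpts. The following Python sketch is a minimal illustration only (the labels are hypothetical, not the study's data):

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Cohen's kappa for two coders' categorical labels of the same excerpts
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1.0 - expected)

# hypothetical codes assigned by two coders to ten interview excerpts
coder1 = ["identity", "identity", "injury", "support", "injury",
          "identity", "support", "burnout", "burnout", "injury"]
coder2 = ["identity", "identity", "injury", "support", "support",
          "identity", "support", "burnout", "burnout", "injury"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.87, in the substantial-to-excellent range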
FINDINGS
Thirty-one midlife former college athletes representing nine sports across all NCAA Divisions and the National Association of Intercollegiate Athletics competition were interviewed (table 1). Interviews lasted an average of 34 min (range 17-69 min). Six major themes emerged (figure 2).
Theme 1: lifelong athlete identity
Participants expressed a strong identity as athletes that permeated their lives and was integral to how others perceived them. Athlete identity transcended time: What are you? What do you get up and do every day? You go to school, or you play sports. That's all you do… My entire youth through my college years was all about athletics (57-year-old man). Another said, It was all sports, sports, sports (42-year-old man).
Competitive nature
A strong subcomponent of athlete identity was competitive nature, which remained in participants long after they retired from college sports and affected interpersonal interactions. Channelling or redirecting their competitive nature was often challenging and could become problematic, as competitiveness goes into something else, whether it's gambling, whether it's eating - anything that's a vice for you - it can turn bad because your friend has one drink, you have to have three (41-year-old man).
Lost identity and postathlete void
Participants sensed lost identity or a postathlete void after their sport ended: It's in your soul… And then that's just gone, that competition's gone (52-year-old woman). Another noted that sport becomes your identity. You spend so much time thinking about it and doing it… It takes a while to get that out of your brain. It took me years (60-year-old man).
Theme 2: structure, support and challenges of the college athlete experience
Disciplined and scheduled lives
Many participants noted their college experiences differed substantially from those of their non-athlete peers, especially in season. One major difference was their disciplined and scheduled lives: I was waking up at five o'clock in the morning and going to 6 A.M. practice, but it made me very disciplined in my academics… when I had an hour to study, I got great grades, so it kept me disciplined (49-year-old woman).
Strong social support
Participants enjoyed immediate strong social support with great camaraderie and teammates (59-year-old woman) that often started during recruitment before coming to campus. These relationships often remained decades later: You make lifelong friends, and it's not about winning or losing. It is the camaraderie and learning to work together and overcoming obstacles together as a team (52-year-old man).
Stress relief from sport
Participants noted significant stress-relieving benefits from sport: You had an outlet to go and exercise and sort of get rid of the stress and anything that was bothering you. And you had these breaks where for three or four hours, you're so engrossed in whatever you're doing as a college athlete, that the rest of the world goes away (52-year-old woman). Another noted, Sports is your sanctuary (55-year-old woman).
Injury
The benefits also came with some risks. Sport-related physical injuries negatively impacted experiences, challenging how participants viewed their identities and managed their schedules … (44-year-old woman). Others expressed injury leading to concerns with psychological health. One participant's injury caused a depression within itself… because I went from an All-American to last guy on the bench, then potentially being kicked off the team… because they needed my scholarship (41-year-old man).
Burnout
Burnout was another component of psychological health discussed by at least one-third of the participants. One summarised it as, At the end of my playing career, I hated basketball. I hated the situation. I didn't like the team. There was a lot of very bad emotions. And I walked away from the sport and never played again (49-year-old woman).
Gratitude ('embrace the experience')
Despite the strong negative emotions that some participants experienced, many expressed gratitude for participating in college sports and discussed the perks like travel, good food and graduating debt-free. One participant noted: I chose to be a college athlete. I'm so lucky. I get to be a college athlete… playing college athletics was life-changing for me (52-year-old woman).
Theme 3: a big transition to life beyond competitive sport
Transitioning from being a competitive athlete to the next phase of life was challenging. Participants described experiences of leaving sports, including why they stopped competing (eg, abrupt end due to injury, team folding or quitting vs graduating as planned) and the impacts on their career, family and social networks. Two major transitions occurred in former college athletes: (1) transition from college to postcollege life and (2) transition from sport to life after sport. These transitions coincided in some participants and occurred years or decades apart in others, as some continued sport long after college.
Challenges spanned physical and psychosocial domains
Challenges in transitioning from sports spanned physical and psychosocial domains (figure 2). Several participants expressed making a clean mental break from sport: When I got done with college, I boycotted working out. I just shut it down (60-year-old man). Another said: After college, I didn't do anything (sports) for maybe a few years (59-year-old woman).
Ending college sports was often linked to strong emotions that varied greatly among participants, ranging from sadness to gratitude to burnout. One said: I don't think I watched a basketball game for probably eight or 10 years… it's hard for me to come to an alumni event because I just really hate the game (49-year-old woman). In contrast, another said, It seemed very final. Having played grade school, high school, college, and then all of a sudden, nothing. I think that was difficult… (I felt) pretty lost. I think that somewhat contributed to my depression (61-year-old woman). Another said, I remember my last game: it was terrible. We were crying on the sideline… I remember it was just kind of miserable; watching it end, because it just becomes your identity (60-year-old man). However, for others, the transition after college sports seemed natural and logical: I pretty much just flipped from being an athlete-student to a wife and entered (my) career… And it was like this is what I was supposed to do (55-year-old woman).
Continuing sport eased the transition for many, at least initially
In contrast to those who quit sports after college, some participants continued to compete in their same sport recreationally, competitively or professionally. However, not all sports can be continued: American football is not a lifelong sport. And once you're done playing in college, there are no opportunities to play again. I mean, you can play flag football, but that's not the same, not even close (42-year-old man). Many others changed to different sports like running, triathlon or slow-pitch softball. Those who continued participating in sports often expressed fewer challenges, at least initially. For example, one participant who continued to compete into her fifties said she transitioned just fine. I didn't have any withdrawals from college athletics, only because I continued to compete (61-year-old woman). However, she had a very difficult transition after she stopped playing sports later in life because that was her activity and social life.
Coaching helped others transition
Some participants who did not continue playing sports after college transitioned into coaching. Coaching provided not only a consistent schedule but also meaning and purpose: Coaching is obviously a job. I got paid for it. But it was so much more than that. It was teaching young minds; it was helping them through the experiences that I had to experience (44-year-old woman). Another expressed, When they (young athletes) realize you put the time and effort into anything, that's a life lesson, natural sports lesson, and I love watching these kids realize that. So that's why I like coaching (53-year-old man). Coaching, however, did not satisfy all participants: So then I went back to finish competing, just because it [coaching] didn't satisfy the void (42-year-old man).
Factors facilitating the transition
Several participants suggested methods for facilitating the impending transition from college sports. Recommendations included that a programme, toolkit or course to help facilitate the transition to life after competitive sport would be helpful, addressing sleep, nutrition and hydration, exercise, mental (45-year-old man) and financial literacy, among others. However, another expressed that, while communication and education were very important, she doubted anything could fully prepare college athletes: It's just like, how do you get prepared to have a baby?… But… I think communication and education are just so important (44-year-old woman). Participants also discussed positive role models or mentors as extremely beneficial to facilitate transition: For me, the transition to regular life was observing his [my coach's] life, listening to his wisdom. Knowing how sports can serve as an example, in how to move through life effectively, and how to handle obstacles (56-year-old man).
Theme 4: impact of competitive sport on long-term health
Participants discussed their health after competitive sport, noticing immediate changes after ending sport. One participant highlighted these changes, After discontinuing college athletics, I think that's when my weight and physical health and strength was at its worst (40-year-old man). Participants often asserted that their health was worse than in college: Oh, there's no comparison. I don't think I can ever work at that level again. It's exhausting (46-year-old woman).
Physical health and ageing
Many participants mentioned the effects ageing had on their ability to exercise at their desired levels. I can't do near what I did as a college kid… I'm not even running anymore (59-year-old woman). Regarding ageing, Body wise - just aches and pains. You go running one day, and suddenly, your hamstring hurts, and it lasts for three months (49-year-old woman) and Can I still go at the same rate? No. My knees hurt daily (52-year-old woman).
Many noted the effects of sport-related physical stressors and injury on top of normal ageing. One participant injured her shoulder in practice and told the coach she was fine. In retrospect, I wish I would have addressed it sooner because now I could be sleeping, and my shoulders will pop in and out and I wake up and my shoulders are sore (41-year-old woman). At times, participants expressed changes from injury as bothersome, but they did not let it stop them: I really don't perceive my injuries as anything other than a nuisance. I still keep moving (59-year-old woman). Some participants thought they were better in some aspects of health presently than in college. One former athlete felt she was physically fit with more endurance in college from playing soccer, but overall, just with the diet, things that are different, that I'm healthier in that respect now than I was back then (47-year-old woman). A former volleyball player mentioned she ran in college to stay in shape. Still, her workouts are now more well-rounded, with her lifting weights and participating in balance exercises like yoga.
Participants also commented on their health compared with that of their former teammates. There's probably only 5 or 10 of us and I'm one of them, that looks like they could still play. Everybody else is kind of either really limping, big and fat (62-year-old man). Contrastingly, None of us want to be the one that's not staying in shape, right? And so, I look at them, and I'm like, 'wow, you inspire me.' Most of us are in better shape ('body type wise') now (52-year-old woman).
Psychological health was positively and negatively affected by sport
Several participants discussed how sports helped them manage stress, depression or anxiety, whereas others had contrasting experiences. One participant noted, I think what I realized late in life is I had a lot of anxiety that was masked as a younger person because I had an outlet of athletics (52-year-old woman). Transitioning out of competitive sport (theme 3) and managing a changing athlete identity (theme 1) presented psychological health challenges to many (see above).
Theme 5: facilitators and barriers to long-term health after sport
Participants discussed barriers and facilitators of continuing good health after college. The most common barriers included lack of accountability, pain, prior injury and changing energy (dietary) needs. Facilitators included engaging in continued physical activity and competition, often through team or endurance sports, and prioritising health, sometimes through wellness challenges or social networks. Several factors, like families and work, were considered barriers by some and facilitators by others.
Lack of accountability
One barrier was a lack of external accountability, as a coach no longer tells you what to do (52-year-old woman). As described by one participant, I don't know if I can push myself to that level of exercise because, to me, it was insane. I'm grateful for it (college sport). But when it's gone, it's really hard to stay active at that level (46-year-old woman).
Pain and prior injury
A major barrier to exercise was dealing with pain and injury post-sports. One participant noted: If I didn't have my knee injury, I would definitely be more active. I try to be active. I try to swim and get exercise that way. But I would definitely be up and around more, I think, if my knee wasn't bothering me (61-year-old woman).
Another noted that his constant knee pain due to a prior traumatic knee injury led to daily physical limitations and that he used alcohol to 'numb' the pain (figure 2). Even participants who did not have a prior traumatic injury experienced pain that limited their activity (see theme 4).
Overeating and changing metabolism were struggles for many former athletes
Overeating was another common concern often mentioned by participants that was a barrier to good health. Many needed and were often instructed to eat large quantities of food during competitive sports, and these patterns often continued after they finished competing. Participants mentioned needing to trim down as their activity levels fell and metabolisms slowed with ageing. Others noted exercise was driven by wanting to prevent weight gain while continuing to enjoy eating as they had as athletes. A football player noted, When you stop participating, your metabolism changes, and I didn't modify my eating habits when my metabolism started slowing down. I started gaining a lot of weight (53-year-old man).
Prioritising health
Many participants recognised exercise's positive physical and mental health benefits, which facilitated their continued exercise. One participant summed it up: athleticism helped me to endure many things in my life. It made me feel stronger, physically and mentally (61-year-old man). Others noted that they intentionally tried to learn about and improve their health. Notably, several participants acknowledged that nutrition was not a focus in college, and their nutrition improved as they learnt more about healthy eating: Our bodies, I think, look better now. We are leaner; we eat better. You know, nutrition was not a thing… I can remember eating pizzas - a whole one to myself! (52-year-old woman). Social support groups and wellness challenges, sometimes offered through work or through maintained social networks with prior teammates, helped facilitate health.
Theme 6: transferrable life skills of the athlete
Participants discussed many skills developed as collegiate athletes that were transferable to success in life, including organisational skills (ie, time management, punctuality, dependability) and workforce skills (ie, strong work ethic, team membership, leadership abilities). Participants had strong relationship abilities and felt skilled at interacting with other people from different worlds (55-year-old woman); being with and around people all the time improved my interpersonal skills (52-year-old woman).
Beyond the organisation and relationship skills, participants often described mental fortitude or toughness: There's a certain mentality for sure, you know, there is… a warrior mentality… a certain like, 'kick butt' kind of attitude that you have, and it doesn't just go away (60-year-old man). This mental toughness came with an ability to handle failure: Sport teaches you how to work hard, how to be part of a team, how to deal with failures… Sports are a microcosm of life… great laboratories to learn about yourself and learn about how you are pursuing life (45-year-old man). Some related that employers were particularly interested in hiring former athletes because of their work ethic and determination. Not until much later in life did some participants realise the benefit and positive impact it [sport] would have on me in the future… from developing your character, developing different skill sets, and the network that you will have (43-year-old woman).
DISCUSSION
While midlife former college athletes experienced many benefits from college sports, they also faced unique challenges transitioning to life after sport. Challenges spanned physical and psychosocial elements. Physical challenges included managing prior injuries, modifying diet to accommodate lower energy needs and finding new or different exercise(s) and activities. Psychosocial challenges included changed identity, loss of the scheduled and structured team environment, lack of accountability and no longer having such strong social support. While many continued to participate in sports as athletes and/or coaches, which eased the transition and delayed the postathlete void, others made clean breaks. Participants suggested a programme, toolkit, mentoring or exit course to help facilitate the transition to life after sport for current competitive athletes.
Strong athletic identities, including their competitive nature, often persisted decades after individuals stopped participating in college sports. These traits were viewed as a benefit in many professional and personal settings yet also something that needed to be managed and channelled appropriately, particularly in some work or social environments. Strong athlete identity has been identified in other qualitative [32,33] and mixed-methods [34] research and is positively associated quantitatively with postretirement depression and anxiety in former college varsity athletes. [35] These and other research studies [36][37][38] suggest that retiring from sport is complex and often challenging, as athletes' identities are tested or changed. Further research is warranted, including how to better equip athletes to prepare for this transition.
Pain may be a major barrier for many former competitive athletes to continue participating in sports and exercise. Several studies have noted that physical activity patterns may reduce dramatically and be insufficient after sports-related injury, especially ACL injuries, [32,33] but activity patterns among former athletes are not as well documented. Ekhtiari and colleagues found that nearly one-third of former professional basketball players have moderate to severe problems with mobility, and almost half have moderate to extreme pain/discomfort. [18] Additionally, opioid use is high among athletes. [39] In the present study, most individuals expressed changing their exercise or activity patterns after competitive sport, often in reaction to prior injuries and/or current pain. Many former athletes exercised less or completely stopped sports for some time after college; however, a few felt their workouts were more well-rounded after college. Turning to other sports, including recreational leagues like slow-pitch softball or endurance sports like running or triathlon, helped athletes stay active and engaged socially. Future research should investigate optimal management strategies for prior injuries and why some athletes continue to participate in sports and exercise and others do not.
There are several limitations to consider when interpreting the results of the study. First, the sample was heterogeneous (ie, different sports, levels of college sport, roles on the team (star player and captain vs end-of-the-bench role player), etc); this heterogeneity, however, may make the findings more translatable to a broader range of former athletes. Second, the study relied on qualitative measures, and objective quantifications of corresponding outcomes (eg, dietary intake and activity levels) were not presented. Finally, the participants were college athletes approximately two to four decades ago. Thus, the applicability to present college athletes is unknown. We chose to focus on midlife because these individuals have experienced the transition to life after competitive sport and more fully understand the long-term implications of their sports participation on their overall health. Future research should also explore transitional experiences of recent former college athletes, as many factors, including training expectations and resources (eg, nutrition, psychological counselling, etc), are ever changing.
CLINICAL IMPLICATIONS
The off-ramp from competitive sport needs to be managed. While former college athletes benefit from transferrable life skills and often continue sports and exercise, they face unique challenges transitioning to life after sport. Health challenges included managing pain and prior injury, maintaining physical activity, reducing energy intake and changing identity. Future research should develop and evaluate a toolkit, programme and other resources to facilitate life after ending competitive sports under 'normal' conditions (eg, retirement) and after a career-ending injury.
X Jacob John Capin @JacobCapin
Acknowledgements The authors would like to acknowledge all study participants for their participation in and invaluable contributions to this research. The authors also thank individuals from the Life After Sport Trajectories (LAST) Lab who supported aspects of the study, especially Ms. Lindsey Mirkes, MS, for her role in recruitment, scheduling, and data management.
Contributors JJC contributed to conception and design; funding acquisition; data acquisition, analysis, and interpretation; manuscript drafting; and incorporating revisions. TLW contributed to data analysis and interpretation, manuscript drafting, figure preparation, and critical review. JHS contributed to data interpretation, manuscript drafting, figure preparation, and critical review. CSS, SLL, WBF and SKH contributed to conception and design, data interpretation, and critical review. LBP contributed to conception and design; data acquisition, analysis, and interpretation; manuscript drafting; and critical review. All authors guarantee the accuracy of the data and approve the final version of the submitted manuscript. JJC serves as the guarantor of the work, accepting full responsibility for the work, the conduct of the study, the integrity of the data, and the decision to publish.
Figure 1
Figure 1 General interview categories (see online supplemental appendix 1 for the full interview questions and prompts list).
Figure 2
Figure 2 Six major themes, sub-themes and representative quotes (the percentages listed for each theme represent the proportion of the number of times any codes within that theme were coded out of the total number of coded quotes). | 2024-05-12T05:08:01.259Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "ed6f843a55fb9f963f3f3c7155ae00e1cde8db41",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ed6f843a55fb9f963f3f3c7155ae00e1cde8db41",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5463549 | pes2o/s2orc | v3-fos-license | Oral autopsy: A simple, faster procedure for total visualization of oral cavity
Identification of individuals in mass disaster and also tracing unidentified human remains are challenges to the investigating team. The forensic dentist plays an important role in identification, especially in mass disasters.[1-3] Identification based on dental information is a highly efficient, reliable, and rapid procedure.[4,5] Currently, forensic dentistry plays a major role in forensic research and identification of humans worldwide. Be it a manmade disaster or natural disaster, the important information obtained contributes to the identification of mass disaster and homicide victims. It can be achieved by the tooth shape and position, restorations, malocclusion, anomalies in teeth and so on that make each dentition unique. It can also guide the investigating officer in homicide cases by establishing the identity of criminals.[6,7]
Introduction
Identification of individuals in mass disaster and also tracing unidentified human remains are challenges to the investigating team. The forensic dentist plays an important role in identification, especially in mass disasters. [1][2][3] Identification based on dental information is a highly efficient, reliable, and rapid procedure. [4,5] Currently, forensic dentistry plays a major role in forensic research and identification of humans worldwide. Be it a manmade disaster or natural disaster, the important information obtained contributes to the identification of mass disaster and homicide victims. It can be achieved by the tooth shape and position, restorations, malocclusion, anomalies in teeth and so on that make each dentition unique. It can also guide the investigating officer in homicide cases by establishing the identity of criminals. [6,7] Usually, the forensic dentist participates in establishing the age, [7][8][9] determination of sex, [7,10,11] and race of corpses or skeletal remains, [12] manufacture of models for rugoscopy, [13,14] examination of bite marks, [15] and assessment of facial trauma, especially in cases of child abuse. [16,17] The forensic dentist is also responsible for making radiological examinations [18,19] and postmortem dental records. [20] The common problems that a forensic team faces during identification are the poor state of conservation of unidentified bodies and the incomplete presence of remains, which may retard the identification process. In such situations, oral autopsy may help; in difficult cases where oral examination cannot be completed due to poor accessibility, an oral autopsy is necessary for proper visualization of the teeth and their structures. Oral autopsy helps to make a proper postmortem dental record. It primarily helps to register the teeth present in the oral cavity, the ante mortem dental treatments received, the study of dental malpositioning, and the type of occlusion. Within the field of forensics, dental evidence is considered to be the most trustworthy method of identification. [4] In this article, we have attempted to explain a simple method to obtain access to the oral cavity for recording postmortem dental findings in suitable cases where accessibility is a challenge and there is a presence of complex findings.
Description of procedure
Consent is to be obtained from the medical officer and also the investigating officer for performing an oral autopsy, after explaining the complete procedure. This oral autopsy procedure is simpler, faster, and preserves the facial configuration, which may help in the visual recognition of the remains by family members and other interested persons. The procedure includes: • The information obtained can be compared with the data offered by the family, dentists of the victims, and contributing private or public institutions (ante mortem data), and finally the establishment of the identity.
Miscellaneous data
(Some of the manifestations an examiner has to look for include bluish color of the lips and fingernails due to hydrochloric acid, corrosion of oral mucosa due to consumption of kerosene, petrol, etc., blue line along the gum, with bluish black edging to the teeth in chronic lead poisoning, garlic smell from the oral cavity due to consumption of organophosphorus poison, garlicky pungent odor due to consumption of aluminum phosphide, etc.)
Discussion
Disaster is said to be an event that affects multiple individuals' lives as well as property at a given time and place. A disaster can be natural, such as a hurricane, tornado, flood, or earthquake, or manmade, such as a terrorist attack. It may affect a large number of people. If individuals remain unidentified, it is a source of psychological trauma to the surviving family members and friends and slows down the legal procedure. The mass disaster management team is a group of specialists that comprises the police, army, home guards, civil guards, and medical examiners such as forensic pathologists and forensic odontologists. The role of dentists is essential, especially in the disaster victim identification (DVI) team. [1,2,5,7,21] This has been noted in the case of the tsunami in 2004, the World Trade Center attack in 2001, and many more such incidents. It is necessary for the forensic identification team to have a good relation with the local or state dental association so that the identification of victims in a mass disaster is speedy.
Postmortem dental record
Scientifically supported positive identification, especially in mass disasters and of unidentified human remains, requires well-organized, preplanned management.
Usually, the process of identification of an unknown cadaver relies on the antemortem and postmortem data. It is very much necessary to have good access to the oral cavity to record complex findings by oral autopsy as a part of identification, without disfiguring the facial configuration. As a standard protocol, the photographs of unknown cadavers are obtained by DVI team members. The person performing the oral autopsy has to consider taking photographs before and after the oral autopsy. Photographs of both arches, the occlusion, restorations, and dental appliances (each appliance and restoration has to be photographed and explained) have to be taken. A detailed examination of the oral cavity helps to understand the socioeconomic status and personal habits of the victim as well as the treatment received by him/her. Radiographs surely have a very important role in the process of identification; they provide details which may not be registered by clinical examination, for example, the shapes of restorations, bases under restorations, dental and radicular shapes, endodontic treatments, and the anatomy of the maxillary sinuses. Virtopsy or imaging methods such as orthopantomogram (OPG), computed tomography (CT), and magnetic resonance imaging (MRI) can be used; if an antemortem radiographic record is available, it has to be compared with the postmortem record, and this would thus contribute an additional element to the process of identification.
An attempt can be made to help the investigating team by performing an oral autopsy of an unknown deceased individual and providing the complete dental picture to the team, requesting them to contact the dentists in and around the location in which the deceased person was found.
Although many researchers such as Fereira et al., [22] Luntz (cited by Vale and Noguchi), [22] and Jakobsen et al. [22] have suggested their own methods for obtaining access to the oral cavity, each method has its own advantages and disadvantages. The main aim of the investigative team is identification of the individual, which may solve many legal and administrative issues. Any procedure can be followed with the prime consideration of preparing a proper postmortem dental record, which may facilitate identification of the individual.
Furthermore, oral autopsy may help in the thorough examination of the oral cavity in cases where death has occurred due to the consumption of poison. [23][24][25][26] It also helps to collect the oral mucosal tissue for establishing the postmortem interval.
Conclusion
Recording of postmortem oral findings is useful for future comparison with antemortem records. Accessibility to the oral cavity is essential for recording the postmortem data. We have attempted a procedure that is easy to perform and provides total accessibility to the oral cavity where accessibility is difficult: bilateral incisions from the angles of the mouth to the tragus of the ears, followed by reflection of the tissue. We have found the above procedure not only simpler but also faster.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T03:37:05.572Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "f0d3e8c99427d01bdfeb74b5c19acb62ec51fc45",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4970404",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "327a6b16acc42063f9b5ce70b6aad22524adf0c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119723213 | pes2o/s2orc | v3-fos-license | On stability, convergence and accuracy of bES-FEM and bFS-FEM for nearly incompressible elasticity
We present in this paper a rigorous theoretical framework to show stability, convergence and accuracy of improved edge-based and face-based smoothed finite element methods (bES-FEM and bFS-FEM) for nearly-incompressible elasticity problems. The crucial idea is that the space of piecewise linear polynomials used for the displacements is enriched with bubble functions on each element, while the pressure is a piecewise constant function. The meshes of triangular or tetrahedral elements required by these methods can be generated automatically. The enrichment induces a softening in the bilinear form, allowing the weakened weak (W2) procedure to produce a high-quality solution, free from locking, that does not oscillate. We prove theoretically that both methods satisfy the uniform inf-sup and convergence conditions. Four numerical examples are given to validate the reliability of the bES-FEM and bFS-FEM.
Introduction
Rubber-like materials are able to withstand extremely high strains whilst exhibiting very little or no permanent deformation and consequently are widely used in industry. In addition to elastic properties, the volume of these materials is almost preserved upon loading. Rubber-like materials are therefore said to be nearly incompressible and typically possess bulk moduli that are several orders of magnitude higher than their shear moduli (equivalently, they have a Poisson's ratio close to one half). It is well known that the stress analysis of nearly-incompressible materials requires special care. Applying low-order finite elements based on quadrilaterals, hexahedra, triangles or tetrahedra to such problems results in a severe underprediction of the displacement known as locking. A variety of numerical methods have been proposed to overcome this defect, for example: the h-version of finite elements [9,10], the B-bar method [39], mixed formulations [6,15], enhanced assumed strain (EAS) modes [21,52], reduced integration stabilization [28], two-field mixed stress elements [49], a stream function approach [8], the mimetic finite difference method [14], and so on. In addition to these, several publications investigate an average nodal pressure formulation in which a constant pressure field is enforced over a patch of triangles or tetrahedra [17,19,27,29,47]. Despite the many available approaches for solving nearly-incompressible elasticity problems on a triangulation, only a few methods are based on rigorous mathematical analysis. An example of one such method can be found in [29]. Here, the author introduced a discontinuous pressure and used bubble functions in order to enrich the space of piecewise linear polynomials to which the displacements belong. However, the method still has certain drawbacks inherited from FEM, such as 1) an overestimation of the stiffness matrix for nearly-incompressible and bending-dominated problems, 2) a poor performance for distorted meshes, and 3) a poor accuracy of the stresses. Moreover, we make mention of the very important three-field (Hu-Washizu) methods. In fact many of the two-field methods mentioned in the overview can be derived as special cases of the Hu-Washizu formulation, for which a rigorous analysis has been carried out in [26,30].
In this paper we propose two improved methods which use bubble functions as enrichments to the edge-based and face-based smoothed finite element methods (bES-FEM and bFS-FEM). These methods contribute to the further development of advanced numerical tools that can be used for nearly-incompressible elasticity problems, whilst simultaneously building on the advantages of some classical methods as explained below.
Firstly, an improved version of the so-called bES-FEM has the same desirable features as the bES-FEM-T3 studied in [43]. Both bES-FEM and bFS-FEM work well for three-dimensional problems, where bubble functions are generally defined by the (d+1)th-power bubble function and the hat function. Most importantly, both methods are theoretically proven to ensure the uniform inf-sup condition and convergence. In addition, there is a basic difference between bES-FEM and bES-FEM-T3, as follows: for bES-FEM, the approximate pressure and displacement are directly computed by the mixed approach provided in (16a) and (16b), while for bES-FEM-T3, the approximate pressure is computed a posteriori from the displacements based on the edge-based smoothing domains.
Secondly, we use mixed methods [6,16] to reformulate the linear elasticity problem as a mixed displacement-pressure problem. Our aim is to attain a good approximation to the pressure solution [9], which we model here as piecewise constant.
Thirdly, the proposed approximation to the displacement solution is a combination of the displacement from ES-FEM/FS-FEM [35,42] and the displacement from the bubble functions [48,41]. ES-FEM and FS-FEM improved the standard FE strain fields via a strain smoothing technique described in [23]. The methods proposed in this paper build on ES-FEM and FS-FEM, and therefore inherit the positive qualities associated with this smoothing technique, namely 1) their solutions are more accurate than those of linear triangular elements (FEM-T3) and quadrilateral elements (FEM-Q4) using the same sets of nodes [39,53]; 2) ES-FEM and FS-FEM perform well with distorted meshes; 3) their stress solutions are very precise and convergent; and 4) they can be easily implemented into existing FEM packages without requiring additional degrees of freedom. Clearly this technique of smoothing is a powerful tool, and it has already been applied to a wide range of practical mechanics problems, e.g., [36,44,45]. Nevertheless, if the displacement is approximated only by ES-FEM or FS-FEM, i.e., without enrichment by bubble functions, these methods violate the inf-sup condition and uniform convergence. Other methods from the SFEM family also fail to satisfy this condition, implying that they also suffer from volumetric locking in the case of nearly-incompressible elasticity [42,43]. To overcome volumetric locking for the SFEM family, only a few approaches have been presented. For example, in [42], the authors suggested a combined FS/NS-FEM model and in [43] the use of bubble functions was proposed. Neither of these approaches is based on a rigorous mathematical analysis.
Finally, the degrees of freedom associated with the pressure variable can be statically condensed out of the system of equations, in contrast to the method based on the classical MINI element [16], for example, where condensation cannot be applied.
The rest of this paper is organized as follows. In the next section, we briefly recall the boundary value problem of linear elasticity, the mixed displacement-pressure formulation and its associated weak form. Section 3 describes the enrichment of ES-FEM and FS-FEM by bubble functions. Section 4 presents the mathematical properties of bES-FEM and bFS-FEM, where only small deformations are considered. Displacement, energy and pressure error norms are defined in section 5 for the precise quantitative examination of various models. Four numerical tests are presented in section 6 to demonstrate the effectiveness and accuracy of the proposed methods. In the final test we apply the proposed bES-FEM to a large deformation problem. In the last section we draw conclusions and give possible directions for future work.
The boundary value problem of linear elasticity
We consider a static linear elasticity problem in a bounded domain $\Omega \subset \mathbb{R}^d$, $d \in \{2, 3\}$, with a Lipschitz boundary $\partial\Omega$. The governing equations express equilibrium between the Cauchy stresses $\sigma$ and the applied body forces $f$,
$$-\mathrm{div}\,\sigma = f \quad \text{in } \Omega. \qquad (1)$$
The displacement $u$ is prescribed on the boundary $\partial\Omega$ by
$$u = 0 \quad \text{on } \partial\Omega. \qquad (2)$$
In addition to (1) and (2), we introduce the infinitesimal strain tensor $\varepsilon$, which is related to the displacement $u$ by
$$\varepsilon_{ij}(u) = \frac{1}{2}\left(\partial_i u_j + \partial_j u_i\right), \qquad (3)$$
where $\partial_i = \partial/\partial x_i$, $(x_1, \cdots, x_d) \in \mathbb{R}^d$ and $\varepsilon(u) = [\varepsilon_{ij}(u)]_{i,j=1,d}$. For an isotropic linear elastic material, the constitutive relation is given by
$$\sigma_{ij} = \lambda\,\delta_{ij} \sum_{k=1}^{d} \varepsilon_{kk}(u) + 2\mu\,\varepsilon_{ij}(u), \qquad (4)$$
where $\lambda$ and $\mu$ are the Lamé constants and $\delta_{ij}$ is the Kronecker delta. The Lamé constants are related to the Young's modulus, $E$, and Poisson's ratio, $\nu$, through the following:
$$\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)}. \qquad (5)$$
In this paper our attention is devoted to the study of nearly-incompressible materials, for which Poisson's ratio is close to 0.5. Such a choice of this parameter is well known to lead to poor performance by FEM due to locking and instability.
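For intuition on why values of $\nu$ near 0.5 are numerically delicate, one can evaluate (5) directly; the short Python sketch below (with illustrative values only, not taken from the paper) shows how $\lambda$ grows without bound relative to $\mu$ as $\nu$ approaches one half:

def lame_constants(E, nu):
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # first Lame constant
    mu = E / (2.0 * (1.0 + nu))                     # shear modulus
    return lam, mu

for nu in (0.3, 0.49, 0.499, 0.4999999):
    lam, mu = lame_constants(E=1.0, nu=nu)
    print(f"nu = {nu}: lambda/mu = {lam / mu:.3e}")  # ratio equals 2*nu/(1 - 2*nu)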
Mixed displacement-pressure formulation and the weak form
The elasticity problem (1) can be rewritten in a mixed displacement-pressure form where the pressure $p$ is introduced as an additional variable,
$$p = \lambda\,\nabla\cdot u \quad \text{in } \Omega. \qquad (6)$$
The mixed form is equivalent to the penalized Stokes equations. We now introduce several function spaces which are required for the weak form:
$$V_0 = \left[H^1_0(\Omega)\right]^d, \qquad L^2_0(\Omega) = \left\{ q \in L^2(\Omega) : \int_\Omega q \,\mathrm{d}\Omega = 0 \right\}.$$
The space to which the pressure solution belongs is $L^2_0(\Omega)$. The condition that the volume integral of the pressure should be zero follows directly from integrating equation (6), transforming the integral to a boundary integral and then using the fact that the displacement satisfies homogeneous Dirichlet boundary conditions. The mixed approach aims to find a displacement field $u \in V_0$ and a pressure $p \in L^2_0(\Omega)$ that satisfy
$$a(u, v) + b(v, p) = (f, v) \quad \forall v \in V_0, \qquad b(u, q) - \frac{1}{\lambda}(p, q) = 0 \quad \forall q \in L^2_0(\Omega). \qquad (7)$$
The bilinear forms are defined as follows:
$$a(u, v) = \int_\Omega \varepsilon(u)^T D\,\varepsilon(v) \,\mathrm{d}\Omega, \qquad b(v, q) = \int_\Omega q\,\nabla\cdot v \,\mathrm{d}\Omega.$$
In the definition of the bilinear forms we have introduced Voigt notation, in which the components of the stress and strain tensors are arranged in column vectors, for example: $\varepsilon = \{\varepsilon_{xx}\ \varepsilon_{yy}\ \varepsilon_{zz}\ \varepsilon_{xy}\ \varepsilon_{yz}\ \varepsilon_{zx}\}^T$. The matrix $D$ of material constants is symmetric, positive definite and its eigenvalues are bounded in $[\lambda_D^{\min}, \lambda_D^{\max}] \subset \mathbb{R}^+$.
The finite spaces
The polygonal domain $\Omega$ is discretized by the triangulation $T_h$ (the primal mesh), where $T_h$ consists of triangles (2D) or tetrahedra (3D). The set $T_h$ has $N_e$ elements, $N_n$ nodes (or vertices), $N_s$ edges, $N_f$ faces (3D) and $\Omega = \bigcup_{i=1}^{N_e} T_i$. For each element $T \in T_h$, the barycentric point $c_T$ is called a mesh point of $T$. Let $V_h$ be the standard linear finite element space defined on the triangulation $T_h$, which has the standard nodal basis functions $N_i$ ($i = 1, \dots, N_n$) associated with node $i$. We define the space of bubble functions as
$$B_h = \left[\operatorname{span}\left\{ N^b_{c_T} : T \in T_h \right\}\right]^d,$$
where the basis bubble functions are chosen to be one of two types (see [48] and [41]).
For the first type, the $\xi$th-power bubble function is used for each element $T \in T_h$ with
$$b_T(x) = c_b \prod_{i=1}^{d+1} \lambda_{T(i)}(x), \qquad (8)$$
where each function $\lambda_{T(i)}$ is a barycentric coordinate associated with a vertex $x_{T(i)}$ of the triangle $T$, and $c_b$ is computed in such a way that $b_T(c_T) = 1$, where $c_T$ is the centroid of $T$.
The second type is a hat function defined on $T$, where $T$ is partitioned into sub-triangles (2D) or sub-tetrahedra (3D), $\{T^{(i)}\}_{i=1,\dots,d+1}$. This is achieved by joining the centroid $c_T$ to the two vertices on each edge of the triangle in turn (2D), or to the three vertices on each face (3D).
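As a minimal illustration of the first type, the Python sketch below (our own helper names; it assumes the 2D case, where the condition $b_T(c_T) = 1$ yields $c_b = 27$) evaluates the cubic bubble at a point of a triangle via its barycentric coordinates:

import numpy as np

def barycentric(tri, x):
    # barycentric coordinates of point x in the triangle tri (3x2 array of vertices)
    A = np.column_stack((tri[1] - tri[0], tri[2] - tri[0]))
    s, t = np.linalg.solve(A, x - tri[0])
    return np.array([1.0 - s - t, s, t])

def cubic_bubble(tri, x):
    # b_T(x) = 27 * lam_1 * lam_2 * lam_3, normalised so that b_T(centroid) = 1
    return 27.0 * barycentric(tri, x).prod()

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(cubic_bubble(tri, tri.mean(axis=0)))  # 1.0 at the centroid
print(cubic_bubble(tri, tri[0]))            # 0.0 on the boundary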
The finite element space for the displacement which is enriched with bubble functions is defined as
$$V^B_h = \left[V_h \cap H^1_0(\Omega)\right]^d \oplus B_h,$$
and $u_h \in V^B_h$, which is restricted on $T \in T_h$, is written as
$$u_h|_T = \sum_{i=1}^{d+1} N_{T(i)}(x)\,\mathrm{Id}_d\,u_{T(i)} + N^b_{c_T}(x)\,\mathrm{Id}_d\,u_{c_T}, \qquad (10)$$
where the identity matrix of size $d$ is denoted by $\mathrm{Id}_d$, $N_{T(i)}$ is the standard nodal basis function associated with the vertex $x_{T(i)}$ of the triangle $T$, and $N^b_{c_T}$ is the standard nodal basis bubble function defined on $T$ with the centroid $c_T$. The values $u_{T(i)}$ and $u_{c_T} \in \mathbb{R}^d$ are the nodal values of $u_h$ at the vertex $x_{T(i)}$ and the barycenter $c_T$.
The dual mesh
Now, we design a dual mesh for smoothing the strain and the divergence operator. For each of the 2D bES-FEM, the 3D bES-FEM and the 3D bFS-FEM, a dual mesh $T^*_h$ is created in a similar manner to the 2D and 3D edge-based smoothing domains [33,22] and the face-based smoothing domains [33], respectively. It is constructed by connecting all vertices, center points of elements in $T_h$ and center points of faces (for 3D bES-FEM). The dual mesh satisfies $\Omega = \bigcup_k \Omega^s_k$, and its smoothing domains $\Omega^s_k$ do not overlap. Figure 1b shows an element of the dual mesh in 3D consisting of six tetrahedral elements together with an inner smoothing cell centered along the edge AB. In Figures 1c and 1d we give a further 3D example showing a smoothing cell $\Omega^s_k$ associated with edge AB of the boundary $\partial\Omega$.
For bFS-FEM, we also have an example of a smoothing domain $\Omega^s_k \in T^*_h$. The domain $\Omega^s_k$ associated with the face $k$ is created by simply connecting the three nodes B, C, D of the face to the centers H, I of the adjacent elements, as shown in Figure 2. With the dual mesh $T^*_h$, the space $V^B_h$ is equipped with an inner product, a semi-norm $|\cdot|_{V^B_h}$ and a norm $\|\cdot\|_{V^B_h}$ (see [31]). As a consequence of remarks 3.4 and 3.5 in [31], the semi-norm $|\cdot|_{V^B_h}$ and the norm $\|\cdot\|_{V^B_h}$ are equivalent to $|\cdot|_1$ and $\|\cdot\|_1$, respectively, where $H^1(\Omega)$ is the Sobolev space endowed with the semi-norm $|\cdot|_1$ and norm $\|\cdot\|_1$.
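A minimal Python sketch of how the edge-based smoothing domains can be assembled in 2D (the data layout - a vertex array plus an element-connectivity array - and the function name are our own illustration): each interior edge collects one sub-triangle from each of its two adjacent elements, formed by the edge endpoints and the element centroid.

import numpy as np

def edge_smoothing_domains(points, tris):
    # map each edge (i, j), i < j, to the sub-triangles (edge endpoints plus
    # element centroid) that together form its smoothing domain
    domains = {}
    for tri in tris:
        centroid = points[tri].mean(axis=0)
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(a, b), max(a, b))
            domains.setdefault(key, []).append(np.array([points[a], points[b], centroid]))
    return domains  # interior edges get 2 sub-triangles, boundary edges 1

points = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(len(edge_smoothing_domains(points, tris)[(0, 2)]))  # shared edge: 2 sub-triangles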
The third mesh
Next, a third mesh $T^{**}_h$ is constructed by connecting all centroids $\{c_T\}_{T \in T_h}$ and midpoints of all edges of $T_h$ in 2D, plus barycenter points of all faces in 3D. The third mesh $T^{**}_h$ satisfies $\Omega = \bigcup_{i=1}^{N_n} V_i$, and none of its elements overlap. Each element $V_k \in T^{**}_h$ is associated with a vertex $x_k$ of the primal mesh.

Figure 3: (a) a 2D element; (b) a 3D element.

Figure 3a is an example of an element $V_k \in T^{**}_h$ constructed by connecting the centroids $\{c_{T_i}\}_{T_i \in T_h}$ and the midpoints $\{x_{e_i}\}_{i=1,\dots,6}$ of the edges $\{e_i\}_{i=1,\dots,6}$ in 2D. Figure 3b is another example, showing an intersecting domain $V_k \cap T$ between $V_k \in T^{**}_h$ and $T \in T_h$ in 3D. This intersecting domain is made from the vertex $x_k$, the midpoints of the edges of $T$ meeting at $x_k$, the barycenters of the faces of $T$ containing $x_k$, and the centroid $c_T$. Based on this third mesh, we define the following finite element space for the pressure:
$$V^{**}_h = \left\{ q_h \in L^2_0(\Omega) : q_h|_{V_k} \text{ is constant for all } V_k \in T^{**}_h \right\}.$$
Let $p_i$ be the nodal value of $p_h$ at a vertex $i \in \{1, \dots, N_n\}$. Then
$$p_h = \sum_{i=1}^{N_n} p_i\,\chi_{V_i},$$
where $\chi_{V_i}$ denotes the characteristic function of $V_i$. Now, we apply 2D/3D bES-FEM and bFS-FEM for discretizing the nearly-incompressible elasticity problem in the two following sections.
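In 2D, each element $T$ is split by $T^{**}_h$ into three quadrilateral pieces $V_{T(i)} \cap T$ of equal area $|T|/3$, a fact used in the coefficient counting of Section 4. A quick numerical check in Python (helper names are ours):

import numpy as np

def vertex_cell_area(tri, i):
    # area of the quadrilateral V_{T(i)} ∩ T: vertex x_i, the two adjacent
    # edge midpoints, and the centroid c_T (shoelace formula)
    j, k = (i + 1) % 3, (i + 2) % 3
    quad = np.array([tri[i], 0.5 * (tri[i] + tri[j]),
                     tri.mean(axis=0), 0.5 * (tri[i] + tri[k])])
    x, y = quad[:, 0], quad[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, 1.5]])
print([vertex_cell_area(tri, i) for i in range(3)])  # each equals area(T)/3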
Smoothed strain and smoothed divergence
In 2D, according to the formula (3), the discretized strain $\varepsilon(u_h)$ is obtained as
$$\varepsilon(u_h) = \left[\frac{1}{2}\left(\partial_i (u_h)_j + \partial_j (u_h)_i\right)\right]_{i,j=1,d}. \qquad (12)$$
On each smoothing element $\Omega^s_k \in T^*_h$, the strain $\varepsilon(u_h)$ is smoothed by
$$\bar{\varepsilon}_k(u_h) = \frac{1}{|\Omega^s_k|} \int_{\Omega^s_k} \varepsilon(u_h) \,\mathrm{d}\Omega, \qquad (13)$$
and we also have a formula for the smoothed divergence
$$\overline{\nabla\cdot}\,u_h\big|_{\Omega^s_k} = \frac{1}{|\Omega^s_k|} \int_{\Omega^s_k} \nabla\cdot u_h \,\mathrm{d}\Omega. \qquad (14)$$
By performing the integration in (13), the smoothed strain $\bar{\varepsilon}_k$ can be rewritten on the boundary $\partial\Omega^s_k$ as follows:
$$\bar{\varepsilon}_k(u_h) = \frac{1}{|\Omega^s_k|} \int_{\partial\Omega^s_k} n^{(k)}(x)\,u_h(x) \,\mathrm{d}\Gamma, \qquad (15)$$
where $n^{(k)}(x)$ is defined by the matrix of components of the outward unit normal on $\partial\Omega^s_k$; in 2D,
$$n^{(k)} = \begin{bmatrix} n^{(k)}_x & 0 \\ 0 & n^{(k)}_y \\ n^{(k)}_y & n^{(k)}_x \end{bmatrix}.$$
By transforming (10), (12) and (13) into the formula (15), we remove the need to use shape function derivatives in the calculation of the discrete smoothed strain $\bar{\varepsilon}_k(u_h)$. The number of Gauss points used for the line (2D) or face (3D) integration in (15) depends on the order of the shape functions and bubble functions. In 3D, the strain and the divergence are similarly smoothed.
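The practical value of (15) is that area integration of shape-function derivatives is replaced by boundary integration of function values. A minimal Python sketch of this smoothing for the gradient of a scalar field over a polygonal smoothing domain (one Gauss point per segment, exact only for the linear part of the field; the bubble terms require more points, as noted above):

import numpy as np

def smoothed_gradient(poly, u):
    # (1/A) * sum over boundary edges of u(midpoint) * outward_normal * edge_length,
    # for a polygon 'poly' (Nx2 array) with counter-clockwise vertex ordering
    m = len(poly)
    area = 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % m][1]
                         - poly[(i + 1) % m][0] * poly[i][1] for i in range(m)))
    grad = np.zeros(2)
    for i in range(m):
        p0, p1 = poly[i], poly[(i + 1) % m]
        edge = p1 - p0
        normal = np.array([edge[1], -edge[0]])  # outward for CCW ordering, |normal| = edge length
        grad += u(0.5 * (p0 + p1)) * normal     # one Gauss point at the midpoint
    return grad / area

poly = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
print(smoothed_gradient(poly, lambda x: 2.0 * x[0] + 3.0 * x[1]))  # ~[2. 3.]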
Weakened weak statement for bES-FEM and bFS-FEM
Here, we want to find the discrete solution $(u_h, p_h) \in V^B_h \times V^{**}_h$ such that
$$\bar{a}(u_h, v_h) + \bar{b}(v_h, p_h) = (f, v_h) \quad \forall v_h \in V^B_h, \qquad (16a)$$
$$\bar{b}(u_h, q_h) - \frac{1}{\lambda}(p_h, q_h) = 0 \quad \forall q_h \in V^{**}_h, \qquad (16b)$$
where $\bar{a}$ and $\bar{b}$ are obtained from $a$ and $b$ by replacing the strain and the divergence with their smoothed counterparts (13) and (14). The system of equations in (16) is known as a weakened weak (W2) form because derivatives of the displacements are no longer needed, in contrast to the usual weak form [29]. Also, due to (16b), we will be able to calculate the discrete pressure $p_h$ from the smoothed divergence $\overline{\nabla\cdot}\,u_h$, as is shown by the formula in (83); see Remark 4.2.
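At the algebraic level, (16) yields a symmetric block system in the displacement and pressure unknowns. Because $V^{**}_h$ consists of cell-wise constants, the pressure mass matrix is diagonal and the pressure can be statically condensed, as mentioned in the introduction. A minimal dense-matrix Python sketch (ignoring the zero-mean pressure constraint for simplicity; A, B, M stand for assemblies of the smoothed forms and the pressure L2 inner product - this is our illustration, not the paper's implementation):

import numpy as np

def condense_pressure(A, B, M, f, lam):
    # solve [A, B^T; B, -M/lam] [u; p] = [f; 0] by eliminating p:
    # the second block row gives p = lam * M^{-1} B u, so
    # (A + lam * B^T M^{-1} B) u = f
    Minv = np.diag(1.0 / np.diag(M))  # M is diagonal for piecewise-constant pressure
    u = np.linalg.solve(A + lam * B.T @ Minv @ B, f)
    p = lam * Minv @ B @ u
    return u, p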
The mathematical properties
In this section, we present the important mathematical results for bES-FEM and bFS-FEM when applied to the linear elasticity problem.
Theorem 4.1 (Coercivity and Continuity)
The bilinear form $\bar{a}(\cdot,\cdot)$ is continuous, symmetric and coercive on $V^B_h \times V^B_h$. This theorem can be proven by invoking Theorem 3.2 (coercivity) and Theorem 3.3 (continuity) in [32].
The bilinear form $\bar{b}(\cdot,\cdot)$ on $V^B_h \times V^{**}_h$ is continuous and satisfies the uniform inf-sup condition, i.e. there exists a positive constant $\beta_0$ independent of the mesh size such that
$$\inf_{q_h \in V^{**}_h} \sup_{v_h \in V^B_h} \frac{\bar{b}(v_h, q_h)}{\|v_h\|_{V^B_h}\,\|q_h\|_0} \geq \beta_0.$$
To prove Theorem 4.2, we need to look for a relationship between $\bar{b}(u_h, q_h)$ and $b(u_h, q_h)$. In [29], $b(u_h, q_h)$ satisfies the uniform inf-sup condition, from which it follows that $\bar{b}(u_h, q_h)$ satisfies this condition. This idea was similarly used to prove the uniform inf-sup condition in [29], where the author also indicated the relationship between $b(u_h, q_h)$ and the bilinear form derived for the MINI element.
Lemma 4.1 For any $u_h \in V^B_h$, where there exists uniquely the decomposition
$$u_h = \ell_h + b_h, \quad \ell_h \in \left[V_h \cap H^1_0(\Omega)\right]^d,\; b_h \in B_h, \qquad (20)$$
there holds $\bar{b}(\ell_h, q_h) = b(\ell_h, q_h)$ for all $q_h \in V^{**}_h$; the smoothed divergences $\overline{\nabla\cdot}\,\ell_h$ and $\overline{\nabla\cdot}\,b_h$, which are restricted on $\Omega^s_k \in T^*_h$, are defined by (14).
Proof: Using the fact that $\nabla\cdot\ell_h$ is constant on each $T \in T_h$, we obtain
$$b(\ell_h, q_h) = \sum_{T \in T_h} (\nabla\cdot\ell_h)|_T \int_T q_h \,\mathrm{d}\Omega = \sum_{T \in T_h} (\nabla\cdot\ell_h)|_T \sum_{i=1}^{d+1} q_{T(i)}\,|V_{T(i)} \cap T|. \qquad (22)$$
For any element $T \in T_h$ with its vertices $\{x_{T(i)}\}_{i=1,\dots,d+1}$, recall that $T^{**}_h$ is constructed by barycentric points of all faces (3D), midpoints of all edges and the centroid points $c_T$ for all $T \in T_h$.
We now calculate the integral $\bar{b}(\ell_h, q_h)$.

For the 2D and 3D bES-FEM

On the above element $T \in T_h$, the integral is assembled over the smoothing domains $\Omega^s_{e_{T(i)}} \in T^*_h$, where the domain $\Omega^s_{e_{T(i)}}$ corresponds to the edge $e_{T(i)}$. The set $E_{T(i)}$ contains all edges of $T$ such that these edges have the common vertex $x_{T(i)}$.
In the first case of $T$ (a triangle or tetrahedron), we assume that all edges and all faces (3D) of $T$ are inner edges and inner faces, i.e. the edges and faces are not on the boundary $\partial\Omega$. For each $i = 1, \dots, d+1$ and $j = 1, \dots, d$, the integral over $\Omega^s_{e_{T(i)}}$ is computed in (24) and (25), where $T_{e_{T(i)}}$ is the subset of $T_h$ whose elements have the common edge $e_{T(i)}$ and $T \in T_{e_{T(i)}}$. From (24) and (25), the contributions from all $K \in T_{e_{T(i)}} \setminus \{T\}$ and $e_{T(i)} \in E_{T(i)}$ follow in (26) and (27); from (26) and (27) one obtains the integral (28). By using the centroids $c_T$ for all $T \in T_h$, the midpoints of all edges, plus the barycentric points of all faces (3D), to construct the dual mesh $T^*_h$ and the third mesh $T^{**}_h$, we have (29), where for all $i = 1, \dots, d+1$ and $T \in T_h$ the notations $\mathrm{card}(E_{T(i)})$ and $\mathrm{card}(E_T)$ denote the number of elements of $E_{T(i)}$ and $E_T$, respectively. Furthermore, we have $\mathrm{card}(E_{T(i)}) = d$ and $\mathrm{card}(E_T) = \mathrm{card}(E_K)$ for all $K, T \in T_h$, because the primal mesh $T_h$ is a triangulation. Therefore, the coefficient of $(\nabla\cdot\ell_h)|_T\,q_{T(i)}$ in $\bar{b}(\ell_h, q_h)$ is identified in (30). From (22) and (30), the two coefficients of $(\nabla\cdot\ell_h)|_T\,q_{T(i)}$ in the two integrals $b(\ell_h, q_h)$ and $\bar{b}(\ell_h, q_h)$ are equal.
For the bFS-FEM method
Using this method, we obtain the coefficient of (∇ · ℓ_h)|_T q_T(i) in the integral, where F_T(i) is the set of all faces of a tetrahedron T that share the common vertex x_T(i), and card(F_T(i)) is equal to d. The notation f_T(i) denotes a face of T having x_T(i) as one of its vertices. The two sets F_K, F_T contain all faces of K, T ∈ T_h, respectively, and we have used the corresponding face-based expressions. In the remaining cases, where T ∈ T_h has at least one edge or one face belonging to the boundary ∂Ω, we obtain the same results as (30) and (31).
From (22), (30) and (31), we deduce the claim of Lemma 4.1. Our next objective is to find the corresponding relationship for the bubble contribution; this relationship is shown in the following lemma.
Lemma 4.2 There exists a positive constant α, which depends on the bubble function, such that the stated bound holds.

Proof: We use the definitions of the spaces B_h and V_h^{**}. Considering an element T all of whose edges lie in the interior of Ω, for each i = 1, ..., d+1, the computation is performed for bES-FEM and bFS-FEM in turn.
For the 2D and 3D bES-FEM. Furthermore, the other coefficients of q_T(i) u_{c_T} are found from equations (29), (36) and (37). In two dimensions, we compute this coefficient as follows. In Figure 5, we introduce some extra notation, including the midpoints of the edges [x_i, x_j], [x_k, x_i] and [x_k, x_j], denoted by x_ij, x_ki and x_kj respectively, and the boundary segments γ_i^(1) and γ_i^(2). We directly compute the coefficient (39) for the two types of bubble functions investigated here.
The cubic bubble functions. Together with this assumption, we use Lemma 3.2 of [29] to obtain the required bound. By directly computing the quantities on the reference element T̂, we have:

• The barycentric coordinates of a point P(x^(1), x^(2)) in the reference triangle T̂ are λ̂_1(x) = 1 − x^(1) − x^(2), λ̂_2(x) = x^(1), λ̂_3(x) = x^(2). The basic cubic bubble function on the reference triangle T̂ is N_b^c(x) = 27 λ̂_1(x) λ̂_2(x) λ̂_3(x).
• The relationships between the normal vectors n_{γ_i^(1)} and n_{γ_i^(2)}, with i = 1, ..., 3, follow from the geometry. From (40)-(45), we obtain the inequality (46). Hence, we use the results of (34), (39) and (46) to obtain (47). With the computations of (32), (33) and (47), we conclude the estimate (48). Defining u_h^* = ℓ_h + (11/16) b_h, using (48) and the result of the first step, we get (49). Finally, due to the result of Theorem 3.1 in [29] and (49), the uniform inf-sup condition holds for the bilinear form b(·, ·) on V_h^B × V_h^{**}.
The hat bubble functions (9)
For each triangle T ∈ T_h, the divergence of the hat bubble function is equal to a constant on each sub-triangle {T(i)}_{i=1,...,3} of T, so we obtain the corresponding identity (50). By (34), (39) and (50), we obtain (51), which implies (52). Therefore, the two-dimensional bound follows. In three dimensions, we also compute the coefficient of q_T(i) u_{c_T} in (38) on a tetrahedron T constructed from four vertices {x_T(i)}. In this particular case, the coefficient involves the vectors n_{c_T}, whose lengths are equal to the measures of the corresponding triangular faces through c_T. We also obtain the coefficient of q_T(i) u_{c_T} in b(u, q), as in (55). Furthermore, we have relationships between the normal vectors in the two formulas (54) and (55).

For the bFS-FEM method. In a similar manner to the calculations for bES-FEM, the coefficient of q_T(i) u_{c_T} is computed (see Figure 7), where the unit normal vectors n_{(c_T, x_T(j), x_T(k))}, n_{(c_T, x_T(j), x_T(l))} and n_{(c_T, x_T(k), x_T(l))} of the tetrahedron are scaled by the areas of the triangular faces (c_T, x_T(j), x_T(k)), (c_T, x_T(j), x_T(l)) and (c_T, x_T(k), x_T(l)), respectively. Additionally, the normal vectors in (54) and (58) are related to each other. From (54)-(59), there exist two positive constants α_1, α_2 satisfying the required bounds. Therefore, for the 2D/3D bES-FEM and the bFS-FEM, we choose the coefficient α equal to α_1 or α_1 α_2, respectively.
From the results of the two Lemmas 4.1 and 4.2, we deduce that there are two positive constants α_3, α_4, depending on α_1, α_2, such that the uniform inf-sup condition holds for the 2D/3D bES-FEM and the bFS-FEM.
Theorem 4.3 (Convergence)
We assume that (u, p) and (u_h, p_h) are the pairs of solutions of problems (7a, 7b) and (16a, 16b); then we get the error estimate (62), where C is a positive constant independent of h. The mesh parameter h is defined as the maximum of diam(K^*) and diam(K^{**}) over all elements, where the radius of the circumscribed circle of each element K^* of M^* and K^{**} of M^{**} is denoted by diam(K^*) and diam(K^{**}), respectively.
Proof: Let us consider any w_h ∈ V_h^B(λ) defined as above. Then, by applying the coercivity (17), one obtains the first bound, where p_h^{M**} ∈ V_h^{**} is a characteristic function; this is a result of (16a) subtracted from (7a). Inequality (64) is evaluated further as follows, where the eigenvalues of the material matrix D are bounded above by λ_D^max, and p^{M*} is a characteristic function defined, with the pressure solution p of (7a, 7b), on each element K^* ∈ M^*. Besides, we have the following estimates, because of (26), (27), (30), (31) and (45) found in [31]. Moreover, we have a further bound. Using (4)-(67), the inequality (65) is rewritten as (68). Let us subtract (7a) from (16a), getting (70). Transforming b(v_h, p_h − q_h) in the stability property (19) by (70), we obtain (71). Now, we estimate each part on the left-hand side of (71). For the other part of (71), thanks to the two equations (63a) and (63b) of [38], one writes (73). From the results (71), (72) and (73), we get the inequality (74) for all q_h ∈ V_h^{**}. Thanks to the results (20), (23) in [32], (79) in [31] and the continuity property, there exists a positive constant δ, independent of the other coefficients, such that (75) holds. Applying the inequality (75) to the two inequalities (68) and (74), one obtains the inequalities (76) and (77). Combining the inequalities (76)-(77) with the inequality (79) in [31], we obtain (78). We need to prove that there exists a positive constant C_1, independent of h, such that (79) holds. Thanks to (19), we get a bound from which (79) follows. Hence, with C_1 = 2, the inequality (79) is proven. We apply (79) to (78), and thanks to the inequality (79) in [31] we get (81), where the positive constant C_2 is defined by (82). The coefficient C_2 is positive, because the Lamé coefficient λ can be chosen large enough, while 1/λ is close to 0, as h → 0. Therefore, the inequality (62) is proven. In the following remark, we briefly recall how the scheme can be implemented for problem (16) based on the displacement alone.
Remark 4.2: Using (16b), the discrete pressure p_h can be expressed through the smoothed divergence by formula (83). Substituting this into (16a), the bilinear form reduces to one in the displacement alone. Therefore, we arrive at the problem of finding u_h ∈ V_h^B such that the displacement-only equation (84) holds, where the solution u_h of (84) is the same as the solution of problem (16).
Remark 4.3:
On applying bES-FEM and bFS-FEM to linear elasticity problems, the equations can be expressed as a linear system (85), where A, B, C are the matrices associated with the bilinear forms a(·, ·), b(·, ·) and c(·, ·) respectively, and f_h is associated with the linear operator (f, ·). This framework of bES-FEM and bFS-FEM for problems in linear elasticity has an implementation similar to that of the MINI element. However, the matrix C of (85) is different from the matrix C in the system of linear equations associated with the MINI element, because the matrix C of (85) is diagonal and each degree of freedom corresponding to the pressure can be computed by (83). It follows that the condensed system matrix is positive definite.
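To make the implementation remark concrete, the following sketch shows how the pressure unknowns can be condensed out when C is diagonal, as described above. The matrix names and the sign convention of the saddle-point system are assumptions for illustration; the actual assembly of A, B and C follows (16) and (85).

```python
import numpy as np

def solve_condensed(A, B, C_diag, f):
    """Solve the mixed system
        [A    B ] [u]   [f]
        [B^T -C ] [p] = [0]
    by eliminating p, assuming C is diagonal (as for bES-FEM/bFS-FEM).
    From the second row: p = C^{-1} B^T u, so the first row becomes
        (A + B C^{-1} B^T) u = f.
    """
    Cinv = 1.0 / C_diag                    # diagonal inverse is elementwise
    S = A + B @ (Cinv[:, None] * B.T)      # condensed (positive definite) matrix
    u = np.linalg.solve(S, f)              # displacement unknowns
    p = Cinv * (B.T @ u)                   # recover pressure dof-by-dof, cf. (83)
    return u, p
```

Because the condensed matrix is symmetric positive definite, a direct or conjugate-gradient solver can be used exactly as for a pure displacement formulation.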
Error norms
In order to study the error and convergence of the proposed numerical methods, we introduce three error norms: the displacement error norm, the pressure error norm and the energy error norm.
Displacement error norm
The displacement error norm is defined as the L2 norm of the displacement error,

e_d = ( ∫_Ω (u − u_h) · (u − u_h) dΩ )^{1/2},

where u is the analytical solution for the displacement and u_h is the numerical approximation.
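As an illustration of how such a norm can be evaluated in practice, the sketch below approximates the L2 displacement error on a triangulation using a one-point (centroid) quadrature rule. Here `u_exact` and `u_h` are assumed to be callables returning displacement vectors at given points; a higher-order rule would normally be used for higher-order shape functions.

```python
import numpy as np

def l2_displacement_error(nodes, tris, u_exact, u_h):
    """Approximate ( sum_T |T| * |u(c_T) - u_h(c_T)|^2 )^{1/2}
    using one-point (centroid) quadrature on each triangle."""
    err2 = 0.0
    for tri in tris:
        p = nodes[tri]                                   # 3x2 vertex coordinates
        c = p.mean(axis=0)                               # centroid of the triangle
        area = 0.5 * abs(np.cross(p[1] - p[0], p[2] - p[0]))
        diff = u_exact(c) - u_h(c)
        err2 += area * float(np.dot(diff, diff))
    return np.sqrt(err2)
```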
Pressure error norm
The pressure error norm is written as

e_p = ( ∫_Ω (p − p_h)² dΩ )^{1/2},

where p is the analytical pressure solution and p_h is the numerical solution.
Energy error norm
The energy error norm must take account of the fact that some of the numerical methods solve purely for displacements while others solve additionally for pressure. The NS-FEM and ES-FEM only approximate the displacement field; hence, for these two methods, the evaluation of the norm follows that of [32] and is based on the N_s smoothing domains Ω_k^s:

e_e = ( (1/2) Σ_{k=1}^{N_s} ∫_{Ω_k^s} (σ − σ^(k)(u_h))^T D^{−1} (σ − σ^(k)(u_h)) dΩ )^{1/2},

where σ is the analytical solution for the stresses and σ^(k)(u_h), the numerical approximation to the stresses, is derived from the smoothed strain solution ε^(k)(u_h) defined on the smoothing domains Ω_k^s.
The MINI and bES-FEM approximate both displacement and pressure. Hence, we propose a modification to the definition of the energy error norm appropriate to each method.
The norm for bES-FEM incorporates a term which depends on the pressure and is evaluated on the smoothing domains. The energy error norm of the MINI method also contains a term which depends on pressure, but it is evaluated on the N_e triangles T ∈ T_h.
Numerical results
In this section, we present some numerical results to demonstrate the efficiency and accuracy of the newly proposed methods. For this purpose we use four benchmark problems (three small-deformation cases and one large-deformation case), and compare the results from bES-FEM and bFS-FEM with the results from the methods listed below.
• MINI -The mixed displacement-pressure finite element method with cubic bubble functions [7].
• FEM -The standard FEM using three node triangular elements with linear shape functions [53].
• Q4/ME2 -The mixed-enhanced formulation with five enhanced modes. Unless otherwise noted, for the results which follow, the transformation matrix T used for the mixed-enhanced simulations was taken as the inverse transpose of the average Jacobian, i.e., T = J_avg^{-T} [40].
• HFS-HEX8 -The hybrid finite element formulation with fundamental solutions as internal interpolation functions using linear 8-node brick elements [20].
Cook's membrane

The domain Ω is a tapered panel (see Figure 8) whose left boundary is clamped, and whose right boundary is subject to an in-plane shearing load of 100 in the y-direction. Plane strain conditions are assumed. The material is described by two parameters: Young's modulus E = 250 and Poisson's ratio ν = 0.4999. An analytical solution for this problem is not available, and therefore the vertical displacement at the top corner of the right-hand boundary (i.e. the point (48, 60)) is compared with other numerical results taken from [40]. From the comparison shown in Figure 9, it is observed that bES-FEM can produce a more accurate solution than the other methods such as MINI, ES-FEM, NS-FEM and especially the mixed-enhanced strain elements [40]. To generate a distorted mesh, the locations of the interior nodes of the initial mesh are modified by an irregularity factor d to obtain new coordinates

x' = x + r_c · d · Δx,  y' = y + r_c · d · Δy,

where r_c ∈ [−1, 1] is a random number; d ∈ [0, 0.5] is a distortion density; and Δx, Δy are the initial element sizes in the x and y directions, respectively. For two distortion densities, d = 0.1 and d = 0.5, the resulting meshes are illustrated in Figure 12, with the corresponding results in Figure 13. For the pressure field, it can be observed that the MINI method is more sensitive to mesh distortion than bES-FEM. With the refined mesh (64 × 64), bES-FEM behaves well, see Figure 14.
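A minimal sketch of the node-perturbation scheme just described: interior nodes are shifted by a random fraction r_c ∈ [−1, 1] of the irregularity factor d times the local element size. Function and variable names are illustrative.

```python
import numpy as np

def distort_mesh(coords, interior, dx, dy, d, rng=None):
    """Perturb interior nodes: x' = x + r_c * d * dx, y' = y + r_c * d * dy,
    with r_c drawn uniformly from [-1, 1] independently per coordinate."""
    if rng is None:
        rng = np.random.default_rng()
    new = coords.copy()
    r = rng.uniform(-1.0, 1.0, size=(len(interior), 2))
    new[interior, 0] += r[:, 0] * d * dx
    new[interior, 1] += r[:, 1] * d * dy
    return new
```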
6.2. Cylindrical pipe subjected to an inner pressure

The next benchmark problem, also considered in [11], is a cylindrical pipe subjected to an inner pressure p = 8 kN/m², where its internal radius and external radius are a = 1 m and b = 2 m respectively (see Figure 15). Due to the axisymmetric nature of the problem, we only model the upper right quadrant of the pipe. We impose symmetric conditions on the left and bottom edges, the outer boundary is traction-free and the pressure is applied to the inner boundary. Plane strain conditions are applied and the Young's modulus is E = 21000 kN/m². This problem is interesting in the nearly-incompressible case, i.e. when Poisson's ratio ν is close to 0.5. Its domain is meshed by 3-node triangular and 4-node quadrilateral elements as shown in Figure 16.

Figure 16: Domain discretization of a cylindrical pipe subjected to an inner pressure: 256 three-noded triangular elements (left), and 128 four-noded quadrilateral elements (right).

The cylindrical pipe problem has an exact solution for the radial and tangential displacement [51],

u_r(r) = ((1 + ν) p a² / (E (b² − a²))) ((1 − 2ν) r + b²/r),  u_φ = 0,   (91)

and for the stress components

σ_r(r) = (p a² / (b² − a²)) (1 − b²/r²),  σ_φ(r) = (p a² / (b² − a²)) (1 + b²/r²).   (92)

In equations (91) and (92), (r, φ) are the polar coordinates, and φ is measured counterclockwise from the positive x-axis.
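The exact solution can be evaluated directly for reference values along the section. The sketch below implements the standard plane-strain Lamé formulas for a thick-walled cylinder under internal pressure, assumed here to match equations (91)-(92); the default Poisson's ratio is illustrative of the nearly-incompressible regime studied.

```python
import numpy as np

def lame_pipe(r, a=1.0, b=2.0, p=8.0, E=21000.0, nu=0.4999):
    """Plane-strain thick-walled cylinder under internal pressure p.
    Returns radial displacement u_r and stresses (sigma_r, sigma_phi)."""
    k = p * a**2 / (b**2 - a**2)
    u_r = (1.0 + nu) * k / E * ((1.0 - 2.0 * nu) * r + b**2 / r)
    sig_r = k * (1.0 - b**2 / r**2)      # equals -p at r = a, zero at r = b
    sig_phi = k * (1.0 + b**2 / r**2)    # tensile hoop stress
    return u_r, sig_r, sig_phi

r = np.linspace(1.0, 2.0, 11)
u_r, sig_r, sig_phi = lame_pipe(r)
```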
The rate of convergence of MINI, NS-FEM and bES-FEM is investigated for this problem, and the results are shown in Figure 17. According to Figures 17a and 17b, the convergence rates in both the displacement and the pressure error norms of bES-FEM are very high (≥ 1.93). The convergence rates of MINI and NS-FEM in the displacement error norm are close to 2, but their convergence rates in the pressure error norm are not as high as that of bES-FEM. Moreover, in all three norms the error in bES-FEM is lower than the errors in both the MINI method and NS-FEM. Figure 17c confirms the convergence of bES-FEM as proved in Theorem 4.3.
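The convergence rates reported in Figure 17 are the slopes of the error curves in log-log space. A small sketch of that computation from a sequence of mesh sizes and measured errors:

```python
import numpy as np

def observed_rates(h, err):
    """Observed order of convergence between successive refinements:
    rate_i = log(err_i / err_{i+1}) / log(h_i / h_{i+1})."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Example with illustrative (not measured) values: rates approach 2
print(observed_rates([0.2, 0.1, 0.05], [4.0e-3, 1.0e-3, 2.6e-4]))
```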
Nearly-incompressible block
In this section, a nearly-incompressible block with dimensions 100 × 100 × 50 is considered. The bottom face of the block is fixed and it is loaded on the top by a uniform pressure of q = 250/unit area, acting on an area of 20 × 20 at the center. By symmetry, only one quarter of the model is studied, using a tetrahedral mesh of 750 elements with appropriate symmetry boundary conditions applied to the two interior faces. The geometry, the boundary conditions and the material parameters E and ν are given in Figure 18. The vertical displacement at the top center P of the block is presented in Table 1, where the results from bFS-FEM are compared with the results from other numerical methods found in References [20], [4] and [5]. Reference [3] reports that FS-FEM suffers from volumetric locking. In Table 1 our results indicate that the bubble enrichment alleviates the locking problem. In fact, we see that bFS-FEM is softer than all but one of the other methods.
An extension to large deformations: Case study of 2D Cook's membrane problem
In the final test, Cook's membrane is considered for large deformations. The strain energy density of a compressible neo-Hookean material is [12]

Ψ = (μ/2)(I_1 − 3) − μ ln J + (λ/2)(ln J)²,   (93)

where λ and μ are Lamé's parameters as before. The parameter λ can be written in terms of the bulk modulus κ: λ = κ − (2/3)μ. The deformation gradient F is F_ij = ∂x_i/∂X_j, or F = ∂x/∂X, and the Jacobian determinant is J = det(F). The second Piola-Kirchhoff stress can be obtained from the first derivatives of the strain energy density (93),

S = 2 ∂Ψ/∂C = μ (I − C⁻¹) + λ (ln J) C⁻¹,

where the right Cauchy-Green deformation tensor is C = F^T F. The derivatives of the principal invariants with respect to the right Cauchy-Green deformation tensor C (∂I_1/∂C, ∂I_2/∂C, ∂I_3/∂C), and the derivatives of the strain energy with respect to the principal invariants (∂Ψ/∂I_1, ∂Ψ/∂I_2, ∂Ψ/∂I_3), are given in [12]. The elasticity tensor can be expressed in terms of the second derivatives of the strain energy density function given in equation (93), or in component form [13]

C_ijkl = λ C⁻¹_ij C⁻¹_kl + (μ − λ ln J)(C⁻¹_ik C⁻¹_jl + C⁻¹_il C⁻¹_jk).

For this problem, we use the same domain Ω as in the small deformation problem, with a shearing load of 1/16 in the positive y-direction. The shear and bulk moduli are μ = 0.6 and κ = 1.95, 10, 100, 1000, and 10000 respectively. Note that when the bulk modulus is κ = 1.95, the neo-Hookean material is compressible, and when the bulk modulus is increased (κ = 10, 100, 1000, and 10000), the neo-Hookean material is approximately incompressible (Poisson's ratio is close to 0.5). The results for the proposed method are compared to the standard FEM, ES-FEM and NS-FEM with the three-noded triangular element. The numbers of elements per side are 2, 4, 8, 10, 16, 20, 32, 40, and 100 for this test. Figure 19 illustrates the convergence of the vertical displacement at the mid-point of the right-hand boundary using both compressible and incompressible models for the proposed method, FEM, ES-FEM and NS-FEM respectively, and Figure 20 similarly illustrates the convergence of the strain energy. As shown in those figures, bES-FEM is the most robust, accurate and reliable method for both compressible and incompressible problems, compared to the conventional FEM, ES-FEM, and NS-FEM. In the compressible problem, ES-FEM also gives relatively good convergence; however, when Poisson's ratio is close to 0.5, its convergence becomes slow. Based on the problems tested, we believe that the present method can be well applied to relevant problems [2,18,27,47].
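A sketch of the constitutive evaluation described above, assuming the common compressible neo-Hookean form Ψ = (μ/2)(I₁ − 3) − μ ln J + (λ/2)(ln J)², whose derivatives yield the quoted C⁻¹-based stress and elasticity expressions.

```python
import numpy as np

def neo_hookean_pk2(F, mu, lam):
    """Second Piola-Kirchhoff stress S = mu*(I - C^{-1}) + lam*ln(J)*C^{-1}
    for Psi = mu/2*(I1 - 3) - mu*ln(J) + lam/2*(ln J)^2, with C = F^T F."""
    C = F.T @ F
    Cinv = np.linalg.inv(C)
    J = np.linalg.det(F)
    I = np.eye(F.shape[0])
    return mu * (I - Cinv) + lam * np.log(J) * Cinv

# Example: small simple shear in 2D, with lam = kappa - (2/3)*mu
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
S = neo_hookean_pk2(F, mu=0.6, lam=1.95 - 2.0 / 3.0 * 0.6)
```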
Conclusions
We have in this paper presented the edge-based and face-based smoothed finite element methods enriched by bubble functions (bES-FEM and bFS-FEM) for nearly-incompressible elastic materials in 2D and 3D. These two methods help soften the bilinear form allowing the weakened weak (W 2 ) form to yield accurate and stable solutions. For both bES-FEM and bFS-FEM we have shown that the uniform inf-sup condition and the convergence are satisfied in the case of small deformation. Numerical results showed, for the cases we tested, that the present method is superior to several other elements in terms of accuracy for a given number of degrees of freedom, in particular for heavily distorted meshes.
The proposed method is simple to implement in existing FE codes. It is efficient, and, as it does not lock even for heavily distorted triangular (simplicial) meshes which are relatively easy to generate automatically for arbitrary domains, the method is promising for incompressible problems where the structure undergoes severe deformations, as is the case during cutting and deformation of soft tissues.
Furthermore, for problems with a curved boundary ∂Ω, triangulations T h based on simplices are not able to cover the domain Ω completely, and therefore the boundary ∂Ω is different from the boundary of T h . This issue will introduce a further error into the numerical solution. Hence, in future work, we will combine the methods presented here with NURBS functions to handle the boundary ∂Ω exactly. | 2019-04-11T21:46:24.016Z | 2013-05-02T00:00:00.000 | {
"year": 2013,
"sha1": "47acecf9a46ff3ea040b1f66ae64c1be6a71aad4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1305.0466",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fec0a13ed26776c97a430ba60ee0787e5d03d385",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
52888795 | pes2o/s2orc | v3-fos-license | Position And Rotation of Driver’s Head as Risk Factor for Whiplash in Rear Impacts
Evidence suggests that head position increases risk of whiplash injury to vehicle occupants in rear impacts. The aims of this study were to collect exposure data on head position and rotation during naturalistic driving and to express this in the form of a parametric statistical model for use in computer simulations to optimize seat design for neck injury prevention. An instrumented vehicle equipped with an eye-tracker was used to collect digital readings that were complemented with a four-track video recording. Data from driving trials (approximately 30-60 minutes) were analyzed when the vehicle was stopped, stopping or moving slowly, as these are thought to be manoeuvres where impact and hence neck injury risk is highest. It was found that the 't location-scale' distribution provided the best fit to the experimental data and that the measured interquartile range, or central 50%, of head movement in such manoeuvres was approximately ± 15 mm lateral, ± 10 mm longitudinal and ± 7.5 degrees left-right rotation. These ranges provide guidance on the degree of biofidelity required in computer simulation models. Further analysis showed that out-of-range head rotation and rapid rotation explained the majority of missing digital readings, and these two motions should therefore be modeled separately as elements of the parametric model.
Introduction
Prevention of whiplash injuries remains an outstanding challenge in vehicle accident research. As yet, there is still uncertainty as to which injury gives rise to whiplash symptoms so diagnosis and mitigation are subject to supposition. The same can be said for whiplash injury mechanisms although some evidence is available. Some studies suggest that having the head turned at impact is one of the risk factors for whiplash. For example, Barnsley et al. [1] suggested a mechanism of injury in which impact to the rear of a vehicle causes an occupant's head (if it is already slightly rotated), to rotate further before neck extension occurs, pre-stressing various cervical spinal structures and increasing their susceptibility to injury. Sturzenegger et al. [2] assessed 117 patients for long-term whiplash and also found head position to be statistically significant. They postulated as a mechanistic basis that the permitted range of extension of the neck is reduced by half when it is rotated. They also referred to further experiments with cadavers showing that the anterior longitudinal ligament was more liable to rupture when the head was rotated before application of an extension strain to the neck.
Some compilations of accident case files [3,4] have recorded whether occupants were actually turning their heads to the side at the time of impact. Jakobsson [4] concluded that "Sitting posture, such as turned head and increased head to head restraint distance, significantly increases AIS 1 neck injury rates" [5]. Winkelstein et al. [6] suggested that "axial pre-twist of the head and neck increase facet capsular strain and may play a role in the whiplash injury mechanism". Jakobsson et al. [7] reported that "Occupants responding that they had turned their head (rotated around z axis to any degree) at the time of impact had a statistically significant higher risk of initial as well as persistent neck symptoms compared to those facing straight forward". Kumar et al. [8], however, struck a precautionary note based on tests of 20 healthy volunteers, not finding evidence that rotation of the head at impact necessarily increases the risk of neck injury; indeed, they conjectured that it may be protective. This notion was not supported by Bunketorp and Elisson [9], who observed in their summary of electromyographic investigations that, although the study of Kumar et al. did not find a greater injury risk when the head was rotated, their data could only be used to evaluate the muscular reactions, not the load on the cervical spine.
Laboratory tests on specimens of the human spine have also provided mixed results. Maak et al. [10] found that, "The dynamic strains of the alar, transverse, and apical ligaments during impact did not exceed the corresponding non-injurious baseline values", while Siegmund et al. [11] reported that an axial rotation "Doubles the MPS [maximum principal strain] in the capsular ligament compared to the neutral posture". They also found that capsular strains during the simulated whiplash exposure with the head turned were not significantly different from the MPS associated with partial failure of the capsule.
Observational studies of head position and posture in vehicles have been undertaken by Parkin et al. [12], Chapline et al. [13] and Park et al. [14]. These three studies featured a relatively large number of subjects but only a snapshot of their seating position at one moment in time that was taken to represent the driver's normal condition. The relative position of the head and head restraint was measured along the longitudinal axis of the car, but no account was taken of head rotation in any of the studies. More recently, Jonsson et al. [15] and Shugg et al. [16] used observational recording instrumentation in the car to improve accuracy but also did not measure head rotation.
However, ideally the design of seats, including the head restraint and associated safety technologies, should be optimized across the range of postures that occupants exhibit at the moment of impact. This includes translational movements and rotations of the head. A natural approach to optimizing seat design for a range of conditions is to run computer simulations of the interaction between the occupant and seat under crash conditions thereby obtaining a prediction of the dynamic load on the spine. At present the refinement of numerical models of the occupant to a high level of biofidelity in the spinal region poses a technical challenge; nevertheless with recent work in the area (e.g. Linder [17]) and developments in computing power and digital resources, the capability to evaluate the performance of seats and anti-whiplash technologies using computer simulations will be achieved in the foreseeable future. At this time, to account for head rotation and occupant posture in the design of anti-whiplash systems, it is necessary to have exposure data.
In accordance with this requirement and to promote the ergonomic design of vehicle seats for occupant safety, the aims of this study were therefore; (a) to describe the position and rotation of drivers' heads in naturalistic driving under conditions when rear impacts may occur, (b) to summarize the experimental data in a parametric statistical model suitable for use in computer simulations.
Methodology

Participants
Nine volunteers were available for the study ( Table 1), each of whom drove for around 30-60 minutes through a designated route accompanied by an experimenter. In the first series of trials (subjects 1-4, route 1), travel directions were provided verbally by the experimenter while in the second series (subjects 5-9, route 2) travel directions were provided by a portable navigation device mounted on the dashboard. The volunteers were Loughborough University staff members and associates.
Table 1: Participant details (subject, age, sex and route).
Apparatus
The vehicle used for the trials, a 2010 Ford Mondeo sedan, was fitted with three main test instruments: a data logger for speed, acceleration and satellite location (GPS); an eye-tracker (faceLAB™5) for head position and rotation; and a four-track video system.
Driving route
A review of in-depth accident data indicated that occupants in passenger cars most often receive neck strain in rear impacts while stationary or moving relatively slowly in stop-go or congested traffic ( Table 2) [18]. This guided the choice of a route for the trials passing through Leicester, England, a city with a population of over 300,000. Following experience that the traffic density on the routes leading into and out of the city was too light to provide conditions relevant to the possible occurrence of a rear collision, the route was shortened to run entirely through the urban and suburban areas of the city.
Vehicle manoeuvre                        %
Waiting to go ahead but held up         39
Stopping on carriageway                 20
Driving along straight road             15
Stopped waiting to turn                  9
Driving in slow moving traffic           4
Other                                   13
Total                                  100

Table 2: Vehicle manoeuvres of occupants receiving neck strain in rear impacts [18].
Procedure
The duration of the nine driving trials was over 60 minutes for trials 1-4 (route 1) and over 30 minutes for trials 5-9 (route 2). The periods when the drivers were braking or stationary, and therefore considered to be at greater risk of rear impact, lasted around 8-18 minutes across the nine trial runs. The digital readings from the vehicle data logger and the eye-tracker were filtered to periods of interest by categorizing each moment of driving as 'stopped', 'stopping' or 'other'. 'Stopped' was defined as a continuous period of at least one second when the vehicle speed was under 8 km/h, and 'stopping' was defined as a period of continuous deceleration leading to being 'stopped'. The 8 km/h threshold was an arbitrary low speed, comparable to walking pace and consistent with stop-go or slow moving traffic. Sections of the trials where the vehicle was 'stopped' or 'stopping' were then selected for detailed analysis because of their relevance to whiplash injury.
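A minimal sketch of this filtering logic, assuming a regularly sampled speed series; the function name, sampling interval and walk-back rule for deceleration are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def classify(speed_kmh, dt=0.1, stop_thresh=8.0, min_stop_s=1.0):
    """Label each sample 'stopped', 'stopping' or 'other'.
    'stopped': speed < 8 km/h continuously for >= 1 s;
    'stopping': continuous deceleration leading into a 'stopped' period."""
    speed = np.asarray(speed_kmh, dtype=float)
    n = len(speed)
    labels = np.array(['other'] * n, dtype=object)
    min_len = int(round(min_stop_s / dt))
    below = speed < stop_thresh
    i = 0
    while i < n:
        if below[i]:
            j = i
            while j < n and below[j]:
                j += 1                          # find end of sub-threshold run
            if j - i >= min_len:
                labels[i:j] = 'stopped'
                k = i - 1                       # walk back through deceleration
                while k > 0 and labels[k] == 'other' and speed[k - 1] > speed[k]:
                    labels[k] = 'stopping'
                    k -= 1
            i = j
        else:
            i += 1
    return labels
```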
Operation of the eye-tracker was based on real-time image processing of facial features captured from two rearward-facing cameras mounted on the dashboard in front of the driver. Digital data could be lost when the head was rotated widely to the side or rotated rapidly from side to side disrupting the capability of the device to continuously track facial features. For this reason the analysis of the instrument readings was complemented by a review of the video recording to obtain a qualitative assessment of head movement. Focus was placed on periods of driving when the vehicle was stopped or stopping.
Data was extracted from the eye-tracker as text files and processed using PostgreSQL 9.2.3 and MATLAB R2013a. The data tables included a state variable that registered at each instant whether the image-processing algorithm had fixed on the facial features of the driver. Data readings during moments or periods of time when the algorithm was "searching" for the features were categorized as missing. In addition, preference was given in analysis to statistical parameters such as the median and quantiles that are insensitive to outlying values. The video analysis was conducted in slow-motion replay using a proprietary playback function supplied with the eye-tracker that linked the video frame number to the digital data [19]. Glances to the internal and external mirrors, instrument panel, passenger and exterior were recognized by the direction of the head and detail of the eyes.
Data analysis
The eye-tracker recorded the position and rotation of the head on three axes (relative to the vehicle): longitudinal, lateral and vertical. Of the six resulting parameters, some were correlated in a predictable manner, for example rotation on the vertical axis (looking to the side) was associated with a lateral movement of the face, and leaning sideways associated a lateral movement of the head with a rotation on the longitudinal axis (a sideways tilt). These physically based correlations (which arise for example from the fixation of the base of the spine on the seat cushion) will automatically appear in any computer simulation using a realistic human model. Data analysis was therefore simplified by concentrating on three independent measures with a large range during driving: displacement in the longitudinal and lateral directions and rotation (left-right) on the vertical axis.
In order to use experimental readings for the optimization of seat design using computer simulation, it is convenient to have the data modeled in parametric form. More than 20 forms of statistical distribution were considered, of which the t location-scale distribution stood out for the quality of fit to the experimental results. This distribution has the density function

f(x) = Γ((ν + 1)/2) / (σ √(νπ) Γ(ν/2)) · [1 + ((x − μ)/σ)² / ν]^(−(ν+1)/2),

with location parameter μ, scale parameter σ and shape parameter ν [20]. This distribution is considered useful for modeling data distributions that are more prone to outliers than the normal distribution; smaller values of ν yield heavier tails, while at larger values of ν the t location-scale distribution approaches the normal distribution.
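In practice this three-parameter family can be fitted by maximum likelihood; for example, SciPy's Student-t distribution with free `loc` and `scale` is exactly the t location-scale family (ν = `df`, μ = `loc`, σ = `scale`). The data below are synthetic and purely illustrative of the workflow.

```python
import numpy as np
from scipy import stats

# Illustrative data: head rotation samples (degrees) while stopped/stopping
rng = np.random.default_rng(1)
rotation = stats.t.rvs(df=3.0, loc=-2.0, scale=5.0, size=2000, random_state=rng)

nu, mu, sigma = stats.t.fit(rotation)          # ML fit of shape, location, scale
print(f"nu={nu:.2f}, mu={mu:.2f} deg, sigma={sigma:.2f} deg")

# Compare against a best-fit normal, which underweights the heavy tails
mu_n, sd_n = stats.norm.fit(rotation)
```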
Further details of the vehicle instrumentation, driving routes, and six measured parameters are provided in Schick et al. [21].
Results
Actual durations of the driving trials are shown in Figure 1. The durations of missing readings (when the eye-tracker was not able to fix on facial features to assess head position and rotation) are outlined at the top of each bar and shaded in yellow where the video was reviewed manually to categorise head movement. The proportion of missing readings varied widely, from almost negligible in case 2 to over half in case 6. The median and interquartile range of lateral head position for the nine drivers while their vehicle was stopped or stopping is shown in Figure 2. There was a bias towards the left, i.e. the center of the passenger compartment. The interquartile range lies within 0-50 mm left of center for seven of the nine cases, and the median value was close to zero (centered) in the other two cases. The median and interquartile range of longitudinal head positions for the nine drivers while their vehicle was stopped or stopping is shown in Figure 3. The interquartile range lies within around 25 mm for each subject, while the median value varied considerably between drivers, reflecting their preferred seating distance from the steering wheel and foot pedals, with subject 8 adopting the most forward position. The position of the head restraint was identified using the eye-tracker by asking the subjects to rest their heads lightly against the head restraint for a few seconds before or after each trial run (Table 3). The values recorded are in each case somewhat higher than the recorded range of movement, consistent with the head restraint presenting a physical obstruction to further backward movement. Thus back-set (defined as the distance from the back of the head to the front of the head restraint) could be calculated. While not the focus of this study, back-set is a parameter of traditional interest to seat designers in the context of soft-tissue neck injury. Taking case 1 as an example, the position of the head on the longitudinal axis had a median value of 886 mm while driving and a value of 939 mm while rested against the head restraint; the median value of dynamic back-set was therefore 53 mm.
Table 3: Reference position of head against head restraint (longitudinal position, mm, per subject) for assessment of back-set.
The median and interquartile range of head rotation on the vertical axis (i.e. looking left-right) for the nine drivers while their vehicle was stopped or stopping is shown in Figure 4. The median value varied from close to twenty degrees towards the right (subject 2) to around eight degrees towards the left (subject 5), while the interquartile range varied from around three degrees (subject 2) to fifteen degrees (subject 4). In order to use the results presented in Figure 2, Figure 3 and Figure 4 for the optimisation of seat design using computer simulation, it is most convenient to have the data modelled in parametric form. The t location-scale distribution stood out for the quality of fit to the experimental data, as mentioned above. It is shown below in comparison with a best-fit normal distribution for a single example that was fairly representative of the recorded data (Figure 5). The fitted parameters for each subject are summarised in Table 4; these can be used directly or adapted for use in computer simulations. Approximately 23 minutes of video were manually reviewed for the four drivers with the highest proportion of missing readings while their vehicle was stopped or stopping. This video review clarified the activity of drivers during the periods of missing data within the resources available for the work. Two types of activity were observed to provide the main explanation for the missing data: firstly, rotation of the head beyond the measurable range of the eye-tracker and, secondly, rotation of the head rapidly from side to side, not necessarily beyond the range of measurement of the eye-tracker, but too fast for it to maintain continuous, real-time image processing. These are described as 'extreme head turning' (7 minutes) and 'repeated head turning' (13 minutes) in Figure 6. The explanation for missing readings in the remaining 2-3 minutes was either 'other types of head movement' or 'unknown'.
Discussion
The median values of lateral head position tended to be left of center, and the median values of head rotation were also off-center (non-zero) in several cases. This phenomenon is thought to be real. Subjects were observed to adopt asymmetrical driving postures and sometimes appeared to focus their attention towards objects and activities on the side of the road, especially in urban areas. Furthermore, the driver's side window and door obviously restrict lateral movement in that direction, while some of the controls that the driver may reach for while the vehicle is stationary are located on the center console. These asymmetries may have implications for the optimal position and width of the head restraint. The eye-tracker generated a proportion of missing readings across all phases of the naturalistic driving trials for reasons that were fairly well understood, including ambient light fluctuations and physical obstruction of the line of sight between camera and face. Of greatest significance to this study were missing readings due to specific types of head movements, because of the bias this could introduce to the results. The video review indicated that in fact two types of conditions are probably under-represented in the digital readings: (a) having the head turned to the side at an angle that is out of the range of measurement of the face-tracker as configured in the trials, and (b) a relatively flat distribution of angles from wide left to wide right that is lost because of the rapid speed of motion, which is characteristic of looking in both directions for an opportunity to pull out into a carriageway. It is suggested that these two conditions could be modeled independently in computer simulations as an adjunct to the parameters of the t location-scale model.
A primary aim of this study was to clarify the range of conditions under which seat performance should be optimised to mitigate the risk of neck injury. A further outcome of the study, given that computer models of occupants are under continuing development, is that the results also indicate the range of biofidelity that is desirable in the computer models. To accommodate, for example, the central 50% of the range of head movement that drivers exhibit while most likely to incur neck strain in a rear impact, it would, on the basis of the information available, be necessary to deal with approximately ± 15 mm lateral movement, ± 10 mm longitudinal movement and ± 7.5 degrees left-right rotation of the head.
The degree of variation between subjects, particularly exemplified in Figure 3 and Figure 4 and summarized in Table 4, was not known until the experimental program had been carried out. It was considered statistically inadvisable to combine the cases into a group analysis with the present number of cases, given the level of inter-subject variability and the unequal duration of 'stopped or stopping' periods in the naturalistic driving trials. It is suggested that when the statistical parameters of the t location-scale model are used in computer simulation to appropriately randomise the position of the driver's head at impact, a choice should be made at that stage whether to optimise the seat for one or two representative drivers or across the full range of variation indicated by this study.
The core results of the seat posture driving trials ultimately derive from nine subjects driving a single vehicle on two routes through a single city for 75 minutes. It would be unsafe to extrapolate the findings recklessly to a wider population of drivers, vehicles, routes or cities. However, the value of this study lies in sketching the outlines of a picture about which little or no information was previously available: quantifying the range of head movement as a risk factor for whiplash-associated disorders among car drivers under traffic conditions when a rear impact could occur. The degree of similarity among the subjects lends a qualified confidence to the expectation that the results would be consistent with the outcome of a wider, deeper or more diverse study.
Conclusions
Of the three parameters described in detail, lateral head position demonstrated the most uniformity of median value and interquartile range for the nine subjects; longitudinal position showed uniformity of the interquartile range but wide differences in the median value; while left-right rotation showed considerable differences in both the median value and interquartile range. Incorporating these three main independent movements into a computer model of a seated human body or crash test dummy, using the statistical parameters provided, would produce an effective first-order simulation of a driver's posture while in control of a vehicle. This would include posture under traffic conditions associated with a risk of soft-tissue neck injury from rear impact. The data would enable the design of seats to be optimised for the mitigation of whiplash, taking into account head position and rotation as an aggravating risk factor.
Technical improvements for future studies of this type should aim at a reduction in the proportion of missing readings from the eye-tracker and an increase in its range of measurement, particularly side-to-side rotation. A larger sample of drivers, vehicles and routes would be beneficial in future research to consolidate the pioneering results of this study and to obtain a more detailed correlation of head position with specific vehicle manoeuvres and traffic situations.
"year": 2015,
"sha1": "ae46b74652fab9b14da5765535d5b483cbc7fa68",
"oa_license": "CCBY",
"oa_url": "https://repository.lboro.ac.uk/articles/journal_contribution/Position_and_rotation_of_driver_s_head_as_risk_factor_for_whiplash_in_rear_impacts/9348227/1/files/16957295.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d4dce05628168d9c33bbfe1732f47d6302021d89",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264465174 | pes2o/s2orc | v3-fos-license | A multispecies coexistence based on rewilding and degrowth for the sake of global health
Abstract The climate emergency is closely linked to biodiversity loss, both in its (anthropogenic) causes and its possible (nature-based) solutions. Both crises are a major threat to global health because zoonoses, global warming and other climate impacts make present and future generations vulnerable, and aggravate social inequalities. Indeed, this has led some authors to start talking about ecological determinants of health. However, based on a narrative review of recent academic literature, I will argue that trophic rewilding strategies, framed within conservation biology, can ensure global health by stopping the spread of zoonoses and animating the carbon cycle. A first aim of my contribution will be to defend this correlation. Then, my next purpose will be to address how rewilding could favor coexistence with protected or reintroduced keystone species. The recent UN Biodiversity Conference (COP15) in Montreal agreed to extend the care of wilderness areas and restore 30% of degraded ecosystems by 2030, but this is a political as well as a social challenge. The current development model in Europe, based on fossil-dependent economic growth, is a stumbling block for wildlife to flourish. I will therefore argue that degrowth could be a complementary development alternative to rewilding in order to make a more serious commitment to global health. Key messages • The ecological restoration of trophic rewilding can contribute to global health by mitigating climate change and zoonoses. • The development of European societies is based on economic growth, and as this is a threat to wildlife flourishing, a shift towards degrowth could complement rewilding and strengthen global health.
The climate emergency is closely linked to biodiversity loss, both in its (anthropogenic) causes and its possible (nature-based) solutions. Both crises are a major threat to global health because zoonoses, global warming and other climate impacts make present and future generations vulnerable, and aggravate social inequalities. Indeed, this has led some authors to start talking about ecological determinants of health. However, based on a narrative review of recent academic literature, I will argue that trophic rewilding strategies, framed within conservation biology, can ensure global health by stopping the spread of zoonoses and animating the carbon cycle. A first aim of my contribution will be to defend this correlation. Then, my next purpose will be to address how rewilding could favor coexistence with protected or reintroduced keystone species. The recent UN Biodiversity Conference (COP15) in Montreal agreed to extend the care of wilderness areas and restore 30% of degraded ecosystems by 2030, but this is a political as well as a social challenge. The current development model in Europe, based on fossil-dependent economic growth, is a stumbling block for wildlife to flourish.
Background:
The last data on asthma prevalence (13.9 and 17.4%) in children in Slovenia are from 2002. The aim is to assess the asthma prevalence and environmental risk factors in two selected schools in Slovenia, as a pilot study for a national cross-sectional survey.
Methods:
We conducted a cross-sectional pilot study in two primary schools in Slovenia. The observed population was children and adolescents aged 6-7 and 12-13 years. The observed outcome was current asthma diagnosed by a physician, as reported by parents. To obtain the desired information we formulated a new questionnaire, which was content validated. In School 1, teachers distributed the questionnaires for parents to fill in. In School 2, the questionnaires were delivered to parents at the parental meeting, together with a presentation of the study and a short lecture. We calculated the response rate, the prevalence of asthma, asthma symptoms, allergy, genetic background and exposure to environmental risk factors.
Results:
The response rate in School 1 was 24%; in School 2 it was 100%. We obtained data about 53 out of 127 children and adolescents. None of the parents reported diagnosed asthma in their child or adolescent; 17% reported symptoms of asthma, 11% wheezing and 8% allergic rhinitis. 40% of the children were living within 200 meters of a busy road, 34% were exposed to moisture and mold, 9% had a chronic respiratory infection in their early childhood and 13% of mothers smoked during pregnancy.
Conclusions:
The newly developed questionnaire efficiently acquired information about exposure to environmental risk factors and asthma or allergy symptoms. To achieve a higher response rate, delivering the questionnaires at the parental meeting was more effective. Considering the reported symptoms of asthma in undiagnosed cases, further investigation of health records is needed to recognize potential undiagnosed asthma cases. The latter suggests the benefits of the HIS/HES (Health Interview/Examination Survey) methodology.
Key messages:
HIS (Health Interview Survey) and HES (Health Examination Survey) design-type methodology ensures quality data on asthma prevalence.
To efficiently assess asthma prevalence, adherence to medical guidelines and unification of the methodology used by pediatricians to diagnose asthma in children are of extreme importance.
Background:
Positive mental health effects of nature have been studied before, with relevant associations between the two easily found in the literature. However, there is still a lack of population-based studies that focus on the effect that the amount of surrounding greenness might have on well-being. Objectives: This study aims to evaluate the effects of exposure to surrounding greenness in the residential area on well-being | 2023-10-26T15:04:03.265Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "7e72c930775718307961cd8a2ae733d418214e42",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/eurpub/article-pdf/33/Supplement_2/ckad160.1176/52417086/ckad160.1176.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1332ae861b7853151ebd63d3fea798b113b8db9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
210938853 | pes2o/s2orc | v3-fos-license | Ocean circulation causes the largest freshening event for 120 years in eastern subpolar North Atlantic
The Atlantic Ocean overturning circulation is important to the climate system because it carries heat and carbon northward, and from the surface to the deep ocean. The high salinity of the subpolar North Atlantic is a prerequisite for overturning circulation, and strong freshening could herald a slowdown. We show that the eastern subpolar North Atlantic underwent extreme freshening during 2012 to 2016, with a magnitude never seen before in 120 years of measurements. The cause was unusual winter wind patterns driving major changes in ocean circulation, including slowing of the North Atlantic Current and diversion of Arctic freshwater from the western boundary into the eastern basins. We find that wind-driven routing of Arctic-origin freshwater intimately links conditions on the North West Atlantic shelf and slope region with the eastern subpolar basins. This reveals the importance of atmospheric forcing of intra-basin circulation in determining the salinity of the subpolar North Atlantic.
The high salinity of the northward-flowing upper waters of the North Atlantic Ocean is an essential condition for the formation of deep, cold, dense waters at high latitudes, as part of the meridional overturning circulation (MOC) 1,2 . Models have shown that the addition of freshwater to the upper layer of the subpolar North Atlantic (SPNA, 47-65°N, 0-60°W) could reduce the salinity sufficiently that atmospheric cooling results only in a cold, fresh, light upper layer. In turn this potentially weakens deep convection in the Nordic Seas (65-80°N, 25°W-20°E) and the SPNA, and reduces the density of the deep western boundary currents, leading to a reduction in the MOC and the associated heat transport [3][4][5] .
During 2012-2016, the upper 1000 m of the SPNA acquired an extra 6600 km 3 of freshwater; a rate of change and volume of a magnitude that has not been observed since the late 1960s. The distribution of the additional freshwater in 2012-2016 is not uniform over the SPNA, and tracing the development of the signal and its propagation during the modern period of a well-observed ocean allows us to identify the mechanisms that led to its development.
The North Atlantic upper ocean acquires its high salinity signature through a combination of the advection of saline water from the Indian Ocean 6 , MOC processes in the South Atlantic 2 , and the removal of freshwater from the surface in the subtropics by evaporation and subsequent atmospheric transport to the Pacific 2 . Over long timescales, the addition of salt and removal of freshwater is approximately balanced by the introduction of freshwater from the Arctic Ocean via the shallow Labrador Current (LC) and East Greenland Current (including ice sheet melt water), the import of freshwater from the Southern Ocean by the South Atlantic subtropical gyre, and SPNA net precipitation.
The North Atlantic Current (NAC 7 ) forms the southern and eastern boundary current of the subpolar gyre circulation. The NAC formation zone lies northeast of the Grand Banks and Flemish Cap, where the subtropical gyre western boundary current (the Gulf Stream) turns sharply east. The NAC widens as it crosses the North Atlantic, separating into branches east of the Mid-Atlantic Ridge that flow into the Iceland Basin, the Rockall Trough and southward to re-join the subtropical gyre 8 .
The processes that supply salt and freshwater to the SPNA are subject to temporal variations, and the net effect is the salinity change that has been observed over interannual to decadal timescales [9][10][11][12][13][14] . The SPNA underwent a freshening period from the late 1960s to the mid-1990s 14,15 , followed by a decade of increasing salinity 16 . The rapid upper layer freshening of the late 1960s was termed the Great Salinity Anomaly, and has been associated with a reduction in Labrador Sea winter convection in subsequent years 3,17 . That event, along with the following prolonged period of low SPNA salinity was shown to originate from increased precipitation and river runoff in the Arctic that was subsequently exported to the south 18 . Interannual variability is superimposed on decadal changes, with notable minima in salinity in the 1980s and 1990s 19 .
There is a hypothesised link between varying freshwater export from the Arctic and SPNA salinity 18 . Increasing salinity (reduced freshwater content) in the SPNA from mid-1990s to late 2000s coincided with the accumulation of freshwater in the Arctic Ocean 20,21 . At the same time, the transport of fresh Arctic water into the SPNA via the Canadian Arctic Archipelago and LC was low compared to the long-term (70 year) mean transport 22 .
The large-scale ocean circulation has been invoked to explain long term North Atlantic centennial to decadal property changes 23 ; a weaker MOC is associated with decreased ocean heat transport convergence and reduced SPNA heat storage and basinwide sea surface temperature [24][25][26] . There is evidence that periods of low SPNA salinity in the twentieth century arose in part from a reduction in northward transport of salt from the subtropics by the MOC, associated with changes in wind forcing 27 . Even with a constant import of Arctic water, when the MOC slows down less fresh water is transported southwards in the deeper layers and so imported Arctic water is retained in the SPNA gyre 28 . A recent analysis argues that a reduced MOC at 26°N after 2008 led to subsequent cooling and freshening of the eastern SPNA 29 . However, during 2014-2016 the reported convergence of ocean freshwater transport between two MOC observational arrays (RAPID at 26°N 29 and OSNAP at 53-60°N 30 ) appears to be too low to account for a large change in SPNA freshwater storage.
The interaction with the atmosphere is also important; changes in net precipitation contribute to salinity changes 31 . The North Atlantic Oscillation 32 (NAO), the first leading mode of atmospheric variability, is a key driver of change and its associated patterns of local wind stress, heat loss and net precipitation can have a cumulative effect over several years 33 . The East Atlantic Pattern 34 , the second leading mode of atmospheric variability, is thought to regulate the subpolar gyre circulation and the leakage of subtropical waters into the SPNA 11 .
Within the SPNA region, atmospheric forcing (particularly the NAO and the associated wind stress curl) can alter the regional distribution of salinity through changes in the zonal spread of water masses and shifts in the location of the NAC in the Newfoundland Basin and the Iceland Basin/Rockall Trough region 10,[33][34][35][36][37] . The NAC forms a boundary zone (Subpolar Front) between the Arctic-influenced cool, fresher waters of the western and central SPNA and the subtropical-influenced warm, saline waters.
Despite the conceptual link between the salinity of the SPNA and the transport of freshwater from the Arctic and salt from the subtropics, we lack detailed knowledge of the processes and how they change over time. Here we examine the mechanisms that determine the temporal variability of the SPNA upper ocean salinity (0-1000 m) through an extraordinarily strong freshening event observed from 2012 to 2016; the fastest and greatest change in the salinity of the eastern SPNA in 120 years. We show that the cause of the change was unusual winter wind patterns driving major changes in ocean circulation.
Results
Freshening of the eastern subpolar region in 2012-2016. Integrated across the SPNA, the freshwater content of the upper 1000 m increased rapidly by~6600 km 3 during 2012-2016 ( Fig. 1). However, the change in salinity was not distributed evenly over the region. The evolution of the salinity field in two layers, 0-200 m and 200-1000 m (anomalies from a long-term mean derived from the EN4 data sets, see Methods) is shown in The EN4 profile data sets inclusive of all available Argo float data provide good annual information over a wide region. However, to resolve the location and timing of subsurface salinity anomalies associated with this freshening event with greater confidence, we turn to high resolution, high quality observations at fixed locations and transects.
First we consider the magnitude of this freshening event in a multidecadal context. Time series from the very small number of sections and stations with more than four decades of high quality subsurface observations show that the 2012-2016 freshening event is the largest and most rapid change in salinity (up to −0.25) observed for 45 years (Fig. 4a), i.e. since the Great Salinity Anomaly that started in the late 1960s and progressed through the SPNA during the 1970s (also visible as a rapid increase in freshwater content, Fig. 1c). All the time series in Fig. 4a show a period of increasing salinity during the 1990s and 2000s, and decadal-scale freshening after 2008. However, unlike the Great Salinity Anomaly, the recent accelerated freshening signal is restricted to the eastern basins (Iceland Basin, Rockall Trough) and downstream into the southern Norwegian Sea (Fig. 2); there is no similarly large signal in the Labrador Sea as of 2016 (Fig. 4b). Multidecadal records of surface salinity from the Iceland Basin indicate that this is actually the lowest salinity in 120 years of observations, and thus a highly unusual event in this time frame (Fig. 4c). A 25-year record of surface salinity changes on a ship-of-opportunity route at 60°N between Greenland and north of Scotland (Fig. 4d) illustrates not only the extraordinary magnitude of the surface salinity anomaly (up to −0.2) and its regional focus in the eastern basins of the SPNA, but also its rapid onset in 2015 in the Iceland Basin. There is no evidence of any change in surface salinity in the East Greenland Current (Figs. 1a and 4d) or on the Hebridean shelf, which are located outside of the NAC zone.
The time frame in which the salinity anomaly arrived in the Iceland Basin is further illustrated by mooring records from OSNAP (Overturning in the Subpolar North Atlantic Programme) 29. The freshening event evolved most rapidly in the upper 200 m layer, where the salinity decreased from 35.30 when the moorings were deployed in summer 2014 to just 35.05 two years later (Fig. 4e). Part of that reduction in salinity (nearly 0.2) took place in an extremely rapid, high-magnitude event between July and November 2015. The freshwater advection was concentrated near the surface during the stratified 2015 summer season, and then mixed deeper in the winter (temporarily causing a rise in the surface salinity in December 2015).
Next we examine how salinity anomalies relate to the circulation features of the SPNA; are they evenly spread across the broad NAC zone (as suggested by EN4) or constrained in location by mode waters, eddies or density fronts? Ship-based summer hydrographic sections provide high basin-wide spatial resolution of the anomalies. Data from the OVIDE sections show that the water present in the east Iceland Basin NAC branch changed rapidly to a colder, fresher variety, as indicated by the temperature-salinity relationship (Fig. 5f).
A Principal Component Analysis of the North Atlantic salinity field since 1950 shows a spatial pattern for the leading mode of variability with centres of action in the Iceland Basin and the NWACSS (Fig. 6). The time series of the EOF and the multidecadal records in the Iceland Basin (Figs. 4 and 6) confirm that the 2012-2016 event far exceeds the scale of any freshening in the past 70-120 years. The relationship between decadal-scale changes in salinity in the Iceland Basin and the NWACSS points towards processes that have not previously been recognised or identified. In the next section, we identify the mechanisms that caused the recent acceleration of eastern SPNA freshening.
Formation mechanisms of the 2012-2016 salinity anomaly. The fate of the Arctic-origin freshwater in the LC (hereafter referred to as LC-Arctic) is to either leave the shelf break and enter the subpolar circulation or to continue tracing the shelf edge, interacting with saline Gulf Stream water in the NWACSS region 38,39. On the Newfoundland and Labrador shelf, the LC-Arctic flows along the shelf break as a surface-intensified, fresh and buoyant baroclinic current in the upper 300 m, adjacent to an offshore (more saline) barotropic current which forms part of the boundary circulation of the Labrador Sea and subpolar gyre 22,40. Some part of the LC-Arctic is diverted offshore and into the open ocean between the Flemish Cap and the south of the Grand Banks 38,41. Offshore of those topographic features, in the Newfoundland Basin, is the location where a negative salinity anomaly began to develop rapidly in 2012 (Fig. 2). Notably, the anomaly was initially restricted to the depth zone matching the surface-intensified LC-Arctic (0-200 m). At the same time the NWACSS to the south of the SPNA became anomalously saline, indicating that this region was likely receiving less freshwater from the LC-Arctic than in previous years (and less oxygen 41).
This implies an unusual diversion of Arctic freshwater in the LC away from the NWACSS and into the interior SPNA during 2012-2016. The NWACSS region freshwater content anomaly was −4600 km³ in 2012-2016, while the eastern SPNA region anomaly was +5900 km³; i.e. the re-routing of the LC-Arctic water was a major contributor to the increase in the total SPNA freshwater content anomaly and the accelerated freshening in the eastern basins. The LC freshwater that was diverted offshore joined the NAC at the North West Corner and was advected eastward, as indicated by the strong negative salinity anomalies arriving within the eastern NAC jets in 2014 and 2015 (Fig. 5). This mechanism of changing the pathway of Arctic water in the LC, such that the NWACSS received less Arctic freshwater and so increased in salinity, explains the dominant spatial salinity pattern (dipole-like between the NWACSS and the Iceland Basin) of long-term changes in the North Atlantic (Fig. 6).
Next we consider changes in net precipitation. The SPNA has a climatological net gain of freshwater (higher precipitation than evaporation 42). There were large interannual and patchy changes in net precipitation over 2009-2014 (Fig. 7). It is noteworthy that in 2015 and 2016 the pathway of the NAC zone received particularly high levels of excess freshwater. In the eastern basins there was a freshwater gain anomaly totalling 700 km³ during 2012-2016, representing 10% of the freshwater content change (0-1000 m) over the whole SPNA. We conclude that an increase in net precipitation is a minor mechanism for increased freshwater content, but one that reinforced the signal driven by circulation changes (Fig. 7).
A major source of freshwater to the upper 1 km of the SPNA is the Arctic water that enters the region in the shallow, buoyant East Greenland Current and LC (Fig. 1a). A change in the amount of freshwater entering the SPNA could lead to a freshwater content anomaly. We examine whether Arctic sources are a major component of the change in SPNA freshwater content by looking for evidence of changes in freshwater transport within those currents, and for evidence of propagating salinity anomalies where those waters circulate.
There were no clear changes in the transport of Arctic freshwater into the SPNA through the Fram and Davis Straits from the early 2000s to the mid 2010s 43. Further, while the EGC can receive additional freshwater from enhanced Greenland ice-sheet melt, the volume of freshwater from ice-sheet melt is small compared to Arctic freshwater sources, and cannot explain interannual variations in SPNA salinity 48.
The entire SPNA region accumulated an extra 6600 km³ of freshwater in 2012-2016. Of that total, 5900 km³ accumulated in the eastern basins (Iceland Basin and Rockall Trough). In all, 95% of the freshwater content anomaly in the eastern basins can be accounted for by the three mechanisms above: the re-routing of LC-Arctic water supplied +4600 km³, the net precipitation anomaly provided +700 km³, and a small increase in Arctic freshwater export provided +300 km³.
A final potential contributing process that we consider is the expansion of the subpolar gyre. The NAC and subpolar front in the Newfoundland Basin shift meridionally in response to NAO forcing (northward in NAO-positive winters, leading to positive salinity and temperature anomalies in the area south of the mean position of the front 35,49). This response is associated with a change in the zonal spread of the opposing cold/fresh and warm/saline water masses, and zonal shifts in the location of the boundary between them (the subpolar front) within the eastern NAC complex 10,11. This change has been interpreted as a gyre that expands and contracts on multi-year timescales in response to buoyancy and wind forcing; an expanded gyre is characterised by a more eastward spread of cold and fresh water in the upper and intermediate layers, and sometimes by a spin-up of the gyre 11,50. We find that in the eastern basins, the properties of the upper waters in 2014-2016 (as defined by their potential temperature-salinity relationship) have changed dramatically, indicating that the increase in freshwater content is not simply a vertical migration of isohalines, but a change in water mass (Fig. 5f). By 2016 the salinity distribution of the SPNA was radically different to that observed in 2004, with isohalines having shifted up to 700 km to the east and 500 km to the south (Fig. 8a). This implies a large eastward shift of the subpolar front that explains the change in water types found in the eastern basins. We consider how these observations relate to the gyre circulation in the next section and the Discussion.
Circulation change in response to wind stress curl anomalies.
In the previous section we showed that the accelerated freshening observed in the Iceland Basin and Rockall Trough originated in changes in the circulation of the Labrador Current. We now consider the forcing for those changes and their relationship with the circulation within the NAC. Changes in wind stress curl associated with winter NAO index anomalies can force a rapid response in ocean circulation, including driving anomalies in the gyres 51. Here we examine how the SPNA ocean circulation responded to changes in wind stress curl from 2010 to 2016. We show that the change in atmospheric circulation had two notable effects: it forced unusually large amounts of LC-Arctic water off the shelf, and it significantly altered aspects of the circulation of the NAC in 2014-2016.
In the 2007-2009 period the winter NAO index alternated between positive and negative values, and the wind stress curl pattern was close to the long-term mean, with the zero curl line running from the Newfoundland Basin northeastward to Scotland, approximately mapping the main pathway of the NAC (Fig. 9). In the winter of 2010, and to some extent the following winter, there was a sharp drop to a strongly negative winter NAO index, and the wind stress curl developed a negative anomaly over the subpolar region and a positive anomaly over the subtropics. This pattern contributed to the observed reduction in the strength of the MOC at 26°N since then 27. The change in wind stress curl induced a cyclonic circulation anomaly in the region of the Mann eddy (Figs. 1 and 9), and a southward shift of the NAC and subpolar front in the Newfoundland Basin. In 2009-2011 the wind stress curl anomaly also shows a band of positive values along the Labrador and Newfoundland shelves and over the Newfoundland Basin (Fig. 9), and that feature is associated with fresh salinity anomalies on the NWACSS, as the LC carries Arctic water southwards along the shelf and slope. In 2012 the winter NAO index switched to strong positive values and the wind stress curl field formed a typical NAO-positive pattern, with a band of negative anomaly along the Labrador and Newfoundland shelves. Negative curl also developed over the open ocean to the east of Newfoundland. This curl pattern is associated with the positive salinity anomalies on the NWACSS (Fig. 2). The anomaly represents a north-eastwards shift of the line of zero curl, associated with stronger eastward zonal wind stress and strong Ekman transport off the Newfoundland shelf 51 (Fig. 10). The ocean response was increased forcing of LC-Arctic water off the shelf and into the NAC. This is consistent with a model study which shows the LC-Arctic transport to be strongly influenced by winter winds 39.
The winter NAO index was strongly positive in 2014, and a strong positive East Atlantic Pattern was also established: an extreme wind stress curl pattern was set up across the SPNA, with a region of positive anomaly extending as far south as 45°N (Fig. 9). This pattern had a profound effect on the circulation of the NAC and consequently on the geographical reach of the LC water by altering the velocity in the main NAC branches. The speed of the northern NAC branch reduced in 2014-2016 (Fig. 10, Supplementary Fig. 1, Supplementary Note 1), a process which in itself can induce a decrease in salinity at a fixed point if there is a negative salinity gradient along the current pathway 52 .
In contrast, the speed of the southernmost branch increased and its pathway extended further east (Fig. 10 and Supplementary Fig. 1). This increase in speed is associated with the southward and eastward relocation of the subpolar front (Fig. 8), and thus represents a shift of the baroclinic front away from the northern branch and into the southern branch. Finally, we note that during this period, anomalously strong surface winter heat loss from the ocean to the atmosphere, and the subsequent deep winter mixing, contributed to the development of an extreme cold anomaly north of the NAC in the central SPNA 53,54. The production of large volumes of subarctic mode waters can force a dynamic gyre response in the form of zonal movement of fronts 10, and this process has likely contributed to the changes we observe in the baroclinic velocity of the NAC, and hence to the salinity anomalies.
In summary, the response of the LC and NAC circulation to the change in wind stress curl associated with positive NAO and East Atlantic Pattern conditions had three important consequences in 2012-2016 that accelerated a period of freshening that began after 2008. First, starting in 2012 the shallow LC-Arctic water was rerouted off the Newfoundland shelf and into the North West Corner, and an unusually large amount of freshwater spread into the NAC, initiating a rapid increase in the freshwater content of the SPNA. The NWACSS area simultaneously experienced a rapid increase in salinity as it was deprived of LC-Arctic water. Second, in 2014-2016 the main NAC branch that feeds the west Iceland Basin slowed because the subpolar front shifted to the location of the southern branch of the NAC, increasing its speed and extending it unusually far to the east (i.e. an expansion of the gyre). The southern branch was atypically fresh because it was carrying the salinity anomaly caused by the unusual addition of fresh water masses (including LC-Arctic water, Subarctic Intermediate Water, and Labrador Sea Water) from the Newfoundland Basin. Third, an increase in precipitation associated with the unusual atmospheric circulation acted to reinforce the freshening caused by changes in circulation. Thus the extremely fresh 2014-2016 eastern basin conditions reflect a combination of processes, mostly involving changes in ocean circulation, that were in turn linked to the particular sequence of large changes in atmospheric circulation that occurred over the North Atlantic during 2012-2016.
Discussion
We have shown that in 2012-2016 the subpolar North Atlantic underwent a basin-scale freshening that was more rapid and larger in magnitude than any change observed in the previous five decades. Additionally, the salinity in the eastern basins reached a level lower than any on record for the past 120 years. This massive and rapid increase in the freshwater content of the region resulted primarily from large-scale changes in ocean circulation driven by atmospheric forcing.
Much has been written about the causes of the Great Salinity Anomaly in the late 1960s and early 1970s, but the 2012-2016 event does not share the same characteristics. Most notable is that the 2012-2016 event was not evident in the Labrador Sea: during the Great Salinity Anomaly there was enhanced freshwater export through the Fram Strait, and the wind pattern (negative NAO and negative East Atlantic Pattern) forced additional freshwater off the Greenland shelves to spread directly over the Labrador Sea. Thus the processes driving the 2012-2016 event are not the same as those of the Great Salinity Anomaly 50 years earlier.
We have shown that the anomalously strong wind stress curl in the winters of 2012-2016 increased the freshwater convergence in the subpolar region by re-routing the LC-Arctic eastward off the Newfoundland shelf, by shifting the baroclinic subpolar front to the southern branch of the NAC, and by extending the southern branch further to the east. The identified linkage between the wind stress curl pattern and changes in the NAC characteristics is consistent with earlier studies suggesting that a stronger cyclonic wind stress curl over the SPNA leads to cooler, fresher conditions. We have no direct evidence to say whether this is related to a reduced penetration of warm, saline subtropical water into the region as previously argued 11, but we conclude that a new mechanism (redistribution of Arctic water) is important in generating the extreme salinity anomalies observed in 2012-2016. Studies have shown that a weak MOC favours the inclusion of subarctic water into the NAC east of the Grand Banks, and this process has been linked with an intensification of the subpolar gyre circulation 10,55 and an SPNA interior cooling and freshening 56,57. Here we add nuance to the picture: during the 2012-2016 period, the increased speed of the gyre was restricted to the southern branch of the NAC, and not the northern branch, which slowed when the baroclinic front moved zonally. Our results illustrate that these dynamical changes, initiated by the wind stress curl and buoyancy forcing, are directly reflected in the freshwater pathways and hence in the larger-scale freshwater variability in the subpolar region. Consequently, the exceptionally strong atmospheric forcing reported for the winters of 2014-2016, which caused significant heat loss 54, also contributed to an exceptional freshwater redistribution in the North Atlantic.
Our results provide some clarity around a recent debate, in which the concept of a subpolar gyre expanding and contracting zonally (and possibly associated with a spin-up or slowdown) in response to atmospheric forcing was called into question 55,58. At the heart of the debate lies a lack of clarity in the definition of the gyre and the subpolar front, and its relationship with the branches of the NAC. The front is a boundary between the cold/fresh Arctic/subpolar water masses and the warm/saline subtropics-dominated water masses, and is dynamically connected to the baroclinic current cores of the NAC through geostrophic balance. We have shown that the subpolar front can shift location: meridionally in the Newfoundland Basin, and zonally in the eastern basins. The location is dependent on where (and how much) LC-Arctic water mixes with and modifies the originally warm/saline water in the NAC 59. It is also dependent on the zonal reach of the southern branch (the latter does not form a closed streamline and was therefore not considered part of the gyre by ref. 55). Defining the extent of the gyre by the location of the spread of cold/fresh water (i.e. the location of the subpolar front and its baroclinic current) gives a view of the gyre expanding and contracting. In contrast, defining the gyre by the location of the northern NAC branch alone (which forms a closed, mainly barotropic streamline) results in a view of the gyre that does not change shape or size. Our results confirm that both interpretations are consistent with the observations, but the expanding subpolar gyre is more clearly described as the expanding spread of subpolar water masses, a zonal shift of salinity and density isolines, and a zonal shift of the baroclinic NAC current. The zonal shift of the subpolar front, and its subsequent impact on the speed of the NAC branches, may be a useful framework for reconciling the two views.

The investigated freshwater event was the largest for nearly five decades across the SPNA as a whole, and for 120 years in the Iceland Basin, revealing a high sensitivity of the subpolar gyre dynamics and large-scale hydrographic characteristics to interannual changes in the atmospheric circulation patterns. Potential future changes in the atmospheric forcing, such as the NAO 60 and associated wind stress curl patterns, will have direct consequences for the basin-wide SPNA freshwater content and the properties received downstream, and for the distribution of hydrographic properties within the SPNA and on the NWACSS.
The diversion of oxygen-rich and nutrient-rich Arctic-origin LC water into the NAC has had profound consequences for the NWACSS as well as for the eastern basins (Iceland Basin and Rockall Trough). Since 2012, the NWACSS has seen a rapid increase in marine heatwaves as well as a reduction in oxygen, associated with the flooding of Gulf Stream water onto the shelf, and serious detrimental impacts on local ecosystems 41,61 . In stark contrast, the ecosystems of the eastern basins have been shown to be stimulated into increased productivity by the arrival of nutrient-rich fresh subpolar water in pulses that were substantially weaker than the event described here 62 . Understanding the ecosystem impact of the 2015 freshening in the Rockall Trough and Iceland Basin will be an important next step.
The freshwater anomaly in the Iceland Basin and Rockall Trough is now propagating into the Irminger and Labrador Seas along the pathway of the subpolar circulation, and into the Nordic Seas 16,63 (Fig. 2). Figure 8b, c shows that historical salinity anomalies have taken 4-6 years to propagate from 50°N, 30°W to Svalbard, so we might expect the Atlantic Waters there to be freshening from 2018 onwards. Changes in salinity and stratification impact the extent of deep convection [3][4][5] and contribute to density changes in the overflow waters and the subpolar deep western boundary currents 64, and hence the MOC 65. This far-reaching impact of eastern Atlantic salinity anomalies highlights the importance of understanding, and correctly simulating, interactions between North Atlantic ocean dynamics and the atmospheric circulation for future climate predictions.
Methods
Salinity and freshwater. Salinity is reported on the practical salinity scale throughout the paper. The subpolar North Atlantic (SPNA) is defined as the region 47-65°N, 0-60°W. The annual mean ocean freshwater content (FWC) anomaly (m) has been derived using the EN4 hydrographic data set following the formulation of ref. 66 (Eq. 1):

$$\mathrm{FWC} = \int_{z_1}^{z_2} \frac{\rho(T,S,p)\,\left(S_{\mathrm{ref}} - S\right)}{\rho(T,0,p)\,S_{\mathrm{ref}}}\,\mathrm{d}z, \qquad (1)$$

where ρ, the density of seawater, is derived from EN4 temperature (T), salinity (S) and depth (p). S_ref is the reference salinity, set to 35.0. The depth integration is between z_1 and z_2, which are defined as 0-1000, 0-200 or 200-1000 m. Only grid points with data deeper than 1000 m have been taken into account in the FWC calculation. Note that the FWC anomalies are with respect to the 2005-2016 climatology. The net freshwater gain maps and time series are produced by adding evaporation and total precipitation based on monthly means of the daily forecast accumulation of the first 12 h from the ERA-Interim reanalysis 67. Consistent with the earlier analysis, the annual mean net freshwater gain anomaly maps are calculated with respect to the 2005-2016 climatology. Time series of net precipitation are ERA5 monthly anomalies for 45-65°N, 30-10°W. The ERA5 data were obtained from ECMWF (https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5).
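For illustration, Eq. (1) can be evaluated numerically for a single profile as in the sketch below. This is not the code used in this study: the linear equation of state is a deliberately crude stand-in included only to keep the snippet self-contained (a real calculation would use a TEOS-10 implementation), pressure is approximated by depth, and all names are our own.

```python
import numpy as np

def rho_linear(T, S, p, rho0=1027.0, alpha=2e-4, beta=7.6e-4):
    # Crude linear equation of state, for illustration only; a real
    # analysis would use TEOS-10 (e.g. via the gsw package).
    return rho0 * (1.0 - alpha * (T - 10.0) + beta * (S - 35.0))

def freshwater_content(T, S, z, rho=rho_linear, s_ref=35.0):
    """FWC (m) of one profile, Eq. (1): integral of
    rho(T,S,p)*(s_ref - S) / (rho(T,0,p)*s_ref) between z[0] and z[-1].
    T, S, z are 1-D arrays; z is depth in metres, increasing downward,
    and is used here in place of pressure."""
    integrand = rho(T, S, z) * (s_ref - S) / (rho(T, 0.0, z) * s_ref)
    dz = np.diff(z)
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dz)  # trapezoid rule
```

Applied at every grid point with data deeper than 1000 m and referenced to a 2005-2016 climatology, such profile integrals can then be multiplied by grid-cell area and summed to obtain the basin totals in km³ quoted in the Results.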
The empirical orthogonal function analysis of the salinity fields was carried out following ref. 68 . Statistical significance of the correlation analysis followed ref. 69 .
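For readers unfamiliar with the EOF decomposition, the sketch below shows one standard way to compute it via a singular value decomposition of the anomaly matrix. It is a generic illustration rather than the procedure of ref. 68; the input layout, the handling of missing grid points, and the omission of latitude weighting are simplifying assumptions on our part.

```python
import numpy as np

def eof_analysis(anom, n_modes=3):
    """EOFs of an anomaly field shaped (time, space). Grid points with
    any missing data are excluded from the SVD and refilled with NaN
    in the returned spatial patterns."""
    valid = ~np.isnan(anom).any(axis=0)
    X = anom[:, valid] - anom[:, valid].mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]             # PC time series
    eofs = np.full((n_modes, anom.shape[1]), np.nan)
    eofs[:, valid] = Vt[:n_modes]                  # spatial patterns
    explained = s[:n_modes] ** 2 / np.sum(s ** 2)  # variance fractions
    return pcs, eofs, explained
```

In practice each grid point would also be weighted by the square root of the cosine of its latitude, so that equal areas contribute equally to the decomposition.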
Wind stress curl. The wind stress curl (N m⁻³) was calculated as follows (Eq. 2):

$$\nabla \times \boldsymbol{\tau} = \frac{\partial \tau_y}{\partial x} - \frac{\partial \tau_x}{\partial y}, \qquad (2)$$

where τ_x and τ_y are the zonal and meridional wind stress, respectively, based on the monthly means of daily means from the ERA-Interim reanalysis 70. The 2004-2016 climatology was removed from the monthly data before calculating the December-March (DJFM) winter averages. DJFM wind stress curl anomalies are generally considered to best represent the atmospheric circulation. The grid resolution of the data set is 0.75° × 0.75°.
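A minimal finite-difference version of Eq. (2) on a regular latitude-longitude grid might look as follows; the spherical metric factors, grid orientation, and all names are our assumptions rather than the actual processing code.

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius, m

def wind_stress_curl(tau_x, tau_y, lat_deg, lon_deg):
    """curl(tau) = d(tau_y)/dx - d(tau_x)/dy in N m^-3.
    tau_x, tau_y are (lat, lon) arrays; lat_deg, lon_deg are 1-D axes."""
    lat = np.deg2rad(lat_deg)
    lon = np.deg2rad(lon_deg)
    # On the sphere: dx = R cos(lat) dlon, dy = R dlat
    dtauy_dx = np.gradient(tau_y, lon, axis=1) / (R_EARTH * np.cos(lat)[:, None])
    dtaux_dy = np.gradient(tau_x, lat, axis=0) / R_EARTH
    return dtauy_dx - dtaux_dy
```

To reproduce anomaly fields of the kind shown in Fig. 9, one would average the monthly curl over December-March and subtract the 2004-2016 climatology.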
Velocity. The geostrophic velocities in Fig. 1a were derived from data obtained through the Copernicus Marine Environment Monitoring Service (CMEMS, http://marine.copernicus.eu), and are from CMEMS/DUACS DT2018 (product identification: SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047, ref. 71). The 1/4° absolute zonal (u_geo) and meridional (v_geo) surface geostrophic velocities (m s⁻¹) are derived from the 1993-2016 sea surface height, ζ, measured by the global multimission altimeter satellites, following the geostrophic approximation (Eqs. 3 and 4):

$$u_{geo} = -\frac{g}{f}\,\frac{\partial \zeta}{\partial y}, \qquad (3)$$

$$v_{geo} = \frac{g}{f}\,\frac{\partial \zeta}{\partial x}, \qquad (4)$$

where g is the gravitational acceleration and f is the Coriolis parameter. The currents at 200 m shown in Fig. 10 are computed from EN4 data: geostrophic velocity at 200 m referenced to zero velocity at 1200 m. Currents from the observation-based product ARMOR3D 72,73 are shown in Supplementary Fig. 1 for comparison to Fig. 10. The ARMOR3D product merges different sources of observations (altimetry and T and S profiles) in order to estimate weekly global temperature, salinity, geopotential height and geostrophic currents from the surface down to 1500 m. In a first step, the steric part of the sea level anomaly is projected along the water column through statistical profiles to estimate gridded T/S fields; this step brings mesoscale patterns close to altimetry. In a second step, in-situ T/S profiles are merged to correct the previous estimate. Finally, geostrophic currents are estimated through the thermal wind equation referenced at the surface with altimetry. In Fig. 10 the ARMOR3D currents are filtered at 3.5° in latitude and 4.5° in longitude in order to remove highly energetic smaller scales.
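Eqs. (3) and (4) translate directly into a few lines of array code. The sketch below is again illustrative only, reusing the grid conventions and constants from the wind stress curl example above.

```python
import numpy as np

R_EARTH = 6.371e6   # m
OMEGA = 7.2921e-5   # Earth's rotation rate, s^-1
G = 9.81            # gravitational acceleration, m s^-2

def geostrophic_velocity(zeta, lat_deg, lon_deg):
    """Surface geostrophic velocities (Eqs. 3 and 4) from sea surface
    height zeta (m) on a regular lat-lon grid shaped (lat, lon)."""
    lat = np.deg2rad(lat_deg)
    lon = np.deg2rad(lon_deg)
    f = 2.0 * OMEGA * np.sin(lat)[:, None]  # Coriolis parameter
    dzeta_dy = np.gradient(zeta, lat, axis=0) / R_EARTH
    dzeta_dx = np.gradient(zeta, lon, axis=1) / (R_EARTH * np.cos(lat)[:, None])
    u_geo = -G / f * dzeta_dy
    v_geo = G / f * dzeta_dx
    return u_geo, v_geo
```

The approximation breaks down near the equator, where f vanishes, but this is not a concern for the subpolar domain considered here.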
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All data used in this analysis are available as follows. The EN4 data set (ref. 74) is publicly available. Labrador Sea time series (Fig. 4b) are available on request from Igor.Yashayaev@dfo-mpo.gc.ca; the anomalies are relative to a mean seasonal cycle computed on a high-resolution, topographically-adjusted spatial grid and averaged over the whole Labrador Sea. Salinity data at OSNAP mooring M4 in the Iceland Basin (58.0°N, 21.1°W, Fig. 4e) are available at https://doi.org/10.7924/r42n52w51. The Extended Ellett Line programme consists of repeat hydrographic sections along a line from Iceland to Scotland 12 (Figs. 1b and 5); data are available from https://www.bodc.ac.uk. The OVIDE 8 programme consists of repeat hydrographic sections along a line from Greenland to Portugal (Figs. 1b and 5). Data from the 2014 section are available from https://doi.org/10.17882/52153 (https://www.seanoe.org/data/00410/52153/) and from OVIDE-BOCATS 2016 on request from pascale.lherminier@ifremer.fr; see https://archimer.ifremer.fr/doc/00480/59190/61877.pdf.
This analysis used E.U. Copernicus Marine Service Information: ARMOR3D fields available through MULTIOBS_GLO_PHY_REP_015_002 product and DUACS DT2018 through SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047 product.
Code availability
The computer codes used to analyse the data are available from the corresponding author on reasonable request. | 2020-01-29T16:29:38.255Z | 2020-01-29T00:00:00.000 | {
"year": 2020,
"sha1": "842f467bb56cd3adeacb4fe3e74151e156934d60",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-14474-y.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "842f467bb56cd3adeacb4fe3e74151e156934d60",
"s2fieldsofstudy": [
"Environmental Science",
"Geology",
"Physics"
],
"extfieldsofstudy": [
"Geology",
"Medicine"
]
} |
54050534 | pes2o/s2orc | v3-fos-license | The Psychiatric Nursing / Mental Health Education : advances , limitations and challenges 1
This study aimed to examine Psychiatric Nursing/Mental Health Education in public nursing courses in the State of São Paulo, Brazil, and to investigate and analyze the educational practice of the professors responsible for the disciplines in the area. This is a qualitative study; interviews were conducted with twelve professors and examined through thematic analysis. Results show that the professors are, for the most part, unaware of the Pedagogical Political Project of the undergraduate courses in which they work, that the teaching method is predominantly traditional despite a desire to adopt active methods, that the teaching content has a primarily biological focus, and that the teaching profession itself is devalued. Although we can notice a desire for advances among these professionals, there is little investment in their educational training.
Introduction
Psychiatric Nursing/Mental Health Education has been studied by several researchers in Brazil. Most of the investigations have sought to portray the daily life of psychiatric nursing education in the search to improve professional practice. However, new proposals of curricular change, namely the Law of Guidelines and Bases of National Education (LDB/96) and the National Curriculum Guidelines of the Undergraduate Nursing Course (DCNENF/2001), have established the national curriculum guidelines for undergraduate nursing courses, requiring other perspectives on education (1)(2).
Our study is justified by its proposal to investigate Psychiatric Nursing/Mental Health Education in the light of the aforementioned legislation and guidelines and to highlight how the professor is inserted in the teaching/learning process in this context of change.
Today, in any area, training professionals with a profile appropriate to social needs implies providing them with the ability to work in teams, to communicate and to respond with agility to new situations. These characteristics are necessary for the training of the professionals of the future and are not consistent with the traditional teaching model, in which teaching is based on a curriculum marked by passivity, submissiveness, and a lack of participation and critical attitude on the part of the student.

In this logic, universities are invited to revise their teaching methods and take up the challenge of adopting methodologies that encourage the development of a critical spirit and the capacity for reflection, stimulating the active participation of students in the construction of knowledge. Several authors in the area of education contribute to this reflection (3)(4).
Psychiatric Nursing/Mental Health Education, in addition to following the laws governing education in Brazil, also follows certain principles of the Brazilian psychiatric reform, mainly seeking to insert students in open spaces of care for persons with mental disorders, rather than keeping them exclusively in the hospital context (5)(6). Currently, the outlook is that the assistance to persons with mental disorders has a more humanistic and social nature, with a view to the construction of citizenship. Shifting the focus from the disease to the subject is one of the issues that have been raised by experts in the area of mental health over several years (5)(6)(7), among others.

It is important to know how teaching in the area of Mental Health has been addressing these changes in its educational practice. How has the construction of curricula contributed to meeting mental health policies? What educational trend prevails in teaching? What subjects are addressed? How does the educational training of professors happen? These questions lead us to seek an understanding of how education occurs in the area of Psychiatric Nursing and Mental Health today.
Objectives
The aim of our study is to analyze how Psychiatric Nursing and Mental Health Education occurs in public undergraduate courses in the State of São Paulo, Brazil, and to investigate and analyze the educational practice of professors of the disciplines in the area, involving the content addressed, the teaching methods used and the training practices of the professors involved.
Methodology
This is an exploratory, descriptive study of a qualitative nature, justified by the fact that it enables the researcher to observe and work with the meanings, motives, aspirations, attitudes, values and beliefs of the object being investigated (8). The research was carried out with professors who teach disciplines in psychiatric nursing and mental health in undergraduate nursing courses. The data were collected in 2009 and 2010. During the period of data collection, thirteen public schools were operating in São Paulo: two federal, six state and five municipal (9). Only one of the schools did not participate in this study, because the professor contacted did not respond to the invitation in a timely manner.

The data were collected through semi-structured interviews, following a script with comprehensive questions related to the profile of the professor and further questions on the knowledge these professors have about, for example, the structure of the course, the pedagogical political project and the content addressed. To analyze the data, we carried out a Thematic Analysis (8).

Regarding ethical procedures, the project was approved by the Research Ethics Committee of EERP/USP (Protocol No. 0995/2009). After approval, we contacted the Coordinator of each course and requested a list of the names of the professors responsible for the mental health disciplines. The professors were chosen by drawing lots, and each professor drawn received an invitation via email to participate in the research, one professional from each school. The participants were informed about the purpose of the study and signed the Informed Consent. To preserve anonymity, the professors have been identified by fictitious names.
Professors participating in this study
The group researched is composed of twelve professors, aged 30 to 59 years, seven female and five male. Regarding their training, seven had doctoral degrees, three had master's degrees, one had a specialization in the area of psychiatric nursing and one a specialization in occupational health nursing. The training of the professors has its emphasis on doctoral degrees, which can be explained by the current guidelines of public universities, which require newly hired professors to hold at least the title of doctor.

In relation to how long they have been teaching, half of the professors have been teaching from three to thirteen years and the other half from fourteen to thirty-five years. Longer teaching time may mean greater experience, contrasting with the entry of newly trained professionals at the universities, who demonstrate little experience both in care practice and in teaching.

The employment contracts of these professors cover the categories of hourly professor (1), professor with a 20-hour work week (1) and exclusive dedication with a 40-hour work week (10). Most have contracts of Exclusive Dedication, which meets the standards of the public universities that favor the performance of teaching, research and extension activities.

The interviews were analyzed in the light of the literature review. During the process of analysis, the following thematic categories emerged: The professor inserted in the undergraduate nursing course; The role of the professor in the psychiatric nursing and mental health education; and Building oneself as a professor. In the presentation of the results, we show, implicitly or explicitly, the different conceptions of world, man, learning, knowledge, society and culture of the participating professionals. These different conceptions imply choices, both of educational and of theoretical approaches, which underpin mental health education.
The professor inserted in the undergraduate nursing course
In this category, we address issues related to the conditions of the courses in which the professors are inserted, regarding the construction of the Pedagogical Political Project, which is understood as a construction reflected in the pedagogical axes and in axes related to the vision of health, the student profile and the guidelines of the project.
The Pedagogical Political Project (PPP) is a challenge for undergraduate courses as, through it, new ways and new directions can be shown to schools. There are several paths for the construction of the PPP, which must be marked by different and interdependent moments: the situational act, which describes reality and grounds the educational action; the choice of the theoretical framework to be followed; the identification of the concepts needed to transform reality; and the operational moment, that is, how to perform the action (10).
With the LDB/96 and the DCNENF/2001, the guidelines for the preparation of PPPs were disseminated to all higher education institutions. To meet this legal requirement, the schools began discussing and building their PPPs.
It is worth mentioning that it is explicit in the interviews that the professors have not always participated in the construction of the pedagogical project. However, collective participation in the construction of the project is described as one of the general parameters of the DCNENF; thus, some of the schools, in order to respond to the guidelines, involved their professors in the process of construction, even if they are not yet sure about the purpose of the PPP: Actually, we constructed it when we were required because of the issue of ... I helped construct it, but, in general, I know what is expected of the nurse, but if you ask in detail I couldn't answer (Athena).
The construction, implementation and realization of a pedagogical project is not an easy or quick task; on the contrary, it takes time, an understanding of the need for change and, above all, willingness to break with old pedagogical practices (10).
In relation to the professional profile desired for students, the DCNENF emphasizes the training of a generalist professional, attentive to humanistic, critical and reflective issues (2). Beyond the context of the educational reform, it is also necessary to consider the Health Reform Movement, culminating in the consolidation of the Brazilian Health System (SUS) in 1988, driving the country to a new health care model that requires the training of professionals able to recognize the required range of expertise, aiming at meeting social needs (11).
The Nursing Courses investigated in this study, in order to meet the DCNENF, have initiated changes in their curricula and propose, in their PPPs, the formation of a generalist professional: (...) It is aimed at the formation of the generalist nurse who can act in the hospital network, but above all, in collective health (Dionysus).

The generalist nurse, with a critical view of society (...) is the basis of our project (Athena).
In relation to the guidelines and/or guiding bases governing the pedagogical projects of the courses investigated, there are aspects that are related to the vision of health/health care and to the educational axes of the course, set out in the curricular structure.
Some of the participants mention the guidelines that are most significant to them, or those more emphasized in each course in particular, although all the guidelines mentioned are described in the DCNENF: ... There's an axis, the axis of Primary Health Care, there's another axis on Skills, the framework of skills, there's also an axis, let me see... on Interdisciplinarity (Aphrodite).
Principles of the SUS, every guideline is based on them, the idea is to train professionals to work in the SUS (Hercules).
In relation to the educational axes guiding the project, according to the reports of the professors, the options considered the current requirements for undergraduate nursing education and the demands arising from health work: The main guideline might be teaching using active methodologies. We work with the idea of the competence of the student, the work by competence through active methodologies; our entire practice, our entire project, it seeks to enable the student to have contact with the practice and with the skills of the future professional from the first year (Hercules).

Active methodologies, based on the critical-reflexive educational concept, enable the effective participation of the individuals involved in the teaching/learning process (11).
The role of the professor in the psychiatric nursing and mental health education

Currently, higher education institutions responsible for training health care workers have been encouraged to revise their educational practices in an attempt to come closer to social reality. Therefore, we can perceive a greater search for progressive educational approaches to teaching and learning, which allow the formation of critical, reflective individuals with ethical, political and technical skills, empowering them to intervene in complex contexts (12).
On the other hand, in teaching based on traditional pedagogy, the methods are based on oral exposition of the subject, and the student's attitude is receptive, without any dialog with the professor during the lesson (3). The teaching/learning process is often restricted to the reproduction of knowledge. The professor "gives" the contents and the student retains and repeats them, without question, in a passive attitude, becoming a spectator, without the necessary reflection and criticism.

In one of the interviews, the professor points out, as a limiting factor in the teaching/learning process, not only the traditional teaching method but also a curricular structure with compartmentalized, fragmented and unarticulated disciplines and content: We still have the traditional method, that traditional method with ... discipline ... one discipline and then the contents; they are well divided, each one has its class (Aphrodite).
The fragmentation of knowledge in specialized fields has led universities to be subdivided by departments and, consequently, the courses are formatted with curricular structures composed of separated disciplines (13) .
In some statements, it is possible to grasp the intention and desire of professors to use active methodologies, although they encounter the limits imposed by the university system: (...) in this way, we wanted to carry out a teaching more focused on practice, with active, innovative methodologies, but at the time we were faced with the structure of the university, very focused on disciplines (Hera).

The use of active methodologies, or of any other way of teaching, demands the educational training of professors. However, the need for such training is often not felt by university professors, as they believe that their training, specific to their knowledge area, is enough.
For the most part, the higher education institutions analyzed in this study are traditional institutions, in which hierarchical relationships have historically been established, both within the central administration and in the teaching structures, influenced by power relations inherited from the past.
The program content most often mentioned by the participants of this research is highlighted here. We can see that some of the content covered by the professors is related to psychopathology and the mental health-disease process, still with a biological focus, centered on the disease: And ... about disorders, about knowing more about the patient, knowing the disease, because Psychiatry emphasizes this a lot, it is something very present, the care, the assistance and especially ... the disorder (Aphrodite).

(...) psychopathology, we take only the three large groups.

Other content selected and considered relevant to the area of mental health by the professors are interpersonal relationships and communication, which are taken as basic instruments for care: The first discipline, which is only theoretical, has no laboratory and internship, so this is where they learn the tools of psychiatric nursing care, the basic instruments, communication, interpersonal relationship... (Apolo).

In relation to the content on Mental Health Policies, we highlight that, with the transformations that have been taking place in Brazil and with a closer look at extramural care in open services, professors have rethought the content of their disciplines and the practical scenarios for the development of activities together with students: (...) Mental Health Policy, the whole historical part ... Psychiatric Reform, until we reach the new model today. The network, the new services, the clinic (...) (Hera).

Until the '90s, the nursing action was carried out in the administrative spaces of the psychiatric hospital, based on the biological model. With the Health Reform in the last decades of the 20th century and the consolidation of the SUS and the Psychiatric Reform, some changes have occurred, such as the reorientation of the care model in mental health (6,14).
Building oneself as a professor
Historically, the area of psychiatric nursing and mental health has been devalued in the field of professional practice for reasons such as lack of professional recognition, low pay and lack of training of the professionals who work in this area (15).
In this study, the reasons for the devaluation pointed out by the professors can also be seen in educational institutions, as noted below: (...) it is also a challenge, because within the university itself, the professors themselves ... it's not a valued, recognized area. I can say it's a challenge.... (Artemis).

Some professors refer to the demands of the university and the difficulties of meeting all of them: (...) but with the number of professors and the demand of the university itself we can't do what we ought to do, it would be interesting if we could... (Perseus).

(...) sometimes I feel, like, we let a little aside the role of professor, especially in the undergraduate level, because of the demands that the University imposes on us (Perseus).

The technical division of the organization of educational work emphasizes a depoliticizing approach to practice and changes the role of professors. The school bears more bureaucratic and administrative burdens to the detriment of activities of a political-educational nature (16).
The exercise of teaching is always a process in movement, with new places, new information, new feelings and new interactions (16). In this sense, some professors express satisfaction in being a professor:
I love being a professor; I think being in Mental Health Nursing is perfect for me because I love this discipline (...) (Alceste).

(...) it's an area that I have identified with since graduation, and as a professor I also had the objective of teaching in this area (mental health) (Apolo).
In these testimonies, we can see that being a professor is always associated with the area of psychiatric/mental health nursing; at no point do the professors refer to the educational role itself.

Being a professor goes beyond being a good professional. Specific skills are required, other than those relating to professional practice. Teacher training needs to be related to the principles that guide educational practice (16).

Currently, in public universities, research productivity has been emphasized, which meets market laws, scientific rationale and technical efficiency, driving professors to individualized work and dichotomizing theory from practice, learning from teaching, assessment from teaching, and content from means, consequently inhibiting the ways of establishing relationships of analysis of one's own practice, with production of knowledge far from reality, thus preparing individuals who have difficulty with a contextualized reading of the world (10).
The professors interviewed show what they think about the priority given to investment in research. Currently, in entrance and promotion examinations in universities, research productivity is, in general, highly valued. In the departments, research activities and publications are also prioritized, sometimes giving lower value to activities linked to education. This is a risk for the university, because we can find excellent researchers who are not good professors (18).

Professors, generally, do not mention the need for educational training for teaching: The proposal for the first two years is active methodologies, right? This is the proposal, but we know that it's actually hard to change the minds of professors; not all have training; this is a big challenge (Perseus).

"The practice of the profession and its mastery do not occur through a direct transfer of divine wisdom"; we cannot expect that the research professor, upon joining the university, is prepared to teach. Investment in educational training is a necessity (18).
Final Remarks
This study allowed us to observe the advances, limitations and challenges faced in the process of construction of the pedagogical projects of the courses, in the educational training of professors and in their activities in the field of psychiatric nursing and mental health.
In general, many professors are unaware of the pedagogical project of their courses; sometimes, they took part in its construction only in order to respond to the legislation requiring collective construction. A barrier we found concerns the tendency to prioritize content centered on psychopathology and the signs and symptoms of the disease, with less focus on attitudinal and procedural content. Despite some modifications that reflect the inclusion of content related to mental health policies, the core of the disciplines still resides in psychopathology.

In relation to the training of the professors, we can see that universities invest very little in this process. The idea that 'a person who knows how to do also knows how to teach' is still present. In public examinations for the hiring of professors, the number of scientific publications is prioritized; a lower value is assigned to the educational training of professors. A major challenge that emerges from this research is the understanding that the educational training of professors in universities has been mistakenly taken for granted, which calls for intervention.
"year": 2016,
"sha1": "3717158d0143e9c1f8352068a14f032deb532ab1",
"oa_license": "CCBY",
"oa_url": "https://www.revistas.usp.br/smad/article/download/120777/117844",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c0bb228d13a4bb1aa4617172ffd0005a09efd9cf",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Psychology"
]
} |
216327565 | pes2o/s2orc | v3-fos-license | PCA Forecast Averaging—Predicting Day-Ahead and Intraday Electricity Prices
Recently, developments in combining point forecasts of electricity prices obtained with different lengths of calibration windows have provided an extremely efficient and simple tool for improving predictive accuracy. However, the proposed methods are strongly dependent on expert knowledge and may not be directly transferred from one model or market to another. Hence, we consider a novel extension and propose to use principal component analysis (PCA) to automate the procedure of averaging over a rich pool of predictions. We apply PCA to a panel of over 650 point forecasts obtained for different calibration window lengths. The robustness of the approach is evaluated with three different forecasting tasks, i.e., forecasting day-ahead prices, forecasting intraday ID3 prices one day in advance, and finally very short term forecasting of ID3 prices (i.e., six hours before delivery). The empirical results are compared using the Mean Absolute Error measure and the Giacomini and White test for conditional predictive ability (CPA). The results indicate that PCA averaging not only yields significantly more accurate forecasts than individual predictions but also outperforms other forecast averaging schemes.
Introduction
In recent years, we have observed a dynamic transformation of energy markets, which encompasses changes in the generation structure and the creation of new trading opportunities. Since the establishment of competitive power exchanges, a growing share of electricity has been traded in day-ahead markets, where offers are placed before the noon of the day preceding the delivery. To give traders the opportunity to balance deviations from positions contracted in the day-ahead market (due to the highly unpredictable generation from renewable sources), the spot markets have been complemented by intraday and balancing markets. Operation in such a complex environment becomes challenging for many market participants, as it requires taking various operational decisions; for example, generators need to decide how much electricity to offer on a day-ahead market (see [1]) or how to structure the intraday trade [2]. Therefore, accurate prediction of electricity prices becomes an important issue for utility managers.
The literature is rich in publications focusing on modelling and forecasting of spot prices (see [3,4] for a comprehensive review). At the same time, there are few articles dedicated to intraday markets [2,5,6]. Most of them focus on very short term forecasts, a few hours ahead, as in [7]. These types of models cannot be directly used by utilities when making operational decisions.
The PCA approach could be also used to combine forecasts obtained from different models or/and model specifications. Although the literature recognizes the potential of PCA in forecast averaging [33,34], there are few articles where the method is successfully applied. In [14,33,35] static factors are employed to extract information from a panel of predictions coming from different models/experts, in order to obtain point forecast of the chosen macroeconomic variables. PCA was also adopted by [36], who used the method for the construction of prediction intervals of electricity spot prices. In all the above applications, factors are estimated with relatively small and diversified panels.
In this paper, an alternative setup is explored, in which the panel of predictions is homogeneous and consists of a large number of forecasts based on the same model estimated with different calibration windows, as in [18][19][20]. This setup gives new challenges. First, the panel of forecasts is not balanced because it consists of a relatively small number of predictions calculated with short calibration windows. Second, forecasts based on long windows are almost identical, as the growing sample size gives more stable parameter estimates. What is more, different days are characterized by different patterns of relationship between panel forecasts and actual prices. In order to solve some of the above problems, we normalize the predictions across the window size dimension. Next, we use both the time dependent moments of panel variables and factor estimates to calculate the price forecast. It is shown, that PCA combined with data standardization could be a promising alternative for other weighting schemes. Moreover, the method does not require an ad hoc selection of the calibration window length. In this research, the number of factors used for forecast averaging is either fixed and chosen ad-hoc or is dynamically selected with BIC information criteria. The results indicate that in case of slightly misspecified models, the proposed PCA-based procedure significantly outperforms both best performing ex-post selected calibration window and weighted averaged windows (WAW) approach [19]. In particular, the errors obtained with the model when forecasting the German day-ahead prices are almost 4% lower in terms of MAE in comparison to the optimal single calibration window and significantly better than any other considered averaging scheme. Finally, the aggregated outcomes show that PCA(1) using only one factor is the best among different PCA specifications. The averaging scheme utilizing information criteria to choose the number of factors is only slightly worse than PCA(1) and gives more robust results. Therefore, we recommend to use information criteria, such as BIC, for the automated selection of the number of factors.
The remainder of the paper is structured as follows. In Section 2, we present the datasets illustrating the German electricity market. Section 3 describes the experiment design, introduces variance stabilizing transformation (VST) and defines models used for forecasting of day-ahead and intraday prices. Next, in Section 4, we discuss forecast averaging schemes and introduce PCA forecast combination approach. The performance of the methods is evaluated in Section 5. Finally, the conclusions of the research are presented in Section 6.
Datasets
In order to test the proposed methodology, we utilize a number of datasets from the German market, each of which spans from 1 January 2015 to 15 August 2019. We consider two different price time series: the day-ahead hourly electricity prices (top panel in Figure 1) and the corresponding time series linked to the intraday market, the ID3 index hourly prices (bottom panel in Figure 1). According to the official rules of EPEX SPOT [37], the ID3 index is calculated as the volume-weighted average price of all trades within 3 h before the delivery of the product (up to 30 min before delivery). Apart from the price series, we use data for different types of exogenous variables: the day-ahead consumption prognosis (top panel in Figure 2) as well as the day-ahead wind and solar generation forecasts (respectively, middle and bottom panels in Figure 2). The wind generation forecast consists of aggregated offshore and onshore generation forecasts.
In the recent paper [38], the authors argue that wind and photovoltaic generation forecasting errors increase the system imbalance in Germany and directly influence electricity prices. Hence, we additionally use two other fundamental factors that impact electricity prices: natural gas spot prices (top panel in Figure 3) and the spot price of European carbon emission allowances, more precisely, the EUA (European Union Allowance; see bottom panel in Figure 3). Note that, in contrast to electricity prices and generation forecasts, EUA and natural gas spot prices are quoted at a daily (not hourly) resolution. Missing values (corresponding to the time change in March) were replaced by the arithmetic mean of the neighbouring observations, while 'doubled' values (corresponding to the reversion to standard time in October) were replaced by the arithmetic mean of the two observations.
Descriptive statistics of the day-ahead and intraday price series are shown in Table 1. Although the mean price in both markets is very similar, the ID3 series exhibits greater variability and a wider range of values. Day-ahead prices are negatively skewed (which is related to the more frequent occurrence of large negative prices), while the opposite can be observed for intraday prices. Finally, both electricity price series are leptokurtic, which confirms the occurrence of heavy tails caused by both positive and negative spikes. This feature has been widely discussed in the literature [3] and causes many difficulties when predicting future levels of electricity prices.
Calibration Windows
Following the majority of the forecasting literature, we consider a so-called rolling window scheme. Similarly to [18,19], instead of arbitrarily choosing a fixed calibration window length, we consider a set of 673 different window lengths, ranging from 56 days (ca. two months) to 728 days (ca. two years); the obtained forecasts are later averaged (see Section 4). The first 728 (= 2 × 364) days are used for the initial model calibration.
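To make the rolling-window scheme concrete, the following Python sketch illustrates how the panel of forecasts could be generated. It assumes a user-supplied fit_predict routine and day-indexed data; the function names and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rolling_window_panel(n_days, fit_predict, tau_min=56, tau_max=728):
    """Produce, for every out-of-sample day d, one 24-hour forecast per
    calibration-window length tau in {tau_min, ..., tau_max}.

    fit_predict(train_days, d) is assumed to calibrate the model on the
    given day indices and return 24 hourly price forecasts for day d."""
    taus = np.arange(tau_min, tau_max + 1)          # 673 window lengths
    panel = {}
    for d in range(tau_max, n_days):                # 728-day burn-in period
        panel[d] = np.stack([
            fit_predict(np.arange(d - tau, d), d)   # last `tau` days only
            for tau in taus
        ])                                          # shape: (673, 24)
    return taus, panel
```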
Variance Stabilizing Transformation
Since electricity prices exhibit strong seasonality as well as spiky behaviour, we follow the recommendation of [39] and apply a so-called variance stabilizing transformation (VST) to all datasets (to the time series of prices as well as to the exogenous variables). We apply the N-PIT transformation, which is based on the so-called probability integral transform. The transformed price X_{d,h} for day d and hour h is given by:

X_{d,h} = N^{-1}\left(\hat{F}_{P_{d,h}}(P_{d,h})\right),

where P_{d,h} is the real observation for day d and hour h, \hat{F}_{P_{d,h}}(\cdot) is the empirical cumulative distribution function of P_{d,h} in the calibration sample, and N^{-1} is the inverse of the standard normal distribution function. We calibrate the models to the transformed time series and then apply the inverse transformation to the computed forecasts in order to obtain the price predictions:

\hat{P}_{d,h} = \hat{F}^{-1}_{P_{d,h}}\left(N(\hat{X}_{d,h})\right).
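A minimal Python sketch of the N-PIT transformation and its inverse is given below. The (r − 0.5)/n plotting-position convention for the empirical CDF and the use of the empirical quantile function in the inverse step are our assumptions; the paper does not specify these implementation details.

```python
import numpy as np
from scipy.stats import norm

def npit(calibration_prices):
    """Transform prices to an approximately normal scale via the empirical
    CDF of the calibration sample followed by the inverse normal CDF."""
    x = np.asarray(calibration_prices, dtype=float)
    order = np.sort(x)
    # ranks mapped into (0, 1) with the (r - 0.5)/n offset to avoid 0 and 1
    u = (np.searchsorted(order, x, side="right") - 0.5) / len(x)
    return norm.ppf(u), order

def inverse_npit(x_hat, order):
    """Map forecasts on the transformed scale back to the price scale via
    the empirical quantile function of the calibration sample."""
    return np.quantile(order, norm.cdf(x_hat))
```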
Models
In this study, we consider both day-ahead and intraday (ID3) prices from the German electricity market. The latter are usually forecasted during the delivery day and modeled with the use of a broader (more recent) pool of information compared to day-ahead price forecasting. Most researchers [40][41][42] focus on very short forecasting horizons (from four down to three hours before the delivery). Such a modeling setup, while allowing for higher accuracy of the predictions, does not leave enough time for utilizing the forecasts and adjusting trading strategies to market conditions. To tackle this issue, we decided to extend the forecasting horizon to 6 h, as this would enable market participants to exploit future price movements and optimize their trades [43].
Intraday prices can also be predicted in a day-ahead manner, i.e., before submitting bids in the day-ahead market. This approach is particularly important when market participants need to decide where to sell or buy energy (day-ahead vs. intraday market). In such a case, they need to predict the revenues from different trading strategies, as in [1]. Such an approach can also be beneficial in terms of the decision-making process and risk management.
Let us focus first on the day-ahead spot prices, DA_{d,h}. In order to compute their point forecasts, autoregressive models with exogenous variables (ARX) estimated via the least-squares method are utilized. This type of model has been extensively used in the electricity price forecasting (EPF) literature [39,44,45]. The classical setup is expanded to include five exogenous variables: TSO forecasts of total load (L_{d,h}), wind (W_{d,h}) and solar (S_{d,h}) generation, as well as the spot prices of carbon emission allowances (EUA_d) and natural gas (NG_d). The final model, denoted by DA, is described by the following formula:

DA_{d,h} = \beta_{h,1} DA_{d-1,h} + \beta_{h,2} DA_{d-2,h} + \beta_{h,3} DA_{d-7,h} + \beta_{h,4} DA_{d-1,\min} + \beta_{h,5} DA_{d-1,\max} + \beta_{h,6} DA_{d-1,24} + \beta_{h,7} L_{d,h} + \beta_{h,8} W_{d,h} + \beta_{h,9} S_{d,h} + \beta_{h,10} EUA_{d} + \beta_{h,11} NG_{d} + \sum_{i=1}^{7} \beta_{h,11+i} D_{i} + \varepsilon_{d,h}, (3)

where DA_{d-1,h}, DA_{d-2,h} and DA_{d-7,h} are the lagged day-ahead prices from the previous day, two days before and a week before. DA_{d-1,min} and DA_{d-1,max} refer to the minimum and the maximum price from day d − 1, and DA_{d-1,24} is the last known price from the previous day. Finally, D_1, ..., D_7 are weekday dummies accounting for the weekly seasonality. Note that the solar generation forecasts, S_{d,h}, are included in the model only for hours 9-17, due to the obvious lack of generation during the night and early morning hours.

The second task is to predict the day-ahead intraday price, namely the value of the ID3 index for day d and hour h. We conduct the forecasting on the day preceding the delivery, as in the DA case. This implies that the intraday and spot prices are modelled in the same manner: all 24 prices for day d are forecasted at the same time, using the same pool of information. The model, denoted by IDA (Intraday Day-Ahead), has a structure similar to (3) and extends the model proposed by [1]. It assumes that the data generating process of intraday prices can be described by the following equation:

ID3_{d,h} = \beta_{h,1} ID3^{*}_{d-1,h} + \beta_{h,2} ID3_{d-2,h} + \beta_{h,3} ID3_{d-7,h} + \beta_{h,4} L_{d,h} + \beta_{h,5} W_{d,h} + \beta_{h,6} S_{d,h} + \beta_{h,7} EUA_{d} + \beta_{h,8} NG_{d} + \sum_{i=1}^{7} \beta_{h,8+i} D_{i} + \varepsilon_{d,h},

where ID3^{*}_{d-1,h}, ID3_{d-2,h} and ID3_{d-7,h} are the lagged intraday prices. Due to the transaction timeline (see Figure 4), the predictions are performed at 10:00, when some of the intraday prices ID3_{d-1,h} are not yet known. Therefore, a new variable, ID3^{*}_{d-1,h}, is constructed:

ID3^{*}_{d-1,h} = ID3_{d-1,h} if the price is already known at the moment of forecasting, and ID3^{*}_{d-1,h} = ID^{partial}_{d-1,h} otherwise,

where ID^{partial}_{d-1,h} is the volume-weighted average price of all transactions for a certain product that have been made up to the moment of forecasting. In case there were no transactions, ID^{partial}_{d-1,h} is replaced by the corresponding day-ahead price.
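To illustrate how each per-hour ARX equation of the kind written above could be estimated, the following Python sketch performs a plain least-squares fit. The function names and the column layout of the design matrix are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def fit_arx_hour(y, X):
    """OLS estimation of the ARX model for one delivery hour h.

    y : (n_days,) vector of transformed prices for hour h over the window.
    X : (n_days, p) matrix of regressors, e.g. columns for the lagged
        prices (d-1, d-2, d-7), previous-day min/max/last price, load,
        wind and solar forecasts, EUA and gas prices, and the seven
        weekday dummies (this column layout is illustrative only)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def forecast_arx_hour(beta, x_new):
    # x_new: (p,) regressor vector for the target day
    return float(x_new @ beta)
```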
Finally, we build a model for a very short-term forecasting horizon for the intraday market (6 h before the delivery), which we denote by ID. It is based on the results presented in [40,46] and assumes that the ID3 price for day d and hour h is given by:

ID3_{d,h} = \beta_{h,1} ID3_{d,h-6} + \beta_{h,2} ID3_{d-1,h} + \beta_{h,3} ID^{partial}_{d,h} + \beta_{h,4} DA_{d,h} + \beta_{h,5} DA_{d,h-6} + \beta_{h,6} L_{d,h} + \beta_{h,7} W_{d,h} + \beta_{h,8} S_{d,h} + \beta_{h,9} EUA_{d} + \beta_{h,10} NG_{d} + \sum_{i=1}^{7} \beta_{h,10+i} D_{i} + \varepsilon_{d,h},

where ID3_{d,h-6} refers to the ID3 price six hours before delivery and ID3_{d-1,h} is the price for hour h on the previous day. ID^{partial}_{d,h} is the volume-weighted average price of all transactions for a certain product that have been made up to 6 h before the delivery (the moment of forecasting). The next two variables link the intraday and day-ahead markets: DA_{d,h} refers to the already known day-ahead price for day d and hour h, while DA_{d,h-6}, together with ID3_{d,h-6}, gives the newest information about the price level difference between these two markets. The rest of the predictors are the same as in the DA and IDA models.
Note that, for better readability, we write ID3_{d,h+i} to mark the product with the delivery i hours after (or before, for i < 0) the product (d, h), instead of using the exact notation that accounts for crossing the day boundary.
Forecast Averaging
The literature shows that the accuracy of forecasts depends on the length of the calibration window used for the estimation of the model parameters. As shown in [18,19], this relationship can be non-monotonic and hence the selection of the optimal calibration window length becomes a complex task. On the other hand, the diversity of outcomes provides a strong motivation for using forecast averaging techniques, which can improve the forecasting performance of predictive models. Moreover, combining predictions can help to solve the issue of optimal calibration window selection and reduce the model-specification risk.
Interestingly, the concept of averaging forecasts across calibration windows of different lengths is relatively new in the field of electricity price forecasting. The recent articles [18,19] were the first to tackle this overlooked problem in a systematic way.
Weighted Averaged Windows
The simple arithmetic average of the selected predictions is one of the most popular forecast combination approaches. This method has proven successful in a number of different studies across the econometric and forecasting literature. In the presented setup, the averaged window (AW) scheme assumes equal weights for all forecasts estimated with the calibration windows of lengths τ ∈ T.
The findings of [18,19] demonstrate that reducing the set of window lengths used for forecast averaging, T, can improve the method's performance. The authors indicate that the average of predictions obtained with three short and three long calibration windows, in most cases, outperforms the single 'optimal' window as well as the average across all window lengths. The solution is also very efficient in terms of computational cost: it requires calibrating the model to only six different sample lengths. [19] extended the idea of simple averaging and proposed an averaging scheme called weighted averaged windows (WAW). The weights are computed using the inverse of the Mean Absolute Error (MAE) calculated over the forecast averaging window of length D_{ave} (in [19], D_{ave} = 182 days):

w^{(\tau)}_{d} = \frac{\left(MAE^{(\tau)}_{d}\right)^{-1}}{\sum_{\tau' \in T} \left(MAE^{(\tau')}_{d}\right)^{-1}}, \quad MAE^{(\tau)}_{d} = \frac{1}{24 D_{ave}} \sum_{d'=d-D_{ave}}^{d-1} \sum_{h=1}^{24} \left|\varepsilon^{(\tau)}_{d',h}\right|,

where w^{(\tau)}_{d} is the weight assigned on day d to the window of length τ and \varepsilon^{(\tau)}_{d,h} is the corresponding prediction error. Using this approach, the past performance of each window is taken into consideration and bigger weights are assigned to forecasts obtained from windows that performed well in the past. Despite the computational efficiency and satisfying performance of this method, the choice of calibration window lengths has to be made in an ad hoc manner and an inappropriate choice may have a significant impact on the forecasting performance.
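A minimal Python sketch of the inverse-MAE weighting is given below. The three-dimensional arrangement of the error array (window length × past day × hour) is an assumption made for illustration only.

```python
import numpy as np

def waw_weights(errors):
    """errors: (n_windows, D_ave, 24) array of past prediction errors for
    each calibration-window length over the averaging window."""
    mae = np.abs(errors).reshape(errors.shape[0], -1).mean(axis=1)
    inv = 1.0 / mae
    return inv / inv.sum()

def waw_forecast(forecasts_day_d, weights):
    # forecasts_day_d: (n_windows, 24) hourly predictions for day d
    return weights @ forecasts_day_d
```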
BMA
Bayesian analysis offers an alternative approach to classical forecast ensemble methods. As stated by [47], the weights based on Bayesian model averaging (BMA) can be approximated by

w^{(\tau)}_{d} = \frac{\exp\left(-\frac{1}{2}\mathrm{BIC}(\tau)\right)}{\sum_{\tau' \in T} \exp\left(-\frac{1}{2}\mathrm{BIC}(\tau')\right)}, (11)

where w^{(\tau)}_{d} is the weight corresponding to the window of length τ and BIC(τ) is the Bayesian Information Criterion. It should be noticed that, since the models are estimated with different calibration windows, BIC is computed here over the forecast averaging window, not over the estimation sample. Then

\mathrm{BIC}(\tau) = 24 D_{ave} \ln\left(\mathrm{RMSE}_{\tau}^{2}\right) + K \ln(24 D_{ave}),

where K is the number of parameters and RMSE_{\tau} is the Root Mean Squared Error of the forecasts based on a window of length τ, computed over the averaging window. Since the penalty component in BIC(τ) does not depend on the window length, τ, Equation (11) can be further simplified and becomes

w^{(\tau)}_{d} = \frac{\mathrm{RMSE}_{\tau}^{-24 D_{ave}}}{\sum_{\tau' \in T} \mathrm{RMSE}_{\tau'}^{-24 D_{ave}}}. (14)

It can be noticed that (14) is similar to the definition of WAW weights. The differences are two-fold: first, BMA weights are based on RMSE rather than MAE and, second, they are raised to the power 24 D_{ave}, which represents the length of the forecast averaging window and shrinks the weights of less accurate forecasts towards zero.
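Since raising RMSE values to the power 24 D_ave (here, 24 × 182 = 4368) immediately under- or overflows ordinary floating-point arithmetic, any direct implementation of the simplified weights should work in log space. A hedged Python sketch, assuming the same error-array layout as before:

```python
import numpy as np

def bma_weights(errors, d_ave=182):
    """Simplified BMA weights of Eq. (14): inverse RMSE raised to 24*D_ave,
    computed in log space for numerical stability."""
    rmse = np.sqrt((errors.reshape(errors.shape[0], -1) ** 2).mean(axis=1))
    log_w = -24 * d_ave * np.log(rmse)
    log_w -= log_w.max()        # shift leaves normalized weights unchanged
    w = np.exp(log_w)
    return w / w.sum()
```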
PCA Averaging
The majority of forecast combination approaches discussed in the literature either use a small number of predictions or emphasise the need for prediction selection. Alternatively, one can utilize the information included in a big panel of forecasts by using principal component analysis (PCA). The idea was proposed by [14], who applied static factors to combine forecasts coming from different models. Similarly, [36] proposed the factor quantile regression averaging (FQRA) approach to construct prediction intervals using a panel of point forecasts. In both articles, PCA averaging is applied to relatively small and diversified panels of forecasts based on 27-66 individual models.
In the presented setup, the panel of forecasts consists of 673 individual predictions acquired with different calibration windows. Since the growth of the window size, τ, leads to more stable parameter estimates, the forecasts obtained with long windows, for example τ = 721 and τ = 728, are almost identical. This strong correlation, which is close to collinearity, impedes the classical, regression-based methods of forecast averaging. To avoid such problems, we propose a novel, fully-automated method for averaging forecasts, based on PCA.
In order to utilize all information from the averaging window, the data and predictions are treated as time series, with the time index t = 24(d − 1) + h representing consecutive hours. Similarly to the WAW approach, we use the information from the D_{ave} previous days (in our setup, D_{ave} = 182). Additionally, the data is extended by the 24 forecasts of hourly prices for day d. Therefore, the final averaging window consists of 24 D_{ave} + 24 observations. Let us denote by \hat{P}_{t,\tau} the prediction of the variable P_t based on a calibration window of length τ. The data set \{\hat{P}_{t,\tau}\} can be interpreted as a panel, with the first dimension representing time and the second dimension describing the size of the calibration window. The averaging algorithm consists of the following steps:
1. For each time period t in the averaging window, estimate the mean (\hat{\mu}_t) and standard deviation (\hat{\sigma}_t) of the individual forecasts across different τ;
2. Standardize the predictions and the predicted variable: \tilde{P}_{t,\tau} = (\hat{P}_{t,\tau} - \hat{\mu}_t)/\hat{\sigma}_t and \tilde{P}_t = (P_t - \hat{\mu}_t)/\hat{\sigma}_t;
3. Estimate the first k = 1, ..., K principal components, PC_{t,k}, of the panel \{\tilde{P}_{t,\tau}\}, using the method described by [14,22]. Notice that the factors have dimension (24 D_{ave} + 24) × 1, as they include the information of the price forecasts on day d. Based on the number of principal components, K, used in the model, we denote the model by PCA(K);
4. Run a regression using the observations from the averaging window, without the last 24 observations:
\tilde{P}_t = \sum_{k=1}^{K} \gamma_k PC_{t,k} + \eta_t; (17)
5. Compute the prediction of the normalized dependent variable on day d at hour h, \hat{\tilde{P}}_t = \sum_{k=1}^{K} \hat{\gamma}_k PC_{t,k}, and transform it into its original units: \hat{P}_{d,h} = \hat{\mu}_t + \hat{\sigma}_t \hat{\tilde{P}}_t, with t = 24(d − 1) + h.
The role of standardization should be emphasised here. The mean, which changes between days, can be interpreted as the first common factor affecting the panel of forecasts. In particular, it represents the forecast based on long calibration windows. The predictions for big τ are, by construction, very similar to each other and have the largest input to the mean. On the other hand, the impact of long windows on the demeaned panel is balanced by the larger (in absolute terms) and more variable deviations from the mean for short calibration windows.
The standard deviation represents the forecast uncertainty and increases when short and long windows give different predictions. If the original data were used to estimate the principal components, the days with the highest risk would have the largest input to the panel variance and hence would strongly impact the factor estimates. Thanks to standardization, all days are equally represented by the common factors and the outcomes are stable, even when outliers are included in the sample used for forecast averaging. On the other hand, in future work one could try to use the information about the standard deviation and include the variance in a model for probabilistic forecasting, for example to construct prediction intervals with quantile regression or its generalizations [48,49]. It should be noticed here that the described algorithm is conditioned on the number of factors, K, used in the regression (17). To make the choice of K data-driven, we use the Bayesian information criterion (BIC):

\mathrm{BIC}(K) = 24 D_{ave} \ln\left(\hat{\sigma}^{2}_{K}\right) + K \ln(24 D_{ave}),

where \hat{\sigma}^{2}_{K} is the estimated residual variance from the model (17) with K components. For each day d, the optimal \hat{K} is chosen, which minimizes the corresponding BIC.
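The five steps above, together with the BIC-based choice of K, can be condensed into a single routine. The sketch below obtains the factor estimates from an SVD of the standardized panel; the exact estimator of [14,22] may differ in normalization, so this should be read as an illustrative rendering of the algorithm rather than a faithful reproduction.

```python
import numpy as np

def pca_average(panel, actual, k_max=4, use_bic=True):
    """PCA forecast combination over one averaging window.

    panel : (T, n_windows) forecasts for T = 24*D_ave + 24 hours, the last
            24 rows being the forecasts for the target day d.
    actual: (T - 24,) realised prices for the first 24*D_ave hours."""
    mu = panel.mean(axis=1, keepdims=True)       # step 1: moments across tau
    sd = panel.std(axis=1, keepdims=True)
    z = (panel - mu) / sd                        # step 2: standardise
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    pcs = u[:, :k_max] * s[:k_max]               # step 3: factor estimates
    y = (actual - mu[:-24, 0]) / sd[:-24, 0]     # standardised target
    n = len(y)
    candidates = []
    for k in range(1, k_max + 1):                # step 4: drop target day
        X = pcs[:-24, :k]
        gamma, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = ((y - X @ gamma) ** 2).mean()
        bic = n * np.log(sigma2) + k * np.log(n)
        candidates.append((bic, k, gamma))
    _, k, gamma = min(candidates) if use_bic else candidates[-1]
    z_hat = pcs[-24:, :k] @ gamma                # step 5: day-d prediction
    return mu[-24:, 0] + sd[-24:, 0] * z_hat     # back to original units
```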
Forecast Evaluation
We use the Mean Absolute Error (MAE) for the full out-of-sample test period of D = 778 days (i.e., 26.06.2017 to 15.08.2019, see Figure 1) as the main evaluation criterion. In the paper, two measures are considered:

\mathrm{MAE}^{(i)}_{d} = \frac{1}{24} \sum_{h=1}^{24} \left|\varepsilon^{(i)}_{d,h}\right| \quad \text{and} \quad \mathrm{MAE}^{(i)} = \frac{1}{D} \sum_{d=1}^{D} \mathrm{MAE}^{(i)}_{d},

where \varepsilon^{(i)}_{d,h} is the prediction error at day d and hour h based on the averaging method i, or i = τ for models without averaging. The first measure, MAE^{(i)}_d, describes the forecast accuracy for a given day, d, and is later used for the statistical comparison of individual approaches, while MAE^{(i)} describes the overall performance of method (i). Recall that the MAE is the most commonly used measure for evaluating forecast accuracy. In the case of electricity markets, it reflects the average deviation of the revenue from selling 1 MWh from its expected level. Given a number of results, it is hard to properly rank the models' accuracy. To solve this issue, following [39,44], we introduce the mean percentage deviation from the best (m.p.d.f.b.) benchmark, inspired by the m.d.f.b. measure used in [50,51] for comparing models. The m.p.d.f.b. measure for model i compares the model's performance to the best benchmark (i.e., the best performing calibration window length for each of the models j = DA, IDA, ID):

\mathrm{m.p.d.f.b.}^{(i)} = \frac{1}{3} \sum_{j \in \{DA, IDA, ID\}} \frac{\mathrm{MAE}^{(i)}_{j} - \mathrm{MAE}^{(best)}_{j}}{\mathrm{MAE}^{(best)}_{j}} \times 100\%.

The obtained MAE values can be used to provide a ranking of models. Unfortunately, they do not allow one to draw statistically significant conclusions on the outperformance of the forecasts of one model by those of another. Therefore, the conditional predictive ability (CPA) test of Giacomini and White [52] is used to compare competitive outcomes. Note that the CPA test can be viewed as a generalization of the popular Diebold and Mariano [53] test of unconditional predictive ability. Here, the test statistic is computed using the vector of average daily MAE_d values:

\Delta_{X,Y,d} = \mathrm{MAE}^{(X)}_{d} - \mathrm{MAE}^{(Y)}_{d},

where MAE^{(i)}_d is the mean absolute error of the forecast obtained with model i on day d. For each pair of window sets and each model, we compute the p-value of the CPA test with the null H_0: φ = 0 in the regression [52]:

\Delta_{X,Y,d} = \varphi' \mathbb{X}_{d-1} + \varepsilon_d,

where \mathbb{X}_{d-1} contains elements from the information set on day d − 1, i.e., a constant and \Delta_{X,Y,d-1}.
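For completeness, a simplified Python sketch of the daily MAE computation and of the CPA regression is shown below. It uses a homoskedastic covariance estimate in the Wald test, whereas the test of [52] employs a robust estimator, so the resulting p-values are only approximate.

```python
import numpy as np
from scipy import stats

def daily_mae(errors):
    """errors: (D, 24) array of prediction errors -> MAE_d for each day."""
    return np.abs(errors).mean(axis=1)

def cpa_test(mae_x, mae_y):
    """Regress Delta_d on [1, Delta_{d-1}] and Wald-test phi = 0, mirroring
    the regression form of the CPA test (simplified, homoskedastic)."""
    delta = mae_x - mae_y
    y = delta[1:]
    X = np.column_stack([np.ones(len(y)), delta[:-1]])
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ phi
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    wald = float(phi @ np.linalg.inv(cov) @ phi)
    return wald, stats.chi2.sf(wald, df=X.shape[1])
```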
Point Forecast Results
As mentioned earlier, in this paper we consider training samples of lengths ranging from 56 to 728 days. Since the same model calibrated to samples of different lengths produces differing forecasts, this gives us 673 different 'sub-models' for each of the models. The forecasting performance is evaluated separately for each calibration window, and the results are shown in Figure 5, in which each dot represents the MAE of forecasts obtained by calibrating the model to a sample of a certain length. Interestingly, the curves for the DA and IDA models are not monotonic: contrary to what one may expect, the forecasting error does not strictly fall with the increase of the calibration sample length. This behavior of the MAE may suggest that the models are slightly misspecified due to, for example, the assumed linearity, time-invariant parameters or omitted variables. In such a case, the parameter estimates are inconsistent and do not converge to their true values. On the other hand, the curve for the ID model is descending. The forecasting accuracy of this model increases with the length of the calibration window, leaving little room for improvement by averaging techniques.
Another conclusion that can be drawn is that none of the calibration window lengths would be the 'optimal' choice for all models: the best-performing calibration window for the DA model is 95 days, whereas for IDA the best forecasting performance would be achieved when calibrating the model on a 438-day sample, and for ID the longest calibration samples perform best. This diversified pattern of behavior shows that a more robust way of selecting the length of the calibration window is needed. Table 2 presents the MAE and m.p.d.f.b. results for forecasts obtained with the shortest (56-day), the one-year (364-day) and the longest (728-day) calibration windows and compares them against a benchmark: the best (ex-post optimal) calibration window length. Next, the outcomes of different averaging techniques are reported, starting with AW/WAW and BMA for all τ ∈ {56, 57, ..., 728}, AW/WAW and BMA for six selected window sizes τ ∈ {56, 84, 112, 714, 721, 728}, as in [18][19][20], and PCA averaging with 1 to 4 factors. Finally, the outcomes of the PCA(BIC) scheme are presented, in which the number of components is estimated using the BIC information criterion. The results are displayed in absolute terms (MAE) and relative to the benchmark (computed as a percentage difference, %chng).
Averaging Results
The presented measures are computed with data ranging from 26 June 2017 to 15 August 2019, i.e., a 778-day-long out-of-sample period. The three considered models, i.e., DA, IDA and ID, are evaluated separately and their outcomes are shown in consecutive columns. Finally, the average performance of the analyzed forecasting schemes is described by the m.p.d.f.b. measure. The results lead to several important conclusions:
•
In the case of the DA and IDA models, the averaged forecasts are more accurate than any of the individual predictions, including those based on the best ex-post calibration window length. The gains reach up to 3.841% and 2.097% for DA and IDA, respectively. At the same time, none of the combined predictions provides results better than the benchmark for the ID model. This confirms our expectation that averaging across different calibration window lengths may not lead to any improvement of forecast accuracy in the case of well-specified models, whose parameters can be consistently estimated.
•
When the two similar averaging schemes, AW and WAW, are compared, the results indicate that WAW, the weighted extension originally proposed by [19], outperforms the simple arithmetic mean. The superiority of WAW is observed for all models and both ranges of window lengths, T.
•
The outcomes for the AW/WAW averaging schemes show that the pre-selection of six window lengths improves the forecast accuracy only in the case of the misspecified models: DA and IDA. At the same time, the results for ID suggest that, for well-specified models, the ad hoc reduction of the τ dimension substantially increases the MAE measure.
• The BMA scheme performs much worse in terms of MAE for the DA and IDA models compared to the non-Bayesian approaches. The situation changes for the ID model, for which BMA is the most accurate among the forecast averaging schemes but still worse than the best individual model.
•
The PCA forecast averaging approaches lead to more accurate predictions of DA and IDA than any other combining scheme. They reduce the MAE, relative to the benchmark, by 3.841% and 2.097%, respectively. For the ID model, all averaging schemes perform worse than the 2-year calibration window; still, PCA exhibits the smallest forecast error among the presented methods.
•
The PCA-based methods perform similarly, regardless of the number of factors used for forecast averaging. One can observe small differences between markets, which indicate that PCA(4) is on average the most efficient.
• The BIC information criterion is shown to be helpful in selecting the number of components. Although it could not beat the best PCA specification for the individual models (DA, IDA and ID), it works very well on average.
Note that in the presented setup, the forecast averaging weights are estimated using the predictions from the D_{ave} = 182 previous days. Although it is out of the scope of this paper, we conducted a limited study to analyze how the reduction of D_{ave} to 60 days affects the results. It turned out that the choice of D_{ave} has a minor impact on the outcomes and does not alter the major conclusions. It seems that the relative accuracy of PCA increases for longer forecast averaging windows, as more information improves the estimation of the principal components.
Finally, the results are evaluated with the Giacomini-White test [52] for the norm of order one. The outcomes are presented in Figure 6, in which a non-black square indicates that the forecasts of a model on the X-axis are statistically more accurate than the forecasts of a model on the Y-axis. The results confirm the previous findings and show that the PCA schemes outperform other methods when the day-ahead forecasts are considered. This outcome is supported by two observations: • PCA(4) and PCA(1) are significantly the most accurate for the DA and IDA models, respectively. • PCA(1) is not statistically worse than any of the predictions, apart from some other PCA specifications.
When the ID model is considered, the outcomes show that the approaches based on the longest and the best calibration window lengths provide forecasts of the same accuracy, which outperform almost all other prediction methods. Moreover, for this market:
• Forecasts obtained with PCA(3)
Finally, it can be noticed that PCA(BIC) is rarely outperformed by other PCA specifications. This result confirms that BIC is useful in determining the optimal number of components used for averaging and hence can be an attractive alternative to the ad hoc choice of K. Figure 6. Results of the conditional predictive ability (CPA) test [52] for forecasts of all considered models. We use a heat map to indicate the range of the p-values: the closer they are to zero (→ dark green), the more significant the difference between the forecasts of a model on the X-axis (better) and the forecasts of a model on the Y-axis (worse).
Conclusions
In this paper, we model and predict hourly electricity prices in the German market. We consider three forecasting setups: a day-ahead forecast of spot prices, a day-ahead forecast of intraday prices and a short-term, 6-h-ahead prediction of the ID3 index. The analyzed problems reflect the decision processes of market participants and could help in optimizing their selling/buying strategies, as in [1,2].
We propose a novel approach for calculating predictions of electricity prices, which utilizes forecasts based on models calibrated to windows of different lengths. We extend the idea introduced in [18,19], which focuses on an ad hoc selection of the best set of calibration windows. In this study, we propose a principal component analysis (PCA) method for forecast averaging, which enables the automatic aggregation of the information included in a large panel of predictions. The results indicate that the PCA averaging scheme can, on average, reduce the MAE measure of forecast accuracy relative to the best ex-post calibration window length. It also outperforms other forecast averaging approaches, such as AW, WAW and BMA.
Furthermore, we show that the DA, IDA and ID models have different characteristics, which correlate with the forecast horizon. The performance of day-ahead forecasts of spot and intraday prices does not improve monotonically with the growth of the calibration window length, whereas the short-term predictions of ID3 become more accurate for the longest estimation windows. This difference impacts the potential gains from forecast averaging. For the ID model, none of the proposed methods could outperform the forecasts based on the longest calibration window. At the same time, averaging, and in particular PCA forecast combination, results in a significant decrease of the MAE for the DA and IDA models. The forecast accuracy improves, relative to the benchmark, by almost 4% for the PCA(4) scheme and the DA model. In the case of the IDA model, the error reduction reaches 2% for the PCA(1) approach.
Finally, the results indicate that the two PCA forecast averaging methods, with one or four components, provide the most accurate predictions. PCA(1) is statistically not worse than any of the AW or WAW forecast combination schemes and outperforms all approaches for IDA. At the same time, PCA(4) has on average the lowest MAE, as measured by the m.p.d.f.b. (mean percentage deviation from the best). The PCA method with the number of components selected by the BIC information criterion provides forecasts which are only slightly worse than those obtained with PCA(4) and PCA(1). Hence, it can be viewed as an interesting alternative to the ad hoc selection of the number of components. We believe that these results encourage further research on PCA forecast averaging, which could be extended to interval and probabilistic forecasting and applied to other commodity markets. | 2020-04-27T20:44:55.209Z | 2020-02-04T00:00:00.000 | {
"year": 2020,
"sha1": "5dd194b047cc025cfa6c87b78a9d20d6c734d118",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/14/3530/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "52c5a6dc2d2ec63efbbc6a4a38fa786034005800",
"s2fieldsofstudy": [
"Computer Science",
"Economics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
234858353 | pes2o/s2orc | v3-fos-license | Innate immunity impacts social-cognitive functioning in people with multiple sclerosis and healthy individuals: Implications for IL-1ra and urinary immune markers
Social-cognitive difficulties can negatively impact interpersonal communication, shared social experience, and meaningful relationships. This pilot investigation examined the relationship between social-cognitive functioning and inflammatory markers in people with multiple sclerosis (MS) and demographically-matched healthy individuals. Additionally, we compared the immune marker profile in serum and urine-matched samples. Social cognitive functioning was objectively assessed using The Awareness of Social Inference Test – Short (TASIT-S) and subjectively assessed using self-reports of abilities in emotion recognition, emotional empathy, and cognitive theory of mind. In people with MS and healthy individuals, there were moderate-to-large negative relationships between pro-inflammatory biomarkers (serum IL-1β, IL-17, TNF-α, IP-10, MIP-1α, and urine IP-10, MIP-1β) of the innate immune system and social-cognitive functioning. In MS, a higher serum concentration of the anti-inflammatory marker IL-1ra was associated with better social-cognitive functioning (i.e., self-reported emotional empathy and TASIT-S sarcasm detection performance). However, there were mixed findings for anti-inflammatory serum markers IL-4 and IL-10. Overall, our findings indicate a relationship between pro-inflammatory cytokines and social-cognitive abilities. Future studies may provide greater insight into biologically-derived inflammatory processes, sickness behaviour, and their connection with social cognition.
Introduction
Inflammation has broad-ranging effects across the body and can disrupt brain functioning. Biological markers of the innate immune system and sickness behaviour have been recently implicated in poor social cognition, including reduced mentalising of another's affective state or psychological perspective, reduced social interaction, and increased social disconnectedness (Bollen et al., 2017;Hennessy et al., 2014;Moieni and Eisenberger, 2018). Multiple sclerosis (MS) is an immune-mediated neuroinflammatory and neurodegenerative disease (Dendrou et al., 2015) for which poor social processing and difficulties in social functioning are a common outcome (Batista et al., 2017;Chalah and Ayache, 2017). However, despite the involvement of dysregulated inflammatory processes in MS (Kothur et al., 2016;Lim et al., 2017), whether these biomarkers play a role in poor social functioning is not understood.
Social cognition concerns the ability to perceive, interpret, and process interpersonal cues and socially relevant stimuli that enable a person to understand another person's intentions, thoughts, and feelings (McDonald et al., 2013, 2018). Difficulties in social-cognitive functioning are reported in around 30-40% of people with MS (pwMS) (Genova et al., 2016) and may include reduced facial emotion recognition (Henry et al., 2011;Lenne et al., 2014) and Theory of Mind (ToM; the ability to accurately predict and interpret another's mental state that may differ from one's own beliefs, emotions, and desires) (Cotter et al., 2016;Pöttgen et al., 2013) abilities. Poor social-cognitive functioning negatively impacts communication, shared experience, and the ability to form meaningful relationships, which may contribute to social isolation and higher caregiver burden (Adams et al., 2019;Caplan et al., 2015). However, the identification of possible prognostic markers of social-cognitive difficulties may assist in identifying pwMS who may benefit from more extensive neuropsychological testing.
Research has shown that inflammatory processes and sickness behaviour can negatively affect social experience. Three prior double-blind placebo-controlled randomised crossover design studies using the International Affective Picture System (Bradley and Lang, 2007) and the Reading the Mind in the Eyes Test (RMET) (Baron-Cohen et al., 2001) found endotoxin-induced inflammation reduced the ability to recall emotional faces and the ability to mentalise the affective and psychological states in others (Bollen et al., 2017;Grigoleit et al., 2011;Moieni et al., 2015). However, in another double-blind placebo-controlled crossover design fMRI study using the RMET, no such effects on social-cognitive performance were found (Kullmann et al., 2014). A key distinction between these studies was the differing dose used to trigger the inflammatory response. Changes in behavioural and physiological functioning from endotoxic processes can be dose-dependent (Grigoleit et al., 2011). For example, Kullmann et al. (2014) used 0.4 ng/kg of body weight to induce low-grade inflammation, while Moieni et al. (2015) used 0.8 ng/kg of body weight for a more robust inflammatory response. Thus, it remains possible that inflammatory markers that typically modulate responses to illnesses selectively affect social-cognitive processes.
The innate immune mediators, particularly interferon-gamma-inducible protein-10 (IP-10), monocyte chemoattractant protein-1 (MCP-1), macrophage inflammatory protein-1 beta (MIP-1β), and granulocyte colony-stimulating factor (G-CSF), have been implicated in MS (Carrieri et al., 1998;Cheng and Chen, 2014;Opdenakker and Van Damme, 2011;Rust et al., 2016). Dysregulation of innate immunity (such as that which would occur during an MS relapse) can result in increased concentrations of these circulating inflammatory mediators in the brain, cerebrospinal fluid, and blood (Broux et al., 2012;Cheng and Chen, 2014). For instance, IP-10 and G-CSF can markedly increase during disease activity in relapsing-remitting MS, which may exacerbate inflammatory processes, demyelination, and lesion development (Balashov et al., 1999;Rust et al., 2016;Scarpini et al., 2002). Dysregulated innate immune responses have been reported to affect cognitive functioning in other neurological disorders such as schizophrenia, Alzheimer's disease, and autism (Novellino et al., 2020); however, their relationship with social-cognitive abilities in MS has not yet been examined.
The present study was a pilot investigation to delineate the possible relationship between the various serum- and urine-based inflammatory markers known to be implicated in MS and social-cognitive functioning. We first hypothesised that, in pwMS, there would be a negative association between pro-inflammatory markers (i.e., IL-1β, IL-17, IFN-γ, TNF-α, IP-10, MIP-1β and MCP-1) and social-cognitive abilities. Second, we hypothesised that, in pwMS, there would be a positive association between anti-inflammatory markers (i.e., IL-1ra, IL-4 and IL-10) and social-cognitive abilities. We compared these relationships to those found in healthy individuals without MS and examined whether urine markers were associated with their respective serum markers.
Participants
Participants comprised 20 pwMS (17 relapsing-remitting, 1 secondary-progressive, 2 primary-progressive) and 20 healthy control (HC) individuals without MS, demographically-matched for sex, age, and education (Table 1). PwMS were diagnosed according to the McDonald criteria (Novellino et al., 2020;Thompson et al., 2018). MS phenotypes were not analysed separately, as this study was interested in the level of inflammation found rather than in understanding the unique pathological features of the disease types. Participants with MS completed the Disease Steps Scale (Hohol et al., 1999), which indicated 'mild' to 'moderate' levels of disability in the present sample (Mdn = 1.50, IQR = 1.00-2.75). Exclusion criteria included any psychotic, bipolar, or related disorder; a history of brain injury or other neurological illness such as stroke or epilepsy; a history of alcohol or drug abuse; inability to speak and read English fluently; uncorrected visual difficulties affecting task completion; and pregnancy. Smoking was not an exclusion criterion in this study; we had three participants, two MS and one healthy participant, who indicated they were smokers (between 10 and 15 cigarettes per day). Additional exclusion criteria included the use of steroid medications within the past two months or MS disease relapse within the past 14 days for the MS participants, as assessed by the Relapse Status Checklist (Brown et al., 2006), to reduce the potential confounding effect of the disease-related inflammatory dysregulation that occurs during an MS relapse. All aspects of this study were approved by the Tasmania Health and Medical Research Ethics Committee (H00156630) and adhered to the World Medical Association's Declaration of Helsinki.
Data and sample collection
Following recruitment (by invitation letter and referral), participants completed a questionnaire containing demographic and disease-related questions, and standardised questionnaires to assess self-reported social-cognitive functioning and mood, within seven days before attending a face-to-face testing session. The self-report social-cognitive questionnaires included two subscales of The Social and Emotional Questionnaire (SEQ) (Bramham et al., 2009), Emotion Recognition and Emotional Empathy, and the Perspective Taking subscale of the Interpersonal Reactivity Index (IRI) (Davis, 1980). All objective assessment occurred in the morning (start time between 9 and 10 am) and in a temperature-controlled room (24 °C) to control for time-of-day and temperature effects on performance and biological markers. At the testing session, participants were first interviewed to ascertain their disease history and characteristics before completing a battery of neuropsychological tests that included tests of general cognitive functioning (to be reported elsewhere) and a social-cognitive test, The Awareness of Social Inference Test - Short (TASIT-S) (Honan et al., 2016). TASIT-S was administered approximately 50 min into the testing session. Following testing, 30 mL of venous blood was extracted from the participant by a qualified phlebotomist. Mid-stream urine was also collected either during or immediately after testing. The venous blood was processed to extract the serum, which was then stored at approximately −80 °C until transferred for analysis. Prior to biochemical analysis, blinding measures such as unsorting and re-labelling were undertaken (Fig. 1).
Social cognitive and neuropsychological tests
TASIT-S (Honan et al., 2016) is an objectively assessed social cognition task comprised of three parts. Part 1 is a dynamic Emotional Evaluation Test (EET) containing 10 short video vignettes of professional actors depicting five basic facial emotions (happy, sad, anger, fear, disgust) and a neutral, no-specific-emotion category. Part 2 Social Inference Minimal is a task that examines the comprehension of conversational meanings from paralinguistic cues (tone of voice, facial expression, and gesture). It comprises nine video vignettes depicting four sincere and five sarcastic (tapping into ToM) social exchanges. Each vignette requires the participant to answer questions that assess the participant's understanding of an actor's belief (what they are doing), meaning (what they are trying to say), intention (what they are thinking), and feeling (how they are feeling). Part 3 Social Inference Enriched is another social-cognitive task similar to Part 2, but with additional contextual information to aid interpretation. It is comprised of nine video vignettes depicting four blatant lies and five sarcastic social exchanges.
Self-reported social-cognitive abilities were assessed using two 5-item subscales of the SEQ (Bramham et al., 2009), Emotion Recognition and Emotional Empathy. In the SEQ, statements are rated on a 5-point scale from 1 = strongly disagree to 5 = strongly agree. Self-reported cognitive ToM was assessed using the 7-item Perspective Taking subscale from the IRI (Davis, 1980). In the IRI, statements are rated on a 5-point scale from A = does not describe me well to E = describes me well. Higher scores on the SEQ and IRI are indicative of higher social-cognitive abilities.
The Hospital Anxiety and Depression Scale (HADS) (Zigmond and Snaith, 1983) was administered to profile self-reported anxiety and depression levels over the preceding week. The HADS comprises 14 items, with statements rated on a 4-point scale from 0 = not at all to 3 = most of the time. Scores are summed for the anxiety and depression subscales, with higher scores indicative of higher symptomatology.
Profiling of immunological markers
Cytokines, chemokines, and growth and colony-stimulating factors were concurrently quantified according to the manufacturer's protocols using commercial Human Cytokine 27-plex magnetic bead-based immunoassay kits (Bio-Rad, CA, USA). In accordance with current practice, median fluorescence intensities (FI) were used to analyse the immune profile data. Due to its increased sensitivity, this technique has higher statistical power to detect variance compared to the use of absolute concentration values. The urine markers are expressed as FI value/creatinine (Allen et al., 2004). Notably, serum concentrations of IFN-γ, GM-CSF and VEGF, and urine concentrations of IL-1β, MIP-1α, and IL-4, were minimal or undetectable and thus were not analysed or reported.
Statistical analysis
Statistical analyses were conducted using IBM SPSS Statistics for Windows, Version 26 (IBM Corporation, 2019). The normality of the variables was checked using histograms, visual inspection of the residual/Q-Q plots, and Shapiro-Wilk tests. Homogeneity of variance was examined using Levene's test, with the equal-variances-not-assumed results interpreted as required. Before the analyses, the data were inspected case-by-case for outliers. Subsequently, one MS (urine IP-10) and two HC (1 urine IL-1ra; 1 urine TNF-α, G-CSF, and GM-CSF) cases were removed. Independent samples t-tests assessed between-group differences on the social-cognitive measures. To normalise the biomarker data, logarithmic and square root transformations were applied; however, the variables could not be normalised, so Spearman's ρ and Mann-Whitney U analyses (with Hodges-Lehmann 95% confidence intervals of the difference) were used. Due to a priori hypotheses about the direction of between-group effects and correlations, p-values < .05 were deemed statistically significant. Effect sizes that were at least moderate in size, per the guidelines of Cohen (2013), were reported; this included correlations >0.30 and Cohen's d > 0.50.
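For readers wishing to reproduce the core non-parametric analyses, the following Python (scipy) sketch illustrates the two tests on hypothetical biomarker and social-cognition arrays; the variable names and data layout are ours, not taken from the study.

```python
import numpy as np
from scipy import stats

def compare_and_correlate(marker_ms, marker_hc, score_ms):
    """marker_*: biomarker fluorescence intensities per participant;
    score_ms: social-cognition scores for the MS group (same order)."""
    # Between-group difference in biomarker level (distribution-free test)
    u_stat, p_u = stats.mannwhitneyu(marker_ms, marker_hc,
                                     alternative="two-sided")
    # Monotonic biomarker-cognition association within the MS group
    rho, p_rho = stats.spearmanr(marker_ms, score_ms)
    return {"U": u_stat, "p_U": p_u, "rho": rho, "p_rho": p_rho}
```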
Figures are provided for statistically significant biomarker comparisons and correlations, while tables containing complete descriptive data and inferential statistics for the variables are included in the supplemental information (S1 to S4).
Demographic and between-group differences on TASIT-S, SEQ, IRI, and HADS
Descriptive and inferential statistics for the questionnaire and social-cognitive measures are shown in Table 1. There were no significant between-group differences in sex, age, and education. On TASIT-S, pwMS had poorer Part 1 EET scores and Part 2 Sarcasm scores (moderate-to-large effects). Although significance was not met for Part 3 Sarcasm, a moderate effect for poorer scores in pwMS was present. On the HADS, pwMS self-reported higher depression scores (moderate-to-large effect) and higher anxiety scores (a moderate effect).
Objective social-cognitive measures
For TASIT-S Part 1 EET, no significant correlations were found with pro-inflammatory markers from serum or urine in pwMS or HCs. For TASIT-S Part 2 and 3 Sarcasm, urine IP-10 showed large negative correlations in pwMS, but not in HCs (Fig. 3A-B). Conversely, urine MIP-1β and PDGF-bb showed moderate-to-large negative correlations in HCs, but not in pwMS (Fig. 3C-F), which also applied for urine VEGF and TASIT-S Part 3 Sarcasm (Fig. 3G). Further, serum IL-1β and G-CSF showed large negative correlations with TASIT-S Part 3 Lies in HCs, but not in pwMS ( Fig. 3H-I), while serum MCP-1 showed a moderate-to-large positive correlation with TASIT-S Part 2 Sincerity in HCs (Fig. 3J).
Subjective social-cognitive measures
For the SEQ subscales, only serum TNF-α showed large negative correlations with Emotion Recognition and Emotional Empathy in pwMS, but not in HCs (Fig. 4A & B). There was also a large positive correlation found between urine G-CSF and Emotional Empathy in pwMS that was not present in HCs (Fig. 4C). For the IRI subscale, only urine IFNγ showed a moderate-to-large positive correlation with Perspective Taking in HCs, but not in pwMS (Fig. 4D). Further, serum IP-10 showed a moderate-to-large negative correlation with Perspective Taking in pwMS, which was not present in HCs (Fig. 4E). Conversely, urine IP-10 showed a large negative correlation with Perspective Taking in HCs, but not in pwMS (Fig. 4F).
Objective social-cognitive measures
On TASIT-S in pwMS, there was a large positive correlation between Part 3 Sarcasm and serum IL-1ra (Fig. 5A) and a moderate-to-large negative correlation between Part 3 Lies and serum IL-4 (Fig. 5B). In HCs, there were no significant correlations between TASIT-S and anti-inflammatory markers from serum and urine.
For TASIT-S in pwMS, a positive correlation that was moderate in size, but not statistically significant, was found between Part 2 Sarcasm and serum IL-1ra (MS ρ = 0.
Further alternative exploratory analyses were conducted to examine whether sex, age, smoking, depression, and anxiety could explain any additional variance in the above relationships. The strength of all relationships remained similar with these variables added as covariates, with the maximum change in correlation size being Δr = 0.05 (or 25% of variance). A chi-square test of independence showed that there was no significant association between group and time of year tested, χ²(3, N =
Discussion
This study investigated the relationship between various inflammatory markers and social-cognitive functioning in pwMS and demographically-matched healthy individuals. Our results highlight a novel way in which the innate immune system may be linked to disruption of social-cognitive functioning. Specifically, higher levels of serum IL-1β, IL-17, TNF-α, IP-10, and MIP-1α, and of urine IP-10 and MIP-1β, were associated with poorer social-cognitive abilities relating to the detection of blatant lies and sarcasm in conversation, and with poorer self-reported emotion recognition, emotional empathy, and perspective taking abilities. These immune markers are known as pro-inflammatory mediators in the context of immune dysregulation in MS or sickness behaviour in healthy individuals. Surprisingly, two cytokines most associated with MS, serum IFN-γ and IL-17, were not higher in pwMS than HCs, suggesting their pathological dysregulation may be limited to the brain and not the peripheral regions of the body (Stromnes et al., 2008;van Langelaar et al., 2018). However, we found a negative correlation between serum IL-1ra and IL-17, indicating inhibition of IL-17 production via the IL-1-IL-17 signalling axis by IL-1ra (Table 2) (Nakae et al., 2003). This possible explanation for the attenuated IL-17 is supported by findings of prior alternative research that serum concentrations of IL-17 and IFN-γ, cytokines associated with cell-mediated innate immunity (helper T cells of the Th1 and Th17 axes), can be similar in people with relapsing-remitting MS (the type of MS characterising the majority of the current sample) and healthy individuals (Arellano et al., 2017;Ghaffari et al., 2017). Our result reflects a broader implication and link with immune regulators such as the aryl hydrocarbon receptor and interactions with the kynurenine pathway in modulating MS progression, relevant to our MS cohort (Bessede et al., 2014;Yan et al., 2010). It would be of interest to examine how the kynurenine pathway fits into our current hypothesis considering its role in both mood and immune regulation in MS (Tan et al., 2021).

Note. Spearman's ρ correlations for serum versus urine-derived biomarkers, overall and stratified by multiple sclerosis and healthy matched control participants. Abbreviations: G-CSF, granulocyte colony-stimulating factor; IL-1β, interleukin 1 beta; IL-1ra, interleukin 1 receptor antagonist; IL-4, interleukin 4; IL-10, interleukin 10; IL-17, interleukin 17; IP-10, interferon inducible protein-10; MCP-1(MCAF), monocyte chemotactic protein-1 and activating factor; MIP-1β, macrophage inflammatory protein-1 beta; PDGF-bb, platelet-derived growth factor-two b subunits; RANTES, regulated upon activation normal T-cell expressed and secreted; TNF-α, tumour necrosis factor-alpha. Intra-correlations are not shown for pro-inflammatory markers IL-1β, IFN-γ, GM-CSF, MIP-1α, VEGF and anti-inflammatory IL-4 due to minimal or undetectable concentration levels. *Denotes significant correlation, p < .05. a Denotes the removal of 1 outlier case before analyses.
A novel aspect of this study was collecting paired serum and urine samples, which allowed us to investigate unique immune fingerprints in compartmentalised biological systems. Interestingly, we found that healthy individuals excreted a greater concentration of inflammatory markers in urine than pwMS, whereas the corresponding serum levels showed the opposite trend (Fig. 2A-F). Potentially, this discrepancy may reflect differential urinary filtration processes (An and Gao, 2015), biomarker-specific fluctuation (Schenk et al., 2019), or epitopic remnant accumulation, as theorised in autoimmune disease (Opdenakker et al., 2020), or it may be an artefact of poor urine quality resulting from bladder dysfunction and reduced fluid intake (Katsavos and Anagnostouli, 2013). However, we controlled for creatinine levels to minimise potential confounding variables such as fluid consumption and age. Another possible explanation is based on the generalised assumption that increased serum metabolite concentration will result in increased urine metabolite excretion (Ritscher et al., 2020). While this may hold true for healthy individuals, in the context of MS or another disease state, a possible incapacity to excrete these inflammatory mediators as efficiently as healthy individuals may have produced these findings. Despite blood-based immune profiling being an established and widely used technique, the addition of a urinary immune profile may provide a feasible and non-invasive alternative technique to fully extrapolate the effects of inflammation in pwMS (Prasad et al., 2016). This approach of using urine samples may provide a new avenue for enabling longitudinal sampling in pwMS with greater disability levels.
In both pwMS and healthy individuals, we found higher concentrations of numerous pro-inflammatory biomarkers to be associated with lower social-cognitive functioning (Figs. 3 and 4). Social-cognitive processing recruits a unique neural network called the "social brain", which includes the amygdala, insular cortex, superior temporal sulcus, anterior and posterior cingulate, temporoparietal junction, and ventromedial and orbitofrontal cortices (Wang et al., 2017). Our findings indicate that the social brain may be particularly vulnerable to pro-inflammatory processes related to innate immunity. Our results support current findings that inflammatory processes, or sickness behaviour, can negatively shape social perception (Grigoleit et al., 2011;Moieni et al., 2015) and negatively affect social-cognitive abilities (Bollen et al., 2017;Eisenberger et al., 2010;Hennessy et al., 2014). Thus, in pwMS, social cognition may be particularly affected, given that dysregulated inflammatory processes are a characteristic of MS. As expected, in pwMS, we found that elevated serum concentrations of anti-inflammatory IL-1ra relate to better social-cognitive abilities, specifically improved sarcasm detection and better self-reported emotion recognition and emotional empathy (Fig. 5A and S4). These findings are in agreement with recent evidence in clinical and pre-clinical MS models, which suggests a multi-faceted role of the IL-1 system in MS pathophysiology, whereby IL-1ra is the only known endogenous neuroprotective antagonistic cytokine to downregulate the pro-inflammatory action of IL-1α/β (Musella et al., 2020). Furthermore, in MS, prior pharmacological research reports that disease-modifying therapies, such as glatiramer acetate, natalizumab, laquinimod, and interferon beta, can restore the inflammatory imbalance by increasing circulating concentrations of IL-1ra, thus reducing relapse rates and moderating the development of new brain lesions (Group, 1993;Jacobs et al., 1996;Nicoletti et al., 1996;Ruiz et al., 2019). Extending to the other anti-inflammatory markers, IL-4 and IL-10, our findings were mixed (Table S3). Similar to our findings for IL-1ra, in pwMS, elevated serum concentrations of IL-4 and IL-10 were related to better self-reported social-cognitive abilities. However, serum IL-4 and IL-10 were associated with poorer social-cognitive performance on objective assessment (TASIT-S; Honan et al., 2016). This contradictory finding may reflect the lack of concordance that is often seen between subjective and objective cognitive assessments (Honan et al., 2015). Nevertheless, it remains clinically valuable to evaluate one's perception of functioning, as this too can be predictive of functional outcomes (Honan et al., 2015). Together, our findings indicate that IL-1ra may be an important therapeutic target for improved social-cognitive functioning.
There are limitations associated with this pilot study. The small sample size means that we have not sufficiently captured other MS types and do not have the statistical power to examine the differences that might exist among different disease profiles. However, given our pilot results show promise, we would expect that a larger exploratory study with a broader cohort of participants will be able to more thoroughly examine the relationship between inflammatory biomarkers and socialcognitive functioning, and how this may be mediated or moderated by particular disease characteristics. Given that we examined pwMS who were not in a relapse phase, our findings are limited to inflammatory processes during remission phases. Future research may benefit from exploring pwMS during relapse phases and further extending to other MS subtypes. Comparing urinary-based immune markers to those in cerebrospinal fluid may also help identify surrogate biomarkers for interventionist strategies to regulate inflammation in individuals experiencing social-cognitive difficulties. For example, current pharmaceutical interventions may benefit from research into complementary medicines such as probiotics, vitamin D, and resveratrol that may reduce inflammation (Morshedi et al., 2019).
Conclusion
Overall, our findings highlight the important implications that inflammatory processes, sickness behaviour, and the innate immune system have for everyday social-cognitive functioning. In pwMS, better social-cognitive performance was associated with a higher serum concentration of IL-1ra. Thus, it may be possible to improve social-cognitive abilities by limiting inflammatory processes, and IL-1ra may be a potential therapeutic target for future clinical trials. In both pwMS and healthy individuals, poorer social-cognitive functioning was related to pro-inflammatory biomarkers with an innate immunity signature (i.e., higher serum concentrations of IL-1β, IL-17, TNF-α, IP-10, and MIP-1α, and higher urine concentrations of IP-10 and MIP-1β). However, the findings were mixed for IL-4 and IL-10, such that they were negatively associated with objective social-cognitive performance and positively related to better self-reports of social-cognitive abilities. Further cross-sectional and longitudinal research examining the relationships between inflammatory markers and social-cognitive functioning according to various disease-related factors (MS subtypes) and biological sample types (serum, urine, and cerebrospinal fluid) is warranted.
Declaration of competing interest
There are no known conflicts of interest reported.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.bbih.2021.100254.
Contributions
CAH, CKL, SM conceptualised and designed the study. JAT, CAH, CKL wrote the manuscript. CAH, HMF, KDKA, CKL were involved in data acquisition. JAT, CKL, CAH, and CP analysed and/or interpreted the data. All authors contributed to the review and approved the final version of the manuscript. | 2021-05-21T16:57:01.671Z | 2021-04-14T00:00:00.000 | {
"year": 2021,
"sha1": "49af5f2fc234b74cc082bb687ea551359fb0da08",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bbih.2021.100254",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd47d9f1a90f6b0d44a47b690d3fbab61333dcba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231644666 | pes2o/s2orc | v3-fos-license | A Quantile Approach for Retrieving the “Core Urban-Suburban-Rural” (USR) Structure Based on Nighttime Light
Accurate and timely information on the "core urban-suburban-rural" (USR) spatial structure in a metropolitan region is significant for both the scientific and policy-making communities. However, USR is usually considered as a single land use type, such as an impervious area, rather than three combined subcategories in remote-sensing image retrieval, especially for suburban areas, which obscures the details of the urbanization process. In this paper, we propose a quantile approach to retrieve the structure of USR based on stable nighttime light (NTL) data from the Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) and apply it in the Beijing-Tianjin-Hebei (JJJ) region of China from 1995 to 2013. The key parameters of the NTL threshold, which is the maximum change point of the NTL intensity at the USR boundary, used to retrieve the three subcategories of USR are automatically defined based on the quantile approach with three iterations. Then, the overall accuracy and consistency of the retrieval results are evaluated using the corresponding visual interpretation map from Landsat images with a 30 m resolution. Moreover, the influence of parameter uncertainty is compared by introducing the human settlement index (HSI). According to the time-series analysis of USR retrieval in this study, the JJJ experienced rapid urbanization from 1995 to 2013, with the core urban area expanding by 7098 km² (average increase of 2.7 times), the suburban area expanding by 12,690 km² (average increase of 2.8 times), and the rural area increasing by 4986 km² (average increase of 0.38 times). The USR results retrieved based on the approach agree well with the validation of the visual interpretation map, with an overall accuracy (OA) of 0.904 and a kappa coefficient (KC) of 0.650 at the city level. The USR result with the HSI as the input shows that NTL is more suitable for USR structure retrieval, as the NTL shows less uncertainty compared with other parameters such as the vegetation index (VI). This study proposes an improved quantile approach for USR mapping from NTL images on a regional scale, which will provide a useful method for urbanization dynamics analysis.
Introduction
Urbanization is widely recognized as one of the grand challenges facing humanity, affecting the functions of terrestrial ecosystems, climate change [1], population, medical treatment, policy [2][3][4], etc. In China, most subcategories of the extent of the urban area display significant "core urban-suburban-rural" (USR) triad structures [4,5]. Recently, index-based approaches have been developed by combining NTL with VI, LST, or both. However, a reliable estimate of the optimal thresholds of the indices remains the key parameter for mapping the extent of the urban area, as it is in the threshold-based approaches. Besides, as the performance of NTL satellite sensors improves, index-based approaches face the challenge of additional hypotheses among the parameters in an urban area and of introducing more data uncertainty among parameters (e.g., the VI and LST retrieved from other satellite images), which may reduce the accuracy of the retrieval of the extent of the urban area.
In light of the aforementioned problems, and the fact that most studies focus on retrieving the urban area as a whole due to the "disappearance" of the boundaries among the three USR subcategories, especially suburbs [33], we improve a quantile approach for retrieving the extent of the USR subcategories from NTL images. Furthermore, an index-based parameter, the HSI, is applied as another input in the same study area of the JJJ Metropolitan Area in China to evaluate the accuracy of the improved approach. In this paper, USR corresponds to the core urban, suburban, and rural areas, defined as residential land types with obvious distinctions in their NTL intensities.
Study Area
The study area of the JJJ, which is both the national capital region of China and the largest urbanized region in eastern Asia, lies in North China from 36°N to 42°N and from 111°E to 120°E (as shown in Figure 1). It includes 13 major cities along the coast of China's Bohai Sea. The JJJ has experienced sustained rapid urbanization since the 1980s. In 2014, the Chinese government proposed the Beijing-Tianjin-Hebei Regional Integration (BRI) regional development project to improve the JJJ as a world-class city cluster, which will significantly promote the JJJ's urbanization process [6].
In addition to rapid urbanization over the past several decades, the BRI will inevitably propel the JJJ's urban and population clusters further. However, unplanned urban sprawl requires sufficient land for residential, industrial, and commercial purposes, hospitals, and schools, which generally may consume the precious agricultural and residential land surrounding cities in the JJJ [34]. Because of the important "Permanent Basic Cropland and Regulation on the Protection of Basic Farmlands" policy in China, suburban and rural areas are exposed and particularly vulnerable to core urban area expansion, such as through population suburbanization and industrial transfer to the suburbs. Take the Huilongguan community in Beijing as an example, which used to be a rural area 20 km north of the core urban area before 2000. It has since become a huge residential community in Asia, with a population of 0.3 million and an area of 8.5 million m². Numerous counties surrounding Beijing are also affected by the metropolis' urban expansion, such as the rapidly urbanizing eastern Yanjiao Town and southern Gu'an County in Hebei Province. Therefore, the JJJ Metropolitan Area is a typical study area.
Dataset Collection and Preprocessing
The main dataset used in this study for retrieving USR was the DMSP/OLS dataset acquired from https://ngdc.noaa.gov/eog/download.html. The DMSP/OLS NTL datasets from 1995 to 2013, including F12 (1995-1996), F14 (1997-2000), F15 (2001-2003), F16 (2004-2009), and F18 (2010-2013), were chosen to be consistent with other datasets and the rapid urbanization process of the JJJ. In addition, unstable light sources such as fires and ship lights were excluded from the DMSP/OLS NTL, and water and gas flares were removed based on the MODIS water product MOD44W and gas flare masks [21,35,36] for further analysis. The background (nonlight) values and confounding factors of auroras, fires, boats, and other temporal lights were eliminated or reduced [21]. To calculate the HSI, we used the Terra MODIS NDVI data MOD13A3 in this study, which are calendar-month composites at a 1 km spatial resolution. The quality control flags from MOD13A3 were used to remove abnormal pixels caused by clouds, snow, or other geometric problems. The MOD13A3 datasets of July, August, and September covering the JJJ were chosen, and the best quality image from each month was applied to estimate the HSI. Furthermore, the land use maps of China produced by the Chinese Academy of Sciences were used in our study. Four annual land use datasets from 1995, 2000, 2005, and 2010, which were derived from Landsat images with a 30 m resolution based on visual interpretation (the datasets are provided by the Data Center for Resources and Environmental Sciences, Chinese Academy of Sciences (RESDC), http://www.resdc.cn) [23,37], were applied for validation in this study. In this study, based on the 30 m land use data (Appendix A), a set of rural-urban scale verification data was produced manually. To maintain consistency with the spatial resolution of the NTL data, some small, nearly isolated rural settlements were excluded. All datasets were processed to a 1 km spatial resolution for consistency, and the dataset details are listed in Table 1.
[Table 1, recovered fragment — MOD13A3: MODIS vegetation-index composite product at 1 km spatial resolution; Band 1 is NDVI and Band 2 is EVI.]
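The masking described in the dataset preprocessing amounts to simple boolean operations on co-registered 1 km grids. Below is a minimal sketch in Python/NumPy; the array names `ntl`, `water_mask`, and `gasflare_mask` are our placeholders (none come from the paper), and in practice they would be read from the DMSP/OLS composites, the MOD44W product, and the gas-flare polygons.

```python
import numpy as np

# Hypothetical co-registered 1 km grids over the study area.
ntl = np.random.randint(0, 64, size=(600, 900)).astype(float)  # stable-lights DN (0..63)
water_mask = np.zeros(ntl.shape, dtype=bool)     # True where MOD44W flags water
gasflare_mask = np.zeros(ntl.shape, dtype=bool)  # True inside gas-flare masks

# Set masked pixels to zero so they fall into the excluded "other" class
# during the threshold retrieval described in the Methods section.
ntl_clean = np.where(water_mask | gasflare_mask, 0.0, ntl)
```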
Methods
Although the current research literature shows a wide variety of sources for USR, there is still no common or consistent definition of this term. In particular, suburbs have been insufficiently considered in land use retrieval from satellite images. USR develops based on the flow and redistribution of people, materials, and information among the three subcategories of the extent of the urban area [38]; it essentially reveals the variations of human activities and land use, including the significant differences in NTL. In this study, USR is defined as a residential land use type that consists of core urban, suburban, and rural areas with clearly distinct NTL digital number (DN) values. The general flowchart to delineate the extent of the three USR subcategories is shown in Figure 2. More details are presented in the following sections.
A Quantile-Based Algorithm for USR Retrieval
In this paper, we improved a multiple-iteration quantile approach, which is insensitive to the complicated spatial distribution and aggregation patterns of USR, to determine the threshold values separating the three USR subcategories. The three subcategories were retrieved from the NTL values in turn, from core urban to rural areas, over the entire administrative area. Firstly, other areas (e.g., sand, swamp, and undeveloped areas) with values of zero were excluded. Then, all NTL values of the remaining pixels in an administrative area were used to construct a quantile curve, as shown in Figure 3.
As previously mentioned, optimal thresholds are always important factors when extracting the extent of the urban area from either NTL or related indices. Although the potential extent of USR has various NTL distribution features, such as being dominated by the core urban area or by the other two subcategories, the key hypothesis that NTL changes rapidly around the boundaries among core urban, suburban, and rural areas can still be applied to retrieve the three subcategories of USR (Figure 4).

Figure 3. Quantile curves for threshold determination. In (a-c), the x-coordinate is the quantile (Q), arranged inversely from the 100th to the 0th quantile, and the y-coordinate is the DN value of the DMSP NTL data (D). In (d), the threshold corresponds to the USR boundary.

Based on the assumption that the NTL intensity decreases from urban to rural areas (see Figure 4), the three subcategories of USR were retrieved from the NTL values in turn, from core urban to rural areas, over the entire administrative area. In the algorithm, the quantile method selects the threshold from low to high (from rural to urban), and the part of the NTL lower than the threshold is excluded to determine the maximum boundary of the USR subcategories. The NTL data within the maximum boundary are used as the new inputs for the next iteration. When the iterations are finished, the extents of the different subcategories of USR are obtained.
Specifically, quantile curves are constructed by calculating the intensity at percentiles 0-100 of the NTL data and arranging them in reverse order. Taking the line between the starting point and the end point of the curve as the reference line, the point farthest from the reference line is defined as the turning point. The key to retrieving the USR area is to find the NTL intensity corresponding to the turning point on the quantile curve. The calculation process is as follows (Figure 5).

Figure 5. USR NTL threshold determination.
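The turning-point rule lends itself to a compact implementation. The sketch below finds the DN at the point of the quantile curve farthest from the start-end reference line; sampling at integer percentiles and measuring distances in raw (quantile, DN) coordinates are our assumptions, since the paper does not specify these details.

```python
import numpy as np

def quantile_turning_point(values):
    """DN threshold at the turning point of the quantile curve (Figure 5).

    The curve plots DN against quantile, arranged from the 100th down to
    the 0th percentile; the turning point is the point farthest from the
    straight reference line joining the curve's first and last points.
    """
    q = np.arange(101, dtype=float)          # x-axis: percentiles 0..100
    d = np.percentile(values, q)[::-1]       # y-axis: DN arranged in reverse
    x0, y0, x1, y1 = q[0], d[0], q[-1], d[-1]
    # Perpendicular distance of every curve point from the reference line.
    num = np.abs((y1 - y0) * q - (x1 - x0) * d + x1 * y0 - x0 * y1)
    dist = num / np.hypot(y1 - y0, x1 - x0)
    return d[np.argmax(dist)]                # DN at the maximum distance
```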
In the first iteration, the quantile curve of the NTL data of the whole target area is calculated. Through the distance between the curve and the reference line, the threshold at the maximum light-intensity change in the target area, D rural, can be obtained (Figure 3a). D rural reflects the sudden change of the NTL from nothing, that is, the border between rural areas and other areas (areas with essentially no lighting). The part of the NTL data whose NTL intensity is less than D rural is removed to determine the input data of the second iteration.
The second iteration is the same as the first iteration. In the quantile curve constructed in the second iteration, the NTL intensity D suburban (Figure 3b) corresponding to the turning point is the boundary line between the rural and suburban areas. In addition, the part of the input data in the previous step whose NTL intensity is less than D suburban is deleted as the input of the third iteration.
In the third iteration, the input data do not include other regions or rural areas, which differs from the above two iterations. D urban (Figure 3c) has two cases on the quantile curve. When D urban < DN max, the NTL intensity at the turning point, D urban, is the boundary between the suburban and urban areas; when D urban = DN max, there is no obvious abrupt change. According to the definition of the NTL intensity of USR, DN urban > DN suburban > DN rural > DN other, and urbanization is not completely reversible in time; therefore, the rural area retrieved in the second iteration should be classified as suburban. The spatial distribution range of USR is obtained by stacking the results of the three iterations.
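Stacking the three passes gives the full retrieval loop. A sketch under the same assumptions as above, reusing the `quantile_turning_point` helper; the D urban = DN max special case is only flagged here rather than fully re-labelling pixels as the paper describes.

```python
def retrieve_usr_thresholds(ntl_values, dn_max=63):
    """Three-iteration threshold retrieval for one administrative area.

    Returns (d_rural, d_suburban, d_urban, merged); `merged` is True when
    no turning point separates suburban from urban (D urban == DN max),
    in which case the paper reclassifies the pass-2 rural area as suburban.
    """
    pixels = ntl_values[ntl_values > 0]          # drop zero-DN "other" areas
    d_rural = quantile_turning_point(pixels)     # pass 1: rural vs. other
    pixels = pixels[pixels >= d_rural]
    d_suburban = quantile_turning_point(pixels)  # pass 2: suburban vs. rural
    pixels = pixels[pixels >= d_suburban]
    d_urban = quantile_turning_point(pixels)     # pass 3: urban vs. suburban
    merged = d_urban >= dn_max                   # no obvious abrupt change
    return d_rural, d_suburban, d_urban, merged
```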
Validation Algorithm
The visually interpreted land use maps from the 30 m resolution Landsat multispectral images [37] are recognized as a relatively exact dataset that can be used to validate the accuracy of the improved approach. The 30 m resolution land use data were upscaled to a 1 km spatial resolution to match the resolution of the NTL and HSI data. Moreover, in the existing land use data, suburban and rural areas are usually combined as rural residential land types; we also followed this principle in the evaluation of the results. The two accuracy indices of the kappa coefficient (KC) [39] and overall accuracy (OA) were used in this paper, based on calculation of the confusion matrix (as shown in Table 2).
[Table 2, recovered fragment — confusion matrix with verification categories (Urban, R and S, Others) against prediction categories (Urban, R and S, Others).] Here, C_ij is the number of pixels of category i in the verification data classified as prediction category j.
The KC is an index used to test the spatial consistency of two maps and can also be used to measure classification accuracy. The KC is expressed by

KC = (p_o - p_e) / (1 - p_e), (1)

where p_o and p_e are defined as

p_o = (Σ_i C_ii) / n, (2)

p_e = (Σ_i C_i,* × C_*,i) / n², (3)

where C_ii is calculated from Table 2, C_i,* is the total number of pixels in the ith row of Table 2, C_*,i is the total number of pixels in the ith column of Table 2, and n is the total number of pixels in the prediction map from the proposed method. The value of KC is between 0 and 1, and it has been widely used for assessing map agreement in related research [40,41]. Generally, the KC can be divided into five levels [38]: slight (0.0-0.20), fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80), and almost perfect (0.81-1).
OA represents the proportion of correctly classified pixels among all pixels, which can be expressed by

OA = (Σ_i C_ii) / n, (4)

where C_ii and n are also calculated from Table 2 as in Equation (2). In this paper, the OA refers to the probability that the USR subcategory is consistent with the 30 m resolution land use data.
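Both indices follow directly from the confusion matrix of Table 2. A minimal sketch (the 3x3 example matrix is invented purely for illustration):

```python
import numpy as np

def oa_and_kappa(c):
    """Overall accuracy and kappa coefficient from a confusion matrix.

    c[i, j] = number of pixels of verification category i assigned to
    prediction category j, following the layout of Table 2.
    """
    c = np.asarray(c, dtype=float)
    n = c.sum()
    p_o = np.trace(c) / n                                # Equation (2); also OA
    p_e = (c.sum(axis=1) * c.sum(axis=0)).sum() / n**2   # Equation (3)
    return p_o, (p_o - p_e) / (1.0 - p_e)

# Hypothetical matrix for (urban, rural-and-suburban, others):
oa, kc = oa_and_kappa([[120, 15, 5],
                       [20, 300, 40],
                       [10, 35, 455]])
```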
Spatial Distribution of Retrieved USR
Using the above method, we mapped the extent of the USR of the JJJ from 1995 to 2013 based on NTL data (Figure 6). As shown in Figure 6, since 1995 the trend of urbanization in the JJJ has been obvious, and the extent of the urban area has increased significantly. By 2013, the urban areas of Beijing, Tianjin, and Hebei retrieved by the quantile method were 3253 km², 1528 km², and 6720 km², respectively (in the 2014 Beijing Land Use Statistical Yearbook, the urban area was 2977.59 km²; the other two regions lack relevant statistical items [42]), and the extent of the urban area increased by 7098 km² (an increase of approximately 2.7 times). In addition, the gap between suburban and rural areas narrowed in terms of NTL intensity. By 2004, there was no significant difference in NTL intensity between the suburban and rural areas of Beijing. Tianjin showed the same trend: around 2006, the gap between the NTL intensity of its rural and suburban areas almost disappeared. In Beijing and Tianjin, the rural area increased to approximately 3417 km² before 2004, and the rural areas gradually merged with the suburban areas after 2004. Unlike Beijing and Tianjin, in the process of urbanization in Hebei Province, besides the increase of the urban area by 3955 km², the rural area also expanded by approximately 8742 km² (an increase of approximately 0.94 times, higher than the average growth of Beijing and Tianjin), and the distribution was more concentrated and accompanied by a trend of consolidation.
Evaluation

Table 2 shows the classification accuracies for Beijing, Tianjin, and Hebei Province with reference to the visual interpretation results. By analyzing the retrieval results in 1995, 2000, 2005, and 2010, it can be seen that the quantile method based on NTL has a stable overall classification accuracy and KC, with slightly different performance in different regions. The OA and KC in Beijing and Tianjin are significantly higher than those in Hebei. The average OA in Beijing and Tianjin reached 0.904, and the KC averaged 0.650. According to the consistency classification of the KC, the overall classification results of Beijing and Tianjin are highly consistent [39]. In Hebei, the results are generally consistent. When USR is retrieved from NTL, the results are significantly affected by the scale of the target region.

As shown in Table 3, the retrieval accuracies in Beijing and Tianjin are higher than that in Hebei. The differences in the development levels among the cities in Hebei Province are also reflected in the differences in NTL intensity. When retrieval is performed over the scope of the entire province, higher NTL intensities weaken the rural-urban NTL intensity difference in areas with low NTL intensity and make this difference more obvious between different cities in the province. As a result, areas with low NTL are mistakenly classified as rural or suburban areas.

In order to compare the retrieval accuracies of the different land types in rural, suburban, and urban areas, we separately calculated the accuracies of core urban land, rural land, suburban land, and others. The results are shown in Table 4. In Beijing and Tianjin, the "others" type covers a large surface area and its NTL intensity is almost zero, which makes it easier to distinguish; its classification accuracy is the highest, with an average of 0.939. Due to the relative concentration of the urban area and its having the strongest NTL intensity, the classification accuracy of the urban type is the second best, with an average of 0.854. Rural settlements are easily misclassified as other areas because of their discrete distribution and weak NTL; thus, their average accuracy is the lowest, at 0.825. However, with the development of rural areas and continuous infrastructure improvements, the extent and level of NTL are increasing each year. The retrieval accuracy has also improved, and fewer rural settlements have been left out.
For further validation, we compared the results of the quantile method with the retrieval results of the HSI, VANUI, and other indices based on global fixed-threshold methods, which were acquired from previous studies [27]. As shown in Table 5, the USR retrieval of NTL data using the quantile method achieved relatively good quality results. The OA and KC of Beijing and Tianjin were almost the same as those of the previous methods. However, due to its automatic selection of the optimal thresholds and its avoidance of additional parameters, the retrieval process of the quantile method is more convenient and efficient than the fixed-threshold methods. In addition, we also introduced the HSI into the threshold method in the discussion subsection so as to compare the USR retrieval results based on different data (or indicators) through the quantile method.
The Problem of Retrieving Rural Settlements
The spatial distribution of rural settlements is more discrete than that of metropolitan areas. The retrieval of rural areas using DMSP/OLS data is limited to indicating the scope of the rural distribution rather than identifying the discrete distribution of rural settlements. Rural settlements will be missed when the following two situations exist:
1. When scattered, a DMSP/OLS sensor cannot detect the NTL intensity, and thus rural settlements will be missed. In addition, due to the blooming effect of NTL, rural settlements cannot be accurately distinguished. Because of the relatively discrete distribution of rural settlements, when there are many of them, the blooming effect of the NTL causes the surrounding pixels to be identified as rural areas. As a result, discretely distributed rural settlements are merged into one area, and only their approximate scope can be retrieved. Unlike rural settlements, urban land has a more concentrated distribution and a larger area. When retrieving the urban area, the impact of the blooming effect is mostly concentrated within the city, so the impact on the entire urban area is small (Figure 7, Case I).

2. When using data from earlier years, NTL is not an effective way to identify rural areas, since in the past the economic development in those areas was relatively minor and there were power shortages in some rural areas. With the construction of rural infrastructure and the promotion of corresponding policies, rural electricity consumption has greatly improved, and omissions due to the absence of NTL are gradually reduced (Figure 7, Case II).
Differences between the Results Based on HSI or NTL
Lu, D. et al. proposed the human settlement index (HSI) for mapping the extent of the urban area by considering both NTL and vegetation status, and it is widely used for this purpose. The index assumes that NTL is high and the vegetation index is low in residential areas [1]. In order to compare the impacts of introducing other data or indicators on the USR retrieval results, we applied the HSI as an input in the study area of the JJJ. The HSI, which combines NTL and NDVI data, can be expressed as:

HSI = ((1 - NDVI_max) + OLS_nor) / ((1 - OLS_nor) + NDVI_max + OLS_nor × NDVI_max), (5)

where OLS_nor is the normalized nighttime lights value. After replacing the NTL data with the HSI data, the iterative process of the quantile method is performed. Compared with the results retrieved by the quantile method based on the NTL DN values, the OA and KC calculated using the HSI are poor. The results using the HSI are obviously biased in OA, and the average OA in the JJJ is only 0.8; the average KC is 0.485. According to the KC classification standard, the retrieval results only reach general consistency. Unlike the NTL-based quantile method, the HSI is less affected by the scale of the study area: in the JJJ, the USR retrieval results of Beijing, Tianjin, and Hebei are more consistent with one another. Moreover, some discretely distributed areas within cities are better retrieved, and the blooming effect is lower than in the NTL results (as shown in Table 6), which may reduce the problem of excessively large USR retrieval. We consider the poorer accuracy to have been caused by the shortcomings of the HSI. The HSI introduces the assumption that the relationship between the NDVI and NTL follows a power law. When NTL reaches its maximum value and the NDVI is close to zero, the HSI grows exponentially. Calculations of the HSI for global cities have shown that it overcorrects the saturation of urban areas, reducing the values at urban boundary areas [31]. In previous studies, in many regions of the world, urban expansion has taken place in areas with reduced or lower vegetation [43]. With the development of China's rural economy and the promotion of ecological civilization construction, rural infrastructure has become more complete and urban greening has gradually improved. This makes the differences in vegetation coverage and NTL intensity between rural and urban areas gradually smaller. The assumption underlying HSI-based USR retrieval is thereby weakened, which ultimately leads to unsatisfactory retrieval results. This also shows that introducing parameters other than NTL (e.g., VI, LST) into the retrieval of the USR structure increases the uncertainty in the retrieval results.
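A minimal sketch of Equation (5), assuming the DMSP DN is normalized by its 6-bit maximum of 63 (the paper does not state the normalization constant):

```python
import numpy as np

def human_settlement_index(ols_dn, ndvi_max, dn_max=63.0):
    """HSI combining normalized nighttime lights with maximum NDVI (Eq. 5)."""
    ols_nor = np.asarray(ols_dn, dtype=float) / dn_max
    denom = (1.0 - ols_nor) + ndvi_max + ols_nor * ndvi_max
    # The denominator approaches zero for saturated NTL with near-zero NDVI,
    # which is exactly the exponential blow-up discussed in the text.
    return ((1.0 - ndvi_max) + ols_nor) / denom
```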
The Weakness of DMSP and Potential Problems
DMSP data have some weaknesses, such as the blooming effect and oversaturation, which may also reduce the accuracy of the proposed method. Compared with other nighttime light sensors such as VIIRS (750 m spatial resolution) and Luojia (130 m spatial resolution), their spatial resolution is lower [24]. The blooming effect may merge the lighting within a certain range, so scattered rural areas cannot be accurately represented. Moreover, the problem of oversaturation gradually blurs the boundaries between rural, suburban, and core-urban areas as human activities intensify, which may lead to the disappearance of the differences in the NTL intensity of USR in DMSP data. In recent studies, data from other sensors were processed for spatiotemporal consistency, which adjusted for the defects of DMSP data [44]. For example, by using VIIRS data, the problem of merging scattered regions caused by the blooming effect can be further suppressed, and there is no oversaturation problem in VIIRS, which preserves the differences in NTL intensity among the USR subcategories.
In addition, with the development of lighting methods and related technologies, the light radiation recorded in NTL data also changes [45]. Compared with differences across time, differences across spatial extents at the same time are more important. Table 3 confirms the influence of the spatial extent on the results of the quantile method. For a scale such as the municipal level, the differences in infrastructure construction (such as lights) and human activities are smaller than those across a whole province, and the spatial impact is also relatively small. As for the development of new technologies such as LED lighting, controlling the scope of the research area can better control the lighting differences in space and reduce the influence of lighting mode on the USR retrieval results. However, in developed countries in Europe or in the United States, there may be a counter-urbanization phenomenon, so the retrieval results of the USR structure may need to be further modified.
Conclusions
Considering how few studies have previously focused on retrieving the three subcategories of "core urban-suburban-rural" from nighttime light images, especially suburban areas, we proposed an improved quantile approach for retrieving the three USR subcategories from NTL images, which automatically defines the boundary thresholds (D rural, D suburban, and D urban) of rural, suburban, and urban areas using DMSP NTL data, without introducing empirical knowledge or additional data, to retrieve the USR structure. The approach was then applied to USR retrieval for the JJJ from 1995 to 2013. According to the retrieval results, the JJJ has experienced rapid urbanization during the past decades. The core urban and suburban areas have increased significantly, more than doubling (by 7098 and 12,690 km², respectively). Meanwhile, the rural areas expanded at a slower speed, by approximately 0.38 times (an increase of 4986 km²). In Beijing and Tianjin, the average OA was 0.904 and the average KC was 0.650. Compared with the former method based on a fixed threshold, the quantile method further improves the retrieval accuracy. Through comparison of NTL and the HSI, it was found that the NTL DN was more suitable for USR retrieval, with an OA 12.5% higher on average than that of the HSI (0.904 versus 0.803). Furthermore, the overall consistency improved from the general consistency of the HSI (KC of 0.485) to substantial for the NTL DN (KC of 0.650). The retrieval results show that increases in parameter uncertainty and deviations between the original hypothesis and the actual situation may have a serious impact on the retrieval of the USR structure. Due to the blooming effect of NTL, the actual results may overestimate the borders of the countryside, and there will also be omissions for areas that do not meet the NTL assumption. With improvements in the spatial resolution of NTL data and sensor sensitivity (such as Luojia No. 1 data), the accuracy of the retrieval of discretely distributed rural settlements might be further improved. At the city level, the proposed quantile approach achieves results similar to those of the visual interpretation of USR retrieval, with lower labor and time costs, which increases the automation of the method. We believe that the proposed approach will provide an efficient, low-cost method for research on urbanization or urban expansion. | 2020-12-24T09:13:56.482Z | 2020-12-21T00:00:00.000 | {
"year": 2020,
"sha1": "c80eff381ddd7ea634430d899b6144c3df2246d4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/12/24/4179/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1b38bdede29904ae16caaf87ed914b8bdfc40474",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54856061 | pes2o/s2orc | v3-fos-license | Structural Model of the Role of Brand Trust on Brand Identity through the Mediating Role of Brand Love among Fans of Futsal Premier League Clubs in Iran
Background. Sports brand love refers to the degree of passionate emotional attachment consumers feel towards a sports team. A brand is able to be more competitive by establishing strong brand love among customers, as well as brand identity and brand trust. Objectives. The purpose of this research is to present a model of the role of brand trust on brand identity through the mediating role of brand love among fans of Futsal premier league clubs. Methods. The present study is a descriptive-correlative study in compliance with existing standard case studies. The population of this research comprised all fans of the Farsh Ara Mashhad club, among which 295 were selected as the sample according to the temporal and spatial domains of the study, utilizing the simple random sampling method. In order to achieve the research goals, the moderated Brand Identity Questionnaire by Mael and Ashforth (1992), Albert's Brand Love Questionnaire (2010), and the Ballester Brand Trust Questionnaire (2004) were used. Descriptive statistics and inferential statistics (structural equation modeling and path analysis) were used for data analysis at a significance level of 0.05. Results. The path coefficient between brand trust and brand identity was 0.47, which is positive, and the corresponding t-statistic was 2.09; thus, with 95% confidence, the path coefficient is significant at the 0.05 error level. Furthermore, a significant relationship between brand trust and brand identity was confirmed, and, in line with the main hypothesis of the research, the path coefficient of the indirect relationship of brand trust on brand identity through the mediating variable of brand love was calculated as 0.53, confirming the main hypothesis. Conclusion. In general, investing in brand trust on behalf of the Futsal premier league clubs (especially the Farsh Ara club), and subsequently planning to increase the admiration of fans so as to create and develop a brand identity, is one of the significant results of this research.
INTRODUCTION
For a potential customer, a brand is a significant guide. The brand, like money, facilitates transactions. Customers are confused when confronted with collections of products that have no identification or that are difficult to evaluate at a glance (1). For centuries, branding has been a tool for distinguishing the products of one manufacturer from those of others. Based on the dictionary definition published by the Interbrand Institute, a brand is a combination of visible, intangible, and symbolic trademarks which, if properly managed, will bring great value and credit (2). A brand can make decision-making easier, ensure product quality, and provide an appropriate, distinct, and valid option among competing alternatives. A brand is an abstract of identity, originality, attribute, and difference. A brand condenses information, focusing it on a word or a sign. That is why brands are vital to business exchanges. Few companies, organizations, and even clubs know what their brand names represent and where their uniqueness, quality, and identity lie (1).
Moreover, identity is the essence and originality of the brand. If a company wants to create a lasting image of itself, it must first create its brand identity and then base its message and image on that identity. Brand identity is the brand meaning put forward by the company; it shows how an organization wants to be perceived in the marketplace. Thus, each organization transfers its identity to the consumer through branding and marketing strategies. It should be said that an organization can be unique through its identity. Brand identity includes brand perspective, brand culture, brand position, personality, relationships, and their presentation. Generally, brand identity is everything an organization wants its brand to look like (1). Given these features, sport in today's world attracts the attention of large countries and companies, and prominent teams and clubs seek to bring large numbers of spectators to the stadium through marketing methods, competing with domestic teams and foreign leagues to receive more media coverage and thus increase club revenue. Of course, because the nature of sport is unstable, the problem of attracting loyal customers, i.e., sports fans, arises, and sports marketers have to take steps that contain unique associations for the fans so that they can establish a link between the team and the fan not only in times of winning but also in times of failure. One of these measures is brand identity design (3).
Accordingly, the significance of the term trust becomes apparent. Trust is a facilitator of human interaction; trusting people enables the execution of business transactions and assists in smoother economic growth. In addition, distrust is a useful mental state that enables us to avoid systems, individuals, and organizations that are unreliable and unhealthy (4). Another definition describes trust as a psychological state consisting of the acceptance of vulnerability based on positive expectations of the behavior of the other (5). Brand trust is the degree of a brand's ability and capacity to meet the promises it makes. Customers are more willing to identify with brands that are more capable of fulfilling promises and creating confidence. A strong brand is a safe place for customers because it reduces the uncertainty and risk of buying and consuming a product. The fame and reputation of the brand also greatly contribute to its identity. Brand research has shown that a strong brand identity brings customers' trust (6). Investing in a brand, such as investing in advertising or sponsorship of sports teams, is the basis for brand trust, through encouraging companies to be honest in their claims about the product (7). Experts in this regard believe that a brand based on social media enhances brand loyalty by building trust in the brand. Through value-creation methods, a brand creates close relationships and gains value through long-term interactions that help consumers love the brand and establish emotional relationships (8,9).
A new concept in marketing that takes into account the deeper connection of the consumer with the brand is referred to as brand love, or emotional attachment to the brand (10). Specifically, love for a brand is very similar to interpersonal love. Therefore, applying the concept of love to the study of consumer relations with the brand paves the way to a deeper perspective on consumer and customer sentiment towards brands, as well as a better understanding of consumer behavior and its favorable prediction (8). This concept has become even more evident in the field of sports, and especially sports brands. Carroll and Ahuvia (2006), referring to the concept of love for the brand and defining it as a degree of emotional attachment between the individual and the brand of a particular commodity, believe that brand love can affect customer loyalty to the brand. Moreover, their results show that higher customer love towards a brand can have a more positive effect on the positive statements customers make about the brand (8).
Examining the research literature makes the significance of the subject matter obvious. Moshabbeki Esfahani et al. (2013) designed the brand identity model in the Iranian Football Premier League. The dimensions of brand identity in this study were success, color, name, delivery, clothing, fan and competitor, geographic continuity and history, star player, and stadium (11). The results of Alavai and Najafi Siahroodi (2014) indicate the mediating role of loyalty in the relationship between brand love and the advocacy of sports brands. One of the significant results of this research was that the more love a sports brand fan feels for the brand, the more he tries to repurchase that brand or think about it (more loyalty), feels a sense of ownership over it and supports it (more support), and states the specific and distinctive features of the brand among acquaintances, friends, and the community in general (12). Bengtsson and Servais (2005) stated that, in general, organizations that provide a certain, specific, and relevant brand identity can pave the way for their market excellence and create value for their own customers (13). Baumgarth and Schmidt (2010) concluded that creating brand identity and promoting trust facilitates differentiation and aids the identification of consumers with the brand (14). Recently, Tavormina (2013) empirically tested brand love in professional sports teams. One of the significant results of this research is the high and direct relationship between brand enthusiasm and a positive feeling towards the brand. The results also showed that love for the brand in sports teams varies according to the marketing strategies, conditions, and culture that govern the society (15).
Given the above-mentioned effects, it should be noted that the present research, conducted in the domestic environment and especially in the field of futsal, is significant in a number of respects. Considering that futsal is one of the most popular sports in the country, the loyalty and support of its customers can have significant effects on the growth of this industry. For this purpose, in this research we selected one of the sports teams with the most passionate fans in the country and the province, namely Farsh Ara Mashhad. Being one of the best clubs throughout the years of the futsal league, they have experienced packed stadiums in the league, and the history behind their name, especially the players who currently play at the highest level in the world, has distinguished them among the other sports clubs of the country. A further motivation is the lack of similar domestic research on the effect of love for sports brands and its relation to brand identity, a gap this study attempted to fill. Considering that the Futsal premier league clubs in Iran have no long-term planning in the field of brand development, such studies can encourage the clubs to pursue it. As the competition among the clubs of the premier league in Iran's futsal is followed closely, it is also apparent that the lack of sponsors in this field has prevented teams and clubs from achieving the degree of growth that this massive amount of talent and spectators deserves. Such research can introduce this massive potential to financial supporters, indirectly contribute to the development of the brands of the Futsal premier league clubs, and provide practical and specialized solutions for the growth and development of these clubs. In general, the purpose of this research is to investigate the role of brand trust on brand identity through the mediating role of brand love in the Futsal premier league clubs in Iran (Farsh Ara), the results of which can have favorable effects on solving issues related to the brands of sports clubs.
MATERIALS AND METHODS
Method. This research is a descriptive-correlative study, in compliance with case study designs, conducted as a field study in terms of data collection.
Participants. The statistical population comprised all fans of the Farsh Ara Mashhad club. Using the Cochran sample-size estimation formula and simple random sampling, and considering the return of valid, analyzable questionnaires, 295 subjects formed the final sample of this study. In order to achieve the research goals, the moderated Brand Identity Questionnaire by Mael and Ashforth (1992), Albert's Brand Love Questionnaire (2010), and the Ballester Brand Trust Questionnaire (2004) were used. On the brand love scale, the relevant factors are captured in two subscales: kindness to the club (5 items) and intense emotions (5 items). To assess brand trust, the Ballester brand trust questionnaire (2004), evaluating the factors affecting brand trust in two subscales of trustworthiness (4 items) and brand intention (2 items), was used. The face and content validity of the tools was confirmed by a team of sports management experts, and the reliability of the questionnaires was ascertained in a preliminary study conducted on 30 fans, where Cronbach's alpha for the Brand Identity, Brand Love, and Brand Trust Questionnaires was calculated as 0.88, 0.75, and 0.82, respectively. Statistical Analysis. Descriptive statistics (mean, standard deviation, etc.) and inferential statistics (structural equation modeling and path analysis) were used at a significance level of 0.05 to analyze the data. In addition, Q-Q plots were used to determine the distribution status of the variables. All statistical calculations were performed using SPSS 20 and Lisrel 8.50: SPSS was used for the descriptive statistics, including means and standard deviations, and for determining normality, while Lisrel was used for structural equation modeling and path analysis. The results of the confirmatory factor analysis are presented in the table below. As can be seen from Table 2, the t values for all factor loadings are larger than 1.96, so it can be concluded that the selected questions provide an appropriate factor structure for measuring the variables and dimensions studied in the research model. The values of the fit indices are shown in Table 3. The RMSEA value is 0.074 and, given that it is less than 0.08, it shows that the model is acceptable. Also, the relative chi-square value, i.e., the chi-square divided by the degrees of freedom, equals 2.61 (302.25/116), which is between 1 and 3, and the AGFI, GFI, IFI, CFI, and NFI indices are all 0.9 or higher. In total, the indices meet their interpretative criteria, and the confirmatory factor analysis confirms the structure of the dimensions examined in the research model. Given the confirmation of the questions regarding the dimensions of the questionnaire, the following sections test the research hypotheses. Based on Figures 1 and 2, the summary of the results obtained from the fitted model is shown in Table 4. As stated, paths with t-statistics greater than 1.96 or less than -1.96 are significant. Based on Table 4, the path coefficient between brand trust and brand love is equal to 0.72, which is a positive value. The value of the t-statistic is 4.88, which is larger than 1.96, so with 95% confidence it can be concluded that this path coefficient is significant at the 0.05 error level and there is a significant and direct (positive) relationship between brand trust and brand love. According to Table 4, the path coefficient between brand love and brand identity is 0.74, which is positive. The value of the t-statistic is 3.57, which is larger than 1.96, so with 95% confidence it can be concluded that this path coefficient is significant at the 0.05 error level and there is a significant and direct (positive) relationship between brand love and brand identity.
According to the results, the path coefficient between brand trust and brand identity is 0.47, which is positive. The t statistic is 2.9, which is larger than 1.96, so with 95% confidence this path coefficient is significant at the 0.05 error level, and there is a significant, direct (positive) relationship between brand trust and brand identity.
RESULTS
According to Table 4, "brand trust" has a positive and significant effect on "brand love" with a path coefficient of 0.72, and "brand love" has a positive and significant effect on "brand identity" with a coefficient of 0.74; the first and second conditions for mediation are therefore met. The path coefficient of the indirect effect of brand trust on brand identity, through the mediating variable of brand love, is calculated as 0.74 × 0.72 = 0.53. It can therefore be said that brand trust, through brand love, has a positive and significant effect on brand identity.
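The mediation arithmetic above can be made explicit with a short sketch. The path coefficients and t values are those reported in the text; the helper function and the 1.96 cutoff reflect the standard two-tailed 5% criterion, not code from the paper.

```python
# A minimal sketch of the path-analysis arithmetic described above.
def significant(t: float, cutoff: float = 1.96) -> bool:
    """Two-tailed significance at the 0.05 error level."""
    return abs(t) > cutoff

a, t_a = 0.72, 4.88        # brand trust -> brand love
b, t_b = 0.74, 3.57        # brand love -> brand identity
direct, t_dir = 0.47, 2.9  # brand trust -> brand identity (direct)

indirect = a * b           # 0.72 x 0.74 = 0.53
print(f"indirect effect of trust on identity via love: {indirect:.2f}")
print(f"both mediation paths significant: {significant(t_a) and significant(t_b)}")
print(f"indirect ({indirect:.2f}) exceeds direct ({direct:.2f}): {indirect > direct}")
```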
DISCUSSION
The purpose of this research was to investigate the role of brand trust in brand identity, through the mediating role of brand love, among fans of Iran's Futsal Premier League clubs. Since brand identity is the brand's essence, the brand's most significant and unique features are reflected in its identity (11); moreover, given that brand trust and brand love are among the factors contributing to customer loyalty (16), studying these three significant components in the country's futsal clubs, which have yet to be addressed scientifically, is necessary. In this section, the conclusions of this research are presented.
According to the demographic findings, more than 47.5% of respondents had higher education. Accordingly, it should be acknowledged that fans with higher education have higher expectations of their favorite club and are not easily satisfied; such attention from fans doubles the duty of the Iranian Futsal Premier League clubs to meet their expectations. Moreover, given that almost 40% of the fans of the Farsh Ara Mashhad Futsal Club are students, it can be argued that the closer the club comes to the factors constituting student identity, such as passion, excitement, and questioning, the more successful it will be in attracting trust, developing loyalty, and increasing its number of supporters, and thus in building a stable and attractive brand identity.
Descriptively, roughly 61% of the fans of the Farsh Ara futsal club annually watch more than 5 games of their favorite team. Clearly, as clubs become more successful in attracting fans to the stadiums to watch matches, they can strengthen fans' trust and love, along with social and economic achievements, and, as a result, the club becomes a strong and desirable symbol.
The demographic survey showed that more than 70% of Farsh Ara Mashhad's audience are not members of the supporters' club. Extensive research in this field has shown that the fan clubs of sports teams provide a very suitable venue for dynamic, two-way communication between a club and its supporters. Special gifts, personal seats, substantial ticket discounts, and communication with favorite players are among the benefits of joining a fan club (17).
Considering the inferential findings of the research, brand trust has a positive and significant effect on the brand love of fans of Futsal Premier League clubs. Brand trust, one of the main elements of the interface between an organization and its customers, results in the development of a long-term relationship between them. Because brand trust is rooted in customers' past experiences with the brand, researchers hold that promoting satisfaction and special brand value will ultimately lead to customer loyalty (18). Given this finding, it can be said that if the clubs of the Iranian Futsal Premier League (especially Farsh Ara Mashhad) build trust by meeting supporters' expectations, satisfying their requests, and demonstrating honesty in the information they communicate, they can increase the interest, happiness, and pleasure their supporters take in the brand, which in turn improves brand love.
Based on the inferential findings, the brand love of futsal league fans (Farsh Ara Mashhad) has a positive and significant effect on fans' brand identity. Brand love, which stems from the three basic elements of enthusiasm, intimacy, and commitment, plays an undeniable role in retaining a loyal customer over the long run. This finding is consistent with the results of Alnawas and Altarifi (2016) (19). Love creates motivation in people that causes them to make maximum efforts towards brand success. Hence, stirring up the emotions and feelings of supporters has become one of the ways to promote a brand in today's marketing industry (20). Based on this finding, it can be stated that developing love in fans can affect them to the point that the team's defeat is considered a personal defeat and its success a personal success. At this level of support, the individual rejects even rational criticism, so it can be said that the fan has reached a high degree of brand identity.
Based on the findings, brand trust has a positive and significant effect on the brand identity of Premier League futsal club fans (Farsh Ara). Today, the saturation of markets, changing customer tastes and, ultimately, increasing competition mean that Iranian companies in both the service and non-service sectors face multiple challenges. In this situation, companies that properly use their tools and facilities, and that use effective advertising to build trust in their customers, can overcome these challenges and ensure their survival as a sustainable brand. This applies to Premier League futsal clubs as well: once brand trust is attained, brand relationships become more valuable to fans, and they try to maintain the relationship to the point that it becomes a kind of psychological and emotional commitment; brand identity is thus a means by which fans express their attachment. Generally speaking, brand identity develops when it promotes positive social identity (21). Since brand trust increases confidence in the brand's behavior, this trust leads to the attractiveness of the identity and ultimately to a stable brand.
Regarding the findings for the fourth hypothesis, there is a positive and significant relationship between brand trust and brand identity through the mediating role of brand love among the fans of the Farsh Ara Mashhad club. Considering the coefficient of the direct effect of trust on brand identity (0.47) and the coefficient of the effect of trust on identity through the mediating role of brand love (0.53), it can be concluded that the indirect effect of brand trust on brand identity is greater than the direct effect without the mediation of brand love; the undeniable mediating role of love between trust and identity is thus demonstrated. This finding is consistent with the results of Dehghani Soltani et al. (2014) (7). The positive effect of brand trust on brand identity through the mediating role of brand love points to a significant conclusion: in futsal and soccer clubs, where fans' interest and attention exceed those in other sports, the need to consider brand trust is felt more than ever. Clubs seeking to create and develop their brand identity are often unaware that the main root of a strong brand identity also originates in brand trust. It should be noted that brand trust is developed and presented by the club and is then carried forward by the fans; as this trend advances, fans' concerns, sensitivities, intentions, and thoughts also become attached to the brand (brand love), and brand identity development takes place.
CONCLUSION
In light of the above, it is generally observed that despite the high potential of Premier League teams such as Farsh Ara, both in terms of players and technical staff and in terms of the club's enthusiastic spectators, sponsors remain absent from these teams and the league. Given the results of this study, which show a positive and significant relationship between brand trust and brand identity through the mediating role of brand love among the fans of the Farsh Ara Mashhad club, it is strongly recommended that sports investors and capital owners consider this huge fan base and human capital: by investing more in teams such as Farsh Ara to introduce their products and services, they can both promote their own brands and support the development of these teams and the sport. Moreover, it is recommended that the managers of Futsal Premier League teams, especially the management of Farsh Ara, create more welfare facilities for fans, particularly during home matches, and provide a variety of services outside of match times, such as activating more fan clubs and holding tours of the club and its training sessions for fans. They are also advised to be more active in virtual spaces, for instance by creating systems equipped with up-to-date response capabilities and online stores selling products designed with the club's brand and logo. Such activities can attract fans and increase their brand popularity, which will enhance their brand identity.
APPLICABLE REMARKS
The clubs of the Iranian Premier League (Farsh Ara) are advised to maintain a continuous presence in cyberspace, promote their websites, launch dedicated social networks, and communicate regularly with universities, institutes, and science centers across the country in order to meet the demands of the fans and create brand love and trust among them.
The clubs of Iran's Futsal Premier League, especially Farsh Ara Mashhad, by attracting fans to their fan clubs, subsequently offering favorable services, and planning to maintain a two-way relationship with them, can increase fans' brand trust and ultimately witness the emergence of fans with a high degree of brand love.
In general, managers and senior officials of Futsal Premier League clubs (especially Farsh Ara) are expected to meet expectations, guarantee satisfaction, and address fans' problems, behaviors that embody brand trust, so as to build a stable identity among their fans.
Finally, it is suggested that Premier League futsal clubs (in particular, Farsh Ara) invest in brand trust and then plan to increase the level of fan admiration in order to create and develop a strong brand identity. | 2018-12-05T12:07:14.041Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "0e9cfe6c07a777041a7f8f7fa557b2482d84e36a",
"oa_license": "CCBYNC",
"oa_url": "http://aassjournal.com/files/site1/user_files_dbc6fd/kiavah-A-11-582-1-35f5c05.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9f51fe3ed88a02386446d4c8ee7fe40b53327064",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
259668607 | pes2o/s2orc | v3-fos-license | Durvillaea antarctica: A Seaweed for Enhancing Immune and Cardiometabolic Health and Gut Microbiota Composition Modulation
Durvillaea antarctica is the seaweed that is the most consumed by the Chilean population. It is recognized worldwide for its high nutritional value in protein, vitamins, minerals, and dietary fiber. This is a narrative review in which an extensive search of the literature was performed to establish the immunomodulator, cardiometabolic, and gut microbiota composition modulation effect of Durvillaea antarctica. Several studies have shown the potential of Durvillaea antarctica to function as prebiotics and to positively modulate the gut microbiota, which is related to anti-obesity, anti-inflammatory, anticancer, lipid-lowering, and hypoglycemic effects. The quantity of Bacteroides was negatively correlated with that of inflammatory monocytes and positively correlated with the levels of several gut metabolites. Seaweed-derived polysaccharides modulate the quantity and diversity of beneficial intestinal microbiota, decreasing phenol and p-cresol, which are related to intestinal diseases and the loss of intestinal function. Additionally, a beneficial metabolic effect related to this seaweed was observed, mainly promoting the decrease in the glycemic levels, lower cholesterol levels and cardiovascular risk. Consuming Durvillaea antarctica has a positive impact on the immune system, and its bioactive compounds provide beneficial effects on glycemic control and other metabolic parameters.
Introduction
Durvillaea antarctica, also called "cochayuyo", is the seaweed that is most consumed by the Chilean population; its consumption represents approximately 0.5 kg per capita. It grows particularly on the Chilean and New Zealand coasts. Seaweeds are recognized worldwide for their high nutritional value in protein, vitamins, minerals, and dietary fiber [1]. Studies indicate that the seaweed product market will grow from USD 4700 million to USD 6400 million by 2026, due to its multiple health properties [1]. Cochayuyo also stands out for its low caloric content and its high content of omega-3 essential fatty acids, as well as for being a source of numerous bioactive compounds with beneficial activity for the body. There are reports that over 50% of the dry weight of this food corresponds to dietary fiber, and its polysaccharides, such as fucoidan, contribute to immune functions, including fighting infections and cancer (Table 1). Moreover, fucoidan was found to possess anti-inflammatory properties that contribute to its immunomodulatory effects (Figure 1) [13].
Figure 1. Immunomodulatory effect of Durvillaea antarctica. The components of Durvillaea antarctica exhibit the following immunomodulatory mechanisms: (a) they can act as anti-inflammatory agents, as they modulate immune cells to decrease the synthesis and release of proinflammatory cytokines; (b) they inhibit the production of free radicals, and thus decrease oxidative stress on cells; and (c) they possess antitumor activity and can contribute to the activation of natural killer cells, lymphocytes, and macrophages under situations that merit immune reactivity. IL: interleukin; COX: cyclooxygenase; TNF-α: tumor necrosis factor alfa; PGE: prostaglandins.

A group of researchers in 2013 determined the immunomodulatory effect of β-1,3/1,6-glucan in marine algae from the southern hemisphere. Mouse cells were exposed to Durvillaea antarctica extract (50, 100, 250, and 500 µg/mL). The results highlight that the β-glucan present in D. antarctica induced a 16.9% increase in the activation of CD19+ B lymphocytes compared to the control. The optimal concentration for maximum immunomodulatory activity was 100 µg of DA extract/mL [14]. Similarly, in a study on a macrophage cell line, an increase in macrophages with proinflammatory activity infiltrating the tumor was observed, prioritizing the systemic and intratumoral cell composition and, consequently, the antitumor effect in multiple types of tumor models. The authors state that highly purified, water-soluble, low molecular weight β-1,3/1,6-glucan (BG136) present in Durvillaea antarctica is a potent immune stimulator and a valuable polysaccharide in cancer immunotherapy [15].
Another study published in 2018 indicates that a low molecular weight β-glucan present in Durvillaea antarctica has an immunomodulatory effect on macrophages (RAW264.7) through the Toll-like receptor 4 (TLR4) [16]. On the other hand, extracts of Durvillaea antarctica have shown significant anti-herpetic activity against HSV-1 and HSV-2 viruses. The evaluation of the extracts of this alga as a topical formulation in an animal model infected with HSV-1 revealed that the extracts reduced the severity and duration of the lesions to a greater extent than acyclovir [17].
Likewise, an investigation was carried out on four types of marine algae, among which Durvillaea antarctica was analyzed. The study consisted of the extraction, structural characterization, and assessment of the potential antioxidant activity of the polysaccharides present in each one. The results show that the polysaccharides exhibited concentration-dependent antioxidant activity, and the polysaccharides present in Durvillaea antarctica also had an inhibitory effect on free radicals [18]. In addition, one study showed that Durvillaea antarctica polysaccharides increased cell viability and inhibited EV71 virus infection. These compounds also reduced the population of apoptotic cells through the inhibition of the P53 and STAT1 signaling pathways, and the study determined that polysaccharides present in cochayuyo could effectively protect Vero cells from EV71 viral infection [19]. All of these results support the hypothesis of a potential immunomodulatory effect of this seaweed not only in preclinical models, but also in humans.

Table 1. Immunomodulatory effects of compounds from Durvillaea antarctica and related seaweeds (study, compound tested, rationale, and key findings).

[17]. Macrocystis pyrifera and Durvillaea antarctica aqueous extracts. Rationale: evaluate the potential antiviral properties of extracts obtained from two brown macroalgae against both HSV-1 and HSV-2 in human (HeLa) cells and primary human gingival fibroblasts. Key findings: the algae extracts inhibited the growth of both viruses in a dose-dependent manner, reduced the binding of HSV-1 and HSV-2 to HeLa cells, and decreased the expression of the viral proteins gB and gD.

Xu et al. [19]. D. antarctica polysaccharide (DAPP). Rationale: validate how DAPP inhibits EV71-induced apoptosis of Vero cells. Key findings: DAPP showed no toxicity on Vero cells at a concentration of 250 µg/mL; it inhibited the proliferation of the EV71 virus in a dose-dependent manner, inhibited EV71-induced Vero cell apoptosis via the P53 signaling pathway, and decreased the expression of proinflammatory cytokines.

Qin et al. [20]. Sulfated polysaccharide 4 of D. antarctica (DAP4). Rationale: isolation and purification of DAP4 via methylation analysis and NMR spectrometry analysis; evaluation of immunomodulatory activity in vitro, including lymphocyte proliferation, phagocytic activity of macrophages, NO production, and NK cell cytotoxicity. Key findings: DAP4 had immunomodulatory activity in vitro; it was non-toxic to RAW264.7 cells at concentrations of up to 400 µg/mL, enhanced the phagocytic activity of RAW264.7 cells, increased their production of NO, enhanced the proliferation of splenocytes in response to ConA and LPS, and increased the cytotoxicity of NK cells against YAC-1 cells.
The in vitro studies conducted by Qin et al. [20] revealed that DAP4 (Durvillaea antarctica polysaccharide subfraction 4), a fucoidan extracted from Durvillaea antarctica, exhibits exceptional immunomodulatory activities. DAP4 promotes the proliferation of RAW264.7 cells as well as spleen lymphocytes, and also enhances the phagocytic activity of macrophages. Additionally, DAP4 increases the production of nitric oxide and the vitality of natural killer cells. The findings indicate that DAP4 has significant immune-enhancing potential and could be a promising source of immunomodulatory fucoidan with a unique structure. These results highlight the potential of DAP4 as a candidate for the development of new therapeutic agents for immune-related diseases.
Durvillaea antarctica as a Cornerstone for Gut Microbiota Modulation
Cochayuyo is a seaweed whose composition stands out for its contribution of β-glucans; polysaccharides such as fucoidan, laminarin, alginate, ulvan, and porphyran are unique to seaweeds. It has been reported that these dietary components have biological activity associated with anticancer, antidiabetic, and anti-inflammatory functions, and have an immunomodulatory effect [21]. Evidence suggests that β-glucans could have a significant impact on the microbiota and improve human health. In this vein, several studies have shown their potential prebiotic activity and their ability to positively modulate the gut microbiota. Prebiotics enhance bacterial populations and their production of short-chain fatty acids (SCFAs), which are the energy source for gastrointestinal epithelial cells, often provide protection against pathogens, influence immunomodulation, and induce apoptosis of colon cancer cells [22].
A study developed by He et al. [23] examined the impact of high doses of a combination of deep-sea water and fucoidan (H-CDF) on the gut microbiota of rats with T2DM using 16S rDNA sequencing. The results demonstrate that H-CDF significantly increased bacterial diversity and restored the abundance and diversity of gut microbiota to normal levels. At the phylum level, Firmicutes and Bacteroidetes were the dominant microflora in all groups. However, after H-CDF intervention, the abundance of Firmicutes increased to normal levels, and the F/B ratio increased significantly. The study suggests that H-CDF could be a potential therapeutic intervention for T2DM, but further research is needed to determine the mechanisms by which H-CDF regulates gut microbiota and its long-term effects.
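As an illustration of the F/B ratio mentioned above, the following minimal sketch shows how a Firmicutes/Bacteroidetes ratio is derived from phylum-level relative abundances in a 16S profile. The abundances below are hypothetical and are not data from He et al. [23].

```python
# A minimal sketch of the F/B (Firmicutes/Bacteroidetes) ratio computation.
# The phylum-level relative abundances below are hypothetical.
phylum_relative_abundance = {
    "Firmicutes": 0.52,
    "Bacteroidetes": 0.38,
    "Proteobacteria": 0.06,
    "Actinobacteria": 0.04,
}
fb_ratio = (phylum_relative_abundance["Firmicutes"]
            / phylum_relative_abundance["Bacteroidetes"])
print(f"F/B ratio: {fb_ratio:.2f}")
```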
In recent research, the quantity of Bacteroides was negatively correlated with that of inflammatory monocytes and positively correlated with the levels of several gut metabolites after supplementation with sodium alginate (SA), a seaweed-derived dietary fiber [24]. The consumption of seaweed is associated with an increase in the concentrations of SCFAs, producing a modulation of epithelial cells and leukocytes in the immune system. Seaweed-derived polysaccharides modulate the quantity and diversity of the beneficial intestinal microbiota, generating a decrease in phenol and p-cresol, which are related to intestinal diseases and the loss of intestinal function. Bai et al. [25] studied the in vitro effects of alginate on the microbiota, observing that it favors the proliferation of beneficial bifidobacteria and decreases pathogenic bacterial strains, increasing the colonic fermentation time and the consequent production of SCFAs (Figure 2).

Figure 2. D. antarctica possesses numerous phytochemicals with prebiotic properties, including beta-glucans and polysaccharides; this effect positively modulates the gut microbiota, promoting an increase in beneficial bacteria, which, in turn, contributes to the protection against intestinal pathogens, promotes intestinal cell turnover and physiology, regulates carbohydrate and lipid metabolism, and reduces local and systemic inflammation through immunomodulatory mechanisms.

The sulphated polysaccharides present in seaweeds have anti-obesity, anti-inflammatory, anticancer, lipid-lowering, and hypoglycemic activities, due to their activity on the intestinal microbiota, which promotes the relationship between Bacteroidetes and Firmicutes, inhibits proinflammatory bacteria, and regulates lipid and carbohydrate metabolism through the production of SCFAs [26]. On the other hand, Bermano et al. [27] analyzed the prebiotic potential of seaweed in rats, observing a lower body weight and serum triglyceride concentration compared to the control group; in addition, in humans, it is evident that the consumption of seaweed increases the frequency of defecation, favoring the concentration of bifidobacteria.

A study conducted by Yang et al. [28] on hamsters determined that oligosaccharides from seaweed have hypoglycemic and lipid-lowering effects, stimulating insulin secretion and improving glucose tolerance. They also demonstrated that hamsters fed a diet rich in fat and sucrose had, when consuming oligosaccharides from the algae, an increased population of Bacteroidetes and decreased plasma glycaemia. The increase in Firmicutes, together with the decrease in Bacteroidetes, is thus significantly related to plasma glucose concentrations. Similarly, Siddiqui et al. [29] showed that administering a crude seaweed polysaccharide to rats with type 1 diabetes mellitus (T1DM) improved diabetes symptoms, decreased body weight, and improved fasting blood glucose and pancreatic β cells. Although the mechanism is not fully established, the rats presented an increase in the population of beneficial bacteria of the intestinal microbiota, such as Lactobacillus and Bacteroidetes (Table 2).

Another beneficial component found in seaweed is SA, which has antitumor, anticoagulant, and immunomodulatory effects. When evaluating the effect of SA on immunosuppressed rats, the restoration of impaired immune functions and decreased T lymphocytes, and an increase in the secretion of serum immunoglobulins and proinflammatory cytokines, were observed, promoting an increase in beneficial bacteria such as Lactobacillus, which helps with inflammation and immunity [30]. At the same time, another study shows that, in rats with obesity and metabolic syndrome fed a high-fat diet, SA reduces weight gain, fat accumulation in the liver, and inflammation, and improves the intestinal microbiota through an increase in Bacteroidetes and SCFAs [31].
On the other hand, Wang et al. [32] studied the side effects of taking antibiotics, which alter the intestinal barrier, causing a decrease in both immune function and drug efficacy. After administering antibiotics to mice, the authors observed that combining them with fucoidan, a polysaccharide present in marine algae, alleviated the symptoms of inflammatory bowel disease by avoiding alterations in the colonic tissue. In addition, microbiota dysbiosis decreased through an increase in the number of beneficial bacteria and the promotion of the synthesis of IL-10, an anti-inflammatory cytokine. In the same vein, Deng et al. [33] observed that other benefits of the fucoidan polysaccharide are the reduction in blood glucose, improved insulin sensitivity, reduced hepatic oxidative stress, improved hepatocyte steatosis, and an increase in beneficial bacteria of the intestinal microbiota such as Verrucomicrobia and Akkermansia muciniphila.
Additionally, the consumption of seaweeds generates an increase in immunoglobulin levels, both immunoglobulin A and G, in rats with supplemented diets. Seaweeds also stimulate the growth of beneficial bacteria in the colon and slightly decrease pathogenic bacteria. Regarding colonic metabolites, a significant increase in SCFAs was observed, specifically acetic, propionic, and butyric acids, along with a change in colonic morphology in relation to epithelial cells and the intestinal mucosa [34]. Maintaining the balance of epithelial cells is crucial for the function of the intestinal mucosal barrier. Marine algae-derived bioactive peptides are relevant here: phycobiliproteins, which are pigmented proteins involved in capturing light energy for photosynthesis, along with glycoproteins containing "cellulose binding domains" that suggest a potential role in cell wall structure and adhesion. Additionally, phycolectins, which are lectins that recognize and bind to specific carbohydrates, and mycosporine-like amino acids, which act as natural sunscreens against UV radiation, are also present. These diverse bioactive peptides exert their effects by stimulating the epidermal growth factor (EGF), leading to the enhanced growth, proliferation, and differentiation of intestinal epithelial cells. By modulating these cellular processes, these bioactive peptides contribute to the overall health and well-being of the host [35]. Reilly et al. [36] investigated the effects of marine algae on the intestinal morpho-physiology of pigs, discovering that the consumption of an algae extract lowered the populations of bifidobacteria and enterobacteria in the cecum and colon, increased the proportion of butyric acid, decreased the concentration of ammonia, and increased the expression of interleukin (IL)-8 mRNA.
Compositional changes in the overall gut microbiota are significantly dependent on several markers related to metabolic syndrome, including body weight, glucose-insulin homeostasis, endotoxemia-induced inflammation, and intestinal barrier integrity. According to Cheng et al. [37], although the evidence on the gut microbiota is insufficient, the relationship between the factors involved in its maintenance and mediation in healthy subjects proves to be relevant in human health and disease. Significant metabolic pathologies, such as diabetes, are known to be partially caused by the imbalance of interactions between the host and the gut microbiota. The gut microbiota in individuals with T1DM is characterized by reduced bacterial and functional diversity, as well as low bacterial community stability [38]. A study presented by du Preez et al. [39] showed that 5% S. siliquosum supplementation in male Wistar rats with diet-induced metabolic syndrome decreased body weight and retroperitoneal fat due to an increase in beneficial gut microbiota, which likely complements the prebiotic actions of alginates present in some brown algae. Other Sargassum species show identical responses related to gut microbiota regulation.

Table 2. Durvillaea antarctica as a key player in gut microbiota modulation (study, phytochemical compound tested, rationale, and results-key findings).

He et al. [23]. Combination of deep-sea water (DSW) and/or fucoidan (CDF). Rationale: the combined effect of DSW and fucoidan was investigated in a T2DM rat model induced by a high-fat diet and streptozocin injection; fecal metabolomics and 16S rDNA analysis were used to explore the relationship between these interventions and identify potential metabolic pathways. Key findings: CDF was more effective than DSW or fucoidan alone in improving blood glucose, lipid levels, and histopathological changes in T2DM rats; CDF also enhanced the phosphorylation of Akt and GSK3β, which are important steps in insulin signaling; fecal metabolomics and 16S rDNA analysis showed that CDF altered the composition of the gut microbiota and its metabolic pathways.

Bai et al. [25]. Alginate. Rationale: an alginate-overproducing mutant of P. aeruginosa was obtained through transposon mutagenesis libraries, and the in vitro functions of the human gut microbiota in degrading seaweed and mutant Pseudomonas alginates were comparatively studied. Key findings: both bacterial and seaweed alginates were completely degraded by fecal bacteria isolated from study volunteers; moreover, their regulatory function on the gut microbiota was similar, as they promoted the proliferation of beneficial bifidobacteria while reducing the abundance of pathogenic bacterial strains.

Siddiqui et al. [29]. Crude polysaccharide from the seaweed Dictyopteris divaricata (CDDP). Rationale: the impact of CDDP on gut barrier permeability and gut microbiota dysbiosis in streptozotocin-induced T1DM. Key findings: CDDP improved diabetes symptoms, decreased body weight, improved fasting blood glucose and pancreatic β cells, and increased beneficial gut bacteria such as Lactobacillus and Bacteroidetes.

du Preez et al. [39]. Sargassum siliquosum extract. Rationale: evaluated the impact of S. siliquosum on metabolic syndrome parameters, including heart/liver function, plasma biochemistry, glucose/insulin responses, body composition, and gut microbiota composition. Key findings: S. siliquosum decreased body weight, fat mass, abdominal fat deposition, and liver fat vacuole size, and improved glucose tolerance and insulin sensitivity; it also increased the population of beneficial bacteria in the gut and reduced inflammation.
New evidence concerning the prebiotic capacities of seaweeds relates to the abundance of their polysaccharide components, thanks to insights on saccharolytic fermentation by the gastrointestinal microbiota. The results from original animal studies give encouraging data regarding the use of red seaweed galactans and brown seaweed glycans, such as alginates and laminarins [40]. Thus, Li et al. [41] showed that after treatment with unsaturated alginate oligosaccharides (UAOS), the concentration and variety of the gut microbiota increased, represented by Akkermansia spp., Lactobacillus spp., Bifidobacterium spp., and Saccharomyces spp., with the potential to lower body weight gain and improve glucose and lipid homeostasis. High-fat diet (HFD)-induced obesity markedly altered the percentages of the gut microbe phyla, while UAOS treatment significantly reversed this tendency, which suggests that UAOS can regulate the gut microbiota.
Likewise, Fu et al. [42] studied a new polysaccharide named ST-P2 from Sargassum thunbergii, a brown alga. ST-P2 fermentation significantly modulated the composition and growth of beneficial colonic bacterial populations. The microbiome was examined at the phylum and genus levels; the dominant bacterial communities in the original fecal sample were Firmicutes, Bacteroidetes, Proteobacteria, and Actinobacteria. A significant increase in the beneficial Bacteroidetes group, combined with a significant decline in the potentially harmful Firmicutes group, was noted after ST-P2 supplementation. ST-P2 is thus anticipated to be a functional component for health enhancement through the modulation of gut health. In this way, it is demonstrated that certain DA phytochemicals, such as fucoidan, can potentially regulate the proliferation of beneficial bacteria, such as those belonging to the phylum Bacteroidetes, and, conversely, decrease the presence of potentially pathogenic bacteria of the phylum Firmicutes. This highlights the possible therapeutic potential of DA as a modulator of the gut microbiota; however, clinical studies are needed to corroborate this effect in humans.
Another possible hypothesis that could be put forward is the relationship between DA and the activation of nucleotide oligomerization domain (Nod)-like receptors (NLRs). These are cytosolic receptors that are predominantly located in immune cells, and whose activation is associated with caspases signaling cascades, leading to the release of proinflammatory cytokines such as IL-1β and IL-18 [43,44]. Such cascade is generated once ligands related to intestinal dysbiosis states, such as microbe-associated molecular patterns (MAMPs), bind to NOD1/2 receptors [44,45]. Therefore, by combating intestinal dysbiosis thanks to its prebiotic effects, DA supplementation could reduce the activity of such receptors and, thus, decrease local and systemic inflammation.
Considering the above, the consumption of not only seaweed, but also other types of algae, have a significant impact on the structural and functional composition of the intestinal microbiota. These changes are associated with health and disease processes, in which immunomodulatory and anti-inflammatory mechanisms are mainly involved, as well as an improvement in various cardiometabolic health criteria. Additionally, further investigation of the impact of seaweed on gut microbiota is needed to establish this supplementation as a possible therapeutic tool for several diseases.
Cardioprotective Role of Durvillaea antarctica
Algae, particularly Durvillaea antarctica, are a vegetable food of marine origin with a high content of omega-3 and omega-6 fatty acids [46,47]. Even more importantly, cochayuyo stands out for its content of bioactive compounds, such as vitamins (C and E), carotenoids, and many phenolic compounds, which have shown antioxidant capacity [48][49][50] (Figure 3).

Figure 3. The illustration showcases the diverse health benefits of Durvillaea antarctica. This marine alga, which is rich in omega-3 and omega-6 fatty acids, contains bioactive compounds such as carotenoids and phenolic compounds with potent antioxidant capacity. Notably, fucoxanthin plays a significant role in lipid metabolism, reducing cardiovascular risk by modulating leptin and adiponectin. Algae consumption inhibits enzymes related to hyperglycemia, controls T2DM, counteracts arterial hypertension, promotes weight loss, enhances thermogenesis, and improves the lipid profile. These findings support the potential of Durvillaea antarctica as a valuable dietary component. ROS: reactive oxygen species; T2DM: type 2 diabetes mellitus.

Among the carotenoids present in Durvillaea antarctica, fucoxanthin (FX) stands out; it is involved in lipid metabolism and in modulating the action of leptin and adiponectin, thus reducing lipogenesis and lipolysis and decreasing cardiovascular risk [51,52]. In addition, a study carried out by Lomartire et al. [53] in 2021 showed that fucoxanthin can make up to 30% of the dry weight of the algae. In humans, consuming the microalga Phaeodactylum tricornutum (PT) led to an increase in the uptake of fucoxanthin, which is metabolized into fucoxanthinol (FXOH) and amarouciaxanthin A (AxA). The plasma levels of FX and its metabolites were measured in 22 participants before and after a two-week intervention with PT. It was found that FX was well absorbed, and the metabolites of FX were detected at higher concentrations in plasma than FX [54] (Table 3).
Alternatively, in a cohort study that evaluated the effect of seaweed consumption on cardiovascular risk in Japanese men and women, seaweed intake was inversely associated with the risk of stroke in male subjects only (CI 0.42-0.94, p = 0.01) [55]. Another cohort study, carried out in the Japanese population, evaluated the correlation between seaweed intake and mortality from cardiovascular diseases; the results show that men and women who consumed seaweed daily had lower cardiovascular mortality than those who did not consume seaweed (CI 0.55-0.95, p = 0.72) [56].
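For readers unfamiliar with cohort estimates, the illustrative sketch below shows how such results are read: a hazard ratio below 1 whose 95% CI lies entirely below 1 indicates a significant inverse association. The values are those reported for men by Chichibu et al. [55] (see Table 3 below); the code itself is not from any of the cited studies.

```python
# A minimal, illustrative sketch of hazard-ratio (HR) interpretation:
# an HR below 1 whose 95% CI excludes 1 indicates a significant
# inverse association. HRs and CIs are from Chichibu et al. [55].
results = {
    "total stroke (men)": (0.63, (0.42, 0.94)),
    "cerebral infarction (men)": (0.59, (0.36, 0.97)),
}
for outcome, (hr, (lo, hi)) in results.items():
    inverse_and_significant = hr < 1.0 and hi < 1.0  # CI entirely below 1
    print(f"{outcome}: HR = {hr} (95% CI {lo}-{hi}); "
          f"significant inverse association: {inverse_and_significant}")
```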
D. antarctica as a Promising Therapeutic Dietary Agent for the Management of Metabolic Syndrome
Metabolic syndrome is a cluster of conditions that increase the risk of developing cardiovascular disease, type 2 diabetes, and other health problems [57]. The most important criteria for diagnosing metabolic syndrome include abdominal obesity, high blood pressure, hyperglycemia, and abnormal cholesterol levels. These criteria are used to identify individuals who are at high risk of developing these conditions and who may benefit from early intervention to prevent or delay their onset [58]. Recent studies demonstrated that metabolic syndrome affects over 30% of the adult population globally, with a particularly high prevalence in low-and middle-income countries. The increasing incidence of metabolic syndrome represents a significant public health concern, as it is associated with an elevated risk of cardiovascular disease and other chronic conditions. Given the potential impact of metabolic syndrome on health outcomes, it is critical to promote awareness of this condition and develop effective prevention and management strategies [59].
Metabolic syndrome is a complex disorder that involves several molecular mechanisms, including insulin resistance, chronic inflammation, oxidative stress, and mitochondrial dysfunction. Insulin resistance, a hallmark of metabolic syndrome, is characterized by impaired insulin signaling and reduced glucose uptake by insulin-sensitive tissues, such as muscle and adipose tissue. This results in increased blood glucose levels and compensatory hyperinsulinemia, which can further exacerbate insulin resistance and contribute to the development of metabolic syndrome [60].
Chronic inflammation also plays a critical role in the pathogenesis of metabolic syndrome. Inflammatory cytokines, such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α), are increased in individuals with metabolic syndrome, and they contribute to the development of insulin resistance and other metabolic abnormalities [61]. In addition, oxidative stress, which results from an imbalance between reactive oxygen species (ROS) production and antioxidant defense mechanisms, can also contribute to the development of metabolic syndrome. Increased ROS production can damage cellular components, including proteins, lipids, and DNA, leading to cellular dysfunction and insulin resistance [62].
Mitochondrial dysfunction, characterized by impaired mitochondrial function and biogenesis, was also implicated in the pathogenesis of metabolic syndrome. Reduced mitochondrial function can lead to decreased ATP production, which can impair insulin signaling and glucose uptake, contributing to the development of insulin resistance. Furthermore, dysfunctional mitochondria can produce more ROS, exacerbating oxidative stress and inflammation [63].
Marine algae, especially Durvillaea antarctica, contain several bioactive compounds with great antioxidant capacity, which were shown to inhibit the enzymes α-glucosidase and α-amylase, causing a decrease in postprandial hyperglycemia, delaying starch hydrolysis, and thus helping control T2DM. This was demonstrated by Pacheco et al. [64], who evaluated the antioxidant effect and the enzyme-inhibition capacity of six Chilean algae, finding that Durvillaea antarctica had the highest amount of polyphenols, antioxidant activity, and enzyme inhibition (Table 3).
Arterial hypertension, which is present in metabolic syndrome, could be counteracted by the consumption of algae, specifically Durvillaea antarctica, because it is an important source of prebiotics, specifically alginate, which acts on intestinal bacteria to produce short-chain fatty acids; these compounds have anti-inflammatory and antioxidant activity that could counteract endothelial dysfunction, a cornerstone of arterial hypertension [22]. In addition, the peptides present in the seaweed inhibit the angiotensin-converting enzyme, producing a hypotensive effect [65].
Imbalances in blood lipid levels, specifically cholesterol, are characteristic of dyslipidemia, a condition commonly associated with metabolic syndrome. Cholesterol plays a vital role in maintaining cellular homeostasis by participating in the synthesis of hormones, bile acids, and membrane structures [66]. The regulation of whole-body cholesterol homeostasis involves tightly controlled processes, including de novo biosynthesis, dietary cholesterol absorption, and biliary clearance and excretion [67]. In this context, extracts from algae such as D. antarctica and U. lactuca could play a beneficial role. It was observed that U. lactuca samples are rich in polyunsaturated fatty acids (PUFAs), while D. antarctica samples show the highest content of saturated fatty acids (SFAs) [68]. Furthermore, significant concentrations of fatty acids were reported in macroalgae such as Durvillaea antarctica, with a higher content of monounsaturated fatty acids and PUFAs such as oleic acid (18:1n-9c) and linoleic acid (18:2n-6), particularly in the fronds of D. antarctica [69]. These algae extracts could be a promising alternative for managing dyslipidemia in metabolic syndrome due to their composition, which is rich in fatty acids beneficial for the lipid profile. However, further research is needed to evaluate their effectiveness and safety in clinical practice (Figure 4).
In relation to the pigment fucoxanthin, which has an antioxidant effect, it inhibits the differentiation of 3T3-L1 preadipocytes into adipocytes, decreasing weight gain and blood glucose concentration [70]. In addition, it increases lipolysis and thermogenesis by stimulating uncoupling protein 1 (UCP-1) and the β3-adrenergic receptor in white adipose tissue [71]. In rodents, it was observed that fucoxanthin increases the hepatic synthesis of docosahexaenoic acid, improving the lipid profile [72].
Figure 4. Illustration showcasing the potential health benefits of Durvillaea antarctica supplementation. Brown algae, which is rich in polyunsaturated fatty acids (PUFAs), minerals, polysaccharides, and polyphenols, can enhance the nutritional value of meals and improve lipid profile, heart disease, obesity, and comorbidities associated with metabolic syndrome. Mechanisms include the positive modulation of lipid metabolism enzymes, reduced thrombogenicity, and antioxidant properties that combat oxidative stress and mitochondrial dysfunction. Environmental factors affect the algae's composition and benefits. Incorporating marine algae, specifically Durvillaea antarctica, into the diet regulates hypertension, hyperglycemia, body weight, blood cholesterol, and cardiovascular diseases through the presence of PUFAs, dietary fiber, and antioxidants. T2DM: type 2 diabetes mellitus; PUFAs: polyunsaturated fatty acids.
Another preclinical study showed that three enzymatic hydrolysates from DA biomass, rich in sulfated polysaccharides, had high antioxidant activity and were able to inhibit the activity of certain enzymes involved in human metabolism whose function is exacerbated in the pathophysiology of MS, such as angiotensin I-converting enzyme (ACE), α-amylase, α-glucosidase, and pancreatic lipase [73].

Table 3. Effects of seaweed consumption on cardiometabolic health (study, compound tested, rationale, and key findings).

[54]. Phaeodactylum tricornutum (PT). Rationale: bioavailability and safety of consuming whole PT biomass in humans; intestinal health and microbiota were also assessed. Key findings: PT intake increased n-3 PUFA and EPA levels, decreased the n-6:n-3 ratio, and resulted in the uptake of fucoxanthinol (FXOH) and amarouciaxanthin A (AxA); no adverse effects were observed, supporting PT as a sustainable food source.

Chichibu et al. [55]. Seaweed. Rationale: seaweed intake was assessed through a 24 h dietary recall survey and categorized into four groups (0, 1-5.5, 5.5-15, and ≥15 g/day); the study examined the incidence of cardiovascular disease within the Circulatory Risk in Communities Study (CIRCS). Key findings: seaweed intake was inversely associated with the risk of total stroke and cerebral infarction among men but not among women; the hazard ratios (95% confidence intervals; p) for the highest versus the lowest categories of seaweed intake were 0.63 (0.42-0.94; 0.01) for total stroke and 0.59 (0.36-0.97; 0.03) for cerebral infarction.

Shih et al. [73]. Durvillaea antarctica. Rationale: the potential of enzymatic hydrolysates from D. antarctica as natural antioxidants; three hydrolysates, Dur-A, Dur-B, and Dur-C, were produced using viscozyme, cellulase, and α-amylase enzymes, respectively. Key findings: all three extracts demonstrated inhibitory effects on key enzymes related to metabolic syndrome, namely angiotensin I-converting enzyme (ACE), α-amylase, α-glucosidase, and pancreatic lipase; Dur-B showed superior antioxidant and anti-metabolic syndrome effects compared to the other extracts.
Similarly, it was shown that supplementation with brown algae rich in polyunsaturated fatty acids (PUFAs), minerals, polysaccharides, and polyphenols, very similar to those contained in DA, was associated with greater nutritional value in the meals to which it was added, as well as with improvements in the lipid profile, heart disease, and obesity, all of which are variables or comorbidities associated with MS. The proposed mechanisms contributing to such protective effects are the positive modulation of enzymes involved in lipid metabolism, a lower thrombogenic index, and antioxidant properties able to decrease the oxidative stress and mitochondrial dysfunction of cells crucial to carbohydrate metabolism, such as those of the pancreas and vascular endothelium [74][75][76].
It is important to mention that all of these previously mentioned benefits depend on the type of algae and the environmental conditions in which it is found (temperature, solar radiation, and season, among others) [77]. According to the information compiled in this review, the addition of marine algae to the diet, specifically Durvillaea antarctica, provides benefits in the regulation of hypertension and hyperglycemia and in the reduction in body weight and blood cholesterol, preventing possible cardiovascular diseases through the incorporation of polyunsaturated fatty acids, dietary fiber, and antioxidant compounds present in the algae [78].
It is essential to continue research to clearly identify the effect of Durvillaea antarctica consumption on cardiovascular health: in studies evaluating the effect of brown marine algae, to which Durvillaea antarctica belongs, on metabolic syndrome, positive results were obtained in reducing the different factors involved in the development of MS (arterial hypertension, total cholesterol, and hyperglycemia) [74]. Among the negative effects related to the consumption of seaweed is the level of heavy metals it can contain, including cadmium, lead, silver, and arsenic [79].
Conclusions
Durvillaea antarctica consumption has a positive impact on the immune system by increasing the activation of CD19+ B lymphocytes, promoting macrophage and anti-herpetic activities, and inducing immunomodulatory effects via macrophage-associated pathways. An antioxidant activity was also reported. Regarding cardiovascular risk, the intake of Durvillaea antarctica has been described as having beneficial effects on glycemic control and other metabolic parameters. This seaweed has high contents of n-3 and n-6 essential fatty acids. Among its carotenoids, fucoxanthin stands out, which is involved in lipid metabolism and in modulating the action of leptin and adiponectin, thus reducing lipogenesis and lipolysis and, therefore, decreasing cardiovascular risk.
The effect of D. antarctica consumption in the regular diet on the microbiota is impressive. Specific nutrients such as β-glucans and polysaccharides (particularly fucoidan, laminarin, alginate, ulvan, and porphyran) are unique to seaweeds. Several studies have shown their potential to act as dietary prebiotics and to positively modify and modulate the gut microbiota. At the same time, this prebiotic effect improves the metabolic response, lowering weight gain and serum triglyceride concentration through their different bioactive substances.
Alginate, a polysaccharide extracted from D. antarctica, plays a crucial role in gut health. It promotes the growth of beneficial bifidobacteria while inhibiting pathogenic bacteria, resulting in extended colonic fermentation, a process that leads to a higher production of SCFAs. The sulphated polysaccharides present in seaweeds have anti-obesity, anti-inflammatory, anticancer, lipid-lowering, and hypoglycemic activities due to their activity on the intestinal microbiota, promoting a favorable relationship between Bacteroidetes and Firmicutes. The balance among the different groups of bacteria is crucial to maintaining a healthy microbiota, and maintaining the balance of epithelial cells is important for the function of the intestinal mucosal barrier.
Bioactive peptides derived from marine algae, such as phycobiliproteins, glycoproteins, phycolectins, and mycosporine-like amino acids, exert beneficial effects. Through the stimulation of the epidermal growth factor, they promote the growth, proliferation, and differentiation of intestinal epithelial cells, thereby improving the health of the host. It is important to incorporate Durvillaea antarctica in the usual diet due to its functional bioactive components in the nutritional management of metabolic diseases, especially those associated with metabolic syndrome, as well as to improve the modulation of the immune response, and especially for its recognized benefits associated with potentiating intestinal health, promoting a healthy intestinal function in adults, and improving intestinal response in particular clinical conditions.
The present study provides compelling evidence of the potential benefits of Durvillaea antarctica in improving gut health, managing metabolic syndrome, and positively modulating the immune system. Our findings demonstrate that consuming Durvillaea antarctica can enhance the composition of the gastrointestinal microbiota, leading to better metabolic health outcomes. Moreover, the bioactive compounds present in Durvillaea antarctica possess immunomodulatory properties that may help prevent and treat various immune-related diseases. However, the primary limitation of our study is the lack of well-designed randomized clinical trials to fully evaluate the effects of Durvillaea antarctica on the gut microbiota. To gain a more comprehensive understanding of the potential health benefits of Durvillaea antarctica, future research should focus on conducting well-designed studies in this regard.
In order to fully explore the potential of Durvillaea antarctica extracts, further research should focus on investigating its comprehensive physicochemical properties. This includes identifying and characterizing the various phytochemicals present in the algae, which would allow for a thorough evaluation of its biological properties. It is essential to conduct not only preclinical studies, but also human studies, to assess the efficacy and safety of these extracts. Disseminating current knowledge and findings about Durvillaea antarctica among the scientific community is crucial to inspire and promote research in this field. By doing so, we can potentially uncover a valuable therapeutic tool against intestinal, immune, and, notably, cardiometabolic diseases. Future investigations should aim to provide a deeper understanding of the therapeutic potential of Durvillaea antarctica and its applications in various health conditions. | 2023-07-12T07:00:02.604Z | 2023-06-28T00:00:00.000 | {
"year": 2023,
"sha1": "b3fa4364bcf8b647cefcd2e7fb2d6fa817b3b8b1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/13/10779/pdf?version=1687944001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "861a750c8a329d6f6d8804a75f539344e4682a90",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257561738 | pes2o/s2orc | v3-fos-license | Simple realization of a hybrid controlled-controlled-Z gate with photonic control qubits encoded via eigenstates of the photon-number parity operator
We propose a simple method to realize a hybrid controlled-controlled-Z (CCZ) gate with two photonic qubits simultaneously controlling a superconducting (SC) target qubit, by employing two microwave cavities coupled to a SC ququart (a four-level quantum system). In this proposal, each control qubit is a photonic qubit, which is encoded by two arbitrary orthogonal eigenstates (with eigenvalues 1 and -1, respectively) of the photon-number parity operator. Since the two arbitrary encoding states can take various quantum states, this proposal can be applied to realize the hybrid CCZ gate, for which the two control photonic qubits can have various encodings. The gate realization is quite simple because only a basic operation is needed. During the gate operation, the higher energy intermediate levels of the ququart are not occupied, and, thus, decoherence from these levels is greatly suppressed. We further discuss how to apply this gate to generate a hybrid Greenberger-Horne-Zeilinger (GHZ) entangled state of a SC qubit and two photonic qubits, which takes a general form. As an example, our numerical simulation demonstrates that high-fidelity generation of a cat-cat-spin hybrid GHZ state is feasible within current circuit QED technology. This proposal is quite general, which can be applied to realize the hybrid CCZ gate as well as to prepare various hybrid GHZ states of a matter qubit and two photonic qubits in other physical systems, such as two microwave or optical cavities coupled to a four-level natural or artificial atom.
Multiqubit gates play important roles in quantum computing. In particular, a controlled-controlled-Z (CCZ) gate, with two qubits simultaneously controlling a target qubit, is of significance and has practical applications in quantum computing, such as quantum circuit construction, error correction, and quantum algorithms. [1][2][3][4][5] Experimentally, a non-hybrid CCZ gate with three matter qubits has been demonstrated in various physical systems. [6][7][8][9] On the other hand, hybrid gates acting on "different types of qubits" have attracted increasing attention due to their important applications in hybrid quantum computing. Here, "different types of qubits" means qubits that differ in nature (e.g., photonic qubits versus matter qubits) or in encoding (e.g., encoding through discrete variables versus continuous variables). The focus of this work is a hybrid CCZ gate with two photonic qubits simultaneously controlling a target superconducting (SC) qubit. Since a photonic qubit differs from a SC qubit in nature, the CCZ gate considered in this work is a hybrid CCZ gate. Obviously, a hybrid CCZ gate differs from a regular non-hybrid CCZ gate with three identical qubits.
Over the past years, proposals have been put forward for implementing a hybrid two-qubit controlled-phase or controlled-NOT gate with (i) a matter qubit (e.g., a SC qubit, nitrogen-vacancy (NV)-center qubit, atomic qubit, quantum-dot qubit, or electron-spin qubit) and a photonic qubit [10][11][12][13][14][15][16][17] or (ii) two photonic qubits with different encodings. [18][19][20] In addition, proposals have been presented for implementing a hybrid Toffoli gate or a hybrid CCZ gate with (i) two NV-center qubits and a photonic qubit 11 or (ii) two quantum-dot qubits and a photonic qubit. 17 Moreover, a hybrid Fredkin gate with two quantum-dot qubits and a photonic qubit has been presented. 17 We note that, in the previous proposals, 10-20 photonic qubits are encoded via cat states, coherent states, polarization states, spatial-mode degrees of freedom, or the vacuum and single-photon states. After a thorough search of the literature, we find that a hybrid CCZ gate with two photonic qubits (encoded by two arbitrary orthogonal eigenstates of the photon-number parity operator) simultaneously controlling a target matter qubit (e.g., a SC qubit or another matter qubit) has not been reported to date.
In this work, we will present a simple method to directly realize a hybrid CCZ gate with two photonic qubits simultaneously controlling a SC target qubit, by using two microwave cavities coupled to a SC flux ququart (a four-level artificial atom) [Fig. 1(a)]. Each cavity can be a one-dimensional (1D) or three-dimensional (3D) cavity. For a 3D cavity, the ququart is inductively coupled to the cavity, which can be implemented by inserting part of the superconducting loop of the SC ququart into the cavity. 21 For a 1D cavity, the ququart is capacitively coupled to the cavity, which can be realized by using a capacitor to connect the ququart and the cavity. Note that for either of these two couplings, the key Hamiltonian of this work, as described by Eq. (3), can be obtained.
This proposal is based on circuit QED, which has been considered as one of the best platforms for quantum computing. [22][23][24][25][26][27] In this proposal, the two logic states of each photonic qubit are encoded by two arbitrary orthogonal eigenstates |u_e⟩ and |u_o⟩ of the photon-number parity operator p̂ = e^{iπâ†â} of a cavity. Here, â (â†) is the photon annihilation (creation) operator. In addition, for the SC target qubit, the two logic states are encoded by the two lowest levels |g'⟩ and |g⟩ of a SC ququart [Fig. 1(b)]. As shown below, two hybrid two-body interactions are simultaneously turned on to produce the three-body interaction that ends up producing the desired hybrid CCZ. We should mention that this idea has also been applied in superconducting systems to realize non-hybrid three-qubit gates with SC qubits. [28][29][30] The two arbitrary encoding states |u_e⟩ and |u_o⟩ can be expressed as |u_e⟩ = Σ_m C_{2m}|2m⟩ and |u_o⟩ = Σ_n C_{2n+1}|2n+1⟩, where m and n are non-negative integers and the coefficients C_{2m} and C_{2n+1} satisfy the normalization conditions. Obviously, the two states |u_e⟩ and |u_o⟩ are orthogonal to each other. One can easily check that p̂|u_e⟩ = |u_e⟩ and p̂|u_o⟩ = −|u_o⟩; namely, the two encoding states |u_e⟩ and |u_o⟩ are eigenstates of the photon-number parity operator p̂ with eigenvalues 1 and −1, respectively.
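To make the parity encoding concrete, the following minimal Python sketch builds the parity operator in a truncated Fock basis and checks the two eigenvalue relations just stated; the truncation dimension and the example coefficients are illustrative choices, not values from the paper.

```python
import numpy as np

# Parity operator p = exp(i*pi*a^dag a) is diagonal in the Fock basis,
# with eigenvalue (-1)^n on the n-photon state.
N = 12
n = np.arange(N)
parity = np.diag((-1.0) ** n)

def normalized(v):
    return v / np.linalg.norm(v)

u_e = np.zeros(N); u_e[[0, 2, 4]] = [0.5, 0.7, 0.3]   # even-photon-number support
u_o = np.zeros(N); u_o[[1, 3, 5]] = [0.6, 0.2, 0.4]   # odd-photon-number support
u_e, u_o = normalized(u_e), normalized(u_o)

assert np.allclose(parity @ u_e,  u_e)   # p|u_e> = +|u_e>
assert np.allclose(parity @ u_o, -u_o)   # p|u_o> = -|u_o>
assert abs(np.dot(u_e, u_o)) < 1e-12     # orthogonality of the two encodings
print("parity eigenvalue checks passed")
```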
For the hybrid CCZ gate here, there are a total of eight computational basis states, denoted |u_e⟩|u_e⟩|g'⟩, |u_e⟩|u_e⟩|g⟩, |u_e⟩|u_o⟩|g'⟩, |u_e⟩|u_o⟩|g⟩, |u_o⟩|u_e⟩|g'⟩, |u_o⟩|u_e⟩|g⟩, |u_o⟩|u_o⟩|g'⟩, and |u_o⟩|u_o⟩|g⟩. The hybrid CCZ gate is described by the state transformations |u_o⟩|u_o⟩|g⟩ → −|u_o⟩|u_o⟩|g⟩ and |l₁l₂l₃⟩ → |l₁l₂l₃⟩, where |l₁l₂l₃⟩ = |u_e⟩|u_e⟩|g'⟩, |u_e⟩|u_e⟩|g⟩, |u_e⟩|u_o⟩|g'⟩, |u_e⟩|u_o⟩|g⟩, |u_o⟩|u_e⟩|g'⟩, |u_o⟩|u_e⟩|g⟩, or |u_o⟩|u_o⟩|g'⟩. That is, when both control photonic qubits are in the state |u_o⟩, a phase flip happens to the state |g⟩ of the SC target qubit; when even one of the photonic qubits is not in the state |u_o⟩, nothing happens to the states |g'⟩ and |g⟩ of the SC target qubit. Both photonic qubits and SC qubits have been considered as promising qubits and have been widely used in quantum computing. The hybrid CCZ gate considered here is important in hybrid quantum computing, such as hybrid error correction and hybrid quantum algorithms involving photonic qubits and SC qubits. Such a hybrid gate is significant for implementing large-scale hybrid quantum computing performed in a compound information processor that consists of SC-qubit-based quantum processors and photonic-qubit-based quantum processors. Moreover, this hybrid gate is very important in the context of quantum state transfer between a SC-qubit-based quantum processor and a photonic-qubit-based quantum memory. The architecture consisting of a SC processor and a quantum memory has been shown to be of significant interest. [31][32][33] We should mention that the hybrid CCZ gate can, in principle, be constructed using only basic two-qubit gates and single-qubit gates. However, when using the gate-decomposing protocols, five two-qubit gates 34 or three two-qubit gates plus two single-qubit gates 35 are required to construct a CCZ gate. Therefore, building the hybrid CCZ gate in this way may become complex, since each elementary gate requires turning a given Hamiltonian on and off for a certain period of time, and each additional basic gate adds experimental complications and the possibility of more errors. By contrast, with the present proposal, implementing the hybrid CCZ gate is greatly simplified because it requires only a basic operation.
Our proposal also provides a simple way to realize a hybrid controlled-controlled-NOT gate (Toffoli gate) with the proposed two photonic qubits simultaneously controlling a SC target qubit, since a Toffoli gate can be constructed from a CCZ gate plus two single-qubit Hadamard gates performed on the target qubit before and after the CCZ gate, respectively. 1 The four levels of the SC ququart are labeled |g'⟩, |g⟩, |e⟩, and |f⟩ [Fig. 1(b)]. The |g'⟩ ↔ |g⟩ transition can be made weak by increasing the barrier between the two potential wells. Such a four-level SC ququart can be experimentally engineered by using a three-junction flux system with the Hamiltonian described in Ref. 36. The energies and detunings of the four levels can be accurately controlled in experiment. 37,38 The ququart is initially decoupled from the two cavities. Now adjust the level spacings of the ququart or the frequency of each cavity, such that cavity 1 is dispersively coupled to the |g⟩ ↔ |f⟩ transition with coupling constant g₁ and detuning δ₁, and cavity 2 is dispersively coupled to the |e⟩ ↔ |f⟩ transition with coupling constant g₂ and detuning δ₂ [Fig. 1(b)]. Note that for a SC quantum device, the level spacings can be rapidly (1–3 ns) adjusted by changing external control parameters. 32 In addition, the frequency of a microwave cavity or resonator can be quickly tuned within a few nanoseconds. 33 In the interaction picture and after making the rotating-wave approximation (RWA), the Hamiltonian of the whole system can be written as (hereafter assuming ℏ = 1) H = g₁e^{−iδ₁t}â₁†σ⁻_fg + g₂e^{−iδ₂t}â₂†σ⁻_fe + H.c., where â₁ (â₂) is the photon annihilation operator of cavity 1 (2), σ⁻_fg = |g⟩⟨f|, and σ⁻_fe = |e⟩⟨f| [Fig. 1(b)]. Here, ω_fg (ω_fe) is the |f⟩ ↔ |g⟩ (|f⟩ ↔ |e⟩) transition frequency of the ququart, while ω_c1 (ω_c2) is the frequency of cavity 1 (2).
For the large-detuning conditions δ₁ ≫ g₁ and δ₂ ≫ g₂, the Hamiltonian (3) becomes the effective Hamiltonian (4) [39][40][41] (see the supplementary material), where σ⁻_eg = |g⟩⟨e| and χ = λ₂/Δ. When the levels |e⟩ and |f⟩ are initially not occupied, these levels remain unpopulated because the Hamiltonian (5) does not induce the |g⟩ → |e⟩ transition or the |g⟩ → |f⟩ transition. In this case, the effective Hamiltonian (5) reduces to H_e = η n̂₁|g⟩⟨g| + χ n̂₁n̂₂|g⟩⟨g|, where η = −λ₁ + χ. Under the Hamiltonian (6), the unitary operator U = e^{−iH_e t}, describing the state time evolution of the system, can be expressed as in Eq. (7). According to the encoding in Eq. (1), the unitary operation U results in the state transformation (8), where m and n are associated with the first photonic qubit, while m' and n' are associated with the second photonic qubit, and A_{mm'} = exp(−i2mηt)exp(−i2m·2m'χt), A_{mn'} = exp(−i2mηt)exp(−i2m(2n'+1)χt), A_{nm'} = exp(−i(2n+1)ηt)exp(−i(2n+1)2m'χt), and A_{nn'} = exp(−i(2n+1)ηt)exp(−i(2n+1)(2n'+1)χt). For χt = π and ηt = 2sπ (s an integer), one has A_{mm'} = A_{mn'} = A_{nm'} = 1 and A_{nn'} = −1. Thus, the state transformation (8) becomes Eq. (9), which indicates that when the two control photonic qubits are in the state |u_o⟩, a phase flip (from sign + to −) happens to the state |g⟩ of the SC target qubit.
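The phase bookkeeping behind the gate condition can be checked numerically. The sketch below evaluates the four phase factors of Eq. (8) at the gate point χt = π, ηt = 2sπ for a few photon-number pairs; the symbols η and χ follow the reconstruction used above, and the loop values are arbitrary.

```python
import cmath, math

def A(p, q, eta_t, chi_t):
    """Phase factor exp(-i*p*eta*t) * exp(-i*p*q*chi*t) for photon numbers p, q."""
    return cmath.exp(-1j * p * eta_t) * cmath.exp(-1j * p * q * chi_t)

s = 1
eta_t, chi_t = 2 * s * math.pi, math.pi      # gate conditions: eta*t = 2s*pi, chi*t = pi
for m, mp, n, np_ in [(0, 0, 0, 0), (1, 2, 0, 1), (3, 1, 2, 2)]:
    even1, even2 = 2 * m, 2 * mp             # even photon numbers (|u_e> support)
    odd1, odd2 = 2 * n + 1, 2 * np_ + 1      # odd photon numbers (|u_o> support)
    assert abs(A(even1, even2, eta_t, chi_t) - 1) < 1e-9   # A_mm' = 1
    assert abs(A(even1, odd2, eta_t, chi_t) - 1) < 1e-9    # A_mn' = 1
    assert abs(A(odd1, even2, eta_t, chi_t) - 1) < 1e-9    # A_nm' = 1
    assert abs(A(odd1, odd2, eta_t, chi_t) + 1) < 1e-9     # A_nn' = -1 (the CCZ sign)
print("CCZ phase conditions verified")
```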
On the other hand, since the level |g'⟩ is not involved in the unitary operator U, the four basis states |u_e⟩|u_e⟩|g'⟩, |u_e⟩|u_o⟩|g'⟩, |u_o⟩|u_e⟩|g'⟩, and |u_o⟩|u_o⟩|g'⟩ remain unchanged. Hence, it can be concluded from Eq. (9) that, after the above operation, the hybrid CCZ gate (2) is realized. After the gate operation, one needs to adjust the level spacings of the ququart or the frequency of each cavity, such that the ququart is decoupled from the two cavities.
From the description presented above, one can clearly see that (i) the hybrid CCZ gate is implemented through a single basic operation described by the unitary operator U; (ii) the higher-energy intermediate levels |e⟩ and |f⟩ of the ququart are not occupied during the operation; and (iii) we have set χt = π, ηt = 2sπ, and η = −λ₁ + χ, resulting in −λ₁t + π = 2sπ, a condition (10) that can be readily achieved by adjusting δ₂, given δ₁.
Because δ₂ = ω_fe − ω_c2, the detuning δ₂ can be adjusted by varying the frequency ω_c2 of cavity 2. Note that the two arbitrary encoding states |u_e⟩ and |u_o⟩ can be various specific quantum states. For instance, they can be (i) |u_e⟩ = |0⟩ (vacuum state) and |u_o⟩ = |1⟩ (single-photon state); (ii) |u_e⟩ = |cat_e⟩ and |u_o⟩ = |cat_o⟩, where |cat_e⟩ = N(|α⟩ + |−α⟩) and |cat_o⟩ = N'(|α⟩ − |−α⟩), with normalization factors N and N'; or (iii) |u_e⟩ = |2m⟩ (Fock state with an even photon number) and |u_o⟩ = |2n+1⟩ (Fock state with an odd photon number), and so on. Therefore, this proposal can be used to realize the hybrid CCZ gate (2), for which the two control photonic qubits can have various encodings.
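As a concrete instance of encoding (ii), the following sketch constructs even and odd cat states in a truncated Fock basis and verifies that their parity expectation values are +1 and −1; the amplitude α and the truncation are illustrative.

```python
import numpy as np
from math import factorial, sqrt

N_trunc, alpha = 30, 1.5   # illustrative truncation and cat amplitude

def coherent(a):
    """Coherent state |a> in a truncated Fock basis."""
    v = np.array([a**k / sqrt(factorial(k)) for k in range(N_trunc)], dtype=complex)
    return np.exp(-abs(a)**2 / 2) * v

plus, minus = coherent(alpha), coherent(-alpha)
cat_e = plus + minus; cat_e /= np.linalg.norm(cat_e)   # even cat (only even Fock terms)
cat_o = plus - minus; cat_o /= np.linalg.norm(cat_o)   # odd cat (only odd Fock terms)

parity = (-1.0) ** np.arange(N_trunc)
print(np.real(np.sum(parity * np.abs(cat_e)**2)))   # -> +1
print(np.real(np.sum(parity * np.abs(cat_o)**2)))   # -> -1
```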
Hybrid entangled states can serve as quantum channels and intermediate resources for various quantum tasks, covering the transmission, operation, and storage of quantum information between different formats and encodings. 42,43 As an application, we will show how to apply the hybrid CCZ gate to create a hybrid Greenberger-Horne-Zeilinger (GHZ) entangled state of a SC qubit and two photonic qubits, which takes a general form.
Let us go back to the physical system illustrated in Fig. 1(a). Assume that the coupler SC ququart is initially in the state (|g'⟩ + |g⟩)/√2, and the two cavities are initially prepared in the entangled state (|u_e⟩|u_e⟩ + |u_o⟩|u_o⟩)/√2. Note that the procedure or method used for the initial state preparation of the two cavities strongly depends on what the two encoding states |u_e⟩ and |u_o⟩ are. To generate the entangled state of the two cavities, a coupling between the two cavities is required. In principle, such a coupling does not affect the gate performance because it can be fully turned on or off before the gate operation. The cavity-cavity coupling can be indirectly turned on through the direct coupling of the SC ququart with each cavity. 44,45 It can also be turned off by decoupling the SC ququart from each cavity. As mentioned previously, the coupling or decoupling of the SC ququart with each cavity can be achieved by adjusting the level spacings of the SC ququart 32 or the frequency of each cavity. 33 The initial state of the whole system is then given by Eq. (11), where |+⟩ = (|g'⟩ + |g⟩)/√2. By applying the hybrid CCZ gate (2), the state (11) becomes the state (12), where |−⟩ = (|g'⟩ − |g⟩)/√2. Here, the two logic states of the SC qubit are encoded via the two rotated spin states |+⟩ and |−⟩ of the SC ququart. The state (12) is a hybrid GHZ entangled state of a SC qubit and two photonic qubits, which takes a general form. From Eq. (12), one can see that, depending on the specific encodings of |u_e⟩ and |u_o⟩, various hybrid GHZ states can be prepared.
As an example, we now investigate the experimental feasibility of creating a cat-cat-spin hybrid GHZ state (13). Based on Eq. (12), this hybrid GHZ state is obtained for the photonic-qubit encoding |u_e⟩ = |cat_e⟩ and |u_o⟩ = |cat_o⟩. The physical system used for the GHZ state generation consists of two 1D microwave cavities coupled to a SC flux ququart (Fig. 2). According to Eq. (11), the required initial state of the system is (|cat_e⟩|cat_e⟩ + |cat_o⟩|cat_o⟩)|+⟩/√2. This initial state is available in experiments because the entangled cat state (|cat_e⟩|cat_e⟩ + |cat_o⟩|cat_o⟩)/√2 of two microwave cavities was generated in circuit QED experiments, 46 and the state |+⟩ can easily be prepared by applying a classical pulse resonant with the |g'⟩ ↔ |g⟩ transition to the SC ququart in the ground state |g'⟩.
In reality, there exist unwanted couplings of each cavity with the SC ququart as well as unwanted inter-cavity crosstalk. When they are considered, the Hamiltonian (3) is modified as H' = g₁e^{−iδ₁t}â₁†σ⁻_fg + g'₁e^{−iδ'₁t}â₁†σ⁻_fe + g''₁e^{−iδ''₁t}â₁†σ⁻_eg + g'''₁e^{−iδ'''₁t}â₁†σ⁻_eg' + g₂e^{−iδ₂t}â₂†σ⁻_fe + g'₂e^{−iδ'₂t}â₂†σ⁻_fg + g''₂e^{−iδ''₂t}â₂†σ⁻_eg + g'''₂e^{−iδ'''₂t}â₂†σ⁻_eg' + g₁₂e^{iΔ₁₂t}â₁†â₂ + H.c. (15), where σ⁻_eg = |g⟩⟨e| and σ⁻_eg' = |g'⟩⟨e|; g'ᵢ, g''ᵢ, and g'''ᵢ are the coupling constants of cavity i with the corresponding inter-level transitions of the SC ququart (i = 1, 2) (Fig. 3); the detunings δ'ᵢ, δ''ᵢ, and δ'''ᵢ are defined accordingly (Fig. 3); g₁₂ is the two-cavity crosstalk strength, and Δ₁₂ = ω_c1 − ω_c2 is the frequency detuning of the two cavities. Here, ω_eg (ω_eg') is the |g⟩ ↔ |e⟩ (|g'⟩ ↔ |e⟩) transition frequency of the SC ququart. Note that the coupling of each cavity with the |g'⟩ ↔ |g⟩ and |g'⟩ ↔ |f⟩ transitions can be neglected because of the weak |g'⟩ ↔ |g⟩ transition, or by adjusting the level spacings of the SC ququart such that each cavity is highly detuned from (decoupled from) these transitions.
The dynamics of the lossy system is governed by the master equation (16), which includes pure-dephasing terms Σ_{s=f,e,g} γ_{s,φ}(σ_ss ρ σ_ss − σ_ss ρ/2 − ρ σ_ss/2), where σ_ss = |s⟩⟨s| (s = f, e, g), and Lindblad dissipators L[K] = KρK† − K†Kρ/2 − ρK†K/2 for K = â₁, â₂, σ⁻_fe, σ⁻_fg, σ⁻_fg', σ⁻_eg, σ⁻_eg', σ⁻_gg'. Here, γ_fe, γ_fg, and γ_fg' are the relaxation rates of the level |f⟩ for the decay paths |f⟩ → |e⟩, |f⟩ → |g⟩, and |f⟩ → |g'⟩, respectively; γ_eg (γ_eg') is the energy relaxation rate of the level |e⟩ for the decay path |e⟩ → |g⟩ (|e⟩ → |g'⟩); γ_gg' is the energy relaxation rate of the level |g⟩; γ_{f,φ}, γ_{e,φ}, and γ_{g,φ} are the dephasing rates of the levels |f⟩, |e⟩, and |g⟩, respectively; and κ₁ (κ₂) is the decay rate of cavity 1 (2).
FIG. 3. Illustration of the dispersive coupling between cavity 1 and the |g⟩ ↔ |f⟩ transition (coupling constant g₁, detuning δ₁), and of the unwanted couplings of cavity 1 to the |e⟩ ↔ |f⟩, |g⟩ ↔ |e⟩, and |g'⟩ ↔ |e⟩ transitions (coupling constants g'₁, g''₁, g'''₁ and detunings δ'₁, δ''₁, δ'''₁, respectively); and of the dispersive coupling between cavity 2 and the |e⟩ ↔ |f⟩ transition (coupling constant g₂, detuning δ₂), and of the unwanted couplings of cavity 2 to the |g⟩ ↔ |f⟩, |g⟩ ↔ |e⟩, and |g'⟩ ↔ |e⟩ transitions (coupling constants g'₂, g''₂, g'''₂ and detunings δ'₂, δ''₂, δ'''₂, respectively). Red lines correspond to cavity 1; green lines correspond to cavity 2. | 2023-03-17T15:23:57.891Z | 2023-03-13T00:00:00.000 | {
"year": 2023,
"sha1": "9a15afae4bdbe05f88758c7b08b8a88996fd2a27",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "707045f422f9d51b4a0ec009f7e7ec0e00e2fc8d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
213280902 | pes2o/s2orc | v3-fos-license | HEAT TRANSFER AND HYDRODYNAMIC CHARACTERISTICS AT EVAPORATION OF LIQUID FILM IRRIGATING A HORIZONTAL BUNDLE OF FINNED TUBES
The paper presents experimental results on heat transfer during evaporation of an R21 freon film that irrigates a bundle of horizontal finned tubes. It is shown that heat transfer on the finned tubes is noticeably more intense than on plain tubes, even when the heat flux is referenced to the full surface area of the finned tube.
Introduction
The finning of tubes is a well-proven method of heat transfer enhancement at evaporation and boiling. Unfortunately, experimental data on heat transfer at evaporation and boiling of a film irrigating a horizontal bundle of finned tubes are scarce [1,2]. Below, we present experimental heat transfer results for the evaporation of freon R21 irrigating a bundle of horizontal finned tubes. The experiments were carried out on an installation with forced circulation of the working substance.
Measurement technique
The temperature of the heating water in the experimental sections is measured by semiconductor temperature sensors and additionally monitored by multi-junction differential thermocouples. The sensors are calibrated to an accuracy of at least 0.02 °C.
The wall temperature is calculated from the heat transfer coefficient.
The heat flux density on the experimental sections is calculated from the measured heating-water flow rate and temperature drop. The heating-water flow rate is determined from the readings of rotameters as well as flowmeters with accuracy class 0.5. The rotameters installed in every section are calibrated individually.
The experiments are carried out with the bundle irrigated by freon R21 at t_s = 40 °C. The Reynolds number of the film irrigating the tube bundle is varied within the range 380 ≤ Re ≤ 1500. The bundle consists of 12 tubes arranged in a vertical row, with the experimental tubes placed in the bottom part of the bundle. The second tube from the bottom is a plain copper (M-1) tube with a technical surface finish R_z 2.5 μm, wall thickness δ_w = 2 mm, and D = 10 mm. The third tube from the bottom is a finned copper (M-1) tube with a helical-fin pitch of 1.07 mm and a finning factor of 1.46. The dimensions of the finned tube are given in Fig. 1. According to the investigation results published in [3], the distance between ribs is sufficient to hold the minimal liquid amount within a groove between ribs.
Measurement results
Figure 2 presents the experimental heat transfer results for both the plain and the finned tube in q–ΔT coordinates. It is seen that boiling on the plain copper tube takes place at ΔT = 4.5–5 °C (q ≈ 10⁴ W/m²). The heat flux density quoted for the finned tube is conditional here because it is calculated relative to the surface area of a plain tube with an outer diameter of 10 mm. The experimental results for the finned tube turned out to be non-traditional. Usually, boiling on finned tubes takes place at considerably lower temperature drops than on plain tubes; in these experiments, there is no boiling within the whole investigated range of heat fluxes. This is especially clear in Fig. 3, which shows that the heat transfer coefficient α ≈ const within the whole range of parameters. In Fig. 3 the heat flux density is calculated relative to the whole surface area of the finned tube. It follows from Fig. 3 that evaporation heat transfer is considerably higher on the finned tube than on the plain tube, even when the heat flux density is calculated relative to the full surface of the finned tube. In Fig. 4 a physical model of film flow on a finned tube is presented. It is supposed that the surface tension force considerably exceeds the gravitational force for film flow through the area between fins. Both visual observations and analysis of video recordings of the hydrodynamics of the film irrigating the finned tube bundle justify this supposition. Micro-bubbles released during ethanol irrigation of finned tubes show the liquid motion trajectory along the lateral rib surface quite distinctly. A diagram of this movement is shown in Fig. 4.
Our visual observations, as well as the video recordings, showed that under the action of surface tension the liquid flows from the rib apex over the lateral rib surface to the groove bottom along the trajectory shown in Fig. 4. The thin film on the lateral area of the rib wets its surface and evaporates intensively.
It means that when a liquid film irrigates a finned tube, the rib height becomes the characteristic linear dimension. Besides, the copper finned tube can be considered an isothermal surface because the rib efficiency for this tube, determined according to dependency (4), is close to 1. Here l = h/δ, δ is the rib thickness near its base, and h is the height of the rib. The bottom part of the rib within the groove between ribs is irrigated by the main liquid flow, which runs along the groove between ribs and is turbulent.
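Since dependency (4) is not reproduced here, the following sketch estimates the rib efficiency with the standard straight-fin formula η = tanh(mh)/(mh), m = √(2α/(λδ)); the geometry and the film heat transfer coefficient are assumed values, chosen only to illustrate why the efficiency of a thin copper rib comes out close to 1.

```python
from math import sqrt, tanh

# Assumed illustrative values (not the paper's measured data):
lam = 390.0       # thermal conductivity of copper, W/(m*K)
delta = 0.3e-3    # rib thickness near the base, m
h = 1.5e-3        # rib height, m
alpha = 5.0e3     # film heat transfer coefficient, W/(m^2*K)

m = sqrt(2.0 * alpha / (lam * delta))   # fin parameter
eta = tanh(m * h) / (m * h)             # straight-fin efficiency
print(f"rib efficiency eta = {eta:.3f}")  # ~0.94, i.e. close to 1
```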
The boundary between the laminar and turbulent flow regions cannot be determined precisely; it can only be estimated to a first approximation.
The heat transfer coefficient at film evaporation from the different rib surfaces can be calculated only approximately, under the following assumptions: a) the liquid flows over the lateral rib surface along the trajectory shown in Fig. 4; b) heat transfer both on the edge part of a rib and within the groove between ribs takes place as on a smooth tube [5]; c) the mass flow rate irrigating the rib edge is determined from the balance dependencies; d) the rib lies within the initial section of the thermal boundary layer, with the rib height as the characteristic linear dimension; e) a copper finned tube can be considered an isothermal surface, so the rib efficiency is close to 1; f) the film thickness in the groove is calculated from the Dukler–Bergelin dependence [6], with the critical Reynolds number determined according to the Brauer dependence [7]. The film thickness calculated in this way is close to the film thickness determined according to the dependency from [8] and to the experimental results presented in [9].
Dependence (4) is an empirical one; it is written by analogy with the analytical formula from [1] obtained for heat transfer on the initial section of a vertical wall. Calculating the film thickness in the groove according to (5) and considering that the groove has a trapezoidal shape, we can approximately determine the areas of the lateral rib part streamlined by the turbulent and laminar liquid flows. Using experimental data on ΔT = T_w − T_s, we can calculate the specific heat fluxes on each part of a finned tube. The total heat removal then consists of the following components: Q = n(Q_edge + Q_groove + 2Q_lam + 2Q_turb), where n is the number of ribs on the experimental tube surface, and Q_edge, Q_groove, 2Q_lam, and 2Q_turb are the heat removals from the rib edge, from the horizontal groove part between ribs, and from the lateral rib parts streamlined by the laminar and turbulent flows, respectively.
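A sketch of this heat-removal balance is given below; the grouping follows the component decomposition just stated (our reading of the original component list), and all numerical values are hypothetical placeholders rather than measured data.

```python
# Total heat removal assembled per rib from the rib edge, the groove bottom,
# and the two lateral rib parts wetted by laminar and turbulent film flow.
def total_heat_removal(n_ribs, q_edge, q_groove, q_lam, q_turb):
    """Q = n * (Q_edge + Q_groove + 2*Q_lam + 2*Q_turb)."""
    return n_ribs * (q_edge + q_groove + 2.0 * q_lam + 2.0 * q_turb)

# Hypothetical per-rib component values in W (illustrative only):
Q_calc = total_heat_removal(n_ribs=140, q_edge=0.02, q_groove=0.05,
                            q_lam=0.03, q_turb=0.04)
Q_exp = 29.0   # hypothetical measured heat removal, W
print(f"Q_calc = {Q_calc:.1f} W, deviation = {abs(Q_calc - Q_exp) / Q_exp:.1%}")
```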
The heat removal values calculated in this way agree with the experimental values to within 15% over the whole parameter range of the experimental investigation.
Conclusion
Heat transfer at evaporation on finned tubes is so intense that it excludes the conditions necessary for vapor bubble nucleation. It should be noted that at the maximum heat flux value on the finned tube, ΔT_w was always < 5 °C, i.e., below the temperature head of boiling incipience on the smooth tube.
The presented calculation algorithm describes the experimental results satisfactorily. | 2019-11-28T12:48:24.731Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "2df054425fe80c5b88aa1741517aaeecdbd0509b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1369/1/012062",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9021a47b7f3c266e92be14f6b9473d705000a589",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
257725358 | pes2o/s2orc | v3-fos-license | Rationale and Design of the Effect of Ivabradine on Exercise Tolerance in Patients With Chronic Heart Failure (EXCILE-HF) Trial ― Protocol for a Multicenter Randomized Controlled Trial ―
Background: A high resting heart rate is an independent risk factor for mortality and morbidity in patients with cardiovascular diseases. Ivabradine selectively inhibits the funny current (If) and decreases heart rate without affecting cardiac conduction, contractility, or blood pressure. The effect of ivabradine on exercise tolerance in patients with heart failure with reduced ejection fraction (HFrEF) on standard drug therapies remains unclear. Methods and Results: This multicenter interventional trial of patients with HFrEF and a resting heart rate ≥75 beats/min in sinus rhythm treated with standard drug therapies will consist of 2 periods: a 12-week open-label, randomized, parallel-group intervention period (standard drug treatment+ivabradine group and standard drug treatment group) to compare changes in exercise tolerance between the 2 groups; and a 12-week open-label ivabradine treatment period for all patients to evaluate the effect of adding ivabradine on exercise tolerance. The primary endpoint will be the change in peak oxygen uptake (V̇O2) during the cardiopulmonary exercise test from Week 0 (baseline) to Week 12. Secondary endpoints will be time-dependent changes in peak V̇O2 from Week 0 to Weeks 12 and 24. Adverse events will also be evaluated. Conclusions: The EXCILE-HF trial will provide meaningful information regarding the effects of ivabradine on exercise tolerance in patients with HFrEF receiving standard drug therapies and suggestions for the initiation of ivabradine treatment.
(Source: The Japanese Circulation Society, Guidelines for Rehabilitation in Patients with Cardiovascular Disease (2012 revised version).)
4) Subjects will perform a cool-down exercise for at least 1 minute. CPX will be evaluated by independent assessors who are blinded to each treatment.
The peak VO2 will be determined at the highest exercise work rate (WR), and the anaerobic threshold (AT) will be measured using the V-slope method. [1] The respiratory compensation point (RCP) is defined as the point at which VE/VCO2 increases and PETCO2 decreases. [2,3] Before the RCP, the relationship between VE and VCO2 is linear (VE = a·VCO2 + b), where "a" is the slope of minute ventilation versus carbon dioxide production (VE vs. VCO2 slope) and "b" is the intercept on the VE axis (Y-int). [3,4] The minimum VE/VCO2 is determined as the nadir of the VE/VCO2 ratio during incremental exercise testing, using 30-s averaged data. [4] Similarly, we will record tidal volume (TV), respiratory rate (RR), and PETCO2 at rest, during warm-up, at AT and RCP, and at peak WR, averaged over 20 seconds. The TV/RR ratio is determined at each exercise point.
The respiratory exchange ratio (R) is determined as the VCO2/VO2 ratio. Ti/Ttot is determined as the ratio of inspiratory time to total respiratory cycle time. Oscillatory ventilation is defined as 3 or more consecutive cyclic fluctuations of ventilation during CPX, where the amplitude of the oscillation exceeds 30% of the concurrent mean ventilation and a complete oscillatory cycle lasts 40 to 140 seconds. [5] The oxygen uptake efficiency slope (OUES) is defined by the formula VO2 = a·log VE + b, where a is the OUES. [6]
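For illustration, the following Python sketch computes two of the indices defined above, the VE vs. VCO2 slope (with its Y-intercept) and the OUES, by least-squares fits; the breath-by-breath arrays and coefficients are synthetic, invented only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
vco2 = np.linspace(0.4, 2.0, 60)                      # VCO2, L/min (synthetic ramp)
ve = 28.0 * vco2 + 3.0 + rng.normal(0, 0.5, 60)       # VE, L/min (pre-RCP, linear in VCO2)
vo2 = 1400.0 * np.log10(ve) + 150.0                   # VO2, mL/min (synthetic)

slope, y_int = np.polyfit(vco2, ve, 1)                # VE vs. VCO2 slope "a" and Y-int "b"
oues, _ = np.polyfit(np.log10(ve), vo2, 1)            # OUES from VO2 = a*log(VE) + b
print(f"VE/VCO2 slope = {slope:.1f}, Y-int = {y_int:.1f} L/min, OUES = {oues:.0f} mL/min")
```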
Data collection
The case report forms used in this study will be prepared and dated by the investigators and subinvestigators after each of the visits listed in the schedule in the protocol, signed (signature or registration and seal) by the investigators, and submitted to the Clinical Research Support Center at the University of the Ryukyus Hospital.
During the study, the following outcomes will be recorded: 1) dose of study drug and medication adherence status; 2) concomitant drug/concomitant therapy; 3) observation of adverse events; 4) subjective symptoms/objective findings; 5) vital signs (blood pressure/pulse rate); and 6) clinical examination results (hematology, blood chemistry and 12-lead electrocardiography).
Statistics
The full analysis set will consist of study participants who were enrolled in the study, received at least one dose of the study drug or at least one of the standard heart failure (HF) treatment drugs after randomization, and had peak VO2 data measured at baseline.
The per-protocol set will consist of the study participants after excluding those with serious protocol violations from the full analysis set, such as violation of inclusion criteria, violation of exclusion criteria, or use of any drug prohibited for concomitant use before CPX at Week 12.
Summary statistics for the distribution of participants' background data in each group will be calculated for each analysis set. To summarize the data distribution, summary statistics (number of cases, mean, standard deviation, minimum, 25th percentile, median, 75th percentile, and maximum) for each group will be calculated for all continuous variables. For nominal variables (categorical data), frequencies and proportions of categories in each group will be reported.
For drugs used for the treatment of chronic HF and drugs used concomitantly at baseline (until immediately before allocation), frequency distribution and summary statistics of dose, etc., will be calculated for each group.
Among the baseline efficacy endpoints (peak VO2) and exploratory endpoints, summary statistics will be calculated for continuous variables, and frequency distributions in each group will be calculated for categorical data.
For the primary endpoint, summary statistics for the measured values and the changes at Week 12 (Week 12 − baseline measurement) will be calculated. Then, the difference in the mean change in peak VO2 between the standard HF treatment + ivabradine group and the standard HF treatment group (mean change in the standard HF treatment + ivabradine group − mean change in the standard HF treatment group) will be evaluated. A statistical test will be performed on the null hypothesis "the difference in mean change = 0" against the alternative hypothesis "the difference in mean change ≠ 0." Superiority will be confirmed if the point estimate of the difference in mean change is positive and the mean change in the standard treatment + ivabradine group is statistically significantly greater than that in the standard HF treatment group. The significance level will be a two-sided 5%. Additionally, a two-sided 95% confidence interval for the difference in mean change will be estimated. Analysis of variance with only the treatment factor will be performed as the primary analysis, and interval estimates of the difference in mean change and P values will be calculated. Analysis of variance with covariates of the allocation adjustment factors (age and baseline peak VO2, both categorized as in the protocol) and sex will be performed as the secondary analysis. Interactions between group and covariates will be included in an exploratory analysis of variance. Secondary endpoints will be analyzed to provide additional insights into the findings of the primary analysis of this study. Changes over time in peak VO2 at Week 0, Week 12, and Week 24 will be compared between the standard HF treatment + ivabradine group and the standard HF treatment group in patients with chronic HF to reveal the effects of ivabradine. In the main analysis, missing values at Week 12 or Week 24 will be imputed with the baseline value of the same patient (Baseline Observation Carried Forward: BOCF), except for those patients in the standard HF treatment group who terminated the study at Week 12 (i.e., who did not meet the indication for ivabradine); missing values at Week 24 for the latter will be imputed with the value at Week 12 of the same patient.
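A sketch of this analysis pipeline on a synthetic data set is shown below: BOCF imputation of the Week-12 peak VO2, the primary one-factor model on the change, and an adjusted model with covariates. Column names and data are hypothetical, the baseline peak VO2 is entered as a continuous covariate for simplicity (the protocol uses the categorized version), and pandas/statsmodels are assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40
df = pd.DataFrame({
    "group": np.repeat(["ivabradine", "standard"], n // 2),
    "age_cat": rng.choice(["<65", ">=65"], n),
    "sex": rng.choice(["M", "F"], n),
    "vo2_base": rng.normal(14, 2, n).round(1),        # baseline peak VO2, mL/kg/min
})
df["vo2_week12"] = (df["vo2_base"] + rng.normal(0.8, 1.0, n)).round(1)
df.loc[rng.choice(n, 4, replace=False), "vo2_week12"] = np.nan   # simulated dropouts
df["vo2_week12"] = df["vo2_week12"].fillna(df["vo2_base"])       # BOCF imputation
df["change"] = df["vo2_week12"] - df["vo2_base"]

primary = smf.ols("change ~ group", data=df).fit()               # treatment factor only
adjusted = smf.ols("change ~ group + age_cat + sex + vo2_base", data=df).fit()
print(primary.params["group[T.standard]"], primary.pvalues["group[T.standard]"])
```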
In exploratory analysis with MMRM, missing values at Week 12 or at Week 24 of the former will be assumed to be missing at random, and thus will not need imputation. | 2023-03-25T15:03:04.479Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "2d2ccc0a18bda9432b7a21545275a6febd7cbbb2",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/circrep/advpub/0/advpub_CR-22-0134/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e9251ff7770cc4e5aa82b76e375b550234e2698",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
252992669 | pes2o/s2orc | v3-fos-license | Gravitational duality, Palatini variation and boundary terms: A synopsis
We consider $f(R)$ gravity and Born-Infeld-Einstein (BIE) gravity in formulations where the metric and connection are treated independently and integrate out the metric to find the corresponding models solely in terms of the connection, the archetypical treatment being that of Eddington-Schr\"odinger (ES) duality between cosmological Einstein and Eddington theories. For dimensions $D\ne2$, we find that this requires $f(R)$ to have a specific form which makes the model Weyl invariant, and that its Eddington reduction is then equivalent to that of BIE with certain parameters. For $D=2$ dimensions, where ES duality is not applicable, we find that both models are Weyl invariant and equivalent to a first order formulation of the bosonic string. We also discuss the form of the boundary terms needed for the variational principle to be well defined on manifolds with non-null boundaries. This requires a modification of the Gibbons-Hawking-York (GHY) boundary term for gravity. This modification also means that the dualities between metric and connection formulations are consistent and include the boundary terms.
Introduction
There has been a lot of interest in modified theories of gravity, often motivated by the singularities in black hole solutions of General Relativity (GR) or the abundance of dark energy and dark matter. A popular avenue for modifying GR has been to treat the connection as an independent variable. This can lead to a dual theory, as in the Eddington-Schrödinger (ES) duality, or to a new model, as is the case already in the Palatini approach to GR if the matter coupling leads to a violation of the metricity condition 1.
In this letter we compare two ways of constructing a purely connection dependent gravity model starting from f(R) theory or from Born-Infeld-Einstein (BIE) theory 2. We find that certain Weyl invariant f(R) theories lead to models that are equivalent to those from BIE theory for D ≠ 2, where they both give actions involving the square root of the determinant of the Ricci tensor. The constructions break down for D = 2, where they lead to Weyl invariant models still involving both the metric and the connection. These models are then both equivalent to the first order action of the bosonic string. 1 Historical note: One of us pioneered applying the "Palatini variational principle" to certain matter coupled gravity theories [1,2], almost fifty years ago. Not many papers followed on these until the early 2000s, but since then the topic has grown impressively. Now there are many hundred papers, mainly due to applications in astrophysics.
For each model we find the appropriate generalizations of the Gibbons-Hawking-York (GHY) boundary terms that ensure a consistent derivation of the field equations on manifolds with boundaries under a variational principle that we specify. These terms are needed for complete duality.
We give a unified treatment of several models, the novel feature being a description of their interrelations, as summarized in Fig. 1, and of some aspects of their variation. In particular, we emphasize that the two-dimensional versions are special and related to a first order formulation of the bosonic string. In addition, we discuss the variational principle in the metric-affine setting and introduce new GHY type terms for manifolds with boundary.
The layout of the paper is as follows: in secs. 2-4 we remind the reader of ES duality, f(R) theory and BIE theory, respectively. There we display the purely connection and the dual purely metric theories, in the cases when they exist separately, as well as the mixed theories. Sec. 5 contains the 2D string limit of the theories. In sec. 6 we discuss variational principles and boundary terms. We collect some conclusions in sec. 7.
Eddington-Schrödinger
The original form of ES duality starts from the action (2.1) [3,4], with the metric and a (symmetric) connection as independent variables and |g| the absolute value of the determinant of the metric g_ab. (The original discussion has D = 4.) Eliminating Γ via its field equations, using the Palatini variation (2.2) of the Ricci tensor, determines the connection to be the Levi-Civita connection Γ^(0). This returns the GR action with a cosmological term (2.3) when substituted into the action. The boundary contribution generated in the process is given in (2.4). Varying the metric, using (2.5), gives the field equations (2.6). When D ≠ 2, these can be solved to give (2.7). From (2.6) it then follows that (2.8) and thus (2.9). The dual theory, with the connection as the independent variable, is then, from (2.1), given by (2.10).

3 First order f(R) and duality

The second order f(R) theories in D dimensions are defined by the action (3.1). Here we will consider the action S_f, which is (3.1) but with R(Γ^(0)) replaced by R(Γ), where Γ is a general (symmetric) connection. Varying the metric and the connection independently, as in the Palatini formulation of GR, will in general result in field equations inequivalent to those from (3.1) [5]. Here we are interested in the possibility of dualising the model to one written entirely in terms of the connection. This procedure is related to the ES duality described in sec. 2. The variation of the general action is given by (3.3), where use is made of the relations (2.2) and (2.5). Extremising the action then results in a boundary contribution (3.4) and the field equations (3.5). Taking the determinant and the trace of the first of these results in (3.6) and (3.7), respectively. Equation (3.7) gives a first order ordinary differential equation for f, which we solve as (3.8), with c a constant. Combining this with (3.6) eliminates the metric completely and gives the action (3.9). The action on the left hand side is invariant under Weyl transformations of the metric, g_ab → Ω(X)g_ab (3.10), in parallel to the construction of Weyl invariant (spinning and super) p-branes [7][8][9][10][11]. An alternative way to see this is by using (3.6) and (3.7) simultaneously to massage the right hand side of S^Γ_f in (3.9) by direct substitution and elementary linear algebra, to arrive at (3.11), which is clearly Weyl invariant by (3.10).
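For orientation, the ES action of sec. 2 and its Eddington dual take the schematic form below; the overall normalizations and the placement of Λ are conventions assumed here and may differ from the paper's equations (2.1) and (2.10).

```latex
% Schematic ES and Eddington actions (normalizations are assumptions):
\begin{align}
  S_{\mathrm{ES}}[g,\Gamma] &\sim \int d^{D}x\,\sqrt{|g|}\,
      \left(g^{ab}R_{ab}(\Gamma) - 2\Lambda\right),\\
  S_{\mathrm{Edd}}[\Gamma] &\sim \frac{1}{\Lambda}\int d^{D}x\,
      \sqrt{\bigl|\det R_{ab}(\Gamma)\bigr|}.
\end{align}
```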
Going back to the field equations (3.5) and tracing the second equation over (c, d) implies (3.12) in all dimensions D ≠ 2, so that, for the solution (3.8), we have (3.13). 3 Here we are primarily interested in ES duality, but the question of special cases is interesting. In [6], a more detailed analysis of the solutions to (3.5) is given, covering also the case when the first equation of (3.5) is NOT identically satisfied (see their "case 2"). There they do not find any special case where the metric can be eliminated. 4 At this stage it is common to argue (in D = 4) that (3.12) leads to a Levi-Civita connection for q_ab := f'g_ab. Since this still contains the connection, the field equations are then used to replace R_ab by the energy momentum tensor. See, e.g., [5] or [12].
Tracing on (d, b) and using (3.14), we find (3.15), where Γ_c is the contracted connection. Noting that Γ^(0)_c is the contracted Levi-Civita connection, and defining (3.16), making use of this in (3.13) yields (3.17), where we have used (3.16) in an intermediate step. The last equation in (3.17) may be solved in the way the Christoffel symbols are found from the metricity condition, to give (3.18), with Γ^(0)a_cd the Levi-Civita connection 5. This is consistent with (3.16) and shows that the connection Γ^a_cd equals the Levi-Civita connection Γ^(0)a_cd up to a Weyl transformation (3.19). Since the model is Weyl invariant, we can thus always choose the connection to be Levi-Civita.
In the preceding derivation we only used equation (3.7) from the metric field equations to determine the form of f(R). So if we assume that form to begin with, the result (3.18) implies that the action S^Γ_f in (3.9) is dual to a purely metric action. 5 Formally, we are still faced with the same nonlinear problem as discussed in footnote 4. However, now the problem is located in the φ dependence, and we may use Weyl invariance to go to a particular gauge where the connection is independent of φ.
In D = 2 the solution contains an independent vector, as part of the first order version of the bosonic string in [13]. There it also corresponds to an additional symmetry of the model, unique to D = 2.
Born-Infeld-Einstein Gravity
In this section we relate the previous results to a gravitational model based on an analogy with the Born-Infeld action. We shall be brief in our presentation, since we recently discovered that several of the calculations already exist in the literature (for D = 4) [14,15].
The action for BIE gravity [16], with an independent (symmetric) connection, is given by (4.1), where the Ricci tensor is assumed symmetric and ε is a parameter of dimension (length)². Abbreviating the determinant combination as in (4.2), extremising S_BIE by varying the metric and the connection, using (2.2) and (2.5), yields a boundary contribution (4.3) and the field equations (4.4) and (4.5). Substituting these back yields the relation (4.8), thus eliminating the metric in favour of the connection. Inserting this in (4.1), we find an action (4.9) with the connection as the only variable; this agrees with the ES result in (2.10). 6 The two models are thus equivalent as starting points for deriving the R(Γ) theory, although only one of them is Weyl invariant. Finding a metric action dual to (4.9) by solving (4.5) meets with several obstacles. The solution to (4.5) is formally the Levi-Civita connection for h_ab. But this is incomplete, since h_ab still contains R_ab(Γ). So it is rather the full metric and connection theory, with field equations (4.4) and (4.5), that one has to consider. Using (4.8) we write (4.11). Since this is a constant Weyl transformation, plugging it into (4.11) gives a result in perfect agreement with the right hand side of (4.4). The on-shell theory thus found satisfies (4.8) with the connection (4.13), and is equivalent to Einstein's vacuum field equations with a cosmological constant. This confirms the D = 4 discussion of the matter coupled system in [14,15], where it is shown that the vacuum solutions are the Einstein solutions. We saw in Sec. 2 that in D = 4 the Γ-action (3.9) is dual to GR with a cosmological constant −Λ/2 = −1/(16c). For general D, it corresponds to BIE with a cosmological term. We thus have an interesting chain of equivalences between "metric", "affine" and "metric/affine" models. See the figure below for the interrelations.
Fig. 1. A schematic picture of the interrelation between the models discussed. Alternatively, we can see this directly in the action (4.1), using the identity that holds in D = 2 for a symmetric Ricci tensor and connection. Then (5.3) follows, which is the ES action (2.1) with Λ = (2/ε)(λ − 1). However, the action (2.1) is again equivalent to the right hand side of (5.3), since tracing the field equations (2.6) now results in (5.4). Plugging this back into the action (2.1) (for D = 2) again returns the mixed variable Lagrangian density (5.3) 7. Thus, unlike the case D ≠ 2, both models depend on both the metric and the connection as independent variables. Indeed, the actions (2.1) and (4.1) in two dimensions reproduce the conformally invariant first order action for the bosonic string introduced in [13]. At first sight this action may look strange, since we are taught in string theory to ignore the 2D curvature term because it is a total derivative. This is indeed true for the curvature of the spin connection, or equivalently when the connection is Levi-Civita. It is not true, however, for the curvature of an arbitrary affine connection. Briefly, (5.3) is invariant, up to a boundary contribution 8, under a Weyl rescaling of the metric and transformations of the affine connection of the form (5.7). The Γ field equations lead to (5.8) as usual but, unlike when D ≠ 2, the solution is not the Levi-Civita connection Γ^(0), but one with U_a the most general parameter of the transformation (5.7). It can then be shown that there is a gauge choice where the model becomes the usual Nambu-Goto string.
Boundary actions
In this section we consider which boundary terms need to be added to the actions to cancel the respective contributions (2.4), (3.4) and (4.3), and to make the variational problem well defined in the presence of a non-null boundary.
Variational principle
We are concerned with both the variation of the metric δg and of the independent connection δΓ. When the full set of field equations holds, Γ gets expressed in terms of derivatives of the metric. In this sense the system is reminiscent of a Hamiltonian system H(q, p), with the metric corresponding to the coordinates q and Γ to the momenta p. The field equation that gives Γ = Γ(g, ∂g) then roughly corresponds to ∂H/∂p = q̇, which may be solved for p = p(q, q̇). (7 With λε = 2. 8 The transformation (5.7) of Γ has the form of a conformal transformation, and the usual formulae for the variation of the Ricci tensor may be used to evaluate it and find δ(√|g| g^{ab}R_{ab}) = −2∇_a(√|g| V^a).) As in the variational principle for H, where the derivation of the canonical equations requires δq = 0 on the boundary of the integration volume while no conditions are required for δp, we
shall require δg = 0 on the boundary but leave δΓ free. This choice can also be motivated by the fact that on-shell the connection becomes the Levi-Civita connection of the metric, whose variation does not vanish on the boundary, as well as by the wish to find a dual action reproducing the boundary terms for both dual models.
Apart from the extra variation of Γ, our derivation follows that of the GHY boundary term [17,18] described in [19].
The boundary is defined by (6.1), which leads to the tangential vectors to the boundary (6.2) and a normal n_a with n_a e^a_i = 0 (6.3). The induced metric is given by (6.4); it is related to the full metric as in (6.5). Gauss' divergence theorem reads (6.6). In the variational principle we shall keep the normal to the boundary ∂M, as well as its partial derivatives, constant (6.7); see sec. 4 of [19] for details. We also require that the metric is held constant when confined to the boundary (6.8). This means that the induced metric γ_ij is fixed during the variation. It also implies that, although δ∂_c g_ab does not vanish on ∂M, the tangential derivatives must vanish: δg_ab,c e^c_i = 0. (6.9)
It follows that
γ^ij e^a_i e^b_j δg_cb,a = 0. (6.10)
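For reference, the standard hypersurface relations assumed in this subsection can be summarized as follows (signs shown for a normal with n_a n^a = +1; the opposite signature flips the sign of the n^a n^b term):

```latex
% Standard hypersurface relations (sign conventions assumed):
\begin{align}
  \gamma_{ij} &= g_{ab}\,e^{a}_{\;i}\,e^{b}_{\;j},\qquad
  g^{ab} = \gamma^{ij}e^{a}_{\;i}e^{b}_{\;j} + n^{a}n^{b},\\
  \int_{M} d^{D}x\,\sqrt{|g|}\,\nabla_{a}V^{a}
      &= \int_{\partial M} d^{D-1}y\,\sqrt{|\gamma|}\,n_{a}V^{a}.
\end{align}
```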
Gibbons-Hawking-York
The GHY boundary action for GR is given by (6.11), where K = γ^ij K_ij (6.12) is the trace of the extrinsic curvature (second fundamental form) K_ab. There are various definitions of K_ab; one involving the Lie derivative reads (6.13). This has a trace which is equivalent to that used in (6.12) only when the connection is the Levi-Civita one.
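As a reminder, the GHY term that the modified boundary actions below should reduce to in the metric case has the standard form shown here, with the normalization a convention assumed on our part:

```latex
% Standard GHY boundary term and Lie-derivative form of K_ab:
\begin{equation}
  S_{\mathrm{GHY}} = \int_{\partial M} d^{D-1}y\,\sqrt{|\gamma|}\,K,
  \qquad K = \gamma^{ij}K_{ij},\qquad
  K_{ab} = \tfrac{1}{2}\,\mathcal{L}_{n}\gamma_{ab}.
\end{equation}
```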
Eddington-Schrödinger
For a general Γ the boundary term has to be modified. To this end we note that the boundary action (6.14) reduces to the GHY boundary action when the connection is metric. Further varying the connection, we find (6.16), so that (6.14) precisely cancels the boundary contribution (2.4) that arises in the Γ variation of the ES bulk action (2.1).
f (R)
Similarly, we can cancel the boundary contribution (3.4) that arises in the Γ variation of the f(R) bulk action (3.1) by varying the boundary action (6.17), provided that we also set δR = 0 on the boundary. When the connection is Levi-Civita, this reduces to the boundary term in [20]. The vanishing of the variation of R is required there and in [21] as well; we refer the reader to these references for a discussion. In both references the connection is the Levi-Civita connection, so the implications of δR = 0 merit being studied in more detail.
In the case of duality, i.e. when we can eliminate the connection as an independent variable, the connection is the Levi-Civita connection and the counterterm becomes that of the purely metric theory, as mentioned above. Similarly, on the dual side, when we eliminate the metric using (3.5)-(3.8), the variation of the boundary action (6.17) becomes an expression involving R^de, the inverse of the affine Ricci tensor, exactly cancelling the variation of the Eddington action (3.9). Hence, for the dual theories, the counterterm action (6.17) produces the correct counterterms, purely metric and purely affine respectively, for the two formulations.
Born-Infeld-Einstein gravity
Formally, we can cancel the boundary contribution (4.3) that arises in the Γ variation of the BIE bulk action (4.1) by varying a boundary action of the same form as (6.14), but with the induced metric γ^h and the trace K^h now defined with respect to the metric (4.2). The conditions (6.8) then take a modified form, and since (6.8) also holds, this means that we must impose an additional condition, in analogy to the f(R) case in (6.17) described above. This is consistent with the equivalence of BIE theory with f(R) for the special case (3.8).
The bosonic string
Finally, we discuss the boundary terms for the D = 2 first order bosonic string. The boundary action differs from (6.14) by a term proportional to U^a that cancels the contribution from (6.26). So the U boundary term ensures the invariance under (6.24) and, by the same token, takes care of a general variation of U, making the variational principle well defined.
Conclusions
We have studied first order models for f(R) theories and BIE theory, with the metric and connection as independent variables, and compared them to the famous ES duality. We have shown that, when the metric is eliminated, certain Weyl invariant f(R) models result in dual models that are equivalent to those of ES. Similarly, for BIE we showed that eliminating the metric again leads to the ES result. In the form where the connection is the only variable, BIE is thus equivalent to Weyl invariant f(R) in its dual form. These constructions hold when the space-time dimension D > 2 but break down for D = 2. The models are then still equivalent, but now equal to the first order bosonic string.
We have discussed boundary terms for all these theories in general (for no specific form of f(R)). After describing our variational principle, we showed that under duality the necessary boundary terms carry over from the connection form to the metric form.
Open problems are to include torsion in the discussion, as done for, e.g., metric modified gravity in [5], and to understand if and how the Weyl invariance of our f(R) model can be related to symmetries of the BIE action.
5 The 2D string limits

Clearly the results in the previous sections do not hold when D = 2. When D = 2, the discussion of the model (4.1) is as follows. Taking determinants in equation (4.4) leads to λ² = 1 ⇒ λ = ±1, (5.1) so that equation (4.7) gets replaced by (5.2).
½√|g| R → ½√|g| R − ½ ∂_a(···),
so that the boundary action for the 2D string reads ∫_∂M dy √γ (K + 2U^a n_a). (6.23) Here K takes care of the boundary contributions from the bulk variation of R as before, while the action is still invariant under the conformal symmetry of Γ, Γ^a_bc → Γ^a_bc + 2V_(b δ^a_c) − g_bc V^a. (6.24) This is because K → K − g^{ab}(2V_(a δ^c_b) − g_ab V^c)n_c = K (6.25), while U^a → U^a + 2V^(a δ^b)_b − g^{ab}V_b = U^a + 2V^a (6.26), and the R term in the bulk produces a total derivative −∇_a(√|g| V^a) (6.27), which gives a boundary contribution −√γ V^a n_a. (6.28) | 2022-10-20T01:16:17.329Z | 2022-10-19T00:00:00.000 | {
"year": 2022,
"sha1": "116df970715218c2f058cf4ff048c1ced4a6e5d8",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1361-6382/acc22f/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "116df970715218c2f058cf4ff048c1ced4a6e5d8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
148568710 | pes2o/s2orc | v3-fos-license | Domain-specific mechanical modulation of VWF–ADAMTS13 interaction
Hemodynamic forces activate Von Willebrand factor (VWF) and facilitate its cleavage by a disintegrin and metalloprotease with thrombospondin motifs-13 (ADAMTS13), reducing the adhesive activity of VWF. Biochemical assays have mapped the binding sites on both molecules. However, these assays require incubation of the two molecules for a period beyond the time allowed in flowing blood. We used a single-molecule technique to examine these rapid, transient, and mechanically modulated molecular interactions over short contact times and under forces that mimic what happens in the circulation. Wild-type ADAMTS13 and two truncation variants that either lacked the C-terminal thrombospondin motif-7 to the CUB domain (MP-TSP6) or contained only the two CUB domains (CUB) were characterized for interactions with coiled VWF, flow-elongated VWF, and a VWF A1A2A3 tridomain. These interactions exhibited distinctive patterns of calcium dependency, binding affinity, and force-regulated lifetime. The results suggest that 1) ADAMTS13 binds coiled VWF primarily through CUB in a calcium-dependent manner via a site(s) outside A1A2A3; 2) ADAMTS13 binds flow-extended VWF predominantly through MP-TSP6 via a site(s) different from the one(s) at A1A2A3; and 3) ADAMTS13 binds A1A2A3 through MP-TSP6 in a Ca2+-dependent manner to autoinhibit another Ca2+-independent binding site on CUB. These data reveal that multiple sites on both molecules are involved in the mechanically modulated VWF-ADAMTS13 interaction.
INTRODUCTION
The multimeric plasma glycoprotein Von Willebrand factor (VWF) mediates platelet adhesion to the subendothelium at sites of vascular injury by interacting with the glycoprotein (GP) Ib-IX-V complex and GPIIb-IIIa on platelets (De Marco et al., 1986; Franchini and Lippi, 2006). The adhesive activity of VWF correlates with the multimer's size, which is in part regulated proteolytically by the zinc metalloprotease a disintegrin and metalloprotease with thrombospondin motifs-13 (ADAMTS13). Deficiency in ADAMTS13 results in accumulation of hyperactive ultralarge (UL) VWF, leading to disseminated platelet-rich microthrombosis in the microvasculature, as seen in patients with thrombotic thrombocytopenic purpura (TTP; Tsai, 2007). ADAMTS13 consists of an N-terminal metalloprotease domain, a disintegrin-like domain, a TSP1 motif, a cysteine-rich domain, a spacer domain, seven additional TSP1 domains, and two CUB domains at the C-terminus (Zheng et al., 2001; Figure 1A). ADAMTS13 specifically cleaves the Tyr1605-Met1606 peptide bond in the VWF A2 domain (Furlan et al., 1996). The cleavage converts the prothrombotic ULVWF to smaller and less adhesive, but hemostatically critical, VWF multimers in the circulation (Dong, 2005). The Tyr1605-Met1606 peptide bond is buried in the tertiary structure of the A2 domain (Zhang et al., 2009) as well as in the quaternary structures of the A1A2A3 tridomain polypeptide (Wu et al., 2010) and the VWF multimer (Dong, 2005). This cryptic site can be exposed by hemodynamic forces to facilitate or enhance VWF proteolysis by ADAMTS13 (Furlan et al., 1996; Tsai, 1996). On one hand, a recombinant truncated ADAMTS13 lacking the C-terminal TSP1 and CUB domains remains active in cleaving VWF under static conditions (Zheng et al., 2003; Feys et al., 2009), indicating that these C-terminal domains are not required for ADAMTS13 activity. On the other hand, cooperation between the middle C-terminal TSP1 repeats and the distal C-terminal CUB domains is required for ADAMTS13 to recognize VWF under flow conditions (Zhang et al., 2007) and to downregulate thrombus formation in vivo (Banno et al., 2009).

FIGURE 1: Experimental setup. (A) Domain diagrams of the ADAMTS13 variants, A1A2A3, and VWF monomer. (B) AFM schematic and functionalization (not to scale). A cantilever was mounted on a PZT above the polystyrene surface. Its deflection was measured by a laser beam reflected by the back of the cantilever tip onto a photodiode. ADAMTS13 and VWF variants were respectively adsorbed on the cantilever tip and polystyrene surface. (C) BFP schematic and functionalization (not to scale). Two micropipettes respectively aspirate an RBC on the left and a target bead on the right. The probe bead is attached to the apex of the RBC. The deformation of the RBC is measured by tracking the image of the probe bead. A1A2A3 was covalently linked to the probe beads, and the ADAMTS13 variants were adsorbed onto target beads.
The apparent discrepancies suggest that ADAMTS13 may interact with VWF multimers in ways undetectable by conventional biochemical means under static conditions with long incubation. Prolonged incubation is unlikely to be physiologically permissible, because circulating VWF and ADAMTS13 interact rapidly and transiently in flowing blood. Here we utilized adhesion frequency and force-clamp experiments with an atomic force microscope (AFM) and a biomembrane force probe (BFP) (Figure 1, B and C) to measure force-free or force-modulated binding kinetic rates at distinct binding sites between ADAMTS13 or its truncation variants and VWF or A1A2A3 using short contacts.
RESULTS
To study the rapid, transient, and mechanically modulated binding of ADAMTS13 to VWF, we made recombinant full-length human ADAMTS13 and two truncation variants: MP-TSP6 (from the N-terminus to the sixth TSP motif), which preserves nearly the same cleavage activity as full-length ADAMTS13 in vitro (Banno et al., 2004), and CUB (the two C-terminal CUB domains), which inhibits ADAMTS13 cleavage activity (Tao et al., 2005; Muia et al., 2014; South et al., 2014; Figure 1A). The activity of the recombinant ADAMTS13 and the functionality of plasma VWF and A1A2A3 were confirmed by ADAMTS13 proteolytic assays (Supplemental Figure S1).
ADAMTS13 binds coiled VWF via CUB in a divalent cation-dependent manner
We used AFM to measure the adhesion frequency between ADAMTS13 constructs and VWF at a contact time of 2 s to mimic the interaction under blood flow, yet sufficiently long to reach steady-state binding (see Figure 4B later in the paper). No cleavage product was detected after incubating recombinant ADAMTS13 and VWF multimers for 30 min in the presence of 1 mM urea (Supplemental Figure S1B), suggesting that cleavage is unlikely to occur during the short contact of 2 s. The nonspecific adhesion between ADAMTS13 (or its truncation variants) and BSA was <4% (Figure 2A, gray boxes), indicating that adhesions with much higher frequencies (>20%) were predominantly mediated by specific VWF-ADAMTS13 interaction. Under static conditions, most VWF assumed a coiled conformation (Figure 2, C and D, first column, and F, brown box). Therefore, binding was considered to occur between the coiled VWF and ADAMTS13.
Because the adhesion frequency depends on the binding affinity, the contact area, and the molecular densities of the enzyme and the substrate (see Materials and Methods), in the experiments testing VWF-ADAMTS13, VWF-MP-TSP6, and VWF-CUB binding we kept the same coating concentrations of the ADAMTS13 constructs and VWF (see Materials and Methods). Assuming that the contact area was the same, we could compare the affinities of the three ADAMTS13 constructs for the coiled VWF. In the presence of Ca2+, coiled VWF bound comparably to ADAMTS13 and CUB (Figure 2A, black boxes). In contrast, VWF-MP-TSP6 binding was higher than for the negative control. Adding EDTA eliminated VWF-CUB binding, but did not affect VWF-MP-TSP6 binding, and reduced VWF-ADAMTS13 binding to the level of VWF-MP-TSP6 binding (Figure 2A, blue boxes). Adding CUB to the reaction solution reduced VWF-ADAMTS13 binding to the level of VWF-MP-TSP6 binding but did not affect VWF-MP-TSP6 binding, whereas adding MP-TSP6 had no effect on VWF-CUB binding (Figure 2A, red boxes). These data suggest that ADAMTS13 binding to VWF multimers under static conditions was primarily through the CUB domains and required divalent cations. We next measured lifetimes of VWF-ADAMTS13 and VWF-CUB bonds under force with AFM to further define their characteristics. Both VWF-ADAMTS13 and VWF-CUB bonds displayed indistinguishable slip-bond characteristics, where the bond lifetime decreased with increasing force (Marshall et al., 2003; Yago et al., 2004; Figure 2B). These force-dependent bond lifetimes corroborate the adhesion frequency data, suggesting that ADAMTS13 binds to coiled VWF multimers through the CUB domains under static conditions.

ADAMTS13 binds flow-extended VWF via MP-TSP6 with a much higher affinity than, and a distinct force-dependent bond lifetime pattern from, that via CUB

It has been shown that shear flow can elongate globular VWF multimers (Schneider et al., 2007). To investigate ADAMTS13 binding to shear-extended VWF, we immobilized a low density of VWF on a Petri dish surface precoated with a high density of a polyclonal antibody, then exposed the surface to a high fluid shear stress to elongate VWF. VWF was unable to recoil after the flow was stopped, as it was captured at additional points by the excess antibody (Supplemental Figure S3). Fluorescence-conjugated VWF appeared as isolated round spots when imaged by both AFM (Figure 2C, first column) and structured illumination microscopy (SIM) (Gustafsson, 2005; Figure 2D, first column) without shear, but became elongated, string-like structures after long shear exposure (8000 s⁻¹ for 10 min; Figure 2, C and D, second to fourth columns). The length of VWF along the long axes in AFM images increased threefold (p < 0.0001) after exposure to shear stress (Figure 2E).
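A sketch of this kind of length quantification from AFM scans; the file name, pixel size, and area cutoff below are assumptions for illustration, not the authors' settings:

```python
import numpy as np
from skimage import io, filters, measure

img = io.imread("vwf_afm_scan.tif")              # hypothetical AFM height image
mask = img > filters.threshold_otsu(img)          # segment molecules from background
labels = measure.label(mask)

nm_per_px = 10.0                                  # assumed pixel size
lengths = [r.major_axis_length * nm_per_px
           for r in measure.regionprops(labels)
           if r.area > 20]                        # drop sub-molecular specks
print(f"median long-axis length: {np.median(lengths):.0f} nm (n = {len(lengths)})")
```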
Regardless of the difference in protein densities, the two sets of data (Figure 2, A, black boxes, and F, brown boxes) showed the same qualitative pattern: comparable levels of VWF binding to ADAMTS13 and CUB, both of which were higher than that to MP-TSP6.

FIGURE 2 (partial legend): ... n = 27, 26, 12, 29, 11, 5, 5, 5, 8, 18, 5, 13 spots were tested for the adhesion frequencies. The statistical analysis results are reported only if the differences are significant. (B) Lifetimes of bonds between plasma VWF and ADAMTS13 (black squares) or CUB (gray triangles) were measured by AFM force-clamp assay and plotted vs. force. Data are presented as mean ± SEM of several tens to several hundreds of measurements per point. (C) AFM scanning images of plasma VWF before (first column) or after (second to fourth columns) experiencing an 8000 s⁻¹ wall shear rate for 10 min. (D) Superresolution images of plasma VWF before (first column) or after (second to fourth columns) experiencing high shear. Blue and green arrows indicate the coiled and extended VWF in C and D, respectively. (E) Quantitation of VWF lengths along the long axes from AFM images (median ± quarter percentile and max/min of >90 measurements each). (F) Adhesion frequencies (median ± quarter percentile and max/min of n = 27, 5, 20, 10, 28, 15, 21, 10 pairs of spots at 50 contacts each) resulting from 0.8 s contact of AFM tips bearing ADAMTS13 variants with BSA (first box) or a low-concentration (2.5 µg/ml) VWF-coated polystyrene surface before flow (brown boxes) or after flow (green boxes). The estimated affinity fold increase is labeled in the figure. (G) Lifetimes of elongated plasma VWF bonds with ADAMTS13 (black squares) or MP-TSP6 (gray circles) were measured by AFM force-clamp assay and plotted vs. force. Data are presented as mean ± SEM of several tens to several hundreds of measurements per point.

These data also suggest that differences in conformation between directly adsorbed and antibody-captured VWF molecules on the surface, if any, do not detectably impact their binding to ADAMTS13 and its truncation variants. Remarkably, shear-elongated VWF showed greatly enhanced binding to ADAMTS13 and MP-TSP6, but not to CUB (Figure 2F, green boxes). Using the same coating concentrations of the ADAMTS13 constructs and VWF, we expected the same molecular densities of the enzyme and substrate, respectively. The changes in affinity were calculated from the increases in adhesion frequencies (see Materials and Methods). ADAMTS13 and MP-TSP6 showed, respectively, 6.7- and 14-fold higher affinities for the flow-extended than for the coiled VWF captured by antibodies. These results suggest that, instead of binding to the CUB domains, elongated VWF exposes a new site for ADAMTS13, and this new site binds MP-TSP6 with a much higher affinity than the coiled VWF-CUB binding affinity (Figure 2F).
We also compared the force-dependent lifetimes of elongated VWF-MP-TSP6 and elongated VWF-ADAMTS13 bonds by AFM. Both interactions exhibited indistinguishable catch-slip bonds, where the bond lifetime first increased with force until reaching a peak, and then decreased as force further increased (Marshall et al., 2003; Yago et al., 2004; Figure 2G), showing a pattern qualitatively distinct from the slip bond of the coiled VWF interactions with ADAMTS13 and with CUB (Figure 2B). The bond lifetime measurements corroborated the adhesion frequency measurements, strongly indicating that elongated VWF interacts with ADAMTS13 primarily through a site on MP-TSP6. The zero-force extrapolation of the extended VWF-ADAMTS13 bond lifetime is much shorter than that of the coiled VWF-ADAMTS13 bond lifetime (Figure 2, B and G), indicating a much faster off-rate for ADAMTS13 dissociation from flow-extended than from coiled VWF. Combined with the affinity differences, this finding translates to an order-of-magnitude faster on-rate for ADAMTS13 association with flow-extended than with coiled VWF.
A1A2A3 and VWF bind ADAMTS13 through different sites
To further dissect the binding site for MP-TSP6, which is exposed only on elongated VWF multimers, we analyzed the binding of a VWF A1A2A3 polypeptide to the ADAMTS13 constructs. A1A2A3 contains the sites for VWF interactions with glycoprotein Ibα (GPIbα) on platelets (A1) and collagen in the subendothelial matrix (A1 and A3), as well as the peptide bond cleaved by ADAMTS13 (A2) (Ruggeri and Mendolicchio, 2015). A1A2A3 is thought to be buried in coiled VWF, but becomes exposed upon VWF extension by shear stress. The adhesion frequency between ADAMTS13 and A1A2A3 was specific and not affected by EDTA (Figure 3A). A1A2A3-MP-TSP6 binding was similar to that of A1A2A3-ADAMTS13 in the presence of Ca2+, but was abolished by EDTA (Figure 3A). The Ca2+-dependence of the A1A2A3-MP-TSP6 binding was confirmed when the reaction buffer was recalcified (Figure 3B). The different divalent cation requirements for A1A2A3 to bind MP-TSP6 and full-length ADAMTS13 suggest the presence of another A1A2A3 binding site outside MP-TSP6. This prediction was confirmed by the specific and divalent cation-independent binding between CUB and A1A2A3 (Figure 3A). Thus, ADAMTS13 contains two binding sites for A1A2A3: one on MP-TSP6 that is calcium-dependent and the other on the CUB domains that is calcium-independent. The divalent-cation dependencies of A1A2A3 interactions with the three ADAMTS13 constructs (Figure 3A) are opposite to those of the coiled VWF (Figure 2A), suggesting that A1A2A3 and coiled VWF bind ADAMTS13 using distinct binding sites.
The two ADAMTS13 binding sites for A1A2A3 exhibit distinct force-dependent bond lifetimes

FIGURE 4: Binding kinetics and inhibition. (A) 2D effective on-rates (mean ± SEM of 3-6 bead pairs making 30 contacts of 900-1800 s total contact time, several tens to several hundreds of waiting time events) of A1A2A3 association with the indicated ADAMTS13 variants measured by BFP thermal fluctuation assay. (B) Adhesion frequencies of 50 contacts (mean ± SEM of n = 6-12 tip-surface spot pairs per positive data point and n = 4 per negative control data point) of AFM tips (bearing indicated ADAMTS13 variants) with BSA (open symbols)- or A1A2A3 (closed symbols)-coated polystyrene surfaces were plotted against contact time and fitted by Eq. 1 (curves). (C) Adhesion frequencies (median ± quarter percentile and max/min of n = 5, 9, 10, 8, 11 tip-surface spot pairs of 50 contacts each) at 5 s contact time of MP-TSP6-bearing AFM tips to 1) BSA surface (first box), 2) CUB surface (second box), 3) CUB surface with EDTA in the solution (third box), and 4) CUB surface after EDTA was washed out and 5 mM Ca2+ was added (fourth box). (D) Solution MP-TSP6 inhibited CUB binding to A1A2A3 in a cation- and dose-dependent manner (circles). Nonspecific binding of AFM tips coated with MP-TSP6 (triangles) to BSA-coated surfaces serves as negative control. (E) Solution CUB did not inhibit MP-TSP6 binding to A1A2A3 (squares). Nonspecific binding of AFM tips coated with CUB (inverted triangles) to BSA-coated surfaces serves as negative control. Data in D and E are presented as mean ± SEM of 6-15 tip-surface spot pairs of 50 5-s contacts each.

The calcium requirement for A1A2A3 binding to MP-TSP6, but not to ADAMTS13 and CUB (Figure 3A), implies that ADAMTS13 used
CUB to bind A1A2A3 when Ca2+ was chelated. When Ca2+ was present, however, it was not clear whether A1A2A3 bound to MP-TSP6, CUB, or both. To address this question, we compared the force-dependent lifetimes of bonds between A1A2A3 and ADAMTS13 in the presence and absence of EDTA. In the presence of Ca2+, lifetimes of A1A2A3-ADAMTS13 (Figure 3C, top left) and A1A2A3-MP-TSP6 (Figure 3C, top right) bonds showed a similar triphasic force dependency: first decreasing, followed by a modest increase, and then decreasing. This pattern of slip-catch-slip bonds has previously been reported for E-selectin-ligand (Wayman et al., 2010) and VWF A1-GPIbα (Ju et al., 2013) interactions. In contrast, the A1A2A3-CUB interaction behaved as catch-slip bonds (Marshall et al., 2003; Yago et al., 2004; Figure 3C, top middle). These data indicate that in the presence of Ca2+, ADAMTS13 binds A1A2A3 through a binding site on MP-TSP6, as the binding site on CUB makes no detectable contribution to the measured A1A2A3-ADAMTS13 binding properties.
We next compared the lifetime versus force curves of the A1A2A3 bonds with ADAMTS13 and CUB under distinct cation conditions. The A1A2A3-CUB bond lifetime versus force curve was not changed by chelating calcium (Figure 3C, top and bottom middle). By comparison, the triphasic lifetime versus force curve of the A1A2A3-ADAMTS13 bond ( Figure 3C, top left) was converted to a biphasic curve when divalent cations were chelated by EDTA ( Figure 3C, bottom left), which was comparable to the A1A2A3-CUB bond lifetime curve ( Figure 3C, bottom middle). These data further confirmed that ADAMTS13 binds through a binding site on CUB to A1A2A3 in the absence of divalent cations. The force-dependent lifetime curves of A1A2A3 bonds with the three ADAMTS13 constructs ( Figure 3C) are distinct from those of the coiled and flow-extended VWF (Figure 2, B and G), again, ruling out the possibility that A1A2A3 uses sites identical to either form of VWF to bind ADAMTS13, MP-TSP6, or CUB.
MP-TSP6 binding inhibits CUB-A1A2A3 binding, but not vice versa
The results presented so far have led to two hypotheses: 1) MP-TSP6 has a much higher on-rate than CUB for A1A2A3, and 2) A1A2A3-CUB binding is autoinhibited in native ADAMTS13. To test the first hypothesis, we directly compared the on-rate of A1A2A3 association with ADAMTS13 or its truncation variants in the presence and absence of EDTA using a thermal fluctuation assay with BFP (Chen et al., 2008a,b) and an adhesion frequency assay (Chesla et al., 1998; Chen et al., 2008b) with AFM. The thermal fluctuation assay measured the apparent effective 2D on-rates A_c k_on and zero-force off-rates k_off^0 of A1A2A3 interactions with ADAMTS13, MP-TSP6, and CUB, where A_c is the contact area in the experiment. A_c k_on is shown in Figure 4A and listed in Table 1. A1A2A3 had comparable effective 2D on-rates for ADAMTS13 and MP-TSP6 in the presence of calcium, while its on-rate for CUB was independent of divalent cations. The frequency versus contact time curves of A1A2A3 adhesion to ADAMTS13, MP-TSP6, and CUB were also indistinguishable (Figure 4B). These results rejected the first hypothesis.
To test the second hypothesis, we first measured the adhesion frequency of MP-TSP6 to CUB, in the presence and absence of Ca2+, at a contact time of 5 s. The adhesion was minimal until Ca2+ was added to the buffer. The binding was abolished by EDTA and rescued by recalcification (Figure 4C), suggesting the possibility of inhibition of one A1A2A3 binding site by the other.
We next tested whether the binding of A1A2A3 by one ADAMTS13 truncation variant could be inhibited by the other ADAMTS13 truncation variant in solution. Interestingly, soluble MP-TSP6 inhibited the A1A2A3-CUB interaction in a dose-dependent manner (Figure 4D, black circles), whereas the A1A2A3-MP-TSP6 interaction was not affected by the addition of CUB (Figure 4E, open squares). Moreover, the inhibitory effect of soluble MP-TSP6 on the A1A2A3-CUB interaction was abolished by 5 mM EDTA (Figure 4D, open circles), further supporting the assertion that the inhibition of CUB by MP-TSP6 requires calcium-dependent binding of soluble MP-TSP6 to CUB on the cantilever, A1A2A3 on the surface, or both. Together, these data provide strong support for autoinhibition between the two ADAMTS13 binding sites for A1A2A3.
DISCUSSION
Hemodynamic forces in the circulation stretch ULVWF or immobilized VWF to a more extended configuration, enhancing platelet binding, which may induce thrombus formation (Schneppenheim et al., 2019), and exposing the cryptic proteolytic site in the A2 domain for cleavage by ADAMTS13 (Furlan et al., 1996; Tsai, 1996; Zhang et al., 2009; Wu et al., 2010), which may reduce VWF activity.
Here we further demonstrated that mechanical forces modulate the VWF-ADAMTS13 interaction when the enzyme and the substrate come into rapid, repetitive, but brief contact, as is likely to occur in flowing blood.
The mechanical modulation studied here includes two aspects. One is mechanical modulation of the dissociation kinetics of ADAMTS13 or its truncation variants from either coiled or flow-extended VWF, as well as from the A1A2A3 tridomain, under different divalent cation conditions. In particular, we found that these interactions exhibit multiple bond characteristics: slip bonds, catch bonds, biphasic bonds, and triphasic bonds that depend on the divalent cation conditions and are site-specific, which has not been previously demonstrated in an enzyme-substrate interaction. These dynamic bonds may be critical for the regulation of the ADAMTS13 interaction with, and subsequent cleavage of, VWF, as such interaction can be mechanically strengthened or weakened by the formation of various dynamic bonds between the different binding sites in the circulation, where comparable mechanical forces are present.
The second aspect of mechanical modulation is shifting of the ADAMTS13 binding site(s) from CUB to MP-TSP6 by shear-induced extension of VWF. Here we used biophysical characteristics as mechanical signatures to distinguish different binding sites between ADAMTS13 or its truncation variants and coiled or flow-extended VWF or A1A2A3. We found that ADAMTS13 binds to coiled VWF primarily through CUB. This finding is consistent with the previous report that the C-terminus of ADAMTS13 binds to a constitutively exposed binding site on the D4 domain of the globular VWF surface (Zanardelli et al., 2009). We also found that, when VWF is anchored, it can be extended by shear flow to allow rapid ADAMTS13 binding through MP-TSP6 with greatly increased affinity. This finding is consistent with the previous report that discontinuous exosites in the disintegrin to spacer domains only bind extended VWF (Gao et al., 2006, 2008). The significance of this second aspect of mechanical modulation of the VWF-ADAMTS13 interaction is illustrated by the drastically increased kinetic on and off rates of the binding under flow conditions. The A1A2A3-CUB interaction is inhibited in intact ADAMTS13 through an MP-TSP6-CUB interdomain interaction, which may mask the binding site on CUB for A1A2A3. This finding is consistent with a previously reported conformation of inactive ADAMTS13 (Muia et al., 2014; South et al., 2014) and further suggests that MP-TSP6 may be in an active conformation. In our study, this MP-TSP6-CUB interaction had no detectable effect on the A1A2A3 binding site on MP-TSP6, which is consistent with the previous report that the efficiency of cleavage of the D1596-T1668 region of mouse VWF by mouse ADAMTS13 is comparable to that by mouse ADAMTS13 lacking the two C-terminal TSP1 and two CUB domains (Banno et al., 2004). It is conceivable that the MP-TSP6-CUB interdomain interaction may also exhibit dynamic bond characteristics, allowing hemodynamic forces to modulate the conformational change of ADAMTS13 and facilitate the cleavage of VWF. CUB has been reported to bind the spacer domain, which masks the previously identified binding exosites on the spacer domain (Gao et al., 2006; Akiyama et al., 2009; Muia et al., 2019; Zhu et al., 2019). Thus, the A1A2A3 interactions with ADAMTS13 and MP-TSP6 observed in this study may not involve the exosites on the spacer domain. Considering that the A1A2A3-MP-TSP6 interaction is Ca2+-dependent and that a Ca2+-binding site has been identified in the metalloprotease domain, which is critical to VWF interaction with and cleavage by ADAMTS13 (Gardner et al., 2009), the metalloprotease domain might be involved in the A1A2A3-MP-TSP6 interaction observed in this study. Moreover, a Ca2+ binding site was found in the VWF A2 domain as well, which can stabilize the folded A2 conformation and promote refolding of the extended A2 (Xu and Springer, 2012). Thus, the Ca2+ dependency of A1A2A3-MP-TSP6 binding may indicate that Ca2+ is directly involved in the binding, or this dependency may arise from a conformational change in the metalloprotease domain or A2 domain accompanying the change in Ca2+ conditions. In addition, MP-TSP6-CUB and VWF-CUB binding depend on Ca2+. Future studies will clarify the role of Ca2+ in A1A2A3-MP-TSP6 binding.
Combined with recent findings of ADAMTS13 conformational changes (Muia et al., 2014; South et al., 2014, 2017), our data suggest a model of multistep activation of the interaction between VWF and ADAMTS13, which prepares the two proteins for the proteolytic reaction by a series of activation and binding steps through different sites (Figure 5). Given that VWF assumes a globular conformation under static or low-shear conditions (Schneider et al., 2007), we propose that ADAMTS13 in the autoinhibited conformation may initially bind globular VWF through its CUB domains. The slip bond between CUB and globular VWF may dissociate more rapidly under blood flow; however, blood flow may also elongate tethered VWF and expose its cryptic site(s) for additional ADAMTS13 binding through MP-TSP6, with a higher affinity and a catch bond characteristic, which may be facilitated by the existing VWF-ADAMTS13 binding through CUB via dimeric interaction (Figure 5). The catch bond between MP-TSP6 and extended VWF may stabilize their interaction under force. The flow-induced extension of VWF may also expose the A1A2A3 tridomain and unfold the A2 domain. The binding of VWF via sites outside A1A2A3 to ADAMTS13 may induce its opening, relieving the autoinhibition of the CUB binding site by MP-TSP6 and allowing ADAMTS13 binding to A1A2A3 via the site on MP-TSP6 and the newly exposed site on CUB. Through such multistep binding involving shifts of sites, ADAMTS13 is well positioned on VWF, assuming an active conformation for the eventual cleavage of the peptide bond in the VWF A2 domain. Our model supports the molecular zipper model (Crawley et al., 2011) and further depicts the shifting of binding sites in VWF-ADAMTS13 binding along with the affinity change. Future studies will test this model to further elucidate the detailed process of ADAMTS13 activation, binding, and cleavage of VWF under dynamic flow conditions.
AFM experiments
AFM was used in the adhesion frequency assay for measuring two-dimensional (2D) kinetics at zero force (Chesla et al., 1998; Chen et al., 2008b) and in the force-clamp assay for measuring single-bond lifetimes over a range of forces (Marshall et al., 2003; Supplemental Figure S2, A and B). Our home-built AFM (Figure 1B) and its use for single-molecule experiments have been described previously (Wu et al., 2010).

FIGURE 5 (partial legend): (Right) In blood flow, tethered VWF may experience high shear stress, forcing it to adopt an elongated conformation and expose cryptic binding sites. ADAMTS13 may shift its binding site for VWF from CUB to MP-TSP6 by binding to the newly exposed binding sites on the extended VWF. This interaction has high affinity and rapid kinetics and can sustain force with catch bonds. Further shear stress exposes more binding sites on A1A2A3 and the cleavage site in the unfolded A2 domain, leading to eventual VWF cleavage.

Briefly, AFM cantilevers (MLCT model from Bruker, Billerica, MA)
were functionalized by incubation with various ADAMTS13 variants in buffer (40 µg/ml for each variant, 10 µl per cantilever) at 4°C overnight, rinsed, and soaked in phosphate-buffered saline (PBS) containing 1% bovine serum albumin (BSA) to block nonspecific binding. Polystyrene dishes were cleaned with absolute ethanol and dried with argon gas before protein adsorption. Surfaces were incubated with 15 µl per spot of 1% BSA (for nonspecific control), 40 µg/ml A1A2A3, or 250 µg/ml plasma VWF at 4°C overnight and rinsed three times with PBS. Dishes were then filled with PBS containing 1% BSA in the presence or absence of 5 mM EDTA. For inhibition assays, ADAMTS13 variant-containing buffer (2.5-10 µg/ml) was added to the dish and incubated for 30 min at room temperature before experiments were run. Similarly, for Ca2+-dependency assays, samples were incubated with or without 5 mM Ca2+-containing buffer for 30 min at room temperature before experiments whenever conditions changed.
AFM experiments were performed by force ramp for adhesion frequency measurements or force clamp for bond lifetime measurements. A piezoelectric translator (PZT) drove the AFM cantilever to approach and contact the surface (Supplemental Figure S2, A and B, black curves), retract to ∼15 nm above the surface to reduce nonspecific binding (Supplemental Figure S2, A and B, blue curves), hold for a given contact time to allow bond formation (Supplemental Figure S2, A and B, green curves), and retract at a rate of 50 pN/s to detect adhesion (Supplemental Figure S2, A and B, red curves); this cycle was repeated 50 times to enumerate the adhesion frequency for each contact time. For bond lifetime measurements, when adhesion was detected and reached the desired force (Supplemental Figure S2B), the feedback control stopped the PZT movement and maintained the force level until bond dissociation, when the force could no longer be kept at a constant level. In lifetime measurements, proteins were coated at low densities to ensure infrequent adhesions (∼20%), a condition previously shown to be necessary for binding at the single-bond level (Chesla et al., 1998). Adhesion frequency and lifetime measurements between different molecular pairs were repeated multiple times with different AFM cantilever tips. Our previous study demonstrated that force-induced conformational change is a prerequisite for A1A2A3 to be cleaved by ADAMTS13 (Wu et al., 2010). To exclude cleavage events, only single-force-drop events were analyzed in the present study, so that the measured lifetime data reflected the dissociation kinetics of bonds between ADAMTS13 variants and A1A2A3 and not the enzymatic kinetics of A2 cleavage by ADAMTS13. In addition, we previously observed similar GPIbα binding and ADAMTS13 cleavage properties for both adsorbed and antibody-captured A1A2A3 on polystyrene surfaces (Wu et al., 2010), suggesting that adsorption was unlikely to alter the binding properties of A1A2A3 to the ADAMTS13 constructs.
Extension of VWF by shear flow
The Petri dish or glass surfaces were preincubated with 15 µl per spot of 80 µg/ml polyclonal anti-VWF antibody (ab6994) at 4°C overnight and rinsed three times with PBS. On the day of the experiment, surfaces were incubated with 15 µl of plasma VWF at the indicated concentration at room temperature for 30 min. A GlycoTech parallel plate flow chamber (Cat. # 31-001; GlycoTech Corporation) was assembled using the plasma VWF coated on the Petri dish surface as the chamber floor. A continuous flow of buffer (viscosity ∼1 cP) was applied for 10 min at a wall shear rate of 8000 s⁻¹. The VWF-coated surfaces were then recovered. The Petri dishes were used for further AFM adhesion frequency or force-clamp experiments. The glass surfaces were scanned with AFM (MFP-3D Stand Alone AFM, Asylum Research, Oxford Instruments) or examined by superresolution fluorescence microscopy (Zeiss LSM 780 and Elyra PS.1, Carl Zeiss Microscopy GmbH) to obtain SIM images (Gustafsson, 2005). The VWF multimers used for superresolution microscopy were preconjugated with a fluorescent dye (Cy5) according to the Lumiprobe protocol.
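For orientation, the wall shear stress implied by these flow settings follows from the Newtonian relation for a parallel-plate chamber; the numbers below use only the stated viscosity and wall shear rate (the chamber dimensions are not given here):

$$\tau_w = \mu\,\dot{\gamma}_w = \bigl(10^{-3}\ \mathrm{Pa\,s}\bigr)\bigl(8000\ \mathrm{s^{-1}}\bigr) = 8\ \mathrm{Pa} \approx 80\ \mathrm{dyn/cm^2}\,,$$

where, for a chamber of gap height $h$ and width $w$ carrying volumetric flow rate $Q$, the wall shear rate is $\dot{\gamma}_w = 6Q/(w h^2)$.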
BFP experiments
BFP was used in the thermal fluctuation assay to measure 2D on/off rates of single bonds at zero force (Chen et al., 2008a,b; Supplemental Figure S2C) and in the force-clamp assay to measure single-bond lifetimes at low forces. The coating density of proteins was controlled to ensure infrequent adhesions (∼20%).
Our home-built BFP and its use for single-molecule experiments have been previously described (Chen et al., 2008a,b; Ju et al., 2017). Briefly, two micropipettes, placed in a cell chamber mounted on the stage of an inverted microscope, were used to aspirate a biotinylated red blood cell (RBC) and a target bead, respectively (Figure 1C). A probe bead was attached to the apex of the RBC through biotin-streptavidin coupling using a third micropipette to form a force transducer. The position of the probe was tracked by a high-speed camera (1500 frames per second) with a spatial resolution of a few nanometers. A1A2A3 and streptavidin were covalently linked to probe beads using previously described chemistry (Chen et al., 2008a). ADAMTS13 variants were coated on target beads by overnight incubation of each variant (400 µg/ml) at room temperature, rinsed with HEPES, and soaked in HEPES with 1% BSA to block nonspecific binding. For experimental testing of Ca2+ dependency, both target and probe beads were incubated with or without EDTA for 30 min before the experiment.
Bond lifetimes in the low-force regime were measured by BFP, as it has a much softer (smaller spring constant) force transducer than AFM. Nonspecific adhesion in the BFP was controlled using an A1A2A3-coated bead and a BSA-coated bead, which resulted in a 4% adhesion frequency. The bond lifetime experiment procedures were similar to those described for AFM, except that the BFP was used. For the thermal fluctuation assay, the target bead was driven by the PZT to contact the probe bead, retract, and hold at a null position (0 pN) for 10 s. The thermal fluctuation of the probe bead was monitored to identify events of bond formation and dissociation, respectively, from sudden reduction and resumption of the fluctuation amplitude, gauged through the 90-point sliding standard deviation (SD) of the probe bead displacement over time (Supplemental Figure S2C). The thermal fluctuation changed because bond formation between molecules on the two beads restricted the movement of the probe bead. The time it took to form a bond (waiting time) and the time the bond lasted (bond lifetime) were measured and pooled for kinetic analysis.
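A minimal sketch of the detection idea described above, run on a simulated trace; the window size follows the text, but the noise amplitudes and threshold are illustrative assumptions rather than the authors' calibration:

```python
import numpy as np

def sliding_sd(x, w=90):
    """Centered w-point sliding standard deviation of a 1D position trace."""
    half = w // 2
    out = np.full(x.size, np.nan)
    for i in range(half, x.size - half):
        out[i] = np.std(x[i - half:i + half])
    return out

fs = 1500.0                                      # camera frame rate (from the text)
x = np.random.normal(0.0, 5.0, int(10 * fs))     # free-bead thermal noise, nm (assumed)
x[6000:9000] = np.random.normal(0.0, 2.0, 3000)  # simulated bond: fluctuation drops

sd = sliding_sd(x)
bound = sd < 3.5                                 # illustrative threshold between levels
print(f"apparent bond lifetime ~ {np.nansum(bound) / fs:.2f} s")
```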
Kinetic analysis and modeling
After the nonspecific binding was removed, the specific adhesion frequency P_a was fitted by nonlinear regression to a probabilistic kinetic model for a single-step monovalent receptor-ligand interaction (Chen et al., 2008b):

P_a = 1 − exp{−m_AD m_VWF A_c K_a [1 − exp(−k_off^0 t_c)]}    (Eq. 1)

where m_AD and m_VWF denote the densities of the ADAMTS13 and VWF variants, respectively, A_c is the contact area, K_a is the 2D binding affinity, t_c is the contact time, and k_off^0 is the off-rate. The superscript 0 indicates the off-rate evaluated at zero force, to distinguish it from those obtained from bond lifetime measurements under force.
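A sketch of fitting Eq. 1 by nonlinear regression; since only the product m_AD·m_VWF·A_c·K_a is identifiable from P_a alone, it is lumped into a single parameter, and the data points below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_adhesion(tc, mmAcKa, koff0):
    """Eq. 1 with mmAcKa = m_AD * m_VWF * A_c * K_a lumped together."""
    return 1.0 - np.exp(-mmAcKa * (1.0 - np.exp(-koff0 * tc)))

tc = np.array([0.2, 0.5, 1.0, 2.0, 5.0])        # contact times, s
pa = np.array([0.08, 0.14, 0.19, 0.21, 0.22])   # hypothetical adhesion frequencies
(mmAcKa, koff0), _ = curve_fit(p_adhesion, tc, pa, p0=[0.3, 1.0])
print(f"m_AD*m_VWF*Ac*Ka = {mmAcKa:.3f}, k_off^0 = {koff0:.2f} /s")
```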
To calculate the fold increase in affinity (K_a ratio) of the same ADAMTS13 variant for the flow-extended VWF over the coiled VWF, we used the steady-state version of Eq. 1 (by letting t_c → ∞) and solved for m_AD m_VWF A_c K_a = −ln(1 − P_a). The K_a ratio can then be obtained as the ratio of −ln(1 − P_a) values calculated from the measured P_a in Figure 2F, because the molecular densities and contact area remained unchanged. These calculations indicate that flow-extended VWF bound ADAMTS13 and MP-TSP6 with, respectively, 6.7- and 14-fold higher affinities than coiled VWF.
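The steady-state fold-change calculation in one line; the two P_a values below are placeholders, not the measured Figure 2F frequencies:

```python
import numpy as np

pa_coiled, pa_extended = 0.10, 0.50   # hypothetical steady-state adhesion frequencies
# With densities and contact area unchanged, Ka scales as -ln(1 - Pa)
fold = np.log(1.0 - pa_extended) / np.log(1.0 - pa_coiled)
print(f"Ka(extended) / Ka(coiled) = {fold:.1f}")
```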
To analyze the waiting time and bond lifetime data measured from the BFP thermal fluctuation assay, pooled waiting times t_w and bond lifetimes t_b were respectively fitted by the following equations for single-step irreversible association and dissociation of single monomeric bonds (Chen et al., 2008a,b):

P(t_w ≤ t) = 1 − exp(−m_AD m_VWF A_c k_on t),    P(t_b > t) = exp(−k_off^0 t)

Combined with the molecular densities on the beads, the effective on-rate A_c k_on can be derived from the fitting.
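A sketch of extracting the rates, assuming the standard exponential forms for single-step kinetics (the waiting-time rate equals m_AD·m_VWF·A_c·k_on); all numbers below are simulated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
t_wait = rng.exponential(2.0, 200)   # waiting times between bonds, s (simulated)
t_bond = rng.exponential(0.8, 200)   # bond lifetimes, s (simulated)

rate_on = 1.0 / t_wait.mean()        # MLE of m_AD * m_VWF * A_c * k_on
koff0 = 1.0 / t_bond.mean()          # MLE of the zero-force off-rate
m_product = 50.0 * 40.0              # hypothetical site densities, um^-2 each
print(f"Ac*kon = {rate_on / m_product:.2e} um^4/s, k_off^0 = {koff0:.2f} /s")
```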
Measurement of molecular densities on beads
Beads coated with A1A2A3 or ADAMTS13 variants were respectively incubated with mouse anti-A1 mAb 5D2 or goat anti-ADAMTS13 polyclonal antibody 151 (for ADAMTS13 and MP-TSP6) or 158 (for CUB) at room temperature for 30 min, washed three times with HEPES containing 1% BSA, incubated with the respective PE-conjugated goat anti-mouse or rabbit anti-goat antibody at room temperature for 30 min, washed three times with HEPES containing 1% BSA, and analyzed by a BD LSR flow cytometer (BD Biosciences). The number of molecules per bead was determined from the fluorescence intensity after calibration with standard beads (BD Quantibrite PE Beads). The site density was obtained by dividing the total number of molecules by the surface area of the bead.
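A sketch of the conversion from fluorescence to site density; the Quantibrite calibration is a log-log linear regression, and the slope, intercept, and bead radius below are placeholders rather than measured values:

```python
import numpy as np

def site_density(median_pe, slope, intercept, bead_radius_um):
    """Convert median PE fluorescence to molecules per um^2 of bead surface."""
    molecules = 10.0 ** (slope * np.log10(median_pe) + intercept)  # Quantibrite fit
    return molecules / (4.0 * np.pi * bead_radius_um ** 2)         # sphere surface area

print(f"{site_density(1.2e4, 1.0, 0.1, 1.0):.0f} molecules/um^2")
```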
Statistical tests
We used 50 touches to measure one adhesion frequency value from each spot. The numbers of spots (n values) are reported in the figure legends. The spot-to-spot variability is indicated using box-and-whisker plots.
For statistics, a standard two-tailed Student's t test was performed to test the difference in adhesion frequency and VWF length between paired conditions. P values below 0.05 were deemed statistically significant and are denoted by *, **, ***, and **** for p < 0.05, 0.01, 0.001, and 0.0001, respectively.

| 2019-05-10T13:05:42.218Z | 2019-07-22T00:00:00.000 | {
"year": 2019,
"sha1": "72ee0ea78a76fc7b677d67e787182a11af175970",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.1091/mbc.e19-01-0021",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "10671e50fe1cdd07d20381c418daf4453dbbf519",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
20185805 | pes2o/s2orc | v3-fos-license | Comparison of the spatial patterns of schistosomiasis in Zimbabwe at two points in time, spaced twenty-nine years apart
Temperature, precipitation and humidity are known to be important factors for the development of schistosome parasites as well as their intermediate snail hosts. Climate therefore plays an important role in determining the geographical distribution of schistosomiasis, and it is expected that climate change will alter distribution and transmission patterns. Reliable predictions of distribution changes and likely transmission scenarios are key to efficient schistosomiasis intervention planning. However, it is often difficult to assess the direction and magnitude of the impact on schistosomiasis induced by climate change, as well as the temporal transferability and predictive accuracy of the models, as prevalence data are often only available from one point in time. We evaluated potential climate-induced changes in the geographical distribution of schistosomiasis in Zimbabwe using prevalence data from two points in time, 29 years apart; to our knowledge, this is the first study investigating this over such a long time period. We applied historical weather data and matched prevalence data of two schistosome species (Schistosoma haematobium and S. mansoni). For each time period studied, a Bayesian geostatistical model was fitted to a range of climatic, environmental and other potential risk factors to identify significant predictors that could help us to obtain spatially explicit schistosomiasis risk estimates for Zimbabwe. The observed general downward trend in schistosomiasis prevalence for Zimbabwe from 1981 and the period preceding a survey and control campaign in 2010 parallels a shift towards a drier and warmer climate. However, a statistically significant relationship between climate change and the change in prevalence could not be established.
Introduction
Schistosomiasis is a human disease caused by parasitic trematodes that have a freshwater snail as intermediate host. Of the five known species adapted to humans, Schistosoma haematobium, S. mansoni and S. japonicum are the most common, while S. mekongi and S. intercalatum are limited to minor areas, the first in Asia and the second in Africa. Schistosoma haematobium and S. mansoni are found in large parts of the African continent, the former causing urinary schistosomiasis with Bulinus globosus as intermediate host and the latter intestinal schistosomiasis with Biomphalaria pfeifferi as intermediate host. Overall, an estimated 207 million people are infected and 779 million live in areas where transmission is on-going (Chitsulo et al., 2000; Fürst et al.; Stensgaard et al., 2013, 2016). Schistosomiasis is present in the eleven provinces of Zimbabwe and predominant in the North, Northeast and Central highlands, with prevalence ranging from 3.3 to 39.3% amongst 10-15 year olds according to the most recent survey (Midzi et al., 2014).
Climate variables such as temperature and precipitation have been shown to drive schistosomiasis distribution (Malone et al., 2001; Moodley et al., 2003; Stensgaard et al., 2006, 2013; Schur et al., 2011, 2013) and have been used for the prediction of infection risk at locations where schistosomiasis prevalence has not been surveyed. These predictions can be displayed as smoothed risk maps and serve as a decision support tool for health planning and control (Raso et al., 2005; Simoonga et al., 2009; Woodhall et al., 2013; Assare et al., 2015). However, due to the central role of climate, the spatial distribution of schistosomiasis might change in the future with anticipated climate change (McCreesh and Booth, 2013). Reliable projections of future distributions and risk are thus needed for timely and efficient schistosomiasis intervention planning. A number of studies have used climate projection data in an attempt to produce such future transmission scenarios (Martens et al., 1997; Moodley et al., 2003; Yang et al., 2005; Zhou et al., 2008; Stensgaard et al., 2013, 2016; McCreesh et al., 2015). However, the effect of climatic change has never been investigated using historical climate data, which could otherwise help to validate the direction and magnitude of changes in prevalence.
Here, we use S. mansoni and S. haematobium prevalence data from two national surveys conducted in Zimbabwe in 1981 and 2010, as well as climate data covering the same two periods. The objective was to describe the spatial distributional changes of urogenital and intestinal schistosomiasis prevalence in Zimbabwe between these two points in time and to explore whether any observed changes can be attributed to changes in environmental or other risk factors over this time period.
Schistosomiasis survey data
Two datasets on the prevalence of S. mansoni and S. haematobium in school-aged children were provided by the Zimbabwe National Institute of Health Research (former Blair Research Laboratories) (Figure 1). The first survey was carried out in 1981 (Taylor and Makura, 1985), where urine and stool samples were collected from 14,619 school-going children aged eight to ten years from 166 different schools. The observed average prevalence of S. haematobium and S. mansoni was 41% (range: 0-97%) and 8% (range: 0-81%), respectively. The second survey was conducted in 2010 (Midzi et al., 2011), where urine and stool samples were obtained from 13,187 school-going children aged 10 to 15 years in 330 rural and urban schools, with an overall prevalence of 22.7%. The S. haematobium prevalence was 18% (0-76%) and that of S. mansoni 7.2% (0-64%).
Children were enrolled randomly in both surveys, and all rural districts in the country were represented. Schools were semi-randomly selected, allowing adjustment for district representation, and samples of urine and faeces were obtained from 50 children at each school. Egg counts were determined from urine samples through sedimentation followed by microscopy, and from faeces by the Kato-Katz method as described by Katz et al. (1972). An additional formol-ether concentration of faecal samples and a urine filtration technique for urine samples were added in the 2010 survey to improve sensitivity (Cheesbrough, 2006).
The two comparative surveys were conducted 29 years apart, focusing on contemporary baseline information on schistosomiasis prevalence and on health intervention purposes; hence, attention was not specifically paid to homogeneity between the studied age groups.
Environmental data
Data on climate, environment, and risk factors (Table 1) were taken from publicly available databases or provided directly by the data holder, and all climate variables were implemented in the model as averages of March to May. This period is of importance because of the rapid development of the intermediate host snail populations following the rainy season, which takes place from December to February in Zimbabwe (Mukaratirwa and Kristensen, 1995). Climate variables used for fitting the model were rainfall, temperature and the climate proxy for moisture availability, the normalised difference vegetation index (NDVI). The rainfall data were modelled from rain-gauge measurements and remotely sensed data from satellite-borne instruments (Novella and Thiaw, 2012). For NDVI, the average for 1982, 1983 and 1984 (Tucker et al., 2005; Pinzon and Tucker, 2014) was used as a substitute for 1981 because no data are available for March, April and May of that year. Temperature data were from the WorldClim dataset (Hijmans et al., 2005), which is a long-term average, as no year-specific temperature data are available for 1981. Other predictor variables included elevation (Jarvis et al., 2008), soil pH and total available water capacity (TAWC) (Batjes, 2012) and the Human Footprint dataset (Wildlife Conservation Society and Center for International Earth Science Information Network, 2005). In order to express the risk of schistosome infection, data on population density in 2010 (Tatem et al., 2007) and 1990 (Center for International Earth Science Information Network et al., 2011) as well as on rivers (Lehner et al., 2006) were utilised, together with the Human Transmission Index (HTI), the product of the distance to the nearest river from a given grid cell and the population density in that grid cell. All environmental parameters were re-sampled to a 10 by 10 arc degree grid with a resolution of 0.1 arc degree (~10 km) using ArcMap v. 10.2 (ESRI, Redlands, CA, USA).
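The resampling step was done in ArcMap; a rasterio equivalent is sketched below, with the file name and grid origin as hypothetical placeholders:

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin
from rasterio.warp import reproject, Resampling

with rasterio.open("ndvi_march_may.tif") as src:        # hypothetical covariate raster
    dst_transform = from_origin(25.0, -15.0, 0.1, 0.1)  # 0.1-degree grid over Zimbabwe
    dst = np.empty((100, 100), dtype=np.float32)
    reproject(rasterio.band(src, 1), dst,
              dst_transform=dst_transform, dst_crs="EPSG:4326",
              resampling=Resampling.bilinear)            # bilinear resampling
```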
Model implementation
A total of four Bayesian geostatistical logistic regression models were fitted to the S. haematobium and S. mansoni prevalence data from each time period (i.e. 1981 and 2010). Spatial correlation was modelled via a normally distributed spatial random effect with an exponential correlation function. A Bayesian stochastic search variable selection (George and McCulloch, 1993; Dellaportas et al., 2000) was carried out to identify the combination of the most important predictors and their functional form. The forms considered were continuous and categorical, with three and four categories based on each variable's quintiles. Sets of highly correlated covariates (Pearson's correlation coefficient >0.9) were identified, and only one variable from each set was retained to avoid collinearity. The set of covariates with the highest posterior probability was selected to fit the final Bayesian geostatistical model, following the approach of Chammartin et al. (2013). Bayesian kriging was applied, using the final model, to predict the risk of schistosomiasis at the 0.1 arc degree resolution (≈10 x 10 km) in a grid of 10,000 pixels covering Zimbabwe and parts of neighbouring countries.
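A minimal sketch of this class of model (binomial likelihood with covariates plus a spatial random effect with exponential correlation), written in PyMC on simulated data; priors, parameter names, and data are illustrative assumptions, and the stochastic search variable selection and kriging steps are omitted (prediction at unsampled pixels would use the fitted GP's conditional distribution):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 40
coords = rng.uniform(0.0, 5.0, size=(n, 2))   # school locations (degrees, simulated)
X = rng.normal(size=(n, 3))                   # standardized covariates (simulated)
examined = np.full(n, 50)                     # 50 children sampled per school
positive = rng.binomial(examined, 0.2)        # simulated egg-positive counts

with pm.Model() as geo_model:
    b0 = pm.Normal("b0", 0.0, 5.0)
    b = pm.Normal("b", 0.0, 5.0, shape=3)
    sill = pm.HalfNormal("sill", 1.0)         # spatial variance scale
    rho = pm.Gamma("rho", 2.0, 1.0)           # range of the exponential correlation
    cov = sill**2 * pm.gp.cov.Exponential(2, ls=rho)
    gp = pm.gp.Latent(cov_func=cov)
    phi = gp.prior("phi", X=coords)           # spatial random effect
    eta = b0 + pm.math.dot(X, b) + phi        # logit of school-level prevalence
    pm.Binomial("y", n=examined, p=pm.math.sigmoid(eta), observed=positive)
    idata = pm.sample(500, tune=500, chains=2)
```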
Spatial statistical modelling and variable selections
The most important predictors identified in each of the four models (two forms of schistosomiasis at two points in time), following the Bayesian variable selection procedure, are presented in Table 2. The higher category of the HTI was found to significantly affect the spatial distribution of S. haematobium in 1981, with a positive association. In 2010, on the other hand, a higher risk of S. haematobium infection was significantly associated with lower soil pH values. The S. mansoni distribution in 1981 was negatively associated with high levels of human disturbance (human footprint) and the lower category of HTI, whereas a positive association with high water availability (TAWC) was found. Low levels of HTI, high TAWC and altitude were all negatively associated with S. mansoni infection in 2010.
Temperature and precipitation were not selected as important predictors in any of the four Bayesian geostatistical models following the variable selection procedure. NDVI was selected for inclusion in the 1981 urogenital schistosomiasis model, however it did not explain disease distribution at a significant level.
Predictive risk maps of Schistosoma mansoni and Schistosoma haematobium infection in 1981 and 2010
The predicted prevalences are mapped in Figure 2, with red colours indicating higher predicted prevalence. It is apparent that the extent of areas with high predicted prevalence was reduced in 2010 compared to 1981 for S. haematobium, and that S. mansoni prevalence was reduced in some parts of the country while it increased in others.
Schistosoma haematobium infection occurred with high prevalence in most areas in 1981 (Figure 2A), whereas, in 2010, hotspots were concentrated in the North-east and in a southern section, but with infections still found in all parts of Zimbabwe (Figure 2B). The predicted S. mansoni infection risk was generally lower than that of S. haematobium in both years (Figure 2C and 2D), but with high predicted prevalence in the same northern hotspot areas as S. haematobium in 2010. A predicted high S. mansoni prevalence area in the South in 2010 was positioned further to the Southeast than the hotspot of S. haematobium. Comparison of the prediction maps for S. mansoni revealed that there had been a geographical shift in high-risk areas from the western to the eastern part of the country during the 29-year study period.
Predicted changes in schistosomiasis from 1981 to 2010
To follow the development from 1981 to 2010, the predicted prevalence maps were subtracted pixel-by-pixel to produce maps of change in absolute prevalence values (Figure 3). The change in urogenital schistosomiasis depicted in Figure 3A ranges from an 80 percentage-point decrease to a 53 percentage-point increase. Large areas of substantial decrease are present in the Northwestern part of Zimbabwe (blue colour). Large areas experienced unchanged prevalence (±20 percentage points) (yellow/orange), while some saw up to a 53 percentage-point increase (light red). The reduction in the predicted distribution of S. mansoni was most pronounced in the Northeast, where up to an 84 percentage-point decrease was observed (Figure 3B). Parts of the highveld and southern Zimbabwe experienced a low-level decrease, while there were zones with increases reaching 65 percentage points in some clusters (dark orange).
Estimates of the number of people infected
The total number of people infected was derived from the predicted prevalence and two datasets on population density, i.e. 1990 and 2010 (Tatem et al., 2007; Center for International Earth Science Information Network et al., 2011) (Table 1). The number of people estimated to have been infected with S. haematobium in 2010 was 2,129,309 (population-adjusted prevalence of 17.3%), a strong reduction from 4,382,835 (population-adjusted prevalence of 41.5%) in 1981, while the number of S. mansoni infections was reduced to 622,549 (5.1%) in 2010 from 992,249 (9.4%) in 1981.
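The underlying arithmetic is a pixel-wise product of the predicted prevalence surface and the population grid; a sketch with illustrative arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
prev = rng.uniform(0.0, 0.6, (100, 100))    # predicted prevalence per ~10 km pixel
pop = rng.uniform(0.0, 5000.0, (100, 100))  # population per pixel

infected = np.nansum(prev * pop)
adj_prev = infected / np.nansum(pop)        # population-adjusted prevalence
print(f"{infected:,.0f} infected; adjusted prevalence {adj_prev:.1%}")
```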
Discussion
Our results show that the prevalence of S. haematobium and S. mansoni declined in Zimbabwe over the past three decades, while the spatial patterns remained unchanged, with hotspots at the same locations, albeit at lower levels. These findings are in line with those of Lai and colleagues (2015), who demonstrated a general declining trend in S. mansoni and S. haematobium in sub-Saharan Africa (SSA) from 2000 onwards. They are also partly in line with Schur and colleagues (2013), whose study in East Africa showed a lower infection risk for S. haematobium in the past two decades and a lower infection risk for S. mansoni since 2000. The trend in historical data on schistosomiasis and climate for Zimbabwe presented in this paper corroborates these findings and indicates that the response of schistosomiasis to climatic changes is an on-going phenomenon. The Bayesian geostatistical models with the highest posterior probabilities still did not select any of the climatic covariates except NDVI, which was not significant. The HTI, soil pH, TAWC and altitude, on the other hand, were selected as either positively or negatively associated with schistosomiasis risk. This means that human activities, soil characteristics and altitude are more important drivers than the climate data in the implemented modelling framework. Therefore, we cannot conclude that the observed changes in schistosomiasis over the three decades are attributable to the observed changes in climatic conditions (drying and warming) in Zimbabwe. Local-scale factors such as health education and migration are also known to influence risk patterns, yet such data were not available at the national level, nor at the specific time points investigated. This notwithstanding, our results are in line with other recent studies that have shown that a reduction in the geographical distribution of the schistosome intermediate host snails is linked to changes in temperature, precipitation and humidity (Stensgaard et al., 2013, 2016; Pedersen et al., 2014a, 2014b). These studies found that many currently climatically suitable areas in Africa will become either too hot or too dry under future climate change scenarios to sustain many of the intermediate host snail species.
A number of issues concerning both the survey data and the environmental variables used to parameterize the geostatistical models might have had an impact on the results and could therefore potentially influence the conclusions drawn. First and foremost, different age groups were tested in the two national surveys, and as prevalence is known to vary amongst age groups (see review in ), this could have introduced a systematic error. In 1981, 8-10 year old school-aged children were tested, whereas in 2010, the age group was 10-15 year olds. Several studies show that prevalence peaks towards 15-20 years of age (Hairston, 1965; Taylor and Makura, 1985; Chandiwana et al., 1988; El-Khoby et al., 2000; Kabatereine et al., 2004), but the peak may occur earlier in high-prevalence areas (Traoré et al., 1998). The prevalence in both years is relatively high in the high-prevalence areas compared to other study areas (Lwambo et al., 1999; Kabatereine et al., 2004; Clements et al., 2006; Raso et al., 2007; Schur et al., 2011), thereby justifying a direct comparison of the two surveys. The age group in 2010 is in fact closer to the peak-prevalence age class, and the reduction observed from 1981 to 2010 would, in theory, have been more pronounced had the age groups tested been the same, thereby providing support for the observed reduced prevalence.
Secondly, it could be argued that the observed reduction of both forms of schistosomiasis might be attributed to mass drug administration campaigns (MDAs) and to changes in (risk) behaviour. However, according to national health authorities (Midzi et al., 2011), no interventions at the national scale were carried out in the interim period. Local MDAs may have occurred, but these would only have affected this comparative study if carried out in the years immediately preceding the surveys, as prevalence is known to return to pre-MDA levels within a few years (Clements et al., 2009).
Risk behaviours, such as inadequate toilet facilities, swimming, and domestic use of unsafe water sources, may have changed and contributed to the observed prevalence reduction. We did not investigate these factors quantitatively because no historical data are available from three decades back, though reports obtained during the 2010 survey (Midzi et al., 2011) indicate that the environment in which many of the schoolchildren in the rural districts live is still characterized by unsafe water, poor lavatory facilities and low awareness of, and compliance with, best practices.
The total number of infections in Zimbabwe was calculated as a simple product of predicted prevalence among schoolchildren and population density. We did not use age-adjusted population estimates, as these were not available historically. Since prevalence is known to peak among school-aged children (Fulford et al., 1998; Guyatt et al., 1999; Kabatereine et al., 2004), the presented results are likely to be overestimates, though less so for intestinal schistosomiasis, whose prevalence in early adulthood remains closer to the school-age level than does that of urogenital schistosomiasis (Chandiwana et al., 1988; Traoré et al., 1998; Guyatt et al., 1999; Kabatereine et al., 2004). Lai et al. (2015) made predictions of population-adjusted prevalence and the number of school-aged children infected with Schistosoma spp. in 2012 for all countries in SSA. For Zimbabwe, they estimated an overall prevalence of 25.2% for S. haematobium, corresponding to 960,000 school-aged children, compared to our overall estimated population-adjusted prevalence of 17.3% (2,129,309 people, all ages). For S. mansoni, they estimated an overall prevalence of 7.6%, corresponding to 290,000 school-aged children in 2012, compared to our estimate of 5.1% overall prevalence (622,549 people, all ages). Furthermore, the earliest available population data of satisfactory quality and resolution are from 1990. Applying the 1981 prevalence predictions with the knowledge that the Zimbabwean population grew from 7.2 to 10.5 million between 1981 and 1990, the number of predicted infections for 1981 appears overestimated. Correcting for this would result in an even more pronounced decline in prevalence than found in the current analysis.
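To make the calculation concrete, the following minimal sketch reproduces the simple prevalence-times-population product and the population-growth argument made above. The function and variable names are ours, and the study itself worked with gridded prevalence and population-density surfaces rather than national totals; only the quoted prevalence and population figures come from the text.

```python
# Illustrative sketch of the infection-count calculation described above:
# predicted prevalence multiplied by population size. National totals are
# used here for simplicity; the study used gridded surfaces.

def estimated_infections(prevalence: float, population: int) -> int:
    """Simple product of predicted prevalence and population size."""
    return round(prevalence * population)

pop_1981 = 7_200_000   # Zimbabwe population in 1981 (from the text)
pop_1990 = 10_500_000  # earliest population data of satisfactory quality

prev_sh = 0.173  # S. haematobium population-adjusted prevalence (2010 estimate)

# Applying a fixed prevalence to the 1990 population instead of the true
# (smaller) 1981 population inflates the 1981 infection count by ~46%,
# which is why the text argues the 1981 figures are overestimated.
print(estimated_infections(prev_sh, pop_1981))  # 1,245,600
print(estimated_infections(prev_sh, pop_1990))  # 1,816,500
print(pop_1990 / pop_1981 - 1)                  # ~0.458 overestimation factor
```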
Finally, the two surveys mainly capture the infections acquired in the corresponding year because both were conducted in the period September-December, allowing enough time for the parasite to infect, mature and excrete eggs after the post-rainy season, when transmission is highest. Because the surveys also registered infections acquired in previous years, climate data for those years could usefully have been included, but a paucity of climate data from the period leading up to the early survey prevented this. Furthermore, rainfall was the only predictor available for both 1981 and 2010, whereas NDVI was only obtainable from late 1981, so an average of 1982-1984 (March through May) was used. Additionally, both prevalence and climate data were obtained at two specific points in time and do not, strictly speaking, represent the development of prevalence or climate over the 30-year period. No information on fluctuations in prevalence during the intervening 29 years is available, but some information on climate is available in the form of yearly country averages (see suppl. II), which indicate that both temperature and precipitation are representative of the period, i.e. the years 1981 and 2010 were neither particularly cold/warm nor wet/dry.
Conclusions
This study presents the first up-to-date Bayesian risk maps for S. haematobium and S. mansoni infection in Zimbabwe. To the best of our knowledge, this is the first study investigating the historical change in schistosomiasis distribution and prevalence in relation to changes in climate, using prevalence and climate data matched in time and spanning a period as long as three decades. With the implementation of real-time climate data for the two periods, the results add to our knowledge about the impact of climate change on schistosomiasis and highlight the importance of a historical perspective when studying the impact of climate change on vector-borne diseases in general. Although a statistically significant relationship between climatic changes and the change in prevalence could not be established, the observed downward trend in schistosomiasis prevalence in Zimbabwe over the last three decades parallels a shift towards a drier and warmer climate.
"year": 2017,
"sha1": "10b0a9f9998bd1ea41e5a892bcf616a767ab39bf",
"oa_license": "CCBYNC",
"oa_url": "https://www.geospatialhealth.net/index.php/gh/article/download/505/527",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9cdc2f891f387721d247e2fb3c06b8d3b1940b3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Duloxetine for Postoperative Pain Control Following Knee or Hip Replacement: A Systematic Review and Meta-Analysis
Background Duloxetine is a Food and Drug Administration-approved serotonin-norepinephrine reuptake inhibitor for treating depression, anxiety, fibromyalgia, and neuropathic and chronic musculoskeletal pain. This meta-analysis aims to evaluate the efficacy of duloxetine in reducing pain and postoperative opioid use following lower extremity total joint arthroplasty. Methods A literature search was performed, identifying randomized controlled trials investigating duloxetine for pain management after total hip and total knee arthroplasty. Data from the visual analog scale (VAS) for pain during movement and at rest were extracted for postoperative days (PODs) 1, 3, 7, and 14, as well as postoperative week 6 and postoperative month 3. Opioid use data were obtained at 24, 48 and 72 hours. All data were analyzed using inverse variance with random effects and presented as weighted mean differences. Results Eight unique studies were identified and included, 7 of which were analyzed quantitatively. Duloxetine decreased postoperative opioid consumption at 48 and 72 hours. For VAS pain at rest, significantly reduced pain was reported by duloxetine-treated patients at POD 3, POD 7, and postoperative week 6. For VAS pain with movement, significantly reduced pain was reported by duloxetine-treated patients at POD 1, POD 3, POD 7, POD 14, postoperative week 6, and postoperative month 3. Conclusions Duloxetine appears to decrease postoperative pain and opioid consumption following total joint arthroplasty. However, definitive conclusions are limited by small sample sizes and study heterogeneity. While there is a need for follow-up studies to determine the optimal dose, duration, and patient population, strong preliminary data provide robust support for future large-scale efficacy studies.
Introduction
Postoperative pain control is a critical component of comprehensive postsurgical patient care, as it affects patient satisfaction and operative outcomes and can result in pathophysiologic neural alterations that evolve into chronic pain syndromes [1,2]. Tissue trauma resulting from surgery is thought to lead to both central and peripheral nerve sensitization, resulting in an activity-dependent increase in the excitability of spinal neurons and a decreased threshold of nociceptive afferents, respectively [3,4].
Historically, opioids have been the preferred drugs of choice for the management of postoperative pain following joint arthroplasty [2]. However, when used in excess, opioids can lead to deleterious side effects and have the potential for both addiction and abuse [1,5,6]. These risks are particularly important in orthopaedics given that orthopaedic surgeons are the highest prescribers among all surgeons [7].
In light of the concerns surrounding excessive opioid use, multimodal analgesic regimens utilizing a combination of opioid and nonopioid analgesic drugs targeting different sites within the central and peripheral nervous system have emerged as the new standard in managing postoperative pain [2,8]. Alongside commonly used interventions such as acetaminophen and nonsteroidal anti-inflammatory drugs, there is a growing body of evidence suggesting duloxetine may have utility in the management of postoperative pain [9,10].
Duloxetine is a relatively balanced serotonin and norepinephrine reuptake inhibitor shown to be effective in managing neuropathic pain [11,12]. Several reviews have evaluated the effect of duloxetine on postoperative pain and opioid consumption [9,13]. Most notably, Branton et al. [10] recently published a review assessing whether duloxetine reduced pain and opioid consumption following elective orthopaedic surgery. While their review was a catalyst for continued inquiry into the use of duloxetine in the postoperative setting, it only included 2 studies that evaluated duloxetine in total joint arthroplasty (TJA). Given the rising interest in duloxetine use during total knee (TKA) and total hip arthroplasty (THA), this systematic literature review and meta-analysis aims to examine the current evidence regarding duloxetine use in patients undergoing lower extremity TJA. The primary aim is to assess whether perioperative administration of duloxetine is effective in reducing postoperative opioid consumption and pain. The secondary aim is to aggregate data on the methodology, safety, and primary outcomes.
Methods
This systematic review was conducted in accordance with the Joanna Briggs Institute (JBI) System for the Unified Management, Assessment and Review of Information methodology for systematic reviews of effectiveness evidence [14], which allows for exports and analysis consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and the PRISMA Statement [15,16]. The review was prospectively registered on PROSPERO (registration ID: CRD42022309539).
Search strategy
The search strategy aimed to identify both published and unpublished studies. Full details of the search methods and study selection can be found in Figure 1 and Appendix 1.
Assessment of methodological quality and inclusion
Eligible studies were screened by 2 independent reviewers (I.A.J. and A.T.) at the study level for methodological quality using standardized critical appraisal instruments from JBI for experimental studies. Domains assessed included the JBI standard questions for the assessment of clinical trials (Appendix 2). When necessary, authors were contacted to request missing or additional data for clarification. Any disagreements were resolved through discussion, with a third reviewer (N.H.) serving as a tiebreaker to resolve any discrepancies.
Following critical appraisal, outcomes from studies that were found to have both clinically and statistically significant differences between treatment and comparator groups at baseline were excluded from the meta-analysis portion of this review. Clinically significant differences were considered to be those that were highly likely to influence the validity of clinical outcomes, such as differences in baseline pain score for an unblinded study. Otherwise, all studies, regardless of their methodological quality, underwent data extraction and synthesis when possible.
Data extraction
Data were extracted by 2 independent reviewers (I.A.J. and A.T.) using the standardized JBI data extraction tool. In addition to extracting the quantitative values necessary to perform the meta-analysis, information pertaining to trial registration, type of surgery performed, number of patients per study arm, screening questionnaire(s) used, and dosing schedule was obtained. Registered primary outcomes were compared to reported primary outcomes to evaluate overall study success. Additionally, secondary outcomes were extracted from all studies, regardless of whether they contributed to the qualitative synthesis.
Data synthesis
Studies were pooled in statistical meta-analysis using the JBI System for the Unified Management, Assessment and Review of Information. Effect sizes were expressed as weighted final postintervention mean differences, and 95% confidence intervals (CIs) were calculated for analysis. Meta-analysis was only performed for outcomes that were comparable at a specific time point across ≥3 included studies. For the outcome of postoperative opioid use, only 24-hour, 48-hour, and 72-hour opioid use was analyzed, as longer-term data were reported in <3 of the included studies. Values were converted to morphine milligram equivalents as needed. The outcomes of VAS pain with movement (VAS-M) and VAS pain at rest (VAS-R) were obtained at baseline, postoperative day (POD) 1, POD 3, POD 7, POD 14, postoperative week 6, and postoperative month 3, as these were reported in ≥3 studies. VAS scores reported on a 0-100 scale were converted to a 0-10 scale to maintain consistency across studies. Additionally, for 2 of the included studies, reported average pain severity was used for pain at rest and reported pain with general activity was used for pain with movement [17,18]. When studies did not report the standard deviation (SD), it was calculated from the standard error or 95% CI using the Cochrane method. The SD was calculated from the 95% CI using critical values from the t-distribution because studies tended to have small sample sizes. In cases where the SD was not reported and could not be calculated, the corresponding author was contacted via email. The mean was calculated from the median and interquartile range as suggested by Wan et al. [19].
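As a rough illustration of these imputation steps, the sketch below implements the two standard formulas under the stated assumptions: the Cochrane back-calculation of the SD from a 95% CI using t critical values, and the Wan et al. estimators for the mean and SD from a median and interquartile range. The function names and example values are hypothetical and are not drawn from the review's data.

```python
# Minimal sketch (not the review's actual code) of the two estimators
# described above, using SciPy. `n` is the group sample size.

import numpy as np
from scipy import stats

def sd_from_ci(lower: float, upper: float, n: int) -> float:
    """SD of a group mean from its 95% CI, using t critical values
    (appropriate for the small samples typical of these trials)."""
    t_crit = stats.t.ppf(0.975, df=n - 1)
    se = (upper - lower) / (2 * t_crit)
    return se * np.sqrt(n)

def mean_sd_from_median_iqr(q1: float, median: float, q3: float, n: int):
    """Wan et al. (2014) estimators for the mean and SD from a
    median and interquartile range."""
    mean = (q1 + median + q3) / 3
    z = stats.norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    sd = (q3 - q1) / (2 * z)
    return mean, sd

# Example with invented VAS-at-rest values (0-10 scale), n = 30 per arm:
print(sd_from_ci(2.1, 3.9, n=30))                    # ~2.41
print(mean_sd_from_median_iqr(2.0, 3.0, 4.5, n=30))  # (~3.17, ~1.95)
```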
Statistical analyses were performed using inverse variance with random effects [20]. Heterogeneity was assessed statistically using the standard chi-squared and I-squared tests. A funnel plot was not utilized to assess for publication bias as there were fewer than 10 studies included in the meta-analysis.
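For readers who want the mechanics, here is a sketch of inverse-variance random-effects pooling of mean differences, together with Cochran's Q and the I-squared statistic. It uses the DerSimonian-Laird estimator of between-study variance, a common default; the review itself ran its analyses in the JBI software and does not state which estimator was used, so this is an illustration rather than a reproduction, and the input values are invented.

```python
# Sketch of inverse-variance random-effects pooling (DerSimonian-Laird),
# with Cochran's Q and I-squared as heterogeneity measures.

import numpy as np

def random_effects_md(md: np.ndarray, se: np.ndarray):
    """Pool mean differences `md` with standard errors `se`."""
    w_fixed = 1 / se**2                       # fixed-effect inverse-variance weights
    md_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)
    q = np.sum(w_fixed * (md - md_fixed)**2)  # Cochran's Q
    df = len(md) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)             # between-study variance (DL)
    w_re = 1 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se_pooled = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, q, i2

# Invented VAS mean differences (duloxetine minus control) from 3 studies:
md = np.array([-1.2, -0.6, -0.9])
se = np.array([0.4, 0.3, 0.5])
print(random_effects_md(md, se))  # pooled MD ~ -0.83, Q ~ 1.46, I2 = 0
```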
Results
Nine randomized controlled studies qualified for inclusion. However, 2 of these studies used the same dataset, leaving 8 unique study populations for the final analysis (Table 1). Seven authors were contacted to solicit missing information. Three authors responded with data, 2 of which could be included in the meta-analysis; the remainder were excluded because the values lacked the information needed to perform meta-analysis (eg, no SD or 95% CI reported). All but 1 study at least partially registered their trial and prespecified the primary outcome. Six studies investigated duloxetine for TKA, 1 for THA, and 1 for both TKA and THA. Three studies failed to use a true placebo. There was also variable use of screening questionnaires, with the Central Sensitization Inventory and Hamilton Depression Scale most frequently used.
Qualitative review of outcome data demonstrated a high degree of heterogeneity, variable success, and a robust safety profile. Most notably, among the studies that specified their primary outcome (ie, in a trial registration or published protocol), only 3 fully achieved their primary aim [17,25,26]. A fourth study was partially successful [18]. Of the 307 unique patients treated with duloxetine, no significant adverse events were reported.
Quantitative analysis of studies reporting pain scores
All of the included studies had potentially analyzable data for VAS. However, data from Rienstra et al. [23] were excluded from the meta-analysis because baseline VAS scores differed significantly between the duloxetine and control groups. As such, 7 studies were included in the final analysis of VAS scores between duloxetine and comparator treatments.
Discussion
Our review of the literature indicates that the use of perioperative duloxetine in lower extremity TJA may effectively decrease pain and postoperative opioid use. These findings are similar to a meta-analysis published recently by Branton et al. [10], which found lower postoperative opioid use with duloxetine at 24 and 48 hours in patients undergoing elective orthopaedic surgery. The nonsignificant difference in 24-hour opioid use in this meta-analysis is due to the inclusion of unpublished data from a recently published study [26]. The safety data and lack of severe adverse events observed here are also consistent with other reviews, which have shown that duloxetine is generally safe and well tolerated, with few serious side effects reported, particularly at doses not exceeding 60 mg/d [27-30]. While study heterogeneity precludes strong recommendations regarding the optimal patient population and dosing schedule, this quantitative meta-analysis provides the strongest evidence to date that duloxetine improves postoperative pain without causing major adverse events in lower extremity TJA.

The findings presented in this meta-analysis should be carefully considered in the context of the dosing regimen used in each individual study, which includes the total dose given as well as the dose duration and timing relative to surgery. The dosage of duloxetine should be ≥60 mg when treating neuropathic pain [12]. However, almost a third of the studies included in this review used 30 mg. Interestingly, Koh et al. [17] was among the studies that used a 30-mg dose yet produced some of the most promising data. This could reflect a greater relative importance of the duration and timing of duloxetine administration than of the dose used. It has been shown that it takes ≥6 weeks of duloxetine use before peak improvements in osteoarthritic pain are attained [12,31,32]. The findings of Koh et al. [17] and Kim et al. [26] (both of whom used 30 mg of duloxetine daily) provide support for this hypothesis. Koh et al. started dosing patients 1 day before surgery. While they failed to show a difference in opioid consumption or pain at 72 hours, they showed better performance across pain metrics between 2 and 12 weeks. In contrast, Kim et al. [26] dosed patients for 2 weeks prior to surgery and observed the inverse: decreased opioid consumption and postoperative pain at 72 hours but no difference in pain at 12 weeks.
Further support for the importance of the dosing schedule comes from the largest study included in this review [26]. Patients were given a 60-mg dose for 14 days starting on POD 0. The primary outcomes of pain and opioid use at 14 days showed significant benefit compared to placebo, but pain scores at earlier time points were nonsignificant. Prior to inclusion of the data by YaDeau et al., several earlier time points had been significant and the effect size for pain scores at POD 14 had been notably larger. In their study, the general lack of significant differences during the first postoperative week is reasonable given that dosing did not start until POD 0. As discussed, the benefits of duloxetine would be expected to start around POD 14 and would not be expected to peak for another several weeks. This hypothesis is supported by the fact that they found significant improvements in knee pain and the Knee injury and Osteoarthritis Outcome Score for Joint Replacement at 3 months. It should also be emphasized that these improvements occurred well after the effects of treatment should have subsided. This suggests long-term benefits of treatment beyond the effects of the drug alone, which have yet to be explained.
One of the prevailing questions when considering the use of duloxetine in the surgical setting is whether it decreases central and/or peripheral sensitization. This was explicitly investigated by Rienstra et al. [23], who hypothesized that targeted treatment aimed at desensitization prior to surgery would reduce chronic residual pain postoperatively. In their unblinded trial, duloxetine was given for 10 weeks and then stopped prior to surgery. A difference was not demonstrated, leading the authors to conclude that preoperative targeted treatment with duloxetine does not influence postoperative chronic residual pain after TKA or THA. In contrast, Koh et al. [17] dosed patients for 6 weeks postoperatively and found significant differences in pain at 12 weeks, well after the drug had been discontinued. Similarly, YaDeau et al. [26] dosed patients for 14 days after surgery and found significant improvements in opioid consumption. The apparent difference suggests that managing pain in the postoperative period may be more important than in the preoperative period for limiting pain sensitization. Future large-scale follow-up studies aimed at reducing long-term pain should strongly consider continuing treatment during the postoperative period. However, preoperative dosing should not be discounted entirely. The dosing regimen of Rienstra et al. [23] was atypical in that dosing was tapered in the weeks leading up to surgery. As discussed, preoperative dosing may be an important strategy for decreasing postoperative opioid use [24].
Among all surgeries, TKA has 1 of the widest ranges of postoperative pain [33]. As such, the success of future large-scale clinical trials will likely require target population optimization, which can be achieved through screening questionnaires. These questionnaires fall broadly into 2 categories: (1) those focused on identifying patients with underlying psychiatric pathology (eg, anxiety, depression) and (2) those aimed at identifying patients with preoperative pain catastrophizing. Attempts to target and/or exclude patients with psychiatric illness are reasonable given the proven efficacy of duloxetine in treating depression and anxiety, as well as the potential transitory worsening of some symptoms when starting treatment [34]. Moreover, preoperative depression and anxiety are associated with heightened pain at 1 year after TKA, even in the absence of clinical or radiographic abnormalities [35]. Pain catastrophizing is a negative cognitive-affective response to anticipated or actual pain and has been associated with a number of important pain-related outcomes [36,37]. Surgery patients with high levels of preoperative pain catastrophizing have lower physical function, more pain, and worse overall health both before and after surgery [38-41].
In summary, duloxetine appears to safely decrease postoperative pain and opioid consumption following TJA. The major limitations of this study include inconsistent placebo use and heterogeneous dosing regimens. Nevertheless, this review provides sufficient safety and preliminary efficacy data to support large-scale clinical trials aimed at establishing the optimal dose, duration, and target population for duloxetine use in lower extremity TJA. In addition, the available data suggest that 3 principal factors be considered when designing future clinical trials. First, dosing should continue for at least 2 weeks postoperatively, and preoperative dosing should be considered for studies that aim to decrease opioid use in the first 24- to 72-hour postoperative period. Second, a dose of at least 60 mg should be considered, as this is the Food and Drug Administration-approved target dose for chronic musculoskeletal pain. Finally, at least 1 screening questionnaire aimed at assessing pain catastrophizing and/or anxiety should be implemented as a way to stratify patients and maximize effect size.
Conflicts of interest
The authors declare there are no conflicts of interest. For full disclosure statements refer to https://doi.org/10.1016/j.artd.2023.101097.
"year": 2023,
"sha1": "05a69ff1361f8bb7931efc2aed98b46773a83194",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "33999bef2ccf6fbc2791e9e9297bb80b4fa6f73f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Putting diverse farming households' preferences and needs at the centre of seed system development
Over recent decades, international agricultural research has shown that it can generate agricultural technologies with benefits for societies in the Global South that outstrip the investments many times over. However, it has also been shown that the benefits generated are not evenly spread and do not reach some groups of farmers at all. Too often, segments of the intended target populations are left out, and these often tend to be those already 'left behind'. New seeds and varieties are important elements of agricultural technologies, and their development relies on seed delivery systems to get new varieties to the farming population. Here we argue that a clear analysis of the preferences and needs of farming households, and of their inherent heterogeneity, is required when setting the goals for breeding programmes and designing seed delivery systems. We characterize the differences in demand profiles, which imply different types of seed delivery models tailored to context, crop, and the preferences and multiple needs of farming households. We point to the implications for organizing and targeting the seed delivery system in order to cater for all. Recognising the existence of diverse demands, developing different seeds and varieties, and delivering them through a variety of models asks for clarity on mandates and opens up opportunities for coordination that will lead to synergies in meeting the UN's Sustainable Development Goals and reaching a wider population of farming households.
Introduction
Food insecurity and rural poverty remain two of the most challenging problems of global development. Solutions to these problems most often involve building on agriculture to produce sufficient and nutritious foods as well as good returns for the producing households. Agricultural research for development (AR4D) has produced impressive returns on investment in this respect over the past decades. International Agricultural Research Centres (IARCs), as important actors in this field, are estimated to have generated a 10:1 return on investment (Alston, Pardey, and Rao, 2020). Developing new seeds and varieties and disseminating them to farming households around the world through various types of seed systems 1 has been at the heart of this success. This may lead one to conclude that the global community is on the right track and simply needs to continue along this path, applying the latest insights and modern breeding and seed multiplication techniques. Indeed, the latest initiatives give the impression that this focus is being envisioned and strengthened, with an emphasis on genetic gains and accelerating variety turnover (Crops to End Hunger, 2021; EiB, 2021).
Unlike many other agricultural technologies, most varieties and seeds, once released, do not suffer from long pay-off periods; their benefits are more immediate. In addition, the need for capital and the potential risks are relatively modest compared with, for example, conservation agriculture. Still, only an estimated 40% of targeted beneficiaries actually adopt newly promoted varieties, leaving a gap of some 60% (for staple crops: McEwan et al., 2021; Thiele et al., 2021; for grain legumes and dryland cereal crops: Woldeyohanes, Hughes, Mausch, and Oduol, 2021). This highlights a remaining and persistent limitation in the distribution and/or reach of the benefits from plant breeding efforts, which is often related to the types of farming households and/or their geographic location.
The problematic gap is frequently explained as the result of a series of hurdles to adoption that non-adopters are not (yet) able to surmount. However, it also needs to be asked whether there are concealed structural problems that maintain the persistence of the adoption gap: structural problems that result in mismatches. These could be mismatches between farming households' aspirations and the underlying assumptions used by the developers of the technologies (Mausch et al., 2021a; Verkaart, Mausch, and Harris, 2018), mismatches between farming households' goals, motivations, and incentive structures and the characteristics of the technologies (Gassner et al., 2019), or shortcomings in the distribution of the technologies through the existing systems (Almekinders et al., 2019b; McEwan et al., 2021). The mismatches are variable and occur in different combinations and different contexts, leading to the observed 'mixed successes'. Acknowledging these mismatches calls for reflection on the aspirations of breeding and seed programmes to supply seeds and varieties when making decisions about whom to cater for, and how. If the ambition is to cater for all farming households, this raises the question of how to deal with the diversity of crops and contexts effectively. It is a question that is central to IARCs' strategies and the new One CGIAR Research Agenda, in which system transformation and the delivery of better seeds to the most disadvantaged groups are frequently cited as key areas of impact (CGIAR System Organization, 2021).
In this paper we argue that if the ambition is to cater for all, then a re-orientation of objectives and approaches is clearly needed so that they are appropriate for different target groups. We also argue that we have to consider that farming households make choices based on what is available and accessible to them, and then assess how this fits with their needs, preferences and goals. This means that rather than continuing to 'nudge' farming households towards taking up a supply that fits the development pathway envisioned by IARCs and the CGIAR, i.e., increasing agricultural productivity to contribute to food security and provide a way out of poverty, we need to reconsider with which objectives, in what ways, and towards what goals we operate, and how this relates to the seed systems we support. The objectives of this paper are therefore to explore the variable contexts of farming households, how these translate into households' preferences and needs for seeds and varieties, and how these could be better reflected in a demand-orientated seed supply and seed system approach.
In the following sections we will outline how different contexts and farming household heterogeneity shape a diverse demand for seeds and varieties, and how this connects with seed delivery systems. We then outline a framework that could support the thinking around seed delivery systems that cater to the increasing heterogeneity and thereby reach more people and better deliver on development targets. Before concluding, we discuss implications that may be worth considering for the organization and implementation of international agricultural research and development efforts for the envisioned target groups.
2. The underappreciated problem: Contexts, farming household heterogeneity and seed systems

2.1 Current supply models and their challenges

Seed systems are a crucial bridge between research labs and fields (including those of the national release system) and farming communities, delivering new crop varieties to farming populations. Private sector seed companies and their networks of demonstration plots that showcase new varieties are an appealing and easy channel for getting new seeds and varieties to farming households. 'Consumer/customer' demands are also assumed to be best communicated and served through this avenue. This channel appears to work relatively well for commercially grown vegetables, soya beans or the widely grown staple, maize, although these seed businesses also face important challenges (e.g. for the delivery of hybrid maize seed, see Donovan et al., 2021). This approach, however, has not worked for other major food crops and/or all geographical areas. Similarly, seeds of minor crops have not been able to reach the shelves of these distributors. For vegetatively propagated crops (VPCs), a route via decentralised multipliers, mostly farming households trained in specialised seed practices and promoted as private sector entrepreneurs, is being pursued (Kilwinger et al., 2021; McEwan et al., 2021), but so far without sustained success (Almekinders et al., 2019a; McEwan et al., 2020). For less commercial crops such as sorghum, which tends to be grown in marginal areas in eastern Africa, new or improved varieties remain largely unadopted (ASARECA/KIT, 2014; Hambloch, Kahwai, and Mugonya, 2021; Kiambi and Mugo, 2016; Mubangizi et al., 2012), except in some cases in which demand from breweries is a major driver. However, there are also case studies that cast doubt over early success stories in adoption, as varieties did not live up to expectations and were abandoned (Simtowe & Mausch, 2019), and demand for grain from breweries is heavily affected by policy decisions (Orr, 2018).
The smaller scale of commercial seed delivery models and pathways for crops other than maize, vegetables and soybean is in most cases directly related to the difficulty of identifying commercialised seed delivery models that can be sustained by farming households' demand for seed (e.g. Almekinders et al., 2019a; Hambloch et al., 2021; Kilwinger et al., 2021). Considerable advances have been made by making seeds available in smaller quantities or seed packs. Various types of microcredit schemes claim to have increased farming households' use of improved seeds. Insurance for climate-related crop failure is being explored as a way of creating enabling conditions for the use of improved seed (which usually requires higher input use as well). However, domestically saved seed or seed from neighbours may be a more secure and economically more attractive seed source for those crops and varieties that are not subject to genetic or seed health-related degeneration. Apart from having to compete with domestically saved and locally sourced seed, the formal seed value chains also face the obstacle that 'last mile' delivery is costly and difficult to organise, especially for minor, bulky and vegetatively propagated crops, frequently making it unattractive for private sector actors. Community-based seed supply and decentralised multiplier models are widely promoted but have so far shown little sustainable success (see Almekinders et al., 2019a). As a consequence, the existing seed delivery models continue to make improved seeds and varieties accessible mostly to agricultural producers who are well connected with input and output markets.
Variation in success: Farming household typologies
The models and associated pathways for seed delivery described above cannot be generalised or scaled up/out to serve all farming households. Despite some countries achieving impressive adoption rates of over 90% for some improved varieties (e.g. chickpea in Ethiopia, India and Myanmar, and maize in some countries), there are strong differences between crops, and within and between countries. Overall, adoption rates are below expectations. Additionally, the lack of variety turnover creates concern: farming households who adopted older improved varieties may not necessarily replace them with more recently released varieties (Spielman & Smale, 2017).
Adoption studies have pointed to a lack of capital (despite the relatively low investment involved, some farming households find it hard to make even modest cash investments), limited knowledge, and social exclusion as constraints on the adoption of improved seeds, disproportionately affecting already marginalized farming households, particularly poorer producers and women. In other words, the adoption and replacement of improved varieties with better varieties is most prominent among the types of farming households that match the goals and expectations of the dominant breeding programmes and seed delivery pathways. In addition, agricultural input and credit policies and the efforts to better connect farming households to the market further support this by creating a better institutional enabling environment (see e.g. Birner and Resnick (2010) for an overview). While such 'enabling' may have helped farming households who have the capacity to 'step up' and generate higher returns (see Figure 1), it has not helped a large proportion of households to become 'adopters' of the productive seeds being offered (e.g. Birner and Resnick, 2010; Dawson, Martin, and Sikor, 2016).
We argue, along with others (Hazell, 2019; Hazell & Rahman, 2014), that farming households are not homogeneous and may need to be served by different technologies and delivery mechanisms. Ongoing processes of economic development, urbanization and population growth have contributed, and continue to contribute, to socio-economic differentiation and greater heterogeneity among the rural population. This potentially increases the range and demands of seed clients. Some of these clients are farming households striving for higher agricultural productivity, whereas others prefer to increase labour productivity or to lower their (time or cash) investment or risks. Some are full-time farming households with opportunities to improve their livelihood by increasing agricultural productivity, whilst others are not. Nevertheless, we still refer to all of them as farmers (Dorward et al., 2009). Many households may have substantial off-farm income sources, out of choice or need. Some cannot expect to have a decent livelihood on the basis of their farm alone and need off-farm income as a supplement (Giller et al., 2021). The poorest may not be helped by agricultural technology at all, as they face existential problems of, for instance, security and health, or their farms are simply too small to generate sufficient returns from farming alone (Alwang et al., 2019; Giller et al., 2021). The visions, rationales and challenges of these farming households are diverse, generating different sets of aspirations: thus, the applicable underlying theories of change and impact pathways will vary substantially (Mausch et al., 2021a), and this needs to be reflected in development approaches, in the development and assessment of technologies, and in the choice of intervention entry points and project designs. Figure 1 presents a basic conceptualization of groups of farming households, i.e. commercial farmers, smallholder family farmers, and subsistence farmers, which should be understood as running along a continuum rather than as discrete categories. This helps delineate the heterogeneity of farming households, their needs and preferences and, thus, their different seed demands. These typologies are highly heterogeneous, variable and fluid across time and space. In addition, the heterogeneity is becoming increasingly diverse and complex as a result of dynamics such as increased integration into the global economy, urbanization, a rising middle class and population growth. Differences between crops and crop combinations, agro-ecologies, market access and integration, and several other contextual drivers make farm/seed demand typologies increasingly difficult to define. The role that agriculture and improved seeds play, and can potentially play, in improving livelihoods and global food security differs widely across this spectrum. Hence, Figure 1 represents a broad categorization of farming households, which needs to be understood in connection with the heterogeneity of farming households across time and space as well as other contextual factors (as elaborated later). It should not, however, be understood as an entry point for designing three seed systems to cater for the three groups.
Evidence of socio-economic variation shaping seed and variety needs
The contextual heterogeneity and the different ways of engaging with agriculture are sources of socio-economic variation that shape the demand for seeds and varieties. While breeders and seed system researchers have paid attention to agro-ecological variation, variation of the socio-economic type has featured less prominently on their radar. These socio-economic variations exist not only between regions but also within regions and communities. The ways in which this variation can lead to different demands for seeds and varieties have received some attention recently, mostly in the form of the identification of gender-related variety trait preferences (e.g. BMGF, 2012; Teeken et al., 2018; Weltzien et al., 2019). There are some other examples, but these are scattered across many crops and there has been no systematic approach to the way in which these differences have been studied. Pircher, Almekinders, and Kamanga (2013) sought to understand why some farming households in a rural community in Malawi adopted improved maize varieties and others did not. They found that the vouchers for seeds of improved varieties could not be relied on by those who needed them most, that some could not afford the fertilizer that the improved seeds need, and that others provided on-farm labour to secure food and income rather than weeding their own fields. In Mexico and Kenya, there are traditional farming households with cultural and economic motivations for their preferred maize variety traits, which are often not found in improved varieties (see e.g. Almekinders et al., 2021; Keleman, Hellin, and Flores, 2013). In Ecuador, some potato smallholders are more reluctant than others to invest in healthy potato seed, despite the many research reports showing the benefits, probably because they lack capital and knowledge, and because their traditional management practices keep seed degeneration at acceptable levels of yield reduction (Navarrete, forthcoming). In a study that deliberately explored possible differences in seed and variety preferences between groups of farming households within a community, Kilwinger et al. (2020) found that female farmers in Uganda have, against expectations, not taken up the use of tissue-culture bananas. This was partly because the tissue-culture bananas are not their favoured varieties, and the sources where such planting material is available, often project-supported nurseries, do not suit them. The calculations of the profitability of using improved seeds, which often form the basis of recommendations and promotion, can also vary significantly between farming households, as for many, much more is at stake than what is reflected in the market price (see e.g. Cleaver, 2005). There are motivations related to labour use, cash expenditure and risk exposure, which may differ between households in a single community and result in a preference for using one's own seeds or those of a neighbour and rejecting more 'productive and profitable' agricultural practices.
3. Towards a solution: From types of farming households to demand profiles and models of seed delivery
Recognizing the broader context
We suggest investigating seed demands, delivery systems and associated interventions from a larger system perspective, moving beyond a linear development pathway and seed supply chain perspective. This becomes especially important when looking at the envisioned development outcomes of breeding efforts (within and beyond IARCs and the CGIAR). One currently advocated wider system perspective is that of the agri-food system. Using this perspective, one can elucidate how seed systems are embedded within different parts of the food system, in particular food value chains (i.e. ranging from seed to consumption), and the contextual/local drivers and associated outcomes (economic, environmental, social, and political) (Béné et al., 2019; Tendall et al., 2015). For example, the agri-food system perspective highlights how a demand pull from certain sectors can initiate demand for specific varieties, how soils and agro-ecological conditions influence which varieties are suitable, how the political economy and institutional environment (including power relations) shape the structure and functioning of the food and seed systems (such as oligopolistic input market structures), and how social and cultural norms influence farming households' preferences for specific crops and varieties.
Understanding seed systems and interventions within the agri-food system enables the identification of different types of seed demand under different contextual conditions, as well as of the inherent trade-offs: at the micro-level, those faced by farming households with regard to investments in production and consumption; at the meso-level, those faced by seed companies and agro-dealers in terms of seed investment decisions; and at the macro-level, those between, for instance, different agricultural or economic sectors. If the objective is to reach different SDGs through seed-based interventions, most notably SDG 1 (No Poverty) and SDG 2 (Zero Hunger) 2, development planners and practitioners need to take into account the various direct and indirect impacts of proposed interventions within the seed and food systems, as well as acknowledge and manage the inherent trade-offs between the different development goals (Mausch, Hall, and Hambloch, 2020). Specifically, seed delivery pathways and models should respond to the relevant agri-food system drivers of seed systems and acknowledge that improved (productivity-maximising 3) varieties may not be the most appropriate and cost-effective solution for increasing farm productivity and the production of nutritious and diverse food crops. Figure 2 depicts the various contextual agri-food system drivers shaping farming households' needs and preferences for different seeds and varieties. In the following, we focus on the interrelations between farming household types, demand types, demand characteristics and seed delivery systems, while linking these back to the various contextual drivers within the agri-food system.
Catering for diverse farming households and seed demand types
If we recognise that crops, regions and people create unique agri-food system contexts, with farming households that engage with agriculture in different ways (Hazell, 2019), then the challenge is to identify combinations of demand type, demand characteristics, and seed delivery models that suit both the crop and these farming households. The challenge of designing seed supply becomes one that can be thought of as composing a set of menus that represent the tastes of different consumers (see Figure 3): not the menu of a 5-star fine dining restaurant, but more like the options in a food park, where many types of restaurants service a heterogeneous customer base and where different delivery models exist, be it dining in, take away, home delivery or combinations thereof.
In practical terms, this analogy 4 of creating a food park for seed supply means operationalizing different seed delivery models that fit particular groups of households. In addition, the items on the menu, e.g. the volume of seed and packaging, the character of the varieties or the transaction conditions, may vary depending on the demand type and characteristics of the client households (Figure 3). Some seed clients are farming households producing for an urban market or processing industry who may be looking for high-input-responsive varieties that can be machine-harvested. These farming households may have larger landholdings and buy planting materials yearly in large volumes; they are able to overcome logistical hurdles and to invest, can access credit, or may prefer to pay cash. Other farming households with smaller landholdings in non-irrigated and/or drought-prone areas may prefer low-input regimes without mechanisation or with more modest labour demands or cash inputs. Their preference would be for small volumes of seed of weed-suppressing and drought-tolerant varieties. The majority of smallholder farming households need modest quantities of planting material and are cash constrained. As for other inputs, the cash purchase of seed plus the associated costs, such as a trip to the nearby agro-dealer, may be prohibitive. These farming households, particularly those headed by women, in small, remote and marginal communities, may prefer to make use of their social relations for seed and, ultimately, food security. Even though clean, disease-free propagation material (e.g. seed tubers, banana suckers, fish fingerlings) is a condition for high productivity, the cash investment in clean seed may not be a priority or a possibility for a subsistence farming household. Seed delivery models may or may not easily cater for these important aspects of the menu: the agro-dealer does not provide seed on credit, whereas the local trader may, while seed from a neighbour can be acquired in exchange for a day's labour during harvest time. Some farming households may be able to use microcredit to step out of poverty, whereas for others all formal loans are equally onerous. Culturally important seeds and varieties, often with unique culinary qualities, that are not commercially available require other maintenance and delivery models, as do seeds and varieties that are important in niches too small for public or commercial breeding and seed production (Hambloch et al., 2021). Households' different engagements with agriculture can mean different agricultural technology choices, including different seeds and/or varieties, and different ways of acquiring them. It is unlikely that the existing diversity in demand types and characteristics can be met by one seed delivery model: a range of delivery models is needed in the food park to cater to the needs and preferences of all. A simple illustration of this matching logic is sketched below.
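As a purely illustrative aside, the matching logic behind this menu can be expressed as a small data structure that pairs demand profiles (demand type plus demand characteristics) with the delivery models able to serve them. All class names, fields and example entries below are hypothetical and invented for illustration; they are not drawn from Figure 3 or from the paper.

```python
# Illustrative sketch: matching hypothetical demand profiles to the
# delivery models whose conditions cover them. Entries are invented.

from dataclasses import dataclass, field

@dataclass
class DemandProfile:
    household_type: str          # e.g. "commercial", "smallholder", "subsistence"
    crop: str
    seed_volume: str             # e.g. "large", "small pack"
    payment: str                 # e.g. "cash", "credit", "in-kind/labour"
    priority_traits: list[str] = field(default_factory=list)

@dataclass
class DeliveryModel:
    name: str
    serves: dict                 # conditions the model can meet

def candidate_models(profile: DemandProfile, models: list[DeliveryModel]) -> list[str]:
    """Return delivery models whose conditions cover the profile's needs."""
    return [m.name for m in models
            if profile.seed_volume in m.serves.get("volumes", [])
            and profile.payment in m.serves.get("payments", [])]

models = [
    DeliveryModel("agro-dealer", {"volumes": ["large", "small pack"], "payments": ["cash"]}),
    DeliveryModel("local trader", {"volumes": ["small pack"], "payments": ["cash", "credit"]}),
    DeliveryModel("neighbour exchange", {"volumes": ["small pack"], "payments": ["in-kind/labour"]}),
]

profile = DemandProfile("subsistence", "sorghum", "small pack", "in-kind/labour",
                        ["drought tolerance", "weed suppression"])
print(candidate_models(profile, models))  # ['neighbour exchange']
```

A real mapping would of course be far richer, but even this toy version makes the point that no single entry in the list of models covers every profile.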
Adding to the complexity of the seed needs and preferences to be served, it needs to be remembered that agricultural production is increasingly part of a portfolio of (often multiple) household livelihood activities. In addition, agricultural production is not only related to rural development but is increasingly also a consideration for urban food- and nutrition-focused projects. This makes the prioritization of the SDGs as beacons for agricultural technology development even more complex. Focusing on SDGs 1 and 2 (and the possible trade-offs between them) may require a menu of different types of seeds and varieties, depending on the types of producers and/or consumers involved. Different seeds and varieties, such as those that occupy important niches, may need different delivery and business models. The search for inclusiveness (SDG 5), environmental health (SDG 15), and climate action (SDG 13) will probably need further diversification of seed delivery and funding models.
Recognizing the implications
3.2.1 Demand orientation of the supply and multi-directionality of menus

How can a system that supplies seeds of a diverse set of crops cater for all these demand types? How can a complete set of menus be composed and served? Who should serve the various demands? How can decisions be made that prioritize different types of demand or menus? Currently, much importance is given to the demand-orientation of breeding programmes and the creation of farming households' demand for seeds. The demand orientation of breeding programmes is supported by the call for more and better trait elicitation work that informs breeders about the traits that farming households seek in seeds and varieties and that fit with current and emerging market demands (e.g. Thiele et al., 2021). In particular, better information on what women farmers prefer, and how that differs from what men prefer, is expected to enhance the relevance of breeding programmes, as embodied in Weltzien et al. (2019), the Gender in Breeding Initiative and the Gender-Responsive Product Profile Development Tool (see also Voss et al., 2021). This may potentially create space for the inclusion of traits other than enhanced productivity and for less productivity-focused breeding programmes. Other requirements and traits previously considered unimportant that are actually important for women, such as weeding requirements and processing-related qualities, may become included in client profiles and among the limited number of prioritized traits that breeders can handle. Yet at the same time, in a recent white paper (Crops to End Hunger, 2021), the intention of catering for diverse groups of clients, each with their own trait preferences, is combined with the ambition to develop fewer but relatively better varieties. That paper does not pay much attention to the way this ambition will affect the diversity of variety traits made available to farming communities that may have varying preferences, or to target areas where differences in ethnicity, gender and other social dimensions play out in different preferences for agricultural technologies and seeds.

Figure 3. An adaptive menu of demand profiles and seed business models related to SDG-based impact pathways. Seed demand profiles are defined by the combination of demand type and demand characteristics. Demand type refers to the seed buyer. Demand characteristics include variety traits (like variety or product profiles 5) as well as characteristics of the seed (e.g. its quality), seed packaging, and the characteristics of the transaction and delivery points. The seed delivery models (or seed business models; used as synonyms here), the points/actors where/from whom farmers access seeds, include commercial (formal and informal) as well as non-commercial seed delivery models such as those involving neighbours, friends, and NGO- or community-based initiatives and mechanisms. The seed delivery pathways (McEwan et al., 2021) comprise and link different commercial and non-commercial public and private sector breeding and seed actors to the delivery point.
In addition to getting the traits right, another important factor that is often not considered is the design of delivery models that fit the demands of households with varying characteristics. It is important to think about, and set up, the right conditions of the seed delivery model at the point or actor from which farming households acquire their seeds or planting material, if they are not using their own saved seeds. Commercially viable seed delivery models are options for maize and other major crops, but for other crops the commercial models have unclear value propositions and need adaptation to accommodate the crop type (i.e. vegetatively propagated crops are fresh and bulky and face logistical challenges, whereas small grains are easily saved for the next harvest) and 'last mile' requirements (i.e. remote places, little purchasing power). Moreover, the perspectives and visions of many IARCs, the CGIAR, and other development agencies add further dimensions to the context, and thus to the diversity of delivery models. Some IARCs are more inclined towards conventional market economics and favour commercial models, whereas others see the value of social network-based models in which sharing and reciprocity play a role, and thus build their strategies and delivery models around such a vision.
3.2.2 Understanding 'demand' better

To become aware of the diversity of seed demands and, consequently, of the delivery conditions that have to be met, we need to take into account that 'adoption' represents a change from one technology to another. These changes may be complex and will involve different elements that play a role in the process. The 'encounter' with newly proposed technologies is moderated by 'dispositions' and then leads to 'responses' (Glover, Sumberg, Ton, Andersson, and Badstue, 2019). Dispositions, in this context, are the different perceptions that users generate in response to the different propositions being offered to them (as a result of differentiated contexts). If we consider the diversity of needs and goals as a variety of dispositions that inform the likely and most promising ways of encountering new technologies, we are more likely to get better responses. In addition, we can generate better insights from this process and arrive at a better understanding of what is involved on the side of the farming household, what preferences they hold, and how these preferences are shaped. This, in combination with methods that are specifically suitable for capturing demand (Almekinders et al., 2019b; Pircher & Almekinders, 2021), will enable us to better understand how farming households' demands are shaped and shift over time, plots, crops and family members under changing climate and market conditions and with the available alternatives for income generation and labour allocation. When paying more attention to how we capture the demand of farming households, we may discover our bias towards 'nudging' them into assumed productive practices, and come to see the benefits of existing varieties over the newly proposed ones. For example, in the case of some primarily commercial crops, such as pigeon pea in Malawi, one may easily overlook on-farm benefits that exist in addition to the main purpose of selling grain: the provision of firewood in the degraded southern part of the country, for instance, plays an important role in farm households' varietal preferences (Orr, Kambombo, Roth, Harris, and Doyle, 2015). Similarly, the benefits of a local cattle breed may be assessed only in terms of productivity, ignoring its value as dowry and as a savings account (Crane et al., 2016).
Ultimately, while we need to improve our understanding of the diversity in demands and how these change over time, we also need to acknowledge that the response to this diversity in demand is not a silver bullet, whether at the level of the crop variety, the seed delivery model, or the individual farmer or household. Instead, as Ronner et al. (2021) put it, we need to broaden and deepen the basket of options available, in line with the food park concept. Yet we would also caution that making food parks too large, or making a broad and deep basket available for everyone to pick from, would not be the solution, as 'choice overload' would undermine farming households' ability to select the right option for them. The visits to the food parks, like encounters with the basket of options, have to be carefully moderated and tailored, with effective accompanying information: the colourful vegetables on the logo of the vegetarian restaurant, the turning meat spit at the entrance of the kebab joint, or the ice cream cone on top of the gelato place.
4. Diversified seed delivery models and seeking synergies to cater for varied seed demands: Summary and conclusions. International agricultural research increasingly recognizes the multitude and inherent complexity of impact pathways to reach the SDGs, addressing issues of malnutrition, inclusion and the uncertainty of futures and the different roles that agriculture plays in them (Caron et al., 2020; Mausch et al., 2020). However, the impact pathway to 'modernize' the agricultural sector by focusing primarily on increasing agricultural productivity continues to dominate the agenda (IPES-Food, 2020). It is evident that this approach/pathway has not worked for all, and alternative approaches/pathways are needed if agricultural development is to be inclusive. In light of this, it is important to emphasise that not only agro-ecological, but also socioeconomic contexts diversify the demand for seeds and varieties and the preferred ways to access them. Global developments indicate that this diversification will only increase.
In this paper we argue that the inherent, and increasingly diverse, socio-economic contexts shape aspirations, needs and demands and must be addressed when considering variety development and seed delivery. We have distinguished two dimensions of this socio-economic variation that are important for seed and variety preferences: socioeconomic differentiation related to poverty, and the increasingly varied ways in which rural households engage with farming. The conceptual assumption that all farmers produce (or potentially produce) solely for profit on market terms seems to create a structural problem in serving seed and variety demands. Households that are too poor and/or whose farms are too small for an agricultural route out of poverty may be better supported through other channels and modalities. However, such strategic choices should be clear from the formulation of objectives and target populations onwards, including how these relate to institutional mandates, and ultimately how they translate into contributions towards global goals such as the SDGs.
Thus, it is important to acknowledge and be clear about who those people are that we call 'farmers' and which of them we decide to target with international and other breeding and seed programmes. Where needed, a variety of different seed products and delivery models need to be established and supported in order to cater for as wide a range of seed and variety demands as possible, and to be resilient to unpredictable and interacting changes in these demands. Coming back to the food park analogy, we argue that a variety of delivery models and menus is needed to cater for the preferences and needs of all. The food park allows everyone to dine, in the same place or not, and to acquire the taste, volumes and catering of their preference; it even facilitates exchange, as diners can share their meals seated at the same table. It is less important who supports which seed delivery model, as long as there is financial and policy support for the various models that cater for the various demands. The international breeding and seed programmes are strong and successful in supplying seed and varieties for an important segment of the farming households. However, the focus on productivity and on commercial pathways and seed delivery models does not meet all types of demand. Adding to the existing strong engagement with private sector actors in this arena, an effective engagement with civil society actors can provide important opportunities for targeting different farming households: these actors have different strengths and weaknesses that can be built on, and that together can strengthen a demand-oriented and inclusive seed supply. Returns on investment, and the measures of those returns, will vary between the different seed delivery models, depending on their goals and mandate, the targeted households, and their associated incentive structures. Adoption and variety turnover figures and financial indicators cannot be the only measures of success, since the benefits of inclusion, equity and cultural values are hard to measure.
Acknowledgment of the strengths and weaknesses in serving the different demands, and communication between the different seed delivery models and associated breeding and delivery pathways, can lead to synergies that have hardly been explored yet. Coordination in the testing and promotion of products, and knowing the clientele, can be especially useful when the menus, goals and aspirations of the different delivery models are transparent. Rather than a fragmented seed delivery landscape, we would then see a co-ordinated network of seed delivery models. Connecting the dots between knowledge and action, instead of living with a permanent disconnection between researchers and decision makers on the one hand and beneficiary communities on the other, is absolutely essential for the future resilience of food systems (Caron et al., 2020). At a policy level, recognition of the importance of seed systems within the agri-food system, and of their various linkages to many other policy targets, would allow consideration of an integrated seed system development approach that creates coherence among seed practices, programmes, and policies (Louwaars & De Boef, 2012). Once clarity on mandates and responsibilities is in place, this would open up analytical space to consider the other challenges and questions faced by rural households, as well as to reframe development problems and appropriate solutions. Less linear impact pathways that reach more people could then be accommodated and serviced. If taken seriously, this would also require wide participation in the management and operations of the food park, in order to continuously adapt, adjust and update the types of seed delivery models, as well as the types of seed that they offer and how. | 2021-11-05T15:22:29.293Z | 2021-11-03T00:00:00.000 | {
"year": 2021,
"sha1": "fe0aa39cbda01cce5a07ca3bd2992e20a5e05f28",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00307270211054111",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "853d14ec86d08a9c0100ff69e17dc5782783dc80",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
261394696 | pes2o/s2orc | v3-fos-license | Interfacial Tension–Temperature–Pressure–Salinity Relationship for the Hydrogen–Brine System under Reservoir Conditions: Integration of Molecular Dynamics and Machine Learning
Hydrogen (H2) underground storage has attracted considerable attention as a potentially efficient strategy for the large-scale storage of H2. Nevertheless, successful execution and long-term storage and withdrawal of H2 necessitate a thorough understanding of the physical and chemical properties of H2 in contact with the resident fluids. As capillary forces control H2 migration and trapping in a subsurface environment, quantifying the interfacial tension (IFT) between H2 and the resident fluids in the subsurface is important. In this study, molecular dynamics (MD) simulation was employed to develop a data set for the IFT of H2–brine systems under a wide range of thermodynamic conditions (298–373 K temperatures and 1–30 MPa pressures) and NaCl salinities (0–5.02 mol·kg–1). For the first time to our knowledge, a comprehensive assessment was carried out to introduce the most accurate force field combination for H2–brine systems in predicting interfacial properties with an absolute relative deviation (ARD) of less than 3% compared with the experimental data. In addition, the effect of the cation type was investigated for brines containing NaCl, KCl, CaCl2, and MgCl2. Our results show that H2–brine IFT decreases with increasing temperature under any pressure condition, while higher NaCl salinity increases the IFT. A slight decrease in IFT occurs when the pressure increases. Under the impact of cation type, Ca2+ can increase IFT values more than others, i.e., up to 12% with respect to KCl. In the last step, the predicted IFT data set was used to provide a reliable correlation using machine learning (ML). Three white-box ML approaches of the group method of data handling (GMDH), gene expression programming (GEP), and genetic programming (GP) were applied. GP demonstrates the most accurate correlation with a coefficient of determination (R2) and absolute average relative deviation (AARD) of 0.9783 and 0.9767%, respectively.
■ INTRODUCTION
The demand for energy is surging, and with the current carbon dioxide (CO2) and methane atmospheric emissions, an environmental catastrophe originating from global warming is inevitable. 1 To mitigate CO2 emissions, a diverse energy portfolio with renewable sources (solar, wind, and electrochemical energy systems) is crucial. However, the dependence of these energy sources on the weather or the geological position of the site can lead to a mismatch between seasonal energy demand and supply. 2 As a sustainable energy source, hydrogen (H2) has proved to be a low-carbon energy carrier, a cleaner energy source, and a suitable alternative to fossil fuels. 3 Nevertheless, providing a reliable energy supply and a balance between energy demand and supply necessitates enduring storage of H2. 4,5 Underground salt caverns have been used in the past for the underground storage of H2 6,7 and have higher and cheaper storage capacities compared to those of surface-based H2 storage facilities. 8−11 Nevertheless, successfully storing H2 in subsurface geological formations necessitates comprehensive knowledge of the behavior of H2 in the porous storage area under actual thermodynamic conditions. 12 Two crucial parameters control H2−brine movement in subsurface porous rocks and capillary trapping: the interfacial tension (IFT) between H2 and water/brine 13 and the contact angle. 14 However, the latter results from the interaction between the interfacial tensions at the contact point. By definition, IFT is the key property of the boundary that exists between immiscible fluids and is related to the intermolecular interactions of the phases on each side of the boundary. 15 IFT determines the amount of energy required to spread the boundary 15 and is considered an influential factor in determining the storage capacity, multiphase fluid dynamics, and operational indices. 16 Considering that the topic is recent, the literature lacks thorough data sets for the IFT of H2−water/brine under actual subsurface geological conditions. So far, few experimental evaluations have attempted to measure IFT values (summarized in Table 1). Chow et al. 17 measured the IFT of H2−water systems over different temperature and pressure ranges and concluded that an increment in both pressure and temperature leads to a decrease in IFT values. They also provided an empirical correlation for H2−water systems covering various thermodynamic conditions (298 < T < 448 K and 0.5 < p < 45.5 MPa) with an average absolute deviation (AAD) of 0.16 mN·m−1 from 129 data points. In addition, Hosseini et al. 18 considered a brine solution containing NaCl and KCl and noted that increasing salinity increases the IFT of H2−brine. Similar effects of pressure and temperature changes on IFT values were also reported for H2−brine systems. 18,19 Al-Mukainah et al. 20 showed that the IFT of H2−brine decreased slightly as the pressure increased. Higgs et al. 21 measured the IFT of H2 in both pure water and brine systems and confirmed that IFT decreased with increasing pressure in pure water, while no relationship was observed between IFT and NaCl brine salinity.
Experimental measurement of the IFT for H2 in contact with other fluids can be time- and resource-demanding, with high safety risks. An efficient complementary approach to estimating IFT is molecular dynamics (MD) simulation, which can provide accurate information about the interfacial and transport properties of various systems from atomic insight, as reported for H2 in the literature. 12,22 Recently, van Rooijen et al. 23 estimated the IFT properties of the H2−NaCl−H2O system under various conditions of temperature, pressure, and salinity. They used the TIP4P/2005 24 force field for H2O, the Madrid-2019 25 and Madrid-Transport 26 force fields for NaCl, and the Vrabec 27 and Marx force fields for H2, obtaining about 10% average deviations from experimental data. Doan et al. 28 predicted H2−water IFT values at two different temperatures of 300 and 323 K and pressures of 1−70 MPa. Compared to the experimental data, they underestimated IFT values by about 10−14% due to employing less accurate force fields.
The advent of high-performance computers has facilitated the integration of MD with machine learning (ML) approaches as an alternative to sophisticated mathematical correlations for predicting the desired properties. Recently, coupled MD-ML techniques have been used for forecasting various interfacial properties. For example, Kirch et al. 29 used MD simulation to create an IFT data set for brine−oil systems under ambient thermodynamic conditions by considering various salinities, brine compositions, oil compositions, and oil densities. Then, several ML algorithms, including linear regression (LR), random forest (RF), extra trees (ET), gradient boosting (GB), and elasticNet regression (EN), were implemented to provide a robust model for predicting oil−brine IFT. Zhao et al. 30 developed a comprehensive database for binary and ternary diffusion coefficients of multicomponent supercritical water (SCW) mixtures using equilibrium MD simulations. The produced data were also used for exploring ML and transfer learning (TL) in predicting the aforementioned types of diffusion coefficients.
This Study. From the above, it is clear that an extensive study of IFT prediction for the H2−brine system via an accurate force field combination is a missing piece. Therefore, a comprehensive evaluation is carried out to assess the accuracy of various force field combinations for the H2−brine system. Then, a comprehensive series of MD simulations over a wide range of temperatures (298−373 K), pressures (1−30 MPa), and NaCl salinities (0−5 molality (m)), representing the actual conditions of saline aquifers, 31,32 was conducted. Given that a mixture of salts is present in saline aquifers, the effect of cation type, i.e., K+, Ca2+, and Mg2+, on the IFT and their behavior at the interface of H2−brine is scrutinized. Finally, an accurate correlation for IFT prediction with respect to temperature, pressure, and salinity is presented by integrating MD, for generating the database, and ML, for training the algorithms. This study aims to highlight the importance of these combined techniques as an efficient way to determine the H2 properties applicable to subsurface applications. In summary, the following items are the key attributes of this study with respect to the literature:
• Comprehensive accuracy assessment of various force field combinations for the H2−brine system to predict the IFT over the actual conditions of saline aquifers.
• Elucidation of the effect of cation type, i.e., Na+, K+, Ca2+, and Mg2+, on the IFT and their behavior at the interface of H2−brine.
• An accurate correlation for IFT prediction with respect to temperature, pressure, and salinity, obtained by integrating MD simulation and ML techniques.
■ METHODOLOGY
Molecular Dynamics (MD) Simulation. All MD simulations were conducted with the GROMACS (version 2021) simulation package. 33 The accuracy of MD simulation results depends significantly on the selected atomic force field parameters. In this regard, the H2 force fields developed by Vrabec et al., 27 Hirschfelder et al., 34 Alavi et al., 35 Langmuir (from modified Silvera−Goldman parameters), 36 Cracknell, 37 and Marx and Nielaba, 38 and six potential water models, namely, TIP4P/2005, 39 the modified TIP4P by Rahbari et al., 40 TIP4P, 41 SPC/E, 42 TIP3P, 41 and TIP5Pe, 43 which are the models most frequently used in the literature to represent H2 and water molecules, were used to determine the most accurate combined force field parameters for predicting H2 interfacial properties. The various combinations of the above-mentioned force fields were tested. Their results were compared to two available experimental works on the measurement of IFT for H2−water systems, conducted by Chow et al. 17 and Hosseini et al. 18 Further information about the preliminary assessments is provided in the Supporting Information (SI). According to the results and comparisons reported in Table S1, the lowest deviation of the IFT predicted by MD simulation from experimental data was obtained by a combination of the H2 force field parameters developed by Marx and Nielaba 38 and the TIP4P/2005 39 water model. Our IFT results showed that the type of water force field plays a critical role in the accuracy of combined force fields. Among all considered cases, only the TIP4P/2005 model could produce IFT data close to experimental values.
After validating the accuracy of the potential parameters for the H2−water system, the most compatible force field parameters for Na+ and Cl− ions were also deliberated among those developed by Smith and Dang, 44,45 Joung and Cheatham 46 (referred to as the SD and JC models), Zeron et al. 25 (the so-called Madrid-2019), and Loche et al. 47 The results of the various combinations for the predicted H2−brine IFT, reported in Table S2, showed the smallest difference from the experimental data 18 for the SD force field parameters in conjunction with Marx−TIP4P/2005. Since no experimental data have been reported for the H2−brine IFT in the presence of other types of ions, we employed the force fields most frequently used in the literature for predicting the IFT of gas−brine systems. 48,49 In summary, the Lennard-Jones (LJ) parameters of K+ ions were taken from Dang's work, 45 and those of Ca2+ and Mg2+ were obtained from Aqvist's study. 50 To prove the accuracy of the selected force fields, the density profiles of H2, water, and brine solutions of NaCl (1 and 5 m concentrations), KCl, CaCl2, and MgCl2 (1 and 2 m concentrations) were precisely compared with the literature.
As Figure 1 shows, our predicted density results are in excellent agreement with the values reported in the literature, 51 with an overall AAD of less than 1% from experimental values.
The simulation workflow followed standard practice for interfacial systems. 53−57 First, a water or brine solution box with dimensions of 6 × 6 × 6 nm3 was prepared, followed by energy minimization of the system using the steepest-descent algorithm to reduce bad contacts among molecules. Next, a 1 ns semi-isotropic NPT ensemble was run to bring the system's density to the desired thermodynamic point. Since the cross-section of the solution box was kept constant, two cubic boxes with dimensions of 6 × 6 × 6 nm3 were placed at each end of the water box, as shown in Figure 2. The number of H2 molecules was determined based on the density at the desired temperature and pressure. 52 After the preparation of the initial configuration of the simulation box, the whole system underwent energy minimization again, followed by a 10 ns production run in a canonical (NVT) ensemble. IFT values were calculated over the last nanosecond, where the system was fully stabilized, following the Kirkwood−Buff relation 58
as follows:

γ = (L_z/2) [⟨P_zz⟩ − (⟨P_xx⟩ + ⟨P_yy⟩)/2]
where γ is the IFT, L_z is the length of the simulation box in the z direction, P_xx, P_yy, and P_zz are the main diagonal values of the pressure tensor, and the angle brackets denote time averages. Pressure and temperature were maintained by a Parrinello−Rahman barostat 59 and a Nose−Hoover thermostat, 60 respectively. Periodic boundary conditions (PBCs) were applied in all three directions, and a 2.9 nm cutoff was assigned for nonbonded interactions. Also, for calculating the interatomic potential parameters between unlike atoms, the Lorentz−Berthelot combining rules were employed. 61 It should be noted that several considerations were taken into account to improve the precision of our model and outcomes. Given the significant importance of both simulation time and box size for the accuracy of the IFT predicted by MD simulation, a larger simulation box and a longer simulation time clearly lead to more accurate predictions; however, the computational cost increases considerably. In this regard, further assessments were conducted by evaluating three different simulation times of 3, 5, and 10 ns, four different box sizes, and various cutoff values. As Figure S1 shows, a 10 ns simulation time is long enough to obtain steady IFT results, which is why the IFT calculation was based on the last nanosecond. According to Table S3, the simulation box with dimensions of 6 × 6 × 18 nm3 and a cutoff value of 2.9 nm, the maximum allowable cutoff, provided the lowest absolute average relative deviation in the prediction of IFT compared to the experimental work. 17,18 Also, all reported results are the average of two repetitions to decrease the statistical uncertainty.
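The pressure-tensor route above translates directly into a short post-processing step. The sketch below is a minimal Python/NumPy illustration, not the authors' code: it assumes the diagonal pressure-tensor components have already been extracted from the trajectory (e.g., via GROMACS's gmx energy) as time series in bar over the stabilized final nanosecond; the function name, argument names, and unit conversion are our own.

```python
import numpy as np

# 1 bar * 1 nm = 1e5 N/m^2 * 1e-9 m = 1e-4 N/m = 0.1 mN/m
BAR_NM_TO_MN_PER_M = 0.1

def kirkwood_buff_ift(p_xx, p_yy, p_zz, box_lz_nm):
    """IFT in mN/m from diagonal pressure-tensor time series (bar).

    The factor 1/2 accounts for the two H2-brine interfaces present
    in the symmetric slab geometry of the simulation box.
    """
    p_normal = np.mean(p_zz)                          # normal to the interface
    p_tangential = 0.5 * (np.mean(p_xx) + np.mean(p_yy))
    gamma_bar_nm = 0.5 * box_lz_nm * (p_normal - p_tangential)
    return gamma_bar_nm * BAR_NM_TO_MN_PER_M
```

For example, with box_lz_nm = 18.0 (the 6 × 6 × 18 nm3 box above) and arrays of sampled pressure components, the call kirkwood_buff_ift(pxx, pyy, pzz, 18.0) returns a single γ estimate in mN/m.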
Machine Learning (ML).
In recent years, with the advent of high-performance computing systems, machine learning (ML) methods have been widely applied in various industries as robust alternative predictive tools to tackle different aspects of engineering problems. 62 In this study, the obtained IFT values were used as feeds for training ML models to provide an explicit expression for the reliable estimation of the IFT values of H2−brine systems as a function of temperature, pressure, and salinity. We employed three white-box ML approaches: the group method of data handling (GMDH), gene expression programming (GEP), and genetic programming (GP) algorithms. The details of each model can be found in the literature, 63,64 and a summary of them, together with the set parameters of each model, is presented in the SI. The following general structure is acknowledged for the three correlations: IFT = f(y_NaCl, T_r, Δρ), where T_r = T/T_c and T_c = 33.20 K. Here, y_NaCl, T_r, T_c, and Δρ are the NaCl mole fraction, reduced temperature, H2 critical temperature, and density difference between H2 and brine (kg·m−3), respectively. The data set obtained by MD simulation was divided into a training set with 80% of the data, and the remaining 20% was used as the testing set. In the development of the GEP and GP correlations, the mean square error (MSE) was considered as the fitness function to evaluate the chromosomes, defined as

MSE = (1/N) Σ_{i=1..N} (x_i − y_i)^2

where x_i and y_i represent the measured and predicted IFT values, respectively.
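To make the fitting setup concrete, a minimal sketch of the MSE fitness function and the 80/20 split is given below in Python/NumPy. This illustrates the procedure as described, not the authors' implementation; the function names and the fixed random seed are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility (our choice)

def mse(measured, predicted):
    """Mean square error used as the GEP/GP chromosome fitness function."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return np.mean((measured - predicted) ** 2)

def train_test_split_80_20(X, y):
    """Random 80/20 split of the MD-generated IFT data set."""
    idx = rng.permutation(len(y))
    cut = int(0.8 * len(y))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]
```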
■ RESULTS AND DISCUSSION
Molecular Dynamics Simulation. In this section, we first explain the predicted IFT values and the impacts of temperature, pressure, and NaCl salinity on them. Then, the impacts of salt type (KCl, CaCl2, and MgCl2) on the IFT are elaborated. In the final step, a comprehensive comparison is established among the accuracies of the employed ML approaches, trained on the obtained MD simulation data, in predicting IFT.
H2−NaCl−Water Systems. Figure 3(a) displays the simulated IFT variation for H2−NaCl brine systems as a function of temperature and salinity at the same pressure. Generally, the dependence of IFT on temperature and salinity is analogous to previous experimental research on H2−water/brine, 18,19 in which, at constant pressure and salinity, an increase in temperature reduces the IFT and, at constant temperature and pressure, a higher concentration of NaCl brine leads to an IFT increase. All of the predicted IFT values can be found in Table S6.
In further detail, our results demonstrate that the IFT values range from 55.85 to 77.54 mN·m−1 for temperature and NaCl salinity variations of 373 to 298 K and 0 to 5 m, respectively, at a pressure of 20 MPa. For instance, as Figure 3(b) shows, the predicted IFTs are 67.52 and 55.85 mN·m−1 at 298 and 373 K in pure water, a 17% reduction in IFT with increasing temperature. At the given pressure and a NaCl salinity of 5 m, the IFT decreases by about 15% at 373 K with respect to the IFT of 77.54 mN·m−1 at 298 K. Generally, at each pressure, the variation of IFT over this temperature increase is between 13 and 18% for salinities ranging from 0 to 5 m. The effect of pressure on the IFT of H2−brine was also investigated (Figure 3(c)). IFT has no considerable dependence on pressure. Nevertheless, considering all of the cases, the pressure effect can produce up to a 6% change in IFT values. At temperatures of 298 and 373 K, the IFT decreases by 3.5% and 2%, respectively, as the pressure increases from 1 to 30 MPa. It is worth mentioning that a slight increase in IFT values with increasing pressure at lower pressures and at some temperatures has also been reported by Chow et al. 17 for the H2−water system. The impact of NaCl salinity on the IFT is also demonstrated in Figure 3(d). As expected, greater salinity leads to higher IFT values in all studied cases. Under constant temperature and pressure, a 12−17% increase in IFT occurs as the salinity increases to 5 m compared to pure water. Note that at higher temperatures (373 K) the salinity change has a larger impact on the IFT than at lower temperatures (298 K). Also, at lower NaCl salinity, the IFT responds more strongly to temperature changes than at higher salt concentrations. The trends of the IFT values with temperature, pressure, and salinity are in agreement with previous experimental work. 17,18 In comparison to CO2 and CH4, H2−brine IFT has a much lower dependence on pressure alteration, which is due to the lower dependence of the density difference of the H2−brine system on pressure. 65 Generally, increasing the pressure will result in lower IFT values; however, an increase in IFT with increasing pressure at relatively high pressures has been reported for the CH4−water system. In addition, an inverse relationship between IFT and salinity or temperature has been reported in both CO2− and CH4−water systems. 55 Considering the dependence of IFT changes on the atomic structures of the components involved in the interfacial region, the molecular concentrations of water, ions, and H2 under the effects of temperature, pressure, and salinity are illustrated in Figure 4(a−c). To enable a better comparison between the distributions of the components, we use the reduced density (ρ*), defined as 49

ρ*_i = ρ_i / ρ_i^bulk

where ρ_i is the density of component i and ρ_i^bulk is its bulk density. Since we previously showed that the differences between the densities obtained by our simulations and refs 17 and 18 are less than 1%, the reference bulk density of each component is taken from our simulations. For better visualization in Figure 4, the middle of the brine solution box on the z axis is taken as the origin, and, since the simulation box is symmetric, only the right-hand side of the structure is highlighted. Figure 4(a) shows that the effect of temperature on the distributions of water and H2 is pronounced, in that the accumulation of H2 in the interfacial region decreases as temperature increases.
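As a sketch of how the reduced density profiles of Figure 4 can be computed, the following Python/NumPy snippet normalizes a component's density profile by a bulk estimate. The bulk-window averaging is our assumption about how ρ_bulk would be estimated in practice (e.g., from a profile produced by a tool such as gmx density); all names are illustrative.

```python
import numpy as np

def reduced_density(profile, z, bulk_window):
    """Reduced density rho*(z) = rho(z) / rho_bulk for one component.

    profile: density of the component along z;
    z: bin centers along the box axis;
    bulk_window: (z_min, z_max) slice taken deep inside the bulk phase,
    used to estimate rho_bulk by averaging.
    """
    z, profile = np.asarray(z), np.asarray(profile)
    mask = (z >= bulk_window[0]) & (z <= bulk_window[1])
    rho_bulk = profile[mask].mean()
    return profile / rho_bulk
```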
The Gibbs dividing surface (GDS) was applied to establish the location of the interface. By definition, the GDS is placed where the number of excess water molecules on the gas (i.e., H2) side equals the deficiency of water molecules on the water side. The distance between the two surfaces at 10% and 90% of the bulk density of the water phase is taken to define the GDS width. 66 In Figure 4(a), the red and green shadings represent the interface of H2 and water at 298 and 373 K, respectively. As the temperature increases, the GDS width broadens, which means greater engagement of the two phases and, consequently, a lower IFT.
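The 10%−90% criterion lends itself to a simple numerical implementation. The sketch below locates the interfacial width from a water density profile; it is our hedged reading of the criterion, assuming z increases toward the gas phase, a roughly monotonic decay of the profile across the interface (real profiles may need smoothing first), and a bulk estimate taken from bins near the slab center.

```python
import numpy as np

def gds_width(z, rho_water, n_bulk_bins=10):
    """Interfacial width from the 10%-90% criterion on the water profile.

    Returns the separation between the z positions where the water density
    first drops to 90% and then to 10% of its bulk value, moving from the
    brine bulk toward the H2 phase.
    """
    z, rho = np.asarray(z), np.asarray(rho_water)
    rho_bulk = rho[:n_bulk_bins].mean()        # bulk estimate near slab center
    z90 = z[np.argmax(rho <= 0.9 * rho_bulk)]  # first crossing of 90% level
    z10 = z[np.argmax(rho <= 0.1 * rho_bulk)]  # first crossing of 10% level
    return z10 - z90
```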
The radial distribution function (RDF), g(r), curves were used to further demonstrate this engagement by comparing the pairings between H2 and the brine constituents, i.e., water, Na+, and Cl−. According to Figure S5(a), a stronger interaction between H2 and water molecules is observed at higher temperatures, indicating a greater probability of finding water molecules around H2, since a broadened interfacial width is created. Comparing the left-hand side with the right-hand side of the GDS, it is evident that temperature significantly impacts the GDS position and shifts it toward the H2 phase. Similarly, an increment in pressure influences the distribution of H2 and, to a lesser extent, water molecules. As is evident in Figure 4(b), there is a higher accumulation of H2 at 1 MPa than at 30 MPa and faster water depletion at the interface at higher pressure. The thicker interfacial width of H2−water at higher pressures indicates the adverse impact of pressure on IFT. The slight variations in the intermolecular interactions of H2 and water with increasing pressure, which result in a thicker width, are also confirmed by the RDF peaks shown in Figure S5(d). Indeed, there is a higher RDF peak at higher pressures.
The dependence of the composition distributions on NaCl salinity is illustrated in Figure 4(c). When salinity increases, the H2 distribution moves toward the brine phase, with higher accumulation at the interface, while water is depleted more quickly from the bulk toward the H2 side of the interface. In contrast to temperature and pressure, less interfacial coverage occurs between H2 and water as the NaCl salinity increases, which indicates a direct relationship. In other words, when the enrichment of H2 at the interface is significant, a thinner interface width is also observed, implying less mixing of the two phases. Generally, the accumulation of more H2 molecules at the interface, whether by increasing salinity or by decreasing temperature and pressure, leads to milder interactions between H2 and water (see Figure S5(a,d,g)), identifying the presence of less water around hydrogen due to the thinner interfacial width.
The distribution of ionic species in the brine is another imperative factor that should be considered when evaluating salinity's effect on IFT. As illustrated in Figure 5(a), cations have a greater tendency toward the bulk phase, while a higher depletion rate is observed for anions at the interface. The arrangement of anions at the interface brings about stronger interactions with H2 compared to the intensity of the interaction between H2 and cations, as the RDF values in Figure S5(b,c,e,f,h,i) show. Similarly, when NaCl salinity increases, ions tend to accumulate in the bulk rather than at the interface, as shown in Figure 5(b). These arrangements of ions influence the intermolecular interactions of H2 and water, decreasing the possibility of engagement.
Effect of Cation Type. We previously showed that a higher IFT is achieved for H2 in the presence of NaCl brine compared to pure water. The literature has highlighted the significant effect of the cation type on CO2/CH4−brine IFT. 67 As presented in Figure 6, the impact of the cation type is also pronounced for the IFT of H2−brine systems: in general, under the same thermodynamic conditions and salinity, brines containing divalent cations yield higher IFT values than those containing monovalent cations, with Ca2+ raising the IFT the most (up to 12% with respect to KCl). As shown in Figure 6(a), just as in the NaCl brine case, an increase in temperature brings about a lower IFT for H2−brine solutions containing the other types of cations. The intensity of the temperature impact on IFT for the various brine solution types, namely, CaCl2, MgCl2, NaCl, and KCl, is the same: under a constant pressure of 10 MPa, the IFT reduction resulting from a temperature increment from 298 to 373 K is about 15 ± 3% for both salt concentrations, 1.1 and 1.91 mol·kg−1. Nevertheless, it can be concluded that the effect of salinity on the IFT is most significant for CaCl2. Regarding the impact of pressure, as for the NaCl brine, the higher the pressure, the lower the IFT achieved for all types of cations. However, the intensity of the pressure impact on the IFT of H2−brine containing various cations differs; for the case provided in Figure 6(b), for example, the most considerable impact is evident for CaCl2 and the smallest effect occurs in NaCl brine. This observation cannot be extended to the rest of the conditions, and, as with the two other factors of temperature and salinity, no precise relationship can be established for the intensity of the pressure impact on the IFT vs the cation type. All predicted IFT values are listed in Table S7.
The effect of ion type can be divided into two categories, mono- and divalent cations, and can be discussed from two points of view: how the various ion types impact the water configuration, and how they affect the H2 distribution at the interface. Although the IFT values are close for both mono- and divalent cations, which lowers the discrepancy between the behaviors of the systems, some differences can be detected. Since differences in IFT values become more pronounced at higher salt concentrations, we chose 1.91 m for better comparison. Considering the same temperature, pressure, and salinity for all cases (323 K, 10 MPa, and 1.91 m), among the monovalent cases, KCl shows a different behavior regarding ion positioning at the interface. As can be seen from Figure 7(a), K+ tends to stay on the H2 side, and this is confirmed by the RDF curves as well (see Figure S7(b,c)). Such behavior for KCl has been reported previously. 68 Comparing the RDF curves indicates that K+ interacts more with H2 than does Na+, which explains the larger width on the right-hand side of the GDS for the KCl system (Figure S6). On the water side, there is a smaller distance for the Na+−water peak than for the K+−water peak, revealing stronger electrostatic interactions between Na+ and water. In addition, ions in the NaCl system tend to remain in the bulk rather than at the interface compared to KCl, as there is a higher percentage of K+ near the GDS (Figure 7(a)). These effects lead to lower IFT values for the KCl system. In the CaCl2 and MgCl2 systems, there is less interaction between the water system and H2 than in the NaCl and KCl cases (see Figure S7). Indeed, the lower engagement for brine solutions containing divalent cations results in higher IFT values in the H2−brine systems. As seen in Figure 7(d), Ca2+ has a peak near the interface and accumulates there; Mg2+ shows weaker but similar behavior. The greater layering structure of divalent ions at the interface has been observed, and their contribution to IFT has also been reported in the literature. 49 In summary, a comparison among all cases, considering the effects of both salinity and cation type, shows that the presence of more cations at the interface lowers the IFT values. An increase in salinity, which results in a greater number of cations in the bulk compared to the interface, causes higher IFT values. For the monovalent cations, since K+ has a lower hydration enthalpy (−322 kJ mol−1) than Na+ (−406 kJ mol−1), it tends to stay at the interface rather than in the bulk. Hence, lower IFT values are obtained for KCl. The same holds when comparing the divalent cations, i.e., Ca2+ (−80 kJ mol−1) and Mg2+ (−74 kJ mol−1). The significant difference in the distribution of monovalent and divalent cations at the interface, where a higher accumulation of cations is observed for the monovalent type, is mainly due to cation valency and the layering structure of divalent cations, as well as their contributions to IFT. 49
Machine Learning. To quantitatively assess the accuracy and performance of the GMDH, GEP, and GP algorithms, several statistical parameters, including the coefficient of determination (R2), root-mean-square error (RMSE), and absolute average relative deviation (AARD), and graphical assessments, such as cross plots of predicted vs measured IFT values, the percent relative error distribution, the cumulative frequency diagram, and the Williams plot, were used.
In the present context, the three statistical indexes are shown for the three ML algorithms in Figure S8. As illustrated, GMDH and GP show very similar accuracy, while the GEP algorithm appears inferior. As seen in Figure S8(a), the smallest R2 value, 0.9672, belongs to the GEP algorithm, whereas the highest, 0.9793, is attained by the GMDH algorithm; the GP algorithm has a coefficient of determination of 0.9783. The smallest AARD(%) value, which is more indicative of the accuracy of the predicted IFTs, is 0.9767% for the GP algorithm. However, GMDH has a very close value of 0.9845%, and the GEP algorithm shows the highest value of 1.5197%. In addition, from smallest to largest, the RMSE values are 0.6649, 0.6694, and 1.0446 for the GP, GMDH, and GEP algorithms, respectively. Overall, the GP algorithm shows the best outcomes in predicting the IFT of the H2−water−NaCl system.
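For reference, the three statistical indexes can be reproduced from a set of predictions with a few lines of Python/NumPy, as sketched below. The formulas are the standard definitions of R2, RMSE, and AARD; the function name and packaging are our own, not the authors'.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R2, RMSE, and AARD(%) as used to rank the GMDH, GEP, and GP models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    aard = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return r2, rmse, aard
```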
From the graphical performance measurement perspective, as shown in Figure S9, the good alignment of predicted vs measured data around the unit-slope line for the GP and GMDH algorithms indicates their better performance, while GEP shows a deviation, most notably at the lower and higher IFT values. This is corroborated by the percent relative error graph of the GEP algorithm, which departs from the zero line at the lower and higher IFT values. GP and GMDH show a good distribution around the zero line and, consequently, greater reliability of their correlations. The cumulative frequency diagram in Figure S10 indicates that the largest share of the IFT values predicted by the GP, GMDH, and GEP algorithms falls within an AARD of 1.4%.
The relevancy factor of the input parameters determines which input variables, in our case the density difference (Δρ), reduced temperature (T_r), and NaCl mole fraction, are most important for predicting the target variable, the IFT of H2−brine. The relevancy factor is defined as

r(I_j, o) = Σ_i (I_{j,i} − Ī_j)(o_i − ō) / sqrt[ Σ_i (I_{j,i} − Ī_j)^2 · Σ_i (o_i − ō)^2 ]

where I and Ī are the input parameter and its average, o and ō represent the predicted output and its average, and i and j refer to the data index and the variable, respectively. The relevancy factors of the governing input variables were therefore checked on the IFT predictions using the GP algorithm, and it was found that the density difference, reduced temperature, and NaCl mole fraction have, in that order, the most to least relevancy to the IFT values, as shown in Figure S12. In addition, T_r has a negative impact on the IFT values, while Δρ and y_NaCl have a positive impact.
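A hedged sketch of the relevancy-factor computation is shown below in Python/NumPy, following the Pearson-type definition given above; the column ordering (Δρ, T_r, y_NaCl) and the function signature are our assumptions.

```python
import numpy as np

def relevancy_factor(inputs, output):
    """Pearson-type relevancy factor of each input column with the output.

    inputs: 2D array whose columns are, e.g., density difference, reduced
    temperature, and NaCl mole fraction; output: predicted IFT values.
    The sign indicates the direction of each input's effect on IFT.
    """
    I = np.asarray(inputs, dtype=float)
    o = np.asarray(output, dtype=float)
    dI = I - I.mean(axis=0)                      # center each input column
    do = (o - o.mean())[:, None]                 # center the output
    num = np.sum(dI * do, axis=0)
    den = np.sqrt(np.sum(dI ** 2, axis=0) * np.sum(do ** 2))
    return num / den                             # one factor per input column
```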
Based on the aforementioned statistical error estimation and model performance assessment, mathematical correlations were developed for each ML algorithm. The GP algorithm showed the best results, and the details of the correlations based on the GMDH and GEP algorithms can be found in the SI. The GP algorithm provides a simple-to-use mathematical expression that can accurately predict the IFT from the input variables. The developed correlation based on the GP algorithm is shown in eq 6.
in which y_NaCl and T_r are the NaCl mole fraction and reduced temperature, respectively, and IFT* and Δρ* are reduced with respect to a reference point of 298 K, 1 MPa, and 0 mol·kg−1. This equation has been validated over the ranges of 298−373 K, 0−5.02 m, and 1−30 MPa. Equation 6 was used to predict the previously available experimental data of Hosseini et al. 18 and showed an AARD of 3% over the range covered by our correlation (36 points). This is an acceptable error interval considering that they used a NaCl−KCl mixture as their brine solution. Recently, van Rooijen et al. 23 provided a correlation based on MD simulation of the H2−water−NaCl system which has an AARD of 7.77% in comparison to the Hosseini et al. experimental work 18 (considering the same 36 data points). Even though our correlation was developed over the previously mentioned conditions, we examined it up to 423 K and 34.47 MPa, which are outside the range of the training data. Our correlation obtained an AARD of 2.62% over 12 data points of Hosseini et al. 18 at 423 K and an AARD of 3.3% over 12 points of their data at 34.47 MPa. This gives a total AARD of 3.11% for all 64 data sets generated by their experimental measurements. These AARD values indicate that our correlation can be used up to a temperature of 423 K and a pressure of 34.47 MPa as well (Table 2). Although the correlation was developed with NaCl data only, it can be used for the other ion systems considered in this work (up to 1.91 mol·kg−1), with 1.65, 2.68, and 2.26% AARD for the KCl, CaCl2, and MgCl2 systems, respectively.
■ CONCLUSIONS
In this study, MD simulations were conducted to predict the IFT of H2−water/brine over a wide range of temperatures (298−373 K), pressures (1−30 MPa), and NaCl salinities (0−5.02 mol·kg−1). The effect of cation type on the IFT was also assessed by considering the single salts KCl, CaCl2, and MgCl2.
In addition, the obtained IFT values were used for training three white-box ML approaches, genetic programming (GP), gene expression programming (GEP), and the group method of data handling (GMDH), to provide a robust correlation.
Prior to producing the data set for H2−NaCl brine systems and investigating the impacts of the aforementioned factors, we performed an extensive analysis of various combined force fields for predicting IFT using MD simulations. The results of the combined force fields were compared with the available experimental data in the literature. Among all cases, the combination of the Marx, TIP4P/2005, and Smith−Dang force fields, describing H2, water, and NaCl, respectively, showed the most accurate values against experimental data, with a deviation error lower than 3%.
Our findings showed that the IFT values have a direct relationship with salinity and an inverse relationship with temperature and pressure. The most and least influential factors on the IFT of the H2−water/brine system are temperature and pressure, respectively. The impact of a temperature change is more pronounced at lower salinity, while salinity affects the IFT values more significantly at higher temperatures. A pressure increment commonly lowers the IFT, although its impact is rather insignificant; this is due to the weak dependence of the density of the system on pressure.
Regarding the cation type effects, CaCl2 gave the highest IFT values, while KCl had the least influence in most of the cases. The impact of the cations is generally a function of their valency and their configuration at the interface. A higher presence of cations at the interface rather than in the bulk lowers the IFT, and each affecting factor, i.e., temperature, pressure, or salinity, changes the IFT through this mechanism. Additionally, the GP method showed the best performance in predicting the IFT values, with R2 and AARD% of 0.9783 and 0.9767%, respectively. In summary, MD simulation is a valuable tool not only for providing atomic insight into interfacial phenomena but also for generating a reliable database that can serve as a feed for training ML algorithms in order to expand the database.
■ ASSOCIATED CONTENT
Supporting Information
Figure 1. Comparison of the MD-simulated H2 and brine densities with the available data in the literature 51,52 for (a) H2−NaCl systems with 1.09 and 5.02 mol·kg−1 and (b) H2−KCl, H2−CaCl2, and H2−MgCl2 systems with a salinity of 1.1 m. All comparisons are made at 373 K and 10 MPa. Dashed lines represent experimental values.
Figure 2. Snapshot of the initial configuration of the simulation system, including the NaCl brine solution in the middle of the box surrounded by H2 molecules on both sides, with the color coding: red, hydrogen; white, oxygen; blue, sodium; and yellow, chloride.
Figure 3. Variation of the predicted IFT values for the H2−brine system as a function of (a) temperature (T = 298−373 K) and salinity (m = 0−5 mol·kg−1) at 10 MPa, (b) temperature and NaCl salinity at 20 MPa, (c) pressure and temperature at 1 m NaCl, and (d) pressure and NaCl salinity at 323 K. In general, the dependence of IFT on both NaCl salinity and T is more significant than that on P. An increase in T and a decrease in NaCl salinity result in an IFT reduction, while an increase in pressure does not significantly change the IFT.
Figure 4. Density profiles of water and H2 molecules under the effect of (a) temperature at 1.09 m NaCl salinity and 10 MPa pressure, (b) pressure at 1.09 m NaCl salinity and 323 K temperature, and (c) salinity at 323 K and 10 MPa pressure. The red and green shadings signify the interfacial width under the given conditions based on the 10%−90% criterion. For better visualization, the center of the brine solution in the z direction is taken as the origin of the figure, and, due to symmetry, only the right-hand side of the simulation box is presented.
Figure 5. Density profiles of Na+ and Cl− ions at the interface of H2−water for (a) a salinity of 5.02 m, showing the tendency of Cl− ions toward the H2 phase, and (b) salinities of 0.5 and 5.02 m, showing the tendency of the ions to accumulate in the bulk rather than at the interface at higher salinity. This behavior can be examined by comparing the radial distribution function (RDF) curves between H2 and the ions (see Figure S5).
Figure 6. Predicted IFT values between H2 and various brine solutions as a function of (a) T (K) and salinity (m) at P = 10 MPa, (b) P (MPa) and salinity (m) at T = 323 K, and (c) P (MPa) and T (K) at a salinity of 1.91 m.
Figure 7. Comparison of the ion distributions at the interface of the H2−brine solution between (a) NaCl and KCl, (b) NaCl and CaCl2, (c) NaCl and MgCl2, and (d) CaCl2 and MgCl2 under the same thermodynamic conditions of 323 K, 10 MPa, and 1.91 m salinity.
Table 1. Summary of Interfacial Tension (IFT) Data Experimentally Measured for H2−Water/Brine Systems. | 2023-09-01T06:16:12.996Z | 2023-08-31T00:00:00.000 | {
"year": 2023,
"sha1": "45a454ec633a0f0da28790cfd2b5ce18abf9fa70",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1021/acs.langmuir.3c01424",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ddbe76d62a394ff76bdb21f76df51274ddc15d3",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249722938 | pes2o/s2orc | v3-fos-license | Is the post-communist transition over? Support for economic liberalization reforms is essential, but it grows stronger only where societies experience the effects of reversing these reforms
ELEVATOR PITCH
It is entirely justifiable to ask whether the post-communist transition, which began in 1989 in Central and Eastern Europe and Central Asia, can be considered to be over. Perhaps equally important is to understand the impact and perception of the reforms implemented since then. Interestingly, reform success seems to weaken societal support for future reform. In contrast, experiencing the consequences of reform reversals (corruption, inequality of opportunity) appears to strengthen support for reforms. A key takeaway is that new waves of economic reforms should aim not only at high GDP growth but also at eradicating corruption and cronyism, strengthening the rule of law, and improving social mobility.
Pros
• The majority of transition countries have grown economically, and they have all become functioning markets.
• Transition countries have become institutionally more similar to countries at their level of GDP per capita, and many, but not all, have become more democratic.
• Inequality, as measured by the Gini coefficient, is today fairly low by world standards in transition economies.
• Support for economic liberalization reforms only increased in societies that experienced the effects of reversing these reforms (corruption, inequality of opportunity).
MOTIVATION
The year 1989 started with round table talks between the leaders of the then-underground Polish trade union Solidarity and the Polish Communist Party Politburo in February, and finished with the fall of the Berlin Wall in November. That year symbolically marks the beginning of "transition," which has both political and economic aspects. First, it relates to the process of intense political transformation in Eastern Europe and Central Asia away from Communist Party dictatorships. Second, it relates to a process of change in economic institutions, away from the centralized command and control system known as "central planning." But what was the end point? When would the transition be over? The lack of clarity or consensus over what the end of transition would look like continues to be an issue for academics, policymakers, and societies today. It is thus a worthwhile endeavor to critically discuss different possible "end points," how they compare to where societies are today, and why it matters.
Is the transition over?
Most transition countries have not caught up to Western European living standards
A first possible end point for transition is to align the process with a full catch-up to Western European living standards. While this seems like an ambitious objective, researchers have described this as a credible "popular view" in the region [1]. The Illustration on p. 1 presents GDP growth paths from the early 1990s to date for the nine most populous transition countries, plus Turkey, relative to Belgium, taken to represent a reference growth profile within the EU. Indeed, performance patterns have been complex, but three key factors can be highlighted [2]. First, countries which made good progress in their institutional reforms grew faster (typically all the new EU countries, such as Czechia, Hungary, and Poland, but especially the faster reformers; compare with the weaker growth in Romania). Second, countries which did not experience war or ethnic conflict (unlike e.g. Serbia and Tajikistan) performed better. Third, among countries where reforms have been partial or delayed (e.g. Belarus, Russia, Romania, Serbia, and Uzbekistan), those with natural resources have fared better in terms of GDP growth (e.g. Russia), even if their institutional reforms have been weaker. With this in mind, performance in every transition country has fallen short of a catch-up with Western Europe, even if the rate of convergence has progressed. Accordingly, differences in institutional outcomes require further investigation, as they appear highly relevant to the growth paths observed since the beginning of transition.
Transition countries' regulatory environments are similar to market economies at comparable income levels
A second approach is to consider the end point of transition as becoming "a typical market economy," focusing on the development of market-supporting regulatory environments. This is in line with the European Bank for Reconstruction and Development (EBRD) Transition Indicators (TI), which provide a rating of progress in reforms from 1 ("central planning") to 4 plus ("conditions typical of an advanced market economy"). The TIs further split internally into two components: (i) liberalization (internal price liberalization, external trade liberalization, plus small-scale privatization and conditions for business entry) and (ii) structural reforms (privatization of large enterprises, governance and finance, and competition policy).
Notably, by the liberalization component, the post-communist transformation may have been over for a while in most of the former Soviet bloc. Indeed, already in the late 1990s most transition economies, except for a few laggards (Belarus, Turkmenistan, and Uzbekistan), had reached a score of 4 for liberalization, thus demonstrating that the bare-bones structures of markets were in place.
A decade later, most of the Central and Eastern European countries plus the Baltic republics were also scoring 4 (or close to 4) on the more challenging aspects of reforms, as captured under the term "structural reforms," demonstrating impressive progress in institutional development. Yet things looked different in most of the former Soviet Union republics.
Figure 1 captures the reform timeline until the year 2014, the last year for which comparable scores are available (the Czech Republic no longer wants to be considered in transition by the EBRD metric and is thus not included). Conveniently, there is also a benchmark from outside the former Soviet bloc; five countries from the Middle East and North Africa (MENA) region were added to the EBRD program later on. Regional comparison suggests that, by 2014, the post-communist countries did not look different from the MENA group on the core liberalization measures. However, progress was more uneven on the structural reform indicators. Central European countries and the Baltic republics (CEB) exceeded MENA levels after only a few years of transition, and South Eastern Europe (SEE) achieved them by around 2010. Meanwhile, former Soviet republics in Eastern Europe and the Caucasus (EEC) and Central Asia (CA) were still lagging behind. Thus, structural reforms have been considerably slower and exhibited striking regional differences in terms of progress; these cross-regional differences appeared early on and persisted over time.
The emerging divide is thus between countries that either became EU members or aspire to membership (typically with association agreements), and those that did not. However, EU integration is not an external factor, and the causality is complex: EU integration could well be a function of reforms, as much as reforms follow from (or preempt, but are motivated by) EU integration. For example, the three Baltic republics (Estonia, Latvia, and Lithuania) were initially not considered frontrunners in the EU accession race, but their progress with economic and political reforms, pursued with the explicit objective of joining the EU, moved their position up.
Economic indicators of transition countries are now mostly "typical" of countries at similar income levels
A third approach to conceptualizing the end point of transition is to look not at the regulatory dimension but at structural economic features: querying whether transition countries now show economic characteristics that are typical of a country at their level of development, that is, with similar income or GDP per capita. This is the approach chosen by a number of researchers [3], [4], who view the end of transition as the point at which post-communist countries no longer bear the economic scars of their communist experience. Ten years into the process, transition countries still differed slightly from countries that were at the same level of development but did not share their communist past. The differences include that transition countries have "a larger share of their work force [...] in industry, use more energy, have a more extensive infrastructure and invest more in schooling" [5, p. 1]. But, by 2014, transition economies were "normal" in a structural economic sense by most indicators [3].
Re-examining the question of normality over a large set of indicators, a recent study shows that transition countries are now comparable with other market economies at similar levels of economic development [4]. Reforms have allowed most of the economic distortions of the past to be remedied, and post-communist countries are now in many ways comparable to their neighbors without a communist past (i.e. countries with similar geographic conditions, trade potentials, and geo-political constraints). To some extent, these are signs that transition countries have returned to their long-term development paths. The author of this recent study does note two points of distinction: the financial sector (less developed) and the share of government ownership of enterprises (larger) [4]. Furthermore, two important (and related) topics, the heritage of high energy use [2] and environmental pollution, are not addressed.
While transition countries are generally considered "normal" in terms of structural economic considerations, the dimension of financial development is an exception; it is lower than what could be expected given their GDP levels [4]. In an earlier investigation, important distortions were highlighted in the financial structure of transition economies compared with OECD countries [6]. The study also shows that medium-term growth was negatively affected by these distortions, thus empirically demonstrating the negative impact of underdeveloped or unbalanced financial institutions and markets [6]. This underdevelopment is perhaps unsurprising considering that formal private finance did not exist in these countries 30 years prior. Underdeveloped financial sectors may also limit social inclusivity and mobility, thereby indirectly damaging perceptions of economic fairness.
Another difference is related to the size of the state sector [4]. By international standards, transition economies generally continue to have large state-owned firms. This is what still links a number of Central Eastern European countries to China and other South East Asian states with histories of communism. It also links them to other countries with a totalitarian heritage of a different brand, for example Italy [7].
The experience of communism was variable, and so are its legacies
It has been posited that applying a uniform "post-communist" label to all transition countries may be misleading. The experience of communism varied, and this variation may have a lasting legacy. For example, while being unemployed or holding even small private commercial property was illegal in some communist countries, this was not the case in others (e.g., Hungary, Poland, or Yugoslavia). In particular, the Stalinist period was characterized by very specific patterns of industrialization, creating industries that later proved unviable, especially once relative energy prices started to increase in the late 20th century [2]. This period's impact on both long-term growth and willingness to accept reforms may thus be crucial. For example, social trust remains lower in areas close to former Gulag camps [8].
To detect such effects, within-country variation needs to be examined.
Looking for a single quantifiable indicator, researchers have focused on the amount of time a country spent under communism, which strongly correlates with the time spent under the most damaging periods or forms of communism, especially the Stalinist system. In that vein, it was found that 58% of the variation in the implementation of regulatory reforms can be explained by the amount of time a country spent under a communist system [9]. Thus, differing pre-transition conditions have strong differentiating effects on both reforms and structural economic features.
Not only regulatory reform but also the wider issues of inequality and institutional quality matter
A final approach to the end of transition is to consider it as the point at which higher-order institutional quality is on par with advanced economies. For example, corruption can be seen as a central institutional outcome, key to the overall assessment of institutional quality, and closely and inversely related to the rule of law [10]. Society's experience with corruption matters. First, it affects productive activities, leading to lower growth ambitions and less economic dynamism [11]. Second, it affects life satisfaction, one byproduct being a higher propensity to emigrate [11].
Researchers indicate that the perception of corruption in the region remained high following transition, yet argue that experience with bribe-paying appeared to be no higher than in some other comparable countries (e.g., comparing Russia to Brazil) [3]. However, recent observations show that more progress has been made with regulatory reforms than with overall institutional quality and corruption [11]. Moreover, there have been tangible reversals associated with increased corruption in a number of countries [11]. Prime examples of institutional reversal include Hungary, Slovenia, and Moldova (see Figure 2). Furthermore, while institutional progress in transition countries has slowed in recent years, it has continued elsewhere. The net result is that the gap between transition countries and both comparator countries and the G7 group (Canada, France, Germany, Italy, Japan, the UK, and the US) is widening [11].
Another important channel of the societal impact of corruption is inequality of opportunity. Inequality of opportunity is the part of inequality that is explained by an individual's circumstances at birth: place of birth, sex, ethnicity, and parental background [12]. Market mechanisms alone should reward acquired human capital with better jobs. If, instead, the major factor explaining earnings and job quality is the education level of an individual's parents (followed by gender and place of birth), as is the case in transition economies, that indicates a systemic failure [12]. Low institutional quality, as captured by corruption indicators, is a prime factor here.
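To make this notion concrete, a common formalization in the inequality literature, sketched here following the standard "ex-ante" approach (reference [12] may measure it differently), predicts each person's outcome from circumstances alone and then measures inequality over those predictions:

$\hat{y}_i = \mathbb{E}[\,y_i \mid C_i\,], \qquad \mathrm{IOp} = I(\{\hat{y}_i\}) \,/\, I(\{y_i\}),$

where $y_i$ is the outcome of interest (e.g., earnings), $C_i$ is the vector of circumstances at birth (parental background, sex, place of birth, ethnicity), and $I(\cdot)$ is an inequality index, often the mean log deviation. In practice, $\hat{y}_i$ is obtained by regressing outcomes on circumstance variables, and the ratio $\mathrm{IOp}$ gives the share of total inequality attributable to circumstances rather than to effort or choices.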
This implies that rising inequality plays a complex role in support for economic liberalization. On the one hand, if inequality rises because effort is better rewarded, as should be the case in a functioning market economy, this socially acceptable form of inequality should be associated with increased support for reforms [12]. On the other hand, rising inequality that reflects greater inequality of opportunity is more likely to translate into decreasing support for reforms, especially if reforms are thought to be the cause [12]. Importantly, the transition experience has illustrated that partial reforms are associated with greater inequality of opportunity (as well as greater corruption and lower institutional quality) than more extensive reforms.
This gives rise to a complex situation, as the general public in transition countries may not necessarily distinguish between complete, consistent reforms and partial-reform scenarios, associating both with movement toward "the market." It is the incomplete reforms, for example partial trade liberalization that left scope for arbitrary government decisions and therefore cronyism, that have led to corruption, emerging oligarchies, and strong perceptions of inequality [1]. Furthermore, when economic liberalization reforms have been implemented and then reversed, any worsening of outcomes is more easily attributed to the reversal and can thus lead to increasing support for reforms.
Relatedly, there is evidence that rising perceived inequality can damage trust and lower life satisfaction, and that it also affects people's preferences toward redistribution and government intervention [12]. In the transition region, perceived inequality is particularly high, with social perceptions largely overestimating the extent of real inequality. This likely links back to the distinction between economically explained inequality and inequality of opportunity: it is inequality of opportunity that may have the strongest impact on perceptions of inequality. In many transition countries, there has been rising inequality of opportunity, a perceived lack of fairness, and a reduction in social mobility since the 1990s. When such inequality is perceived as resulting from cronyism, favoritism, and corruption, which all represent aspects of low institutional quality [10], social tolerance and acceptance of inequality are likely to be lower. In turn, low tolerance for inequality leads to an increase in its perceived level.
Support for market reforms
The evolution of public support for market reforms (from the EBRD "Life in Transition" surveys) is shown in Figure 2. As seen there, the share of the population supporting market reforms in Eastern Europe and Central Asia is varied but not out of line with what is observed in old EU countries and even Turkey. However, support appears to have fallen. Yet support for reforms increased in countries where corruption increased. This effect is driven primarily by the Central Asian republics, but also by Hungary and Moldova. In contrast, improvement in institutional quality was not associated with increased support for market reforms.

Thus, an asymmetry is observed: societies respond more to negative changes than to positive ones. This is in line with lessons from cognitive psychology, in particular with "prospect theory," which posits that losses loom larger than gains (see the sketch below). As documented in [13], individuals respond much more strongly to institutional deterioration than to institutional improvement. This literature implies that, instead of a vicious circle in which poor reforms lead to poor outcomes and low support for further reforms, support for reforms actually increases if worsening outcomes can be understood as caused by reversals, leading over time to corrections in policies and to institutional improvement. Of course, the scenarios are less optimistic if, in parallel with deteriorating institutional quality, the corresponding regimes build their capacity to resist reform and change.
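As an illustration of the loss-aversion mechanism invoked here, prospect theory's value function in the canonical Tversky and Kahneman parameterization (a textbook sketch, not estimated from transition data) is

$v(x) = x^{\alpha} \text{ for } x \ge 0, \qquad v(x) = -\lambda(-x)^{\beta} \text{ for } x < 0,$

with typical experimental estimates $\alpha \approx \beta \approx 0.88$ and loss-aversion coefficient $\lambda \approx 2.25$. Under these values, a deterioration in institutional quality of a given size is felt roughly twice as strongly as an equally sized improvement, consistent with the asymmetric response to reforms and reversals described above.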
LIMITATIONS AND GAPS
There are some major limitations both to the discussion in this article and to some aspects of the transition literature at large. First, are institutional measures provided by intergovernmental organizations like the EBRD and the World Bank unbiased? To what extent are they subject to political pressure from the countries these institutions are supposed to evaluate? Do they measure what they intend to measure? Second, these are highly complex issues, and there is a circle of mutual dependence that is difficult to disentangle. Both regulatory reforms and more fundamental change in higher-order institutions affect societal attitudes and satisfaction, but how exactly, and in which direction? In turn, how and under which conditions do satisfaction and attitudes translate into political decisions, thereby modifying the course of reforms? And do reform reversals and reform persistence generate equally strong responses? Third, 30 years after the transition started, different issues may matter more to the youngest generation; climate change and protection of the natural environment are obvious candidates. Are researchers therefore still asking the questions that the region's populace sees as most critical?
All this indicates that there is still plenty of work to do in this fascinating line of research on transition, and more broadly on institutional change.
SUMMARY AND POLICY ADVICE
To conclude, the devil is in the details. Transition countries have experienced significant progress with respect to regulatory environments, and the structural features of their economies have become similar to those of economies at comparable levels of development elsewhere. However, it is the lived experience of citizens that critically matters, with unfairness, inequality of opportunity, low institutional quality, and corruption all being associated with lower life satisfaction [11]. Indeed, the outcomes of institutional change and policies should not be assessed by GDP per capita alone. Institutional "improvements" can be achieved yet still be associated with social disappointment. In particular, outcomes perceived as unfair are problematic, as oligarchic structures (especially around natural resources), cronyism, favoritism, and corruption can lead to societal cynicism and dissatisfaction.
Many early economic reforms were successfully implemented by economic technicians who, often out of necessity, focused on regulatory frameworks. The agenda today is thus to tackle the remaining regulatory issues, but above all to address higher-order institutional quality. Indeed, corruption stands as a barrier to further growth and exacerbates inequality of opportunity. Better institutional quality would thus mean more growth and greater satisfaction with the outcomes of reforms.
Figure 2. Change in institutional quality and change in support for market reforms, 2006-2016. Panels: liberalization and second-stage reforms. Comparator group MENA: Egypt, Jordan, Morocco, Tunisia, Turkey.
"year": 2022,
"sha1": "7d5315ff70c835974e02fe894aaeeb2b685c3283",
"oa_license": null,
"oa_url": "https://wol.iza.org/uploads/articles/616/pdfs/is-post-communist-transition-over.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9f0d4d26a885851f7c7da224c82a9ffec8e466c",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.